Pattern Classification Using Fuzzy Neural Networks

| Category: Diploma and bachelor theses |

The thesis describes the basic principles of neuron functionality and the creation of artificial neural networks. The structure and function of neurons are described in detail, and the most widely used algorithm for training neural networks is presented. The basics of fuzzy logic, including its advantages and disadvantages, are also covered. The backpropagation algorithm and the adaptive neuro-fuzzy inference system are described in more detail. These techniques provide efficient ways of training neural networks.

Published by: FCC Public s. r. o. Author: Tamás Ollé


Editor's notes
The process of determining the synaptic weights is again linked with the concept of learning in neural networks. The question then is how to define synaptic weights that lead to the correct response to an input signal. What is needed to train a neural network? Both a training set containing elements that describe the solved problem and a method that can fix these samples in the form of the network's synaptic weight values, if possible including the already mentioned ability to generalize. Another issue is this ability to generalize over the learned material, in other words, how well the neural network can infer, from what it has learned, phenomena that were not part of the learning process but can somehow be deduced from it.

Let us first look at the training set. Each training set pattern describes how the neurons of the input and output layers are excited. Signal spreading in a biological system proceeds in a similar way: the input layer can be formed, e.g., by visual cells, and in the output layer of the brain the individual objects of the observed scene are then identified. In principle, this is how the response of the neural network to an input stimulus can be obtained, given the excitation of the input layer neurons. Formally, the training set can be considered a set of elements (patterns) that are ordered pairs defined as follows:

T = {{S1, T1}, {S2, T2}, …, {Sq, Tq}}
Si = [s1 … sn] ∈ ⟨0, 1⟩^n        (3.1)
Ti = [t1 … tm] ∈ ⟨0, 1⟩^m

where q is the number of training set patterns, Si is the excitation vector of the input layer consisting of n neurons, Ti is the excitation vector of the output layer consisting of m neurons, and sj, tj are the excitations of the j-th neuron of the input and the output layer, respectively.

The method that allows the neural network to be adapted to the training set is called backpropagation. The backpropagation algorithm is used in approximately 80% of all neural network applications. In this adaptation method the error information spreads in the opposite direction to the flow of signals, i.e. from higher layers to lower layers. The algorithm itself consists of three phases: feedforward spreading of the input signal of a training pattern, backward spreading of errors, and update of the connection weight values.

During feedforward signal spreading, each neuron of the input layer (Xi, i = 1, …, n) receives an input signal (xi) and mediates its transfer to all neurons of the inner layer (Z1, …, Zp). Each neuron of the inner layer calculates its activation (zj) and sends this signal to all neurons of the output layer. Each neuron of the output layer calculates its activation (yk), which corresponds to the real output of the k-th neuron after the input sample is presented.
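To make the three phases concrete, the following is a minimal sketch of such a backpropagation training loop in Python with NumPy, for a network with one inner layer as described above. The sigmoid activation, the learning rate, the number of inner neurons and the XOR training set are illustrative assumptions of this sketch, not values taken from the thesis.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_backprop(S, T, p=4, eta=0.5, epochs=10000, seed=0):
    """Adapt a three-layer network to the training set of pairs (Si, Ti).

    S : (q, n) array of input-layer excitation vectors, components in <0, 1>
    T : (q, m) array of output-layer excitation vectors, components in <0, 1>
    """
    rng = np.random.default_rng(seed)
    n, m = S.shape[1], T.shape[1]
    # synaptic weights (and biases) input->inner and inner->output
    V = rng.uniform(-0.5, 0.5, (n, p)); b_v = np.zeros(p)
    W = rng.uniform(-0.5, 0.5, (p, m)); b_w = np.zeros(m)

    for _ in range(epochs):
        for x, t in zip(S, T):
            # 1) feedforward spreading of the input signal
            z = sigmoid(x @ V + b_v)      # activations zj of the inner layer
            y = sigmoid(z @ W + b_w)      # activations yk of the output layer

            # 2) backward spreading of errors (from higher to lower layers)
            delta_y = (t - y) * y * (1.0 - y)           # output-layer error terms
            delta_z = (delta_y @ W.T) * z * (1.0 - z)   # inner-layer error terms

            # 3) update of the connection weight values
            W += eta * np.outer(z, delta_y); b_w += eta * delta_y
            V += eta * np.outer(x, delta_z); b_v += eta * delta_z
    return V, b_v, W, b_w

# Example training set: the XOR problem (q = 4 patterns, n = 2, m = 1).
S = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
V, b_v, W, b_w = train_backprop(S, T)
print(sigmoid(sigmoid(S @ V + b_v) @ W + b_w).round(2))
```

After training, the printed network outputs should lie close to the target excitations of the four patterns, illustrating how the three phases together fix the training set in the synaptic weights.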