The thesis describes the basic principles of neuron function and the construction of artificial neural networks. The structure and function of neurons are described in detail, and the most widely used algorithm for training neural networks is presented. The basics of fuzzy logic, including its advantages and disadvantages, are also covered. The backpropagation algorithm and the adaptive neuro-fuzzy inference system are described in greater detail. These techniques provide effective ways of training neural networks.
3.1 Description of the backpropagation algorithm
Step: The weights and the biases are initialized with small random numbers; the learning coefficient α is assigned its initialization value.
Step: Perform steps 2), 3) for each (bipolar) training pair s:t, until the local minimum of the error function is reached.
Step: Repeat the steps until the termination condition of the calculation is met (see the sketch below).
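As an illustration, here is a minimal sketch of this training loop, assuming a single-hidden-layer network with tanh units and bipolar (±1) targets; the helper names (init_network, train) and all parameter values are our illustrative choices, not the thesis's.

import numpy as np

rng = np.random.default_rng(0)

def init_network(n_in, n_hidden, n_out, scale=0.5):
    """Step: initialize the weights and the biases with small random numbers."""
    return {
        "W1": rng.uniform(-scale, scale, (n_in, n_hidden)),
        "b1": rng.uniform(-scale, scale, n_hidden),
        "W2": rng.uniform(-scale, scale, (n_hidden, n_out)),
        "b2": rng.uniform(-scale, scale, n_out),
    }

def train(net, pairs, alpha=0.2, epochs=5000, tol=0.05):
    """Step: for each (bipolar) training pair s:t run the forward and
    backward passes; repeat until the termination condition is met."""
    for _ in range(epochs):
        total_error = 0.0
        for s, t in pairs:
            h = np.tanh(s @ net["W1"] + net["b1"])       # forward pass
            y = np.tanh(h @ net["W2"] + net["b2"])
            err = t - y
            total_error += float(err @ err)
            delta_out = err * (1.0 - y ** 2)             # backpropagate the error
            delta_hid = (delta_out @ net["W2"].T) * (1.0 - h ** 2)
            net["W2"] += alpha * np.outer(h, delta_out)  # adapt weights and biases
            net["b2"] += alpha * delta_out
            net["W1"] += alpha * np.outer(s, delta_hid)
            net["b1"] += alpha * delta_hid
        if total_error < tol:                            # termination condition
            break
    return net

# Bipolar XOR as a toy training set.
pairs = [(np.array([ 1.0,  1.0]), np.array([-1.0])),
         (np.array([ 1.0, -1.0]), np.array([ 1.0])),
         (np.array([-1.0,  1.0]), np.array([ 1.0])),
         (np.array([-1.0, -1.0]), np.array([-1.0]))]
net = train(init_network(2, 4, 1), pairs)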
3.2 Gradient method ([4])
At the start of the adaptation we construct, at the point w(0), the tangent vector (gradient) ∂E/∂w evaluated at w(0), and move a short distance against this vector, i.e. downhill. For a sufficiently small step we then obtain a new configuration w(1) = w(0) + Δw(1) for which the error function is smaller than for the original configuration w(0), i.e. E(w(0)) ≥ E(w(1)). The entire process is repeated for w(1) to get w(2) such that E(w(1)) ≥ E(w(2)), and so on, until a local minimum of the error function is reached.
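A minimal sketch of this descent follows, assuming a numerically estimated gradient; the step size eps, the toy error function E, and the helper names are our illustrative assumptions.

import numpy as np

def numeric_grad(E, w, h=1e-6):
    """Central-difference estimate of the tangent vector (gradient) at w."""
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w)
        d[i] = h
        g[i] = (E(w + d) - E(w - d)) / (2 * h)
    return g

def gradient_descent(E, w, eps=0.1, steps=100):
    """Move downhill: w(k+1) = w(k) - eps * grad E(w(k)), so that for a
    sufficiently small eps we get E(w(0)) >= E(w(1)) >= E(w(2)) >= ..."""
    for _ in range(steps):
        w = w - eps * numeric_grad(E, w)
    return w

E = lambda w: float(np.sum((w - 1.0) ** 2))   # toy error surface
w_final = gradient_descent(E, np.array([5.0, -3.0]))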
The presented adaptation process stops at such a point (zero gradient), and the network error does not decrease further. Although with an appropriate choice of the learning rate (α) this method always converges to some local minimum from any initial configuration, there is no guarantee that this happens in real time. Usually the process is very time-consuming (several days of calculation on a PC) even for small multilayer networks (tens of neurons). In the multidimensional weight space, this procedure exceeds our imagination (Fig. 3.1).
The main problem with the gradient method is that when it finds a local minimum, this minimum need not be the global minimum (see Figure 3.1). There are a number of solutions to this problem. The simplest and most effective one (it can also solve several other problems) is to reset the weights to different random numbers and to try the training again, as sketched below.
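A minimal sketch of this restart strategy, reusing the init_network and train helpers from the backpropagation sketch above; the error measure, the restart count, and the acceptance threshold are our illustrative assumptions.

import numpy as np

def network_error(net, pairs):
    """Total squared error of the network over all training pairs."""
    err = 0.0
    for s, t in pairs:
        h = np.tanh(s @ net["W1"] + net["b1"])
        y = np.tanh(h @ net["W2"] + net["b2"])
        err += float((t - y) @ (t - y))
    return err

def train_with_restarts(pairs, n_restarts=10, good_enough=0.1):
    """Retrain from fresh random weights and keep the best network found."""
    best, best_err = None, float("inf")
    for _ in range(n_restarts):
        net = train(init_network(2, 4, 1), pairs)   # new random initialization
        err = network_error(net, pairs)
        if err < best_err:
            best, best_err = net, err
        if best_err < good_enough:                  # escaped the poor minimum
            break
    return best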
Another solution is to add "momentum" to the weight change: for example w+ = w + Δw, where the current change is the new correction plus (the change from the previous iteration × a constant) [4]. This means that the weight change in this interpretation depends not just on the current error, but also on the previous changes.
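A minimal sketch of this modification, assuming a plain gradient correction; the parameter names (alpha, mu) and the toy error function are our illustrative choices.

import numpy as np

def momentum_step(w, grad, prev_change, alpha=0.1, mu=0.9):
    """Delta_w = -alpha * grad + mu * prev_change; returns (new w, change)."""
    change = -alpha * grad + mu * prev_change
    return w + change, change

# Usage: carry `change` between iterations, so that each update depends not
# only on the current error gradient but also on the previous changes.
w = np.array([5.0, -3.0])
change = np.zeros_like(w)
for _ in range(100):
    grad = 2 * (w - 1.0)            # gradient of the toy error sum((w - 1)^2)
    w, change = momentum_step(w, grad, change)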