Category: Diploma and bachelor theses
This thesis describes the basic principles of neuron function and the construction of artificial neural networks. The structure and function of neurons are described in detail, and the most widely used algorithm for training neurons is presented. The basics of fuzzy logic, including its advantages and disadvantages, are also covered. The backpropagation algorithm and the adaptive neuro-fuzzy inference system are described in greater depth. These techniques provide effective ways of training neural networks.
During neural network adaptation with the backpropagation method, the calculated activations are compared with the defined output values for each neuron of the output layer and for each training pattern. Based on this comparison, the neural network error is defined, from which the factor δ is derived. The partial network error E_l(w) for the l-th training pattern (l = 1, …, q) is defined below.

The weight adjustment w_jk of the connections between neurons of the inner and output layers depends on the factor δ and on the activation of the neuron in the inner layer. The weight adjustment v_ij of the connections between neurons of the input and inner layers depends on the factor δ and on the activation of the neuron in the input layer. The factor δ is, as was already mentioned, the part of the error that spreads back from a neuron to all the neurons of the previous layers that are linked to it by neuron connections.

The activation function for neural networks adapted with the backpropagation method must have the following characteristics: it must be continuous, differentiable, and monotonically nondecreasing. The most commonly used activation functions are therefore the standard (logistic) sigmoid and the hyperbolic tangent.

The aim of adaptation is to minimize the network error in the weight space; a geometric conception will help in better understanding this. In analogy with human learning, the initial setting of the synaptic weights corresponds to a newborn who, instead of the desired behaviors such as walking, talking, etc., performs random movements and makes vague noises.
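The delta-rule updates described above can be sketched in code. This is a minimal illustration, assuming a network with one inner (hidden) layer and the logistic sigmoid activation; the function and variable names (backprop_step, V, W, lr) are illustrative and not taken from the thesis:

```python
import math

def sigmoid(x):
    # Standard (logistic) sigmoid: continuous, differentiable,
    # and monotonically nondecreasing, as backpropagation requires.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(y):
    # Derivative of the sigmoid, expressed through its output y.
    return y * (1.0 - y)

def backprop_step(x, t, V, W, lr=0.5):
    """One adaptation step for a network with one inner layer.
    V: input-to-inner weights, W: inner-to-output weights."""
    # Forward pass: activations of the inner and output layers.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in V]
    y = [sigmoid(sum(w * hj for w, hj in zip(row, h))) for row in W]
    # Factor delta for the output layer: derived from comparing the
    # calculated activations with the defined output values.
    d_out = [(tk - yk) * sigmoid_prime(yk) for yk, tk in zip(y, t)]
    # Factor delta for the inner layer: the part of the error that
    # spreads back through the neuron connections.
    d_in = [sigmoid_prime(h[j]) * sum(d_out[k] * W[k][j] for k in range(len(W)))
            for j in range(len(h))]
    # Weight adjustment w_jk: depends on delta and the activation
    # of the neuron in the inner layer.
    for k in range(len(W)):
        for j in range(len(h)):
            W[k][j] += lr * d_out[k] * h[j]
    # Weight adjustment v_ij: depends on delta and the activation
    # of the neuron in the input layer.
    for j in range(len(V)):
        for i in range(len(x)):
            V[j][i] += lr * d_in[j] * x[i]
    return y
```

Repeating the step on a training pattern drives the output toward the required value, since each adjustment is the product of the factor δ and the activation of the preceding layer.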
The error function E(w) is shown schematically in Figure 3: the multidimensional weight vector is projected onto a single axis, and the error function gives the network error for the fixed training set as a function of the network configuration. The partial network error E_l(w) for the l-th training pattern (l = 1, …, q) is proportional to the sum of the squared deviations of the actual output values of the network for the l-th training pattern from the required output values for this pattern:

    E_l(w) = \frac{1}{2} \sum_{k \in Y} (y_k(w, x_l) - t_{kl})^2

Since the network error depends on the complicated nonlinear composite function of the multilayer network, this goal presents a non-trivial optimization problem. For its solution, the basic model uses the simplest version of the gradient method, which requires differentiability of the error function.
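The partial-error formula translates directly into code. A small sketch, where partial_error is an illustrative name and y and t stand for the actual and required output vectors of one training pattern:

```python
def partial_error(y, t):
    # E_l(w) = 1/2 * sum over output neurons k of (y_k - t_kl)^2:
    # half the sum of squared deviations of the actual outputs y
    # from the required outputs t for a single training pattern.
    return 0.5 * sum((yk - tk) ** 2 for yk, tk in zip(y, t))
```

For example, y = [1.0, 0.0] against t = [0.0, 0.0] gives an error of 0.5, while matching outputs give 0.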
The factor δ_j (j = 1, …, p) for the neurons of the inner layer can be defined similarly, as the part of the error that spreads back from a neuron to all the input-layer neurons that are linked to it by neuron connections. The network error E(w) with respect to the training set is defined as the sum of the partial network errors E_l(w) over the individual training patterns and depends on the network configuration w:

    E(w) = \sum_{l=1}^{q} E_l(w)

During the network adaptation we look for a configuration in which the error function is minimal. We start with a randomly chosen configuration w(0), for which the corresponding network error will probably be far from the desired one.
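The whole adaptation loop (total error as a sum of partial errors, random initial configuration w(0), simplest gradient method) can be sketched as follows. The names network_error and adapt are illustrative, and the gradient is approximated numerically here rather than by the analytic backpropagation formulas:

```python
import random

def network_error(w, patterns, partial_error_fn):
    # E(w) = sum of the partial errors E_l(w) over all q training patterns.
    return sum(partial_error_fn(w, x, t) for x, t in patterns)

def adapt(patterns, partial_error_fn, dim, lr=0.1, steps=500, eps=1e-5):
    # Start from a randomly chosen configuration w(0); its error
    # will probably be far from the desired minimum.
    w = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    for _ in range(steps):
        # Simplest version of the gradient method; it requires the
        # error function to be differentiable (approximated here by
        # forward differences).
        base = network_error(w, patterns, partial_error_fn)
        grad = []
        for i in range(dim):
            w_probe = list(w)
            w_probe[i] += eps
            grad.append((network_error(w_probe, patterns, partial_error_fn) - base) / eps)
        # Move against the gradient to decrease the error function.
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w
```

With a differentiable partial_error_fn, each step moves the configuration downhill in the weight space until the error function approaches a (possibly local) minimum.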