Pattern Classification Using Fuzzy Neural Networks (Klasifikace vzorů pomocí fuzzy neuronových sítí)

Category: Master's and bachelor's theses

The thesis describes the basic principles of how neurons work and how artificial neural networks are built. The structure and function of neurons are described thoroughly, and the most widely used algorithm for training neural networks is presented. The basics of fuzzy logic, including its advantages and disadvantages, are covered as well. The backpropagation algorithm and the adaptive neuro-fuzzy inference system are described in more detail. These techniques provide effective ways of training neural networks.

Published by: FCC Public s. r. o. Author: Tamás Ollé









Editor's notes
During neural network adaptation with the backpropagation method, the computed activations of the output-layer neurons are compared with the defined output values for each neuron of the output layer and for each training pattern. Based on this comparison, the neural network error is defined.

The network error E(w) with respect to the training set is defined as the sum of the partial network errors E_l(w) for the individual training patterns, and it depends on the network configuration w:

  E(w) = \sum_{l=1}^{q} E_l(w)   (3.2)

The partial network error E_l(w) for the l-th training pattern (l = 1, ..., q) is proportional to the sum of squared deviations of the actual output values of the network, for the l-th training pattern at the input, from the required output values for this pattern:

  E_l(w) = \frac{1}{2} \sum_{k \in Y} (y_k - t_k)^2   (3.3)

The aim of adaptation is to minimize the network error in the weight space. Since the network error directly depends on the complicated nonlinear composite function of the multilayer network, this goal presents a non-trivial optimization problem. For its solution, the basic model uses the simplest version of the gradient method, which requires the error function to be differentiable.

A geometric conception will help in better understanding. The configuration, a multidimensional vector of weights, is projected onto an axis; the error function then determines the network error with respect to the fixed training set, depending on the network configuration. The error function E(w) is shown schematically in Figure 3.

We start with a randomly chosen configuration w(0), where the corresponding error of the network with respect to the desired network will probably be large. In analogy with human learning, this initial setting of synaptic weights corresponds to a newborn, who instead of the desired behaviors such as walking and talking performs random movements and makes vague noises.

The activation function for neural networks with the adaptive backpropagation method must have the following characteristics: it must be continuous, differentiable and monotonically nondecreasing. The most commonly used activation functions are therefore the standard (logistic) sigmoid and the hyperbolic tangent.

The adjustment of the weight value w_jk of a connection between neurons of the inner and output layers depends on the factor δ_k and on the activation of the inner-layer neuron. The factor δ is, as was already mentioned, the part of the error that spreads back from a neuron to all neurons of the previous layer that are defined by the neuron's connections. A factor δ_j (j = 1, ..., p) can be defined similarly, as the part of the error that spreads back from a neuron to all the input-layer neurons defined by the neuron's connections. The adjustment of the weight value v_ij of a connection between neurons of the input and inner layers depends on this factor and on the activation of the input-layer neuron.
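The quantities discussed above can be sketched in code. The following is a minimal illustration, not the author's implementation: it assumes a single hidden layer, logistic sigmoid activations, and plain per-pattern gradient descent on a toy XOR training set. All names (`sigmoid`, `backprop_step`, the layer sizes, the learning rate) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    """Standard (logistic) sigmoid: continuous, differentiable, nondecreasing."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, V, W):
    """x: input vector; V: input->hidden weights v_ij; W: hidden->output weights w_jk."""
    h = sigmoid(V @ x)   # inner-layer activations
    y = sigmoid(W @ h)   # output-layer activations
    return h, y

def partial_error(y, t):
    """Partial error E_l(w) = 1/2 * sum_k (y_k - t_k)^2, cf. eq. (3.3)."""
    return 0.5 * np.sum((y - t) ** 2)

def backprop_step(x, t, V, W, lr=0.5):
    """One gradient step. delta_k is the output-layer error factor; it is
    spread back through the connections to give the hidden-layer factor delta_j."""
    h, y = forward(x, V, W)
    delta_k = (y - t) * y * (1.0 - y)            # output-layer factor
    delta_j = (W.T @ delta_k) * h * (1.0 - h)    # error propagated back
    W -= lr * np.outer(delta_k, h)  # w_jk update: factor times inner activation
    V -= lr * np.outer(delta_j, x)  # v_ij update: factor times input activation
    return partial_error(y, t)

# Toy training set (XOR) and a randomly chosen initial configuration w(0).
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
V = rng.normal(0.0, 1.0, (4, 2))
W = rng.normal(0.0, 1.0, (1, 4))

# Total network error E(w) = sum_l E_l(w), cf. eq. (3.2), before and after training.
E0 = sum(partial_error(forward(x, V, W)[1], t) for x, t in zip(X, T))
for epoch in range(5000):
    E_final = sum(backprop_step(x, t, V, W) for x, t in zip(X, T))
print(E0, E_final)
```

Descending the error surface from the random start w(0) is exactly the gradient method described above: after training, the total error E(w) should be well below its initial value.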