
SNNS (Stuttgart Neural Network Simulator)
is a software simulator for neural networks on Unix workstations developed at the Institute for Parallel and Distributed High Performance Systems (IPVR) at the University of Stuttgart. The goal of the SNNS project is to create an efficient and flexible simulation environment for research on and application of neural nets.

http://en.wikipedia.org/wiki/SNNS

Petron, E. 1999. Stuttgart Neural Network Simulator: Exploring connectionism and machine learning with SNNS. Linux J. 1999, 63es (Jul. 1999), 2.

Neural Network Toolbox™ extends MATLAB® with tools for designing, implementing, visualizing, and simulating neural networks.



282p

6.1 Introduction

LMS (least mean squares) algorithm
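For reference, the LMS (Widrow-Hoff) update for a single linear unit, which backpropagation generalizes (notation assumed here, not from the note): with output z = \mathbf{w}^t \mathbf{x} and target t,

\mathbf{w} \leftarrow \mathbf{w} + \eta\,(t - \mathbf{w}^t \mathbf{x})\,\mathbf{x}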

multilayer neural networks / multilayer perceptrons
The parameters governing the nonlinear mapping are learned at the same time as those governing the linear discriminant.

http://en.wikipedia.org/wiki/Multilayer_perceptron

http://en.wikipedia.org/wiki/Neural_network

the number of layers of units
(e.g., what is called a two-layer network when counting layers of units is a single-layer network when counting layers of weights)

Multilayer neural networks implement linear discriminants, but in a space where the inputs have been mapped nonlinearly.
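In symbols (a sketch; f is the hidden-unit nonlinearity): each output is linear in the hidden features, which are themselves nonlinear in the input,

g_k(\mathbf{x}) = \sum_j w_{kj}\, y_j(\mathbf{x}) + w_{k0}, \qquad y_j(\mathbf{x}) = f(\mathbf{w}_j^t \mathbf{x})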


backpropagation algorithm / generalized delta rule
: a natural extension of the LMS algorithm
- an intuitive graphical representation and simplicity of model design
- conceptual and algorithmic simplicity

network architecture / topology
- neural net classification
- heuristic model selection (through choices in the number of hidden layers, units, feedback connections, and so on)

regularization
= complexity adjustment
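A standard concrete instance (an example, not spelled out in the note) is a penalty term added to the training error, e.g. weight decay:

J_{reg}(\mathbf{w}) = J(\mathbf{w}) + \lambda\, \mathbf{w}^t \mathbf{w}

where \lambda trades off fit against model complexity.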



284p

6.2 Feedforward Operation and Classification

 

http://en.wikipedia.org/wiki/Artificial_neural_network

bias unit
the function of units : "neurons"
the input units : the components of a feature vector
the signals emitted by output units : the values of the discriminant functions used for classification
each hidden unit : the weighted sum of its inputs

-> net activation : the inner product of the inputs with the weights at the hidden unit
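In the usual notation (i indexes the d inputs, j the hidden units, w_{j0} is the bias weight):

net_j = \sum_{i=1}^{d} x_i w_{ji} + w_{j0} = \mathbf{w}_j^t \mathbf{x}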

the input-to-hidden layer weights : "synapses" -> "synaptic weights"

activation function : "nonlinearity" of a unit

 

"Each hidden unit emits an output that is a nonlinear function of its activation."

"Each ouput unit computes its net activation based on the hidden unit signals."


http://en.wikipedia.org/wiki/Feedforward_neural_network


286p
6.2.1 General Feedforward Operation

"Given sufficient number of hidden units of a general type, any function can be so represented." -> expressive power

287p
6.2.2 Expressive Power of Multilayer Networks

"Any desired function can be implemented by a three-layer network."
-> but the practical problems of designing and training such networks remain
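A minimal sketch of the feedforward pass of such a three-layer (d - n_H - c) network, assuming NumPy and a logistic sigmoid (the function names and weight layout are illustrative, not from the book):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminants(x, W1, W2):
    # W1: (nH, d+1), W2: (c, nH+1); the last column of each holds the bias weight
    y = sigmoid(W1 @ np.append(x, 1.0))     # hidden features y_j, nonlinear in x
    return sigmoid(W2 @ np.append(y, 1.0))  # output discriminants z_k = g_k(x)

# classification: pick the class with the largest discriminant,
# e.g. label = int(np.argmax(discriminants(x, W1, W2)))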


288p
6.3 Backpropagation Algorithm

http://en.wikipedia.org/wiki/Backpropagation

- the problem of setting the weights (based on training patterns and the desired output)
- supervised training of multilayer neural networks
- the natural extension of the LMS algorithm for linear systems
- based on gradient descent

credit assignment problem
: no explicit teacher to state what the hidden unit's output should be

=> the power of backpropagation: it calculates an effective error for each hidden unit, and thus derives a learning rule for the input-to-hidden weights
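A minimal sketch of one stochastic backpropagation step under these assumptions: NumPy, logistic sigmoid at both layers, squared-error criterion (the function name and weight layout are illustrative, not from the book):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, W1, W2, eta=0.1):
    # forward pass; last column of W1 (nH, d+1) and W2 (c, nH+1) is the bias
    x_aug = np.append(x, 1.0)
    y = sigmoid(W1 @ x_aug)              # hidden outputs
    y_aug = np.append(y, 1.0)
    z = sigmoid(W2 @ y_aug)              # network outputs

    # output sensitivities: delta_k = (t_k - z_k) f'(net_k), with f' = z(1 - z)
    delta_o = (t - z) * z * (1.0 - z)
    # hidden sensitivities: the "effective error" backpropagated through W2
    delta_h = y * (1.0 - y) * (W2[:, :-1].T @ delta_o)

    # gradient-descent updates: delta_w = eta * sensitivity * input signal
    W2 += eta * np.outer(delta_o, y_aug)
    W1 += eta * np.outer(delta_h, x_aug)
    return 0.5 * np.sum((t - z) ** 2)    # squared error, for monitoring

Iterating this over randomly drawn training patterns gives stochastic training, one of the protocols of Sec. 6.3.2.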

sigmoid
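e.g. the logistic form

f(net) = \frac{1}{1 + e^{-net}}

or an antisymmetric sigmoid such as f(net) = a \tanh(b \cdot net). Backpropagation itself only requires f to be differentiable; Sec. 6.8.1 discusses the further desiderata (smooth, monotonic, saturating).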


6.3.1 Network Learning

The final signals emitted by the network are used as discriminant functions for classification. During network training, these output signals are compared with a teaching or target vector t, and any difference is used in training the weights throughout the network.

training error -> LMS algorithm
gradient descent -> the weights are changed in a direction that will reduce the error
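In symbols (squared-error criterion over the c outputs, \eta the learning rate):

J(\mathbf{w}) = \frac{1}{2} \sum_{k=1}^{c} (t_k - z_k)^2 = \frac{1}{2}\, \| \mathbf{t} - \mathbf{z} \|^2, \qquad \Delta \mathbf{w} = -\eta\, \frac{\partial J}{\partial \mathbf{w}}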


6.3.2 Training Protocol


6.3.3 Learning Curves

6.4 Error Surfaces




6.8.1 Activation Function

posted by maetel