2010. 12. 11. 01:58 Computer Vision
Michael I. Jordan, "Generic constraints on underspecified target trajectories," Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1989, pp. 217-225.
http://dx.doi.org/10.1109/IJCNN.1989.118584


informed by Prof. Ham

cf. Michael I. Jordan



Introduction

connectionist networks

feedforward controller

forward model


activation patterns (in a network)

motor learning

interpretation


visible units & hidden units

The output units of such a network are hidden with respect to learning and yet are visible given their direct connection to the environment.

task space 
articulatory space



Forward models of the environment


The forward modeling approach assumes that the solution to this minimization problem is based on the computation of a gradient.
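Roughly: the controller's action is adjusted by gradient descent on the task-space error, with the gradient supplied by a learned forward model of the environment rather than by the environment itself. A minimal numpy sketch of that idea (not Jordan's connectionist formulation; the plant, the cubic-polynomial forward model, the target, and the step size are all placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    """The real environment (unknown to the learner)."""
    return np.sin(u) + 0.5 * u

# Stage 1: learn a forward model of the plant from (action, outcome) pairs.
# Here a cubic polynomial fit stands in for a connectionist forward model.
u_samples = rng.uniform(-2.0, 2.0, 200)
y_samples = plant(u_samples)
fwd = np.poly1d(np.polyfit(u_samples, y_samples, deg=3))   # y_hat = fwd(u)
dfwd = fwd.deriv()                                          # gradient d y_hat / d u

# Stage 2: adjust the action by gradient descent on the task-space error,
# using the forward model (not the plant) to supply the gradient.
y_target = 0.8      # desired outcome in task space
u = 0.0             # initial action
for _ in range(200):
    err = fwd(u) - y_target
    u -= 0.1 * err * dfwd(u)

print(f"action u = {u:.3f}, plant(u) = {plant(u):.3f}, target = {y_target}")
```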


posted by maetel
2010. 12. 9. 19:00 Footmarks
Seminar, Dept. of Mathematics, Sogang University: Neural Network Approximation
Speaker: Prof. Namwoo Ham (University of Incheon)

2010-12-09 (Thu) 16:30-17:30 @R1418


Prof. Ham's main areas: numerical analysis, applied mathematics, analysis...


> Approximation Theory

- To construct a model for an input/output process
x -> actual process -> f(x)
x -> computed model -> P_f(x)   (a polynomial)

- Error
Intrinsic error <- from the model design
Noise error <- from observation


> Density Problem (determines whether approximation is achievable)
E_n(f) = inf { ||f - p|| : p ∈ P_n }
To decide whether E_n(f) -> 0 as n -> infinity
e.g. the intermediate value theorem
the question of how large n must be


> Complexity Problem
the rate at which E_n(f) -> 0 (when the target function f(x) is generally unknown)
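A small numpy sketch of both questions, under the assumption that P_n is the space of polynomials of degree ≤ n and that a least-squares fit on a fine grid stands in for the best approximation: it estimates E_n(f) for f(x) = |x| as n grows, showing both that the error goes to 0 (density) and how slowly (complexity).

```python
import numpy as np

# Estimate E_n(f) = inf { ||f - p|| : p in P_n } for f(x) = |x| on [-1, 1],
# with a least-squares polynomial fit on a fine grid as a stand-in for the
# best approximation, and watch how fast the error goes to 0 as n grows.
x = np.linspace(-1.0, 1.0, 2001)
f = np.abs(x)

for n in (1, 2, 4, 8, 16, 32):
    p = np.polynomial.Polynomial.fit(x, f, deg=n)
    err = np.max(np.abs(f - p(x)))          # sup-norm error on the grid
    print(f"n = {n:2d}   E_n(f) ~ {err:.4f}")
```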


* Theory of Best Approximation




ref. G. G. Lorentz, Approximation of Functions


> Brief History of NN
- McCulloch & Pitts, 1943
- Rosenblatt, 1958 "perceptron"
- Minsky & Papert, 1969 "limitation of perceptron"
- Rumelhart, 1986 "hidden layer" (<- the speaker's specialty)


> Neural Network (with One Hidden Layer)

x ↦ Σ_{i=1}^{n} c_i σ(a_i x + b_i)
a_i: weights, b_i: thresholds or biases
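A minimal numpy sketch of that one-hidden-layer form, under simplifying assumptions not from the talk: the inner weights a_i and biases b_i are fixed at random and only the outer coefficients c_i are fitted by least squares, to show the network approximating a continuous target function.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))      # sigmoidal activation

# Approximate f(x) = sin(pi x) on [-1, 1] with  x -> sum_i c_i * sigma(a_i x + b_i).
n = 30
x = np.linspace(-1.0, 1.0, 400)
f = np.sin(np.pi * x)

a = rng.normal(0.0, 4.0, n)                     # weights a_i (fixed at random here)
b = rng.uniform(-4.0, 4.0, n)                   # thresholds / biases b_i (also fixed)
H = sigma(np.outer(x, a) + b)                   # hidden-unit outputs, shape (400, n)
c, *_ = np.linalg.lstsq(H, f, rcond=None)       # fit only the outer coefficients c_i

print("max abs error:", np.max(np.abs(H @ c - f)))
```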


algebraic polynomials -- approximate --> any continuous function


> Complexity Result


* Simultaneous Approximation  (cf. the Whitney extension theorem)

ref. Constructive Function Theory



  


posted by maetel

SNNS (Stuttgart Neural Network Simulator) is a software simulator for neural networks on Unix workstations, developed at the Institute for Parallel and Distributed High Performance Systems (IPVR) at the University of Stuttgart. The goal of the SNNS project is to create an efficient and flexible simulation environment for research on and application of neural nets.

http://en.wikipedia.org/wiki/SNNS

Petron, E. 1999. Stuttgart Neural Network Simulator: Exploring connectionism and machine learning with SNNS. Linux J. 1999, 63es (Jul. 1999), 2.

Neural Network Toolbox™ extends MATLAB® with tools for designing, implementing, visualizing, and simulating neural networks.



282p

6.1 Introduction

LMS (Least mean squares) algorithms

multilayer neural networks / multilayer Perceptrons
The parameters governing the nonlinear mapping are learned at the same time as those governing the linear discriminant.

http://en.wikipedia.org/wiki/Multilayer_perceptron

http://en.wikipedia.org/wiki/Neural_network

the number of layers of units
(e.g. what one counting convention calls a two-layer network another calls a single-layer network, depending on whether the input units count as a layer)

Multilayer neural networks implement linear discriminants, but in a space where the inputs have been mapped nonlinearly.


backpropagation algorithm / generalized delta rule
: a natural extension of the LMS algorithm
- the intuitive graphical representation & the simplicity of design of models
- the conceptual and algorithmic simplicity

network architecture / topology
- neural net classification
- heuristic model selection (through choices in the number of hidden layers, units, feedback connections, and so on)

regularization
= complexity adjustment



284p

6.2 Feedforward Operation and Classification

 

http://en.wikipedia.org/wiki/Artificial_neural_network

bias unit
the function of units : "neurons"
the input units : the components of a feature vector
signals emitted by output units : the values of the discriminant functions used for classification
hidden units : the weighted sum of its inputs

-> net activation : the inner product of the inputs with the weights at the hidden unit

the input-to-hidden layer weights : "synapses" -> "synaptic weights"

activation function : "nonlinearity" of a unit

 

"Each hidden unit emits an output that is a nonlinear function of its activation."

"Each ouput unit computes its net activation based on the hidden unit signals."


http://en.wikipedia.org/wiki/Feedforward_neural_network
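A small numpy sketch of the feedforward operation described above (layer sizes and weights are arbitrary placeholders): each hidden unit forms its net activation as the inner product of the augmented input with its weights and emits a nonlinear (sigmoidal) function of that activation, and each output unit then computes its net activation from the hidden-unit signals.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
d, n_H, c = 3, 5, 2                    # input, hidden, output dimensions (arbitrary)
W_xh = rng.normal(size=(n_H, d + 1))   # input-to-hidden weights; last column is the bias unit
W_hy = rng.normal(size=(c, n_H + 1))   # hidden-to-output weights; last column is the bias unit

def forward(x):
    x_aug = np.append(x, 1.0)           # append the bias unit
    net_h = W_xh @ x_aug                # net activation: inner product of inputs and weights
    y = sigmoid(net_h)                  # each hidden unit emits a nonlinear function of its activation
    net_o = W_hy @ np.append(y, 1.0)    # each output unit computes its net activation from hidden signals
    return net_o                        # values of the discriminant functions

print(forward(np.array([0.2, -1.0, 0.5])))
```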


286p
6.2.1 General Feedforward Operation

"Given sufficient number of hidden units of a general type, any function can be so represented." -> expressive power

287p
6.2.2 Expressive Power of Multilayer Networks

"Any desired function can be implemented by a three-layer network."
-> but this leaves open the problems of designing and training such networks


288p
6.3 Backpropagation Algorithm

http://en.wikipedia.org/wiki/Backpropagation

- the problem of setting the weights (based on training patterns and the desired output)
- supervised training of multilayer neural networks
- the natural extension of the LMS algorithm for linear systems
- based on gradient descent

credit assignment problem
: no explicit teacher to state what the hidden unit's output should be

=> The power of backpropagation is that it calculates an effective error for each hidden unit, and thus derives a learning rule for the input-to-hidden weights.

sigmoid


6.3.1 Network Learning

The final signals emitted by the network are used as discriminant functions for classification. During network training, these output signals are compared with a teaching or target vector t, and any difference is used in training the weights throughout the network.

training error -> LMS algorithm
gradient descent -> the weights are changed in a direction that will reduce the error
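A minimal numpy sketch of one such gradient-descent step for a single hidden layer, with sigmoid hidden units, linear output units, and squared error against a target vector t (layer sizes, initial weights, and the learning rate eta are placeholder assumptions):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
d, n_H, c = 3, 5, 2                                # input, hidden, output dimensions
W_xh = rng.normal(scale=0.5, size=(n_H, d + 1))    # input-to-hidden weights (+ bias column)
W_hy = rng.normal(scale=0.5, size=(c, n_H + 1))    # hidden-to-output weights (+ bias column)
eta = 0.1                                          # learning rate (placeholder)

def backprop_step(x, t):
    """One gradient-descent step on J = 1/2 * ||t - z||^2 for a single pattern."""
    global W_xh, W_hy
    x_aug = np.append(x, 1.0)
    y = sigmoid(W_xh @ x_aug)                      # hidden-unit outputs
    y_aug = np.append(y, 1.0)
    z = W_hy @ y_aug                               # linear output units
    delta_o = t - z                                # output-unit sensitivity
    # effective error for each hidden unit, obtained by propagating delta_o back
    delta_h = (W_hy[:, :n_H].T @ delta_o) * y * (1.0 - y)
    W_hy += eta * np.outer(delta_o, y_aug)         # hidden-to-output update
    W_xh += eta * np.outer(delta_h, x_aug)         # input-to-hidden update
    return 0.5 * np.sum((t - z) ** 2)

x, t = np.array([0.2, -1.0, 0.5]), np.array([1.0, 0.0])
for _ in range(5):
    print(backprop_step(x, t))                     # the error should decrease
```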


6.3.2 Training Protocol


6.3.3 Learning Curves

6.4 Error Surfaces




6.8.1 Activation Function

posted by maetel