2009. 4. 12. 22:02 Computer Vision
Filtering is refining the knowledge we have about the state of a system from the information provided by the measurements we make on it.


Filtering problem --> system dynamics + measurements of the system's current state
--> evolution equation (process noise) + measurement equation (measurement noise)
: each part is either deterministic (certain) or stochastic (uncertain)
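The evolution/measurement split above can be sketched as a simulation. A minimal sketch, assuming an illustrative 1-D nonlinear model (the functions `f`, `h` and the noise levels are my own example choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D system for illustration only.
def f(x):          # evolution (process) model
    return 0.9 * x + 1.0

def h(x):          # measurement model
    return x ** 2 / 20.0

q, r = 0.5, 0.2    # assumed process / measurement noise std devs
x = 0.0
states, measurements = [], []
for k in range(50):
    x = f(x) + rng.normal(0.0, q)   # evolution equation + process noise
    z = h(x) + rng.normal(0.0, r)   # measurement equation + measurement noise
    states.append(x)
    measurements.append(z)

print(len(states), len(measurements))
```

The filter's job is then the inverse problem: recover the hidden `states` given only `measurements`.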



Perturbations

Perturbation theory: mathematical methods that give approximate solutions to problems that cannot be solved exactly


evolution equation
> discrete-time, state-space formalism


Markovian evolution

http://en.wikipedia.org/wiki/Markov_process
A Markov process is a mathematical model for the random evolution of a memoryless system, that is, one for which the likelihood of a given future state, at any given moment, depends only on its present state, and not on any past states.
In a common description, a stochastic process with the Markov property, or memorylessness, is one for which, conditional on the present state of the system, its future and past are independent.
Often, the term Markov chain is used to mean a Markov process with a discrete (finite or countable) state space. Usually a Markov chain is defined for a discrete set of times (i.e. a discrete-time Markov chain).

http://en.wikipedia.org/wiki/Markov_chain
A Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that, given the present state, the future and past states are independent. Formally,
\Pr(X_{n+1}=x \mid X_n=x_n, \ldots, X_1=x_1) = \Pr(X_{n+1}=x \mid X_n=x_n)

The possible values of Xi form a countable set S called the state space of the chain.
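The Markov property is easy to see in code: sampling the next state uses only the current state's row of the transition matrix. A small sketch with an assumed two-state chain (the matrix values are my own example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov chain (state space S = {0, 1}) with an assumed transition matrix:
# P[i, j] = Pr(X_{n+1} = j | X_n = i) -- the next state depends only on the present one.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

x = 0
chain = [x]
for _ in range(10_000):
    x = rng.choice(2, p=P[x])   # sample next state from row x only (memorylessness)
    chain.append(x)

# Empirical state frequencies; solving pi = pi P analytically gives (0.8, 0.2).
pi_hat = np.bincount(chain, minlength=2) / len(chain)
print(pi_hat)
```

The long-run frequencies should approach the stationary distribution (0.8, 0.2) for this matrix.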


http://en.wikipedia.org/wiki/State_space
In computer science, a state space is a description of a configuration of discrete states used as a simple model of machines.
The state space is what state space search searches in. Graph theory is helpful in understanding and reasoning about state spaces.

http://en.wikipedia.org/wiki/State_space_(controls)
In control engineering, a state space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the last one can be done when the dynamical system is linear and time invariant). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs.

http://en.wikipedia.org/wiki/Probability_space
A probability space, in probability theory, is the conventional mathematical model of randomness. This mathematical object, sometimes called also probability triple, formalizes three interrelated ideas by three mathematical notions. First, a sample point (called also elementary event), --- something to be chosen at random (outcome of experiment, state of nature, possibility etc.) Second, an event, --- something that will occur or not, depending on the chosen elementary event. Third, the probability of this event. The definition (see below) was introduced by Kolmogorov in the 1930s.


uncertain control action
: known control actions & random perturbations


measurement equation

process noise
: nature of the perturbations entering our system

measurement noise

initial prior
: our previous knowledge about the initial state 


The filtering problem:
Find at each time k the 'best' estimate of the state of the system conditioned on the whole history of measurements (from the initial time up to the current time k).

The solution:
Compute the posterior density of the system's state conditioned on the set of measurements.

> estimator
i) maximum a posteriori (MAP)
ii) minimum variance = conditional mean


data mining
post-processing


incremental filtering
: process only a bounded amount of information at each time step
--> recursive formulations (prediction step + correction step)


1) prediction step (time update)
: propagate the posterior density at time k-1 into a prior density at time k, using knowledge of the system dynamics and the perturbations

Chapman-Kolmogorov equation

http://en.wikipedia.org/wiki/Chapman-Kolmogorov_equation
an identity relating the joint probability distributions of different sets of coordinates on a stochastic process

transition density

"The prediction is a convolution of pdfs of the posterior at previous time and the perturbation on the dynamics."

linear-Gaussian --> Kalman filter

http://en.wikipedia.org/wiki/Gaussian_process



2) correction step (measurement update)
: refine the predicted prior into a corrected posterior, using the evidence provided by the measurements

Bayes rule
http://en.wikipedia.org/wiki/Bayes_rule

"The correction is a product of pdfs of the prior from the prediction step and the observed evidence."

additive observation noise
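The "correction as product" statement can be checked the same way: for additive Gaussian observation noise the posterior is the prior times the likelihood, renormalized. A grid sketch with assumed parameters (prior N(1, 1), measurement z = 2 with noise std 0.5, observation h(x) = x):

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

dx = 0.01
grid = np.arange(-10.0, 10.0, dx)

prior = gauss(grid, 1.0, 1.0)      # predicted prior at time k (assumed)
z = 2.0                            # observed measurement
r = 0.5                            # assumed measurement-noise std
likelihood = gauss(z, grid, r)     # p(z | x) for z = x + v, v ~ N(0, r^2)

posterior = prior * likelihood     # Bayes rule: product of pdfs ...
posterior /= posterior.sum() * dx  # ... then normalize

mean = (grid * posterior).sum() * dx
print(mean)   # ~1.8: the precision-weighted combination of prior mean 1 and z = 2
```

The result matches the closed-form Gaussian update: (1/1 + 2/0.25) / (1/1 + 1/0.25) = 1.8, which is exactly what the Kalman correction computes in the linear-Gaussian case.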




the Kalman filter

linear Gaussian system



http://en.wikipedia.org/wiki/Kalman_filter

The Kalman filter exploits the dynamics of the target, which govern its time evolution, to remove the effects of the noise and get a good estimate of the location of the target at the present time (filtering), at a future time (prediction), or at a time in the past (interpolation or smoothing).

Kalman filters are based on linear dynamical systems discretised in the time domain. They are modelled on a Markov chain built on linear operators perturbed by Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the visible outputs from the hidden state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the key difference that the hidden state variables take values in a continuous space (as opposed to a discrete state space as in the hidden Markov model). Additionally, the hidden Markov model can represent an arbitrary distribution for the next value of the state variables, in contrast to the Gaussian noise model that is used for the Kalman filter. There is a strong duality between the equations of the Kalman Filter and those of the hidden Markov model.


The Kalman filter model assumes the true state at time k evolves from the state at time k-1 according to

 \textbf{x}_{k} = \textbf{F}_{k} \textbf{x}_{k-1} + \textbf{B}_{k} \textbf{u}_{k} + \textbf{w}_{k}

where

  • Fk is the state transition model which is applied to the previous state xk−1;
  • Bk is the control-input model which is applied to the control vector uk;
  • wk is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Qk.
\textbf{w}_{k} \sim N(0, \textbf{Q}_k)

At time k an observation (or measurement) zk of the true state xk is made according to

\textbf{z}_{k} = \textbf{H}_{k} \textbf{x}_{k} + \textbf{v}_{k}

where Hk is the observation model which maps the true state space into the observed space and vk is the observation noise which is assumed to be zero mean Gaussian white noise with covariance Rk.

\textbf{v}_{k} \sim N(0, \textbf{R}_k)
The initial state, and the noise vectors at each step {x0, w1, ..., wk, v1 ... vk} are all assumed to be mutually independent.


Model underlying the Kalman filter. Circles are vectors, squares are matrices, and stars represent Gaussian noise with the associated covariance matrix at the lower right.
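The two-step recursion (predict, then correct) with the F, H, Q, R notation above fits in a few lines of NumPy. A minimal sketch for an assumed constant-velocity tracking model (dt, Q, R and the control-free setup are my own example choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Constant-velocity model: state x = [position, velocity], noisy position observed.
dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])       # state transition model F_k
H = np.array([[1.0, 0.0]])       # observation model H_k
Q = 0.01 * np.eye(2)             # process-noise covariance Q_k
R = np.array([[1.0]])            # measurement-noise covariance R_k

x_true = np.array([0.0, 1.0])
x_est = np.array([0.0, 0.0])     # initial prior mean
P = 10.0 * np.eye(2)             # initial prior covariance

for k in range(100):
    # simulate the true system: x_k = F x_{k-1} + w_k,  z_k = H x_k + v_k
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # prediction step (time update)
    x_est = F @ x_est
    P = F @ P @ F.T + Q

    # correction step (measurement update)
    y = z - H @ x_est                  # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_est = x_est + K @ y
    P = (np.eye(2) - K @ H) @ P

print(np.abs(x_est - x_true))
```

Even though only the position is measured, the filter recovers the velocity too, because F couples the two state components through the dynamics.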






http://en.wikipedia.org/wiki/Linear_dynamical_system

In a linear dynamical system, the variation of a state vector (an N-dimensional vector denoted \mathbf{x}) equals a constant matrix (denoted \mathbf{A}) multiplied by \mathbf{x}.


\frac{d}{dt} \mathbf{x}(t) = \mathbf{A} \cdot \mathbf{x}(t)

 


\mathbf{x}_{m+1} = \mathbf{A} \cdot \mathbf{x}_{m}
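Iterating the discrete-time map above is just repeated matrix multiplication, and the eigenvalues of A decide stability: if all lie strictly inside the unit circle, the state decays to zero. A tiny sketch with an assumed example matrix:

```python
import numpy as np

# Discrete-time linear system x_{m+1} = A x_m; eigenvalues 0.5 and 0.8
# are inside the unit circle, so iterates shrink toward zero.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])       # assumed example matrix

x = np.array([1.0, 1.0])
for _ in range(50):
    x = A @ x

print(np.abs(x).max() < 1e-4)    # prints: True
```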






http://en.wikipedia.org/wiki/Extended_Kalman_filter

 

Gaussian Sum Filter (GSF)



Particle Filter = sequential Monte Carlo methods (SMC)

http://en.wikipedia.org/wiki/Particle_filter
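A particle filter approximates the same predict/correct recursion with samples instead of a parametric density: propagate each particle through the dynamics (prediction), then weight by the likelihood and resample (correction). A bootstrap (SIR) sketch for an assumed 1-D random walk observed in Gaussian noise (model and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 2000            # number of particles
q, r = 0.5, 0.5     # assumed process / measurement noise std devs

x_true = 0.0
particles = rng.normal(0.0, 1.0, N)   # samples from the initial prior

for k in range(50):
    # simulate the true system and its measurement
    x_true += rng.normal(0.0, q)
    z = x_true + rng.normal(0.0, r)

    # prediction: propagate each particle through the dynamics
    particles = particles + rng.normal(0.0, q, N)

    # correction: weight by the likelihood p(z | particle), then resample
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)

estimate = particles.mean()
print(abs(estimate - x_true))
```

Unlike the Kalman filter, nothing here requires linear dynamics or Gaussian posteriors, which is why particle filters handle the general nonlinear/non-Gaussian case at the cost of Monte Carlo error.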



http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo



http://en.wikipedia.org/wiki/Convolution




 

posted by maetel