2009. 11. 5. 16:12 Computer Vision
Kalman filter practice coding

A simple 1-D example
// 1-D Kalman filter algorithm exercise
// VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <iostream>
using namespace std;

int main (int argc, char * const argv[]) {
   
    double groundtruth[] = {1.0, 2.0, 3.5, 5.0, 7.0, 8.0, 10.0};
    double measurement[] = {1.0, 2.1, 3.2, 5.3, 7.4, 8.1, 9.6};
    double transition_noise = 0.1; // variance of the Gaussian process (transition) noise
    double measurement_noise = 0.3; // variance of the Gaussian measurement noise
   
    double x = 0.0;   // initial state estimate
    double v = 1.0;   // initial velocity (control)
    double cov = 0.5; // initial variance of the state estimate
   
    double x_p, c_p; // prediction of x and cov
    double gain; // Kalman gain
    double x_pre, m; // previous estimate and current measurement
   
    for (int t=0; t<7; t++)
    {
        // prediction
        x_pre = x;
        x_p = x + v;
        c_p = cov + transition_noise;
        m = measurement[t];
        // update
        gain = c_p / (c_p + measurement_noise);
        x = x_p + gain * (m - x_p);
        cov = ( 1 - gain ) * c_p;
        v = x - x_pre; // update the control (velocity) from consecutive estimates

        cout << t << endl;
        cout << "estimation  = " << x << endl;
        cout << "measurement = " << measurement[t] << endl;   
        cout << "groundtruth = " << groundtruth[t] << endl;
    }
    return 0;
}
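For reference, the scalar recursion this loop implements, in the notation of Probabilistic Robotics (R = process noise variance, Q = measurement noise variance, z_t = measurement):

\bar{x}_t = x_{t-1} + v_{t-1}, \qquad \bar{\Sigma}_t = \Sigma_{t-1} + R

K_t = \bar{\Sigma}_t / (\bar{\Sigma}_t + Q)

x_t = \bar{x}_t + K_t (z_t - \bar{x}_t), \qquad \Sigma_t = (1 - K_t)\,\bar{\Sigma}_t

Here the velocity v is re-estimated from consecutive state estimates instead of being a known control, which is a simplification of the standard algorithm.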

Output:
0
estimation  = 1
measurement = 1
groundtruth = 1
1
estimation  = 2.05
measurement = 2.1
groundtruth = 2
2
estimation  = 3.14545
measurement = 3.2
groundtruth = 3.5
3
estimation  = 4.70763
measurement = 5.3
groundtruth = 5
4
estimation  = 6.76291
measurement = 7.4
groundtruth = 7
5
estimation  = 8.50584
measurement = 8.1
groundtruth = 8
6
estimation  = 9.9669
measurement = 9.6
groundtruth = 10


A 2-D exercise
// 2-D Kalman filter algorithm exercise
// lym, VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <OpenCV/OpenCV.h> // matrix operations

#include <iostream>
#include <iomanip>
using namespace std;

int main (int argc, char * const argv[]) {
    int step = 7;
   
    IplImage *iplImg = cvCreateImage(cvSize(150, 150), 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("Kalman-2d", 0);
   
    //ground truth of real states
    double groundtruth[] = {10.0, 20.0, 35, 50.0, 70.0, 80.0, 100.0, //x-value
                            10.0, 20.0, 40.0, 55, 65, 80.0, 90.0}; //y-value
    //measurement of observed states
    double measurement_set[] = {10.0, 21, 32, 53, 74, 81, 96,  //x-value
                            10.0, 19, 42, 56, 66, 78, 88};    //y-value
    //covariance of Gaussian noise to control
//    double transition_noise[] = { 0.1, 0.0, 
//                                  0.0, 0.1 }; 
    CvMat* transition_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(transition_noise, 0, 0, 0.1); //set transition_noise(0,0) to 0.1
    cvmSet(transition_noise, 0, 1, 0.0);
    cvmSet(transition_noise, 1, 0, 0.0);
    cvmSet(transition_noise, 1, 1, 0.1);    
    //covariance of Gaussian noise to measurement
//    double measurement_noise[] = { 0.3, 0.0, 
//                                   0.0, 0.2 };
    CvMat* measurement_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(measurement_noise, 0, 0, 0.3); //set measurement_noise(0,0) to 0.3
    cvmSet(measurement_noise, 0, 1, 0.0);
    cvmSet(measurement_noise, 1, 0, 0.0);
    cvmSet(measurement_noise, 1, 1, 0.2);        
   
    CvMat* state = cvCreateMat(2, 1, CV_64FC1);    //states to be estimated   
    CvMat* state_p = cvCreateMat(2, 1, CV_64FC1);  //states to be predicted
    CvMat* velocity = cvCreateMat(2, 1, CV_64FC1); //motion controls to change states
    CvMat* measurement = cvCreateMat(2, 1, CV_64FC1); //measurement of states
   
    CvMat* cov = cvCreateMat(2, 2, CV_64FC1);     //covariance to be updated
    CvMat* cov_p = cvCreateMat(2, 2, CV_64FC1); //covariance to be predicted
    CvMat* gain = cvCreateMat(2, 2, CV_64FC1);     //Kalman gain to be updated
   
    // temporary matrices to be used for estimation
    CvMat* Kalman = cvCreateMat(2, 2, CV_64FC1); // innovation covariance, cov_p + measurement_noise
    CvMat* invKalman = cvCreateMat(2, 2, CV_64FC1); // its inverse

    CvMat* I = cvCreateMat(2,2,CV_64FC1);
    cvSetIdentity(I); // set I to the 2x2 identity matrix
//  cvSetIdentity (I, cvRealScalar (1));   
    // check matrix
    for(int i=0; i<2; i++)
    {
        for(int j=0; j<2; j++)
        {
            cout << cvmGet(I, i, j) << "\t";           
        }
        cout << endl;
    }
 
    // set the initial state
    cvmSet(state, 0, 0, 0.0); //x-value //set state(0,0) to 0.0
    cvmSet(state, 1, 0, 0.0); //y-value //set state(1,0) to 0.0
    // set the initital covariance of state
    cvmSet(cov, 0, 0, 0.5); //set cov(0,0) to 0.5
    cvmSet(cov, 0, 1, 0.0); //set cov(0,1) to 0.0
    cvmSet(cov, 1, 0, 0.0); //set cov(1,0) to 0.0
    cvmSet(cov, 1, 1, 0.4); //set cov(1,1) to 0.4   
    // set the initial control
    cvmSet(velocity, 0, 0, 10.0); //x-direction //set velocity(0,0) to 10.0
    cvmSet(velocity, 1, 0, 10.0); //y-direction //set velocity(1,0) to 10.0
   
    for (int t=0; t<step; t++)
    {
        // retain the current state
        CvMat* state_out = cvCreateMat(2, 1, CV_64FC1); // temporary vector   
        cvmSet(state_out, 0, 0, cvmGet(state,0,0)); 
        cvmSet(state_out, 1, 0, cvmGet(state,1,0));        
        // predict
        cvAdd(state, velocity, state_p); // state + velocity -> state_p
        cvAdd(cov, transition_noise, cov_p); // cov + transition_noise -> cov_p
        // measure
        cvmSet(measurement, 0, 0, measurement_set[t]); //x-value
        cvmSet(measurement, 1, 0, measurement_set[step+t]); //y-value
        // estimate Kalman gain
        cvAdd(cov_p, measurement_noise, Kalman); // cov_p + measure_noise -> Kalman
        cvInvert(Kalman, invKalman); // inv(Kalman) -> invKalman
        cvMatMul(cov_p, invKalman, gain); // cov_p * invKalman -> gain       
        // update the state
        CvMat* err = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvSub(measurement, state_p, err); // measurement - state_p -> err
        CvMat* adjust = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvMatMul(gain, err, adjust); // gain*err -> adjust
        cvAdd(state_p, adjust, state); // state_p + adjust -> state
        // update the covariance of states
        CvMat* cov_up = cvCreateMat(2, 2, CV_64FC1); // temporary matrix   
        cvSub(I, gain, cov_up); // I - gain -> cov_up       
        cvMatMul(cov_up, cov_p, cov); // cov_up *cov_p -> cov
        // update the control
        cvSub(state, state_out, velocity); // state - state_out -> velocity
       
        // result in console
        cout << "step " << t << endl;
        cout << "estimation  = " << cvmGet(state,0,0) << setw(10) << cvmGet(state,1,0) << endl;
        cout << "measurement = " << cvmGet(measurement,0,0) << setw(10) << cvmGet(measurement,1,0) << endl;   
        cout << "groundtruth = " << groundtruth[t] << setw(10) << groundtruth[t+step] << endl;
        // result in image
        cvCircle(iplImg, cvPoint(cvRound(groundtruth[t]), cvRound(groundtruth[t + step])), 3, cvScalarAll(255)); // ground truth in white
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(measurement,0,0)), cvRound(cvmGet(measurement,1,0))), 2, cvScalar(255, 0, 0)); // measurement in blue (BGR)
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0))), 2, cvScalar(0, 0, 255)); // estimate in red (BGR)
       
        cvShowImage("Kalman-2d", iplImg);
        cvWaitKey(500);

        // release the temporaries allocated in this iteration
        cvReleaseMat(&state_out);
        cvReleaseMat(&err);
        cvReleaseMat(&adjust);
        cvReleaseMat(&cov_up);
    }
    cvWaitKey();   
   
    return 0;
}
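This is the same recursion in matrix form, with both the state-transition and measurement matrices taken to be the identity, so the general update from Probabilistic Robotics reduces to:

\bar{\mu}_t = \mu_{t-1} + u_t, \qquad \bar{\Sigma}_t = \Sigma_{t-1} + R

K_t = \bar{\Sigma}_t (\bar{\Sigma}_t + Q)^{-1}

\mu_t = \bar{\mu}_t + K_t (z_t - \bar{\mu}_t), \qquad \Sigma_t = (I - K_t)\,\bar{\Sigma}_t

where u_t is the velocity control, R the transition noise covariance, and Q the measurement noise covariance.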


Output:


step 0
estimation  = 10        10
measurement = 10        10
groundtruth = 10        10
step 1
estimation  = 20.5   19.6263
measurement = 21        19
groundtruth = 20        20
step 2
estimation  = 31.4545   35.5006
measurement = 32        42
groundtruth = 35        40
step 3
estimation  = 47.0763   53.7411
measurement = 53        56
groundtruth = 50        55
step 4
estimation  = 67.6291   69.0154
measurement = 74        66
groundtruth = 70        65
step 5
estimation  = 85.0584   81.1424
measurement = 81        78
groundtruth = 80        80
step 6
estimation  = 99.669    90.634
measurement = 96        88
groundtruth = 100        90



posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 9 Estimation of a Random Variable

prediction
A predictor uses random variables produced in earlier subexperiments to estimate a random variable produced by a future subexperiment.


9.1 Optimum Estimation Given Another Random Variable


The estimate of X that produces the minimum mean square error is the expected value (or conditional expected value) of X calculated with the probability model that incorporates the available information.

Blind estimation of X

Estimation of X given an event

Minimum Mean Square Estimation of X given Y
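
In symbols (a summary of the chapter's results): with no observation, the minimum mean square error estimate is the blind estimate \hat{x}_B = E[X]; given that an event A occurred, it is \hat{x}_A = E[X \mid A]; and given an observation Y = y, it is the conditional expectation

\hat{x}_M(y) = E[X \mid Y = y]

which minimizes the conditional mean square error E[(X - \hat{x})^2 \mid Y = y].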



9.2 Linear Estimation of X given Y
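
The standard result of this section: among all linear estimators \hat{X} = aY + b, the one minimizing the mean square error is

\hat{X}_L(Y) = \mu_X + \rho_{X,Y} \frac{\sigma_X}{\sigma_Y} (Y - \mu_Y)

with minimum error e^*_L = \sigma_X^2 (1 - \rho_{X,Y}^2), where \mu and \sigma denote means and standard deviations and \rho_{X,Y} the correlation coefficient.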


9.3 MAP and ML Estimation
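
The two point estimates, given an observation Y = y:

\hat{x}_{MAP}(y) = \arg\max_x f_{X|Y}(x|y) = \arg\max_x f_{Y|X}(y|x)\, f_X(x), \qquad \hat{x}_{ML}(y) = \arg\max_x f_{Y|X}(y|x)

so ML is MAP with the prior f_X(x) dropped (or taken to be flat).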


9.4 Linear Estimation of Random Variables from Random Vectors




posted by maetel
Yates and Goodman
Chapter 7 Parameter Estimation Using the Sample Mean

statistical inference

http://en.wikipedia.org/wiki/Statistical_inference
Statistical inference or statistical induction comprises the use of statistics and random sampling to make inferences concerning some unknown aspect of a population.


7.1  Sample Mean: Expected Value and Variance

The sample mean converges to a constant as the number of repetitions of an experiment increases.

Although the result of a single experiment is unpredictable, predictable patterns emerge as we collect more and more data.


sample mean
: the numerical average of the observations, i.e. the sum of the sample values divided by the number of trials
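
In symbols, for independent, identically distributed samples X_1, ..., X_n:

M_n(X) = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad E[M_n(X)] = E[X], \qquad Var[M_n(X)] = \frac{Var[X]}{n}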


7.2 Deviation of a Random Variable from the Expected Value

Markov Inequality
: an upper bound on the probability that a sample value of a nonnegative random variable exceeds the expected value by any arbitrary factor

http://en.wikipedia.org/wiki/Markov_inequality
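
In symbols, for a nonnegative random variable X and any c > 0:

P(X \ge c) \le \frac{E[X]}{c}

e.g. X exceeds 10\,E[X] with probability at most 1/10.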


Chebyshev Inequality 
: The probability of a large deviation from the mean is inversely proportional to the square of the deviation

http://en.wikipedia.org/wiki/Chebyshev_inequality
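
In symbols, for any c > 0:

P(|X - \mu_X| \ge c) \le \frac{Var[X]}{c^2}

It follows from applying the Markov inequality to the nonnegative random variable (X - \mu_X)^2.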


7.3 Point Estimates of Model Parameters

http://en.wikipedia.org/wiki/Estimation_theory
estimating the values of parameters based on measured/empirical data. The parameters describe an underlying physical setting in such a way that the value of the parameters affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

http://en.wikipedia.org/wiki/Point_estimation
the use of sample data to calculate a single value (known as a statistic) which is to serve as a "best guess" for an unknown (fixed or random) population parameter


relative frequency (of an event)

point estimates :
bias
consistency
accuracy

consistent estimator
: a sequence of estimates that converges in probability to a parameter of the probability model



The sample mean is an unbiased, consistent estimator of the expected value of a random variable.

The sample variance is a biased estimate of the variance of a random variable.

mean square error
: expected squared difference between an estimate and the estimated parameter

 The standard error of the estimate of the expected value converges to zero as n grows without bound.
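
In symbols: the sample variance

V_n(X) = \frac{1}{n} \sum_{i=1}^{n} (X_i - M_n(X))^2, \qquad E[V_n(X)] = \frac{n-1}{n} Var[X]

is biased (scaling by n/(n-1) removes the bias), while the standard error of the sample mean, \sqrt{Var[M_n(X)]} = \sigma_X / \sqrt{n}, vanishes as n \to \infty.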

http://en.wikipedia.org/wiki/Law_of_large_numbers


7.4 Confidence Intervals

accuracy of estimate

confidence interval
: an interval within which the difference between a random variable and its expected value falls with high probability

confidence coefficient
: probability that a sample value of the random variable will be within the confidence interval
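
Chebyshev's inequality ties the two notions together without any distributional assumption:

P(|M_n(X) - \mu_X| \le c) \ge 1 - \frac{\sigma_X^2}{n c^2}

so the interval of half-width c around M_n(X) has confidence coefficient at least 1 - \sigma_X^2/(n c^2).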

posted by maetel



3.2 Maximum-Likelihood Estimation

maximum-likelihood

http://en.wikipedia.org/wiki/Maximum_likelihood

a popular statistical method used for fitting a mathematical model to some data. The modeling of real world data using estimation by maximum likelihood offers a way of tuning the free parameters of the model to provide a good fit.
The method was pioneered by geneticist and statistician Sir R. A. Fisher between 1912 and 1922.

For a fixed set of data and underlying probability model, maximum likelihood picks the values of the model parameters that make the data "more likely" than any other values of the parameters would make them.
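
For independent samples D = {x_1, ..., x_n} drawn from p(x \mid \theta):

\hat{\theta}_{ML} = \arg\max_\theta \prod_{k=1}^{n} p(x_k \mid \theta) = \arg\max_\theta \sum_{k=1}^{n} \ln p(x_k \mid \theta)

maximizing the log-likelihood is equivalent and usually more convenient.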

 

http://www.aistudy.com/math/likelihood.htm
The likelihood of a hypothesis H is the degree to which a trial's observed result (evidence) E would be expected if the hypothesis H were true. In other words, given an outcome E, the likelihood is a measure for evaluating the various hypotheses under which that outcome could have arisen.

In Bayes' theorem, commonly used to assess uncertainty in expert systems, a posterior probability is obtained by combining a prior probability with new evidence. It is difficult to avoid arbitrariness in assigning the prior, but likelihood makes it possible to sidestep that arbitrariness and compute the prior far more easily (전영삼 1993).

If the likelihood of a hypothesis is interpreted as the degree to which the given data support it, it is natural to prefer, among competing hypotheses, the one with maximum likelihood. That is, if the hypothesis concerns a population parameter, the corresponding estimate can be preferred as the most appropriate estimate for that population. Fisher called this the Principle of Maximum Likelihood, and the procedure of finding the most appropriate estimate of a parameter by this principle the Method of Maximum Likelihood (전영삼 1990).



likelihood function

http://en.wikipedia.org/wiki/Likelihood_function

Informally, if "probability" allows us to predict unknown outcomes based on known parameters, then "likelihood" allows us to estimate unknown parameters based on known outcomes.
In a sense, likelihood works backwards from probability: given parameter B, we use the conditional probability P(A|B) to reason about outcome A, and given outcome A, we use the likelihood function L(B|A) to reason about parameter B. This mode of reasoning is formalized in Bayes' theorem:
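
P(B \mid A) = \frac{P(A \mid B)\, P(B)}{P(A)}

where, as a function of B for a fixed outcome A, the likelihood L(B \mid A) is proportional to P(A \mid B).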


probability density function
http://en.wikipedia.org/wiki/Probability_density_function
a function that represents a probability distribution in terms of integrals.


maximum a posteriori (MAP, posterior mode)

http://en.wikipedia.org/wiki/Maximum_a_posteriori
A method to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to Fisher's method of maximum likelihood (ML), but employs an augmented optimization objective which incorporates a prior distribution over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of ML estimation.
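
In symbols, with likelihood f(x \mid \theta) and prior g(\theta):

\hat{\theta}_{MAP} = \arg\max_\theta f(x \mid \theta)\, g(\theta)

with a flat prior the MAP estimate coincides with the ML estimate.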



covariance matrix

http://en.wikipedia.org/wiki/Covariance_matrix

http://mathworld.wolfram.com/Covariance.html
Covariance provides a measure of the strength of the correlation between two or more sets of random variates.
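
In symbols, for a random vector with components X_i and means \mu_i, the covariance matrix \Sigma has entries

\Sigma_{ij} = Cov(X_i, X_j) = E[(X_i - \mu_i)(X_j - \mu_j)]

so the diagonal holds the variances; \Sigma is symmetric and positive semidefinite.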


http://en.wikipedia.org/wiki/Estimation_of_covariance_matrices



3.3 Bayesian Estimation


Bayesian Estimator

http://en.wikipedia.org/wiki/Bayesian_estimation
a Bayes estimator is an estimator or decision rule that maximizes the posterior expected value of a utility function or minimizes the posterior expected value of a loss function (also called posterior expected loss).

i) The parameter vector is considered to be a random variable.
ii) Training data allow us to convert a distribution on this variable into a posterior probability density.
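
In symbols, with prior p(\theta) and training data D = {x_1, ..., x_n}:

p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, d\theta'}, \qquad p(x \mid D) = \int p(x \mid \theta)\, p(\theta \mid D)\, d\theta

so the density of a new sample is obtained by averaging over the posterior rather than by plugging in a single point estimate.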



Monte-Carlo simulation
http://en.wikipedia.org/wiki/Monte_Carlo_method#Monte_Carlo_Simulation_versus_.E2.80.9CWhat_If.E2.80.9D_Scenarios


Dirac delta function
http://en.wikipedia.org/wiki/Dirac_delta_function


expectation-maximization (EM)


http://en.wikipedia.org/wiki/Expectation-maximization_algorithm
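
A one-line sketch of the iteration, with observed data X, latent variables Z, and current parameters \theta^{(t)}:

E-step: Q(\theta \mid \theta^{(t)}) = E_{Z \mid X, \theta^{(t)}}[\ln p(X, Z \mid \theta)]

M-step: \theta^{(t+1)} = \arg\max_\theta Q(\theta \mid \theta^{(t)})

Each iteration does not decrease the log-likelihood \ln p(X \mid \theta).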




3.10 Hidden Markov Model



http://en.wikipedia.org/wiki/Hidden_Markov_model


posted by maetel