750 posts filed under '글타래'

  1. 2010.01.22 Paul Michael Newman "On the Structure and Solution of the Simultaneous Localisation and Map Building Problem"
  2. 2010.01.21 Randall C. Smith and Peter Cheeseman "On the representation and estimation of spatial uncertainty"
  3. 2010.01.18 Lecture 1: The structure of the Zhuangzi "Inner Chapters" (『장자莊子』「내편內篇」)
  4. 2010.01.18 [Lee Jin-yong] Reading the Zhuangzi Inner Chapters closely
  5. 2010.01.15 Kalman filtering for SLAM practice
  6. 2010.01.14 RoSEC 2010 winter school
  7. 2009.12.22 RF Design House (RF engineer community)
  8. 2009.12.02 Joan Solà - 6DOF SLAM toolbox for Matlab
  9. 2009.11.30 open source Bayesian Filtering C++ library (test log)
  10. 2009.11.23 Particle filter for 2D object tracking practice 5
  11. 2009.11.19 MathWorks Korea seminar, November 2009
  12. 2009.11.17 Dan Simon "Kalman Filtering"
  13. 2009.11.16 A. J. Haug "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes" 1
  14. 2009.11.10 Particle filter 연습 1
  15. 2009.11.08 Branislav Kisačanin & Vladimir Pavlović & Thomas S. Huang <Real-Time Vision for Human-Computer Interaction>
  16. 2009.11.05 Kalman filter 연습 1
  17. 2009.10.29 M. Armstrong & A. Zisserman <Robust object tracking>
  18. 2009.10.27 R. L. Thompson et al. <Providing synthetic views for teleoperation using visual pose tracking in multiple cameras> 1
  19. 2009.10.27 C. Harris & C. Stennett, <Rapid - a video rate object tracker>
  20. 2009.10.26 Somkiat Wangsiripitak & David W Murray <Avoiding moving outliers in visual SLAM by tracking moving objects>
  21. 2009.10.22 Sebastian Thrun & Wolfram Burgard & Dieter Fox <Probabilistic Robotics>
  22. 2009.10.21 Machine Learning Summer Schools
  23. 2009.10.21 University of Cambridge <Augmented Maps>
  24. 2009.10.21 University of Cambridge <Natural Feature Tracking for Mobile Phones>
  25. 2009.10.12 AC ch3: Roger F. Malina "A Forty-Year Perspective on Aesthetic Computing in the Leonardo Journal"
2010. 1. 22. 00:10 Computer Vision
On the Structure and Solution of the Simultaneous Localisation and Map Building Problem.
Paul Michael Newman.
1999. Ph. D. thesis, Australian Centre for Field Robotics - The University of Sydney


Source: http://cogvis.nada.kth.se/~hic/SLAM/

posted by maetel
2010. 1. 21. 23:39 Computer Vision
(Sola: "the first consistent SLAM algorithm")

Randall C. Smith and Peter Cheeseman. 1986. On the representation and estimation of spatial uncertainty. Int. J. Rob. Res. 5, 4 (December 1986), 56-68.
DOI=10.1177/027836498600500404 http://dx.doi.org/10.1177/027836498600500404


posted by maetel
Lecture 1. The structure of the Zhuangzi "Inner Chapters" (『장자莊子』「내편內篇」) and the philosophical concerns of "Free and Easy Wandering" (소요유逍遙遊) and "Discussion on Making All Things Equal" (제물론齊物論)
In the first session we survey the overall structure of the Inner Chapters and the philosophical concerns that appear in the two chapters "Free and Easy Wandering" and "Discussion on Making All Things Equal". Zhuangzi's manner of writing is also explained briefly.

2010-01-14 (Thu) 7 pm @ Saii, Lecture Room G


posted by maetel
Moonji Cultural Institute Saii, Winter 2010 course
Reading the Zhuangzi: a close reading of the Inner Chapters



Course introduction


The Zhuangzi contains the philosophy of Zhuangzi, the representative thinker of Daoism, and it is a work for which countless commentaries, and correspondingly diverse ways of reading it, coexist to this day. The "Inner Chapters" in particular, traditionally held to be Zhuangzi's own writing, consist of valuable texts through which his philosophy can be grasped as it is. This course therefore works step by step through "Free and Easy Wandering" and "Discussion on Making All Things Equal" from the Inner Chapters, in order to understand Zhuangzi's philosophical concerns and his wisdom for living.

* Note
Students encountering an Eastern classic for the first time will find it quite hard to read the original classical Chinese directly. The course therefore hands out a detailed translation together with the original text before each lecture, and we read them side by side. Each week there will also be time for questions, answers, and a short discussion based on the passages read in that session. (Lecture notes will be e-mailed to students before each class.)


About the lecturer


Lee Jin-yong

He completed his undergraduate and master's studies in philosophy at Yonsei University and received a Ph.D. in philosophy from Peking University with a dissertation on Ge Hong's Baopuzi (포박자). He is currently a lecturing professor in the Department of Philosophy at Konkuk University. His research covers Chinese Daoist philosophy, Wei-Jin Xuanxue, and the immortality-oriented Daoist philosophy of the Wei-Jin period. His papers include <갈홍 <<포박자내편>>과 <<신선전>>의 신선사상 연구>, <갈홍 <<포박자내편>>의 현,도,일에 대한 이해>, <위진시기 유가사상 속의 隱逸사상>, and <도교학자의 노자이해>, along with several papers written in Chinese; he is a co-author of <동양철학의 세계> and <東亞倫理與現代文明>.


Course plan


Lecture 1. The structure of the Zhuangzi "Inner Chapters" (내편內篇) and the philosophical concerns of "Free and Easy Wandering" (소요유逍遙遊) and "Discussion on Making All Things Equal" (제물론齊物論)
In the first session we survey the overall structure of the Inner Chapters and the philosophical concerns that appear in the two chapters "Free and Easy Wandering" and "Discussion on Making All Things Equal".
Zhuangzi's manner of writing is also explained briefly.

Lecture 2. Reading "Free and Easy Wandering" closely - 1
Through the chapter's first fable, the story of the great Peng bird, we look at the free and easy wandering (소요逍遙) and the freedom that Zhuangzi sought.

Lecture 3. Reading "Free and Easy Wandering" closely - 2
Centering on the dialogues between Emperor Yao (요堯) and Xu You (허유許由), and between Jian Wu (견오肩吾) and Lian Shu (연숙連叔), we consider together the ultimate meaning of "no name" (무명無名), "no merit" (무공無功), and "no self" (무기無己).

Lecture 4. Reading "Discussion on Making All Things Equal" closely - 1
In this chapter Zhuangzi asks whether the many distinctions among things can be unified under a single perspective. As a starting point we examine the "completed mind" (성심成心) that gives rise to all kinds of disputes and the metaphor of the pipes, and then the state of "losing the self" (상아喪我) that he pursued.

Lecture 5. Reading "Discussion on Making All Things Equal" closely - 2
To escape endless disputes Zhuangzi proposes the standpoint of the "pivot of the Dao" (도추道樞): only by seeing things from the standpoint of the Dao can one make things equal (제물齊物). We look at what Zhuangzi means by the Dao and at how one can come to see things from its standpoint.

Lecture 6. Reading "Discussion on Making All Things Equal" closely - 3
Does seeing things from the standpoint of the Dao mean discarding everyone's opinions? Zhuangzi proposes the method of "walking two roads" (양행兩行), by which one can follow people's criteria of judgment and yet escape the state of endless right-and-wrong. Through this we glimpse the spiritual state beyond the opposition of self and things, and his view that all things form one body.

Lecture 7. Reading "Discussion on Making All Things Equal" closely - 4
Because every being has its own way of understanding and its own criteria of judgment, it is hard to see all things as equal. To resolve this, Zhuangzi asks us to set aside interest and dispute and, with a humble mind, follow the natural tendencies of things. We examine this through the dialogue between Nie Que (설결齧缺) and Wang Ni (왕예王倪).

Lecture 8. Reading "Discussion on Making All Things Equal" closely - 5
Through the metaphor of the butterfly dream (호접몽胡蝶夢) Zhuangzi presents the notion of the "transformation of things" (물화物化): although things appear to depend on one another, each in fact arises and changes of itself, so that from the standpoint of the Dao all different things are ultimately one and the same. Finally, we look at how "Discussion on Making All Things Equal" relates to "Free and Easy Wandering".
posted by maetel
2010. 1. 15. 11:55 Computer Vision
A simple Kalman filter example for 1-D SLAM
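For reference, this is the Kalman-filter loop that the code below follows (my restatement of the lines L2-L6 that the code comments refer to, i.e. the KF algorithm on p.42 of Probabilistic Robotics, written with the same symbols as the matrices in the code; R is the process noise added in the prediction and Q the measurement noise):

\bar{\mu}_t = G \mu_{t-1}, \qquad \bar{\Sigma}_t = G \Sigma_{t-1} G^T + R

K_t = \bar{\Sigma}_t H^T (H \bar{\Sigma}_t H^T + Q)^{-1}

\mu_t = \bar{\mu}_t + K_t (z_t - H \bar{\mu}_t), \qquad \Sigma_t = (I - K_t H) \bar{\Sigma}_t

The state vector holds the robot position, the robot velocity, and the ten landmark positions: \mu = (x, v, m_1, \dots, m_{10})^T.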




void cvGEMM(const CvArr* src1, const CvArr* src2, double alpha, const CvArr* src3, double beta, CvArr* dst, int tABC=0)

\texttt{dst} = \texttt{alpha} \, op(\texttt{src1}) \, op(\texttt{src2}) + \texttt{beta} \, op(\texttt{src3}) \quad \text{where $op(X)$ is $X$ or $X^T$}


#define cvMatMulAdd(src1, src2, src3, dst) cvGEMM(src1, src2, 1, src3, 1, dst, 0)
#define cvMatMul(src1, src2, dst) cvMatMulAdd(src1, src2, 0, dst)
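As a quick sanity check of these calls, here is a minimal fragment (my own example, not part of the original post) multiplying small matrices with the legacy C API used throughout this post; it assumes it sits inside a function such as main():

// minimal cvGEMM / cvMatMul fragment (legacy OpenCV C API)
CvMat* A = cvCreateMat(2, 2, CV_64FC1);
CvMat* B = cvCreateMat(2, 2, CV_64FC1);
CvMat* C = cvCreateMat(2, 2, CV_64FC1);
CvMat* D = cvCreateMat(2, 2, CV_64FC1);

cvSetIdentity(A, cvRealScalar(2.0)); // A = 2*I
cvSetIdentity(B);                    // B = I
cvSetIdentity(C, cvRealScalar(3.0)); // C = 3*I

cvMatMul(A, B, D);                            // D = A * B
cvGEMM(A, B, 1.0, C, 1.0, D, 0);              // D = 1*A*B + 1*C
cvGEMM(A, B, 1.0, NULL, 0.0, D, CV_GEMM_B_T); // D = A * transpose(B)

printf("D(0,0) = %.2f\n", cvmGet(D, 0, 0));   // prints 2.00 for the last call

cvReleaseMat(&A); cvReleaseMat(&B); cvReleaseMat(&C); cvReleaseMat(&D);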




// 1-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-14
// ref. Probabilistic Robotics 42p

#include <OpenCV/OpenCV.h> // legacy OpenCV C API (matrix operations)

#include <iostream>
#include <iomanip>
#include <cmath>
#include <ctime> // time(), used to seed rand()
using namespace std;

// NOTE: this listing also relies on the helper functions uniform_random(),
// gaussian_random() and displayMatrix(), which are defined in the 2-D example
// further down in this post.

#define num_landmarks 10
#define num_dim (num_landmarks + 2)

#define step 100
#define width 1000
#define height 200

int main (int argc, char * const argv[]) {
   
    srand(time(NULL));
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[num_landmarks];//    = { 200, 600, 400 }; // x-position
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[n] = width * uniform_random();
    }
   
    // set the initial state
    double rob_pos = 25.0; // initial robot position
    double rob_vel = 10.0; // initial robot velocity
    // set the initial covariance of the state
    double rob_pos_cov = 0.01; // covariance of noise to robot position
    double rob_vel_cov = 0.01; // covariance of noise to robot velocity
    double obs_cov = 900; // covariance of noise in the measurement of landmarks
    double xGroundTruth, vGroundTruth;
    xGroundTruth = rob_pos;
    vGroundTruth = rob_vel;
   

   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-1d", 0);
   
    // H matrix
    int Hrow = num_landmarks;
    int Hcol = num_landmarks + 2;
    CvMat* H = cvCreateMat(Hrow, Hcol, CV_64FC1);
    cvZero(H); // initialize H matrix   
    // set H matrix
    for (int row = 0; row < Hrow; row++)
    {
        cvmSet(H, row, 0, -1.0);
        cvmSet(H, row, row+2, 1.0);
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix : measurement noise covariance (added to H*sigma_p*Ht in the gain computation)
    CvMat* Q = cvCreateMat(Hrow, Hrow, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    int Grow = num_landmarks + 2;
    int Gcol = Grow;
    CvMat* G = cvCreateMat(Grow, Gcol, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous position
    cvmSet(G, 0, 1, 1.0); // velocity   
    for (int row = 1; row < Grow; row++)
    {
        cvmSet(G, row, row, 1.0); // velocity and landmark positions stay constant
    }
    displayMatrix(G, "G matrix");
   
    // R matrix : process (motion) noise covariance, added in the prediction step
    CvMat* R = cvCreateMat(Grow, Grow, CV_64FC1); // 5x5
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_pos_cov);
    cvmSet(R, 1, 1, rob_vel_cov);   
    displayMatrix(R, "R matrix");   
   
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state to be predicted
    CvMat* u = cvCreateMat(1, 1, CV_64FC1); // control vector    
    cvmSet(u, 0, 0, 1.0); // set u(0,0) to 1.0, the constant velocity here
    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))   
    CvMat* obs = cvCreateMat(num_landmarks, 1, CV_64FC1); // observation for each landmark
   
    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_pos + sqrt(rob_pos_cov)*gaussian_random()); // set mu(0,0) to "rob_pos"
    cvmSet(mu, 1, 0, rob_vel + sqrt(rob_vel_cov)*gaussian_random()); // set mu(1,0) to "rob_vel"   
    for(int n = 0; n < num_landmarks; n++)
    {
//        cvmSet(mu, n+2, 0, landmark[n] + sqrt(obs_cov)*gaussian_random());
        cvmSet(mu, n+2, 0, landmark[n]);       
    }   
    displayMatrix(mu, "mu vector");
   
    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1); // 5x5
    cvTranspose(G, Gt); // transpose(G) -> Gt   
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1); // 5x5 * 5x5
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 5x5
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 5xnum_landmarks
    cvTranspose(H, Ht); // transpose(H) -> Ht
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 5x5 * 5xnum_landmarks
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 5x5 * 5xnum_landmarks    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // num_landmarksx1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 5x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 5xnum_landmarks * num_landmarksx5
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 5x5 identity matrix
    cvSetIdentity(I);       
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 5x5

   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;       
        cvZero(iplImg);
   
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
//        displayMatrix(mu_p, "mu_p vector");   
       
        // predict the covariance of the state (ref. L3, KF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
//        displayMatrix(sigma_p, "sigma_p matrix");
       
        // estimate Kalman gain (ref. L4, KF algorithm, 42p)
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
    //    displayMatrix(HsigmaHtplusQ, "H*sigma*Ht + Q matrix");
       
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        displayMatrix(invGain, "invGain matrix");
       
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
    //    displayMatrix(K, "K matrix");       

        // measure
        xGroundTruth += vGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, vGroundTruth);
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(rob_ground, n + 2, 0, landmark[n]);
        }

        for(int n = 0; n < num_landmarks; n++)
        {
            double rn = sqrt(obs_cov) * gaussian_random();
            cvmSet(delta, n, 0, rn);
        }
    //    displayMatrix(delta, "delta vector; measurement noise");
        displayMatrix(rob_ground, "rob_ground");

        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // update the state with Kalman gain (ref. L5, KF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        // update the covariance of the state (ref. L6, KF algorithm, 42p)
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");

        // result in console
        cout << "landmarks  = " << landmark[0] << setw(10) << landmark[1] << setw(10) << landmark[2] << setw(10) << endl;
        cout << "robot position = " << cvmGet(mu, 0, 0) << endl;
//        cout << "measurement = " << cvmGet(z,0,0) << setw(10) << cvmGet(z,1,0) << setw(10) << cvmGet(z,2,0) << endl;   
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(obs, n, 0, cvmGet(mu,0,0) + cvmGet(z,n,0));
        }
        cout << "observation = " << cvmGet(obs,0,0) << setw(10) << cvmGet(obs,1,0) << setw(10) << cvmGet(obs,2,0) << endl;
        cout<< "estimation = " << cvmGet(mu,2,0) << setw(10) << cvmGet(mu,3,0) << setw(10) << cvmGet(mu,4,0) << endl;

        // result in image
        // ground truth of robot position       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(height/2)), 1, cvScalar(100, 0, 255));
        // robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(height/2)), 3, cvScalar(255, 0, 100));
        // uncertainty of robot position, purple line
        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
                       cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[index]), cvRound(height/2)), 3, cvScalarAll(255));
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,index,0)), cvRound(height/2)), 2, cvScalar(0, 200, 255));
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0)), cvRound(height/2)), 2, cvScalar(50, 255, 0));
            // uncertainty of estimation, green line
            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);

        }
   
        cvShowImage("SLAM-1d", iplImg);
        cvWaitKey(0);
       
    }
    cvWaitKey();   
   
    return 0;
}



console:



A simple Kalman filter example for 2-D SLAM
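Compared with the 1-D case only the state layout changes: the robot now contributes an (x, y) position and an (x, y) velocity, and each of the 20 landmarks contributes an (x, y) pair, so num_dim = 2 * (num_landmarks + 2) = 44 in the code below:

\mu = (x_r,\; y_r,\; \dot{x}_r,\; \dot{y}_r,\; m_{1,x},\; m_{1,y},\; \dots,\; m_{20,x},\; m_{20,y})^T \in \mathbb{R}^{44}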

// 2-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-18
// ref. Probabilistic Robotics 42p

#include <OpenCV/OpenCV.h> // legacy OpenCV C API (matrix operations)

#include <iostream>
#include <iomanip>
#include <cmath>   // sqrt(), log(), atan2()
#include <cstdlib> // exit()
#include <ctime>   // time(), used to seed rand()
using namespace std;

#define num_landmarks 20
#define num_dim ( 2 * (num_landmarks + 2) )

#define step 200
#define width 800
#define height 600


// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}


void displayMatrix(CvMat *mat, char *title = NULL) {
    if(title) cout << title << endl;
    for(int iR = 0; iR < mat->rows; iR++) {
        for(int iC = 0; iC < mat->cols; iC++) {
            printf("%.2f ", cvmGet(mat, iR, iC));
        }
        printf("\n");
    }
    printf("\n");
    return;
}

void draw2DEllipseFromCovariance
(CvMat* cov, CvPoint* cnt, IplImage *iplImg, CvScalar* curveColor /*= cvScalarAll(255)*/, CvScalar* centerColor /*= cvScalarAll(128) */, int thickness /*= 1*/)
{
   
    if(NULL == cov || 2 != cov->rows || 2 != cov->cols) {
        printf("covariance matrix is not 2x2 !! \n");
        exit(0);
    }
    double eigenvalues[6], eigenvectors[36]; 
    float ev1, ev2, vx, vy, angle;
   
    CvSize axes;
    CvMat evals = cvMat(1, 2, CV_64F, eigenvalues), evecs = cvMat(2, 2, CV_64F, eigenvectors);
   
    cvSVD(cov, &evals, &evecs, 0, CV_SVD_MODIFY_A + CV_SVD_U_T ); 
   
    ev1 = cvmGet(&evals, 0, 0);        ev2 = cvmGet(&evals, 0, 1);
   
    if( ev1 < 0 && ev2 < 0 ) {
        ev1 = -ev1;
        ev2 = -ev2;
    }
    if( ev1 < ev2 ) {
        float tmp = ev1;
        ev1 = ev2;
        ev2 = tmp;
    }
    if( ev1 <= 0 || ev2 <= 0 ) {
        printf("COV Eigenvalue is negativ or zero(!)\n");
        exit(0);
    }
   
    // calc angle 
    angle = (float)(180 - atan2(eigenvectors[2], eigenvectors[3]) * 180 / CV_PI); 
   
    axes = cvSize(cvRound(sqrt(ev1)), cvRound(sqrt(ev2)));
    cvEllipse(iplImg, *cnt, axes, angle, 0, 360, *curveColor, thickness);
   
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y - 1), cvPoint(cnt->x + 2, cnt->y + 1), *centerColor, 1);
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y + 1), cvPoint(cnt->x + 2, cnt->y - 1), *centerColor, 1);
   
}


int main (int argc, char * const argv[]) {

    srand(time(NULL));
   
    // set the initial state
    double rob_x = width * 0.1; // robot's initial x-position
    double rob_y = height * 0.4; // robot's initial y-position   
    double rob_vx = 10.0; // robot's initial x-velocity
    double rob_vy = 10.0; // robot's initial y-velocity   
   
    // set the initial covariance of the state uncertainty
    double rob_px_cov = 0.01; // covariance of noise to robot's x-position
    double rob_py_cov = 0.01; // covariance of noise to robot's y-position   
    double rob_vx_cov = 0.01; // covariance of noise to robot's x-velocity
    double rob_vy_cov = 0.01; // covariance of noise to robot's y-velocity   
   
    // set the initial covariance of the measurement noise
    double obs_x_cov = 900; // covariance of noise in the x-measurement of landmarks
    double obs_y_cov = 900; // covariance of noise in the y-measurement of landmarks
   
    // ground truth of state
    double xGroundTruth = rob_x;
    double yGroundTruth = rob_y;
    double vxGroundTruth = rob_vx;
    double vyGroundTruth = rob_vy;
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[2*num_landmarks];  
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[2*n] = width * uniform_random();
        landmark[2*n+1] = height * uniform_random();
    }   
   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-2d");
   
    // H matrix, measurement matrix
    CvMat* H = cvCreateMat(2*num_landmarks, num_dim, CV_64FC1);
    cvZero(H); // initialize H matrix   
    // set H matrix
    for (int r = 0; r < num_landmarks; r++)
    {
        cvmSet(H, 2*r, 0, -1.0); // robot's x-position
        cvmSet(H, 2*r, 2*r+4, 1.0); // landmark's x-position
        cvmSet(H, 2*r+1, 1, -1.0); // robot's y-position
        cvmSet(H, 2*r+1, 2*r+5, 1.0); // landmark's y-position        
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix : measurement noise covariance (added to H*sigma_p*Ht in the gain computation)
    CvMat* Q = cvCreateMat(2*num_landmarks, 2*num_landmarks, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_x_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    CvMat* G = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous x-position
    cvmSet(G, 0, 2, 1.0); // x-velocity
    cvmSet(G, 1, 1, 1.0); // previous y-position
    cvmSet(G, 1, 3, 1.0); // y-velocity   
    for (int row = 2; row < G->rows; row++)
    {
        cvmSet(G, row, row, 1.0); // velocities and landmark positions stay constant
    }
    displayMatrix(G, "G matrix");
   
    // R matrix : process (motion) noise covariance, added in the prediction step
    CvMat* R = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_px_cov);
    cvmSet(R, 1, 1, rob_py_cov);
    cvmSet(R, 2, 2, rob_vx_cov);
    cvmSet(R, 3, 3, rob_vy_cov);   
    displayMatrix(R, "R matrix");   
       
   
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // ground truth of state        
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be predicted

    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, 2*num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))  
    CvMat* obs = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // observation for each landmark

    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_x); // set mu(0,0) to "rob_x"
    cvmSet(mu, 1, 0, rob_y); // set mu(1,0) to "rob_y"
    cvmSet(mu, 2, 0, rob_vx); // set mu(2,0) to "rob_vx"
    cvmSet(mu, 3, 0, rob_vy); // set mu(3,0) to "rob_vy"
    for (int n = 0; n < 2*num_landmarks; n++)
    {
        cvmSet(mu, n+4, 0, landmark[n]); // set mu(4,0) to "landmark[0]", ...
    }
    displayMatrix(mu, "mu vector");
   
/*    // initialize "sigma" matrix
    cvmSet(sigma, 0, 0, rob_px_cov);
    cvmSet(sigma, 1, 1, rob_py_cov);
    cvmSet(sigma, 2, 2, rob_vx_cov);
    cvmSet(sigma, 3, 3, rob_vy_cov);
    for (int r = 4; r < sigma->rows; r=r*2)
    {
        cvmSet(sigma, r, r, obs_x_cov);
        cvmSet(sigma, r+1, r+1, obs_y_cov);        
    }
*/    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_x_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1);
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1);
    cvTranspose(G, Gt); // transpose(G) -> Gt
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1);
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 10x10
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 10x6
    cvTranspose(H, Ht); // transpose(H) -> Ht       
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 10x10 * 10x6
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 10x10 * 10x6    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // 6x10 * 10x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // 6x1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 10x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 10x6 * 6x10
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 10x10 identity matrix
    cvSetIdentity(I); // does not seem to be working properly      
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 10x10
   
    CvPoint trajectory[step];
    CvPoint robot_ground[step];
   
    int frame = int(0.9*step);
   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;      
        cvZero(iplImg);
       
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
       
        // predict the covariance of the state (ref. L3, EKF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
               
        // estimate Kalman gain (ref. L4, EKF algorithm, 42p)   
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
        displayMatrix(K, "K matrix");  
   
       
        // measure
        // set ground truths
        if ( xGroundTruth >= width || xGroundTruth <= 0)
        {
            vxGroundTruth = - vxGroundTruth;
        }   
        if ( yGroundTruth >= height || yGroundTruth <= 0 )
        {
            vyGroundTruth = - vyGroundTruth;
        }   
        xGroundTruth += vxGroundTruth;
        yGroundTruth += vyGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, yGroundTruth);
        cvmSet(rob_ground, 2, 0, vxGroundTruth);
        cvmSet(rob_ground, 3, 0, vyGroundTruth);
       
        robot_ground[t] = cvPoint(cvRound(xGroundTruth),cvRound(yGroundTruth)); 
       
        for (int dim = 0; dim < 2*num_landmarks; dim++)
        {
            cvmSet(rob_ground, dim+4, 0, landmark[dim]);
        }
        displayMatrix(rob_ground, "rob_ground");
        // set measurement noise
        for(int n = 0; n < num_landmarks; n++)
        {
            double rn_x = sqrt(obs_x_cov) * gaussian_random();
            double rn_y = sqrt(obs_y_cov) * gaussian_random();           
            cvmSet(delta, 2*n, 0, rn_x);
            cvmSet(delta, 2*n+1, 0, rn_y);
           
        }
//      displayMatrix(delta, "delta vector; measurement noise");
       
        // define z, measurement, vector
        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // observation relative to robot's position
        for( int n = 0; n < 2*num_landmarks; n++ )
        {
            cvmSet(obs, n, 0, cvmGet(mu,0,0) + cvmGet(z,n,0));
        }
       
        // update the state with Kalman gain (ref. L5, KF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu (the innovation uses the predicted state, as in the 1-D example)
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        trajectory[t] = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
       
       
        // update the covariance of the state
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");
       
        // result in console
        cout << "robot position: " << "px = " << cvmGet(mu, 0, 0) << "  py = " << cvmGet(mu, 1, 0) << endl;
        for (int n = 0; n < num_landmarks; n++)
        {
            cout << setw(10) << "landmark" << n+1 << " (" << landmark[2*n] << ", " << landmark[2*n+1] << ") "
            << setw(10) << "observation" << n+1 << " (" << cvmGet(obs,2*n,0) << ", " << cvmGet(obs,2*n+1,0) << ") "
            << setw(10) << "estimation" << n+1 << " (" << cvmGet(mu,4+2*n,0) << ", " << cvmGet(mu,4+2*n+1,0) << ") " << endl;
        }       
       
       
        CvMat* local_uncertain = cvCreateMat(2, 2, CV_64FC1);
        CvMat* map_uncertain [num_landmarks];
        for (int n = 0; n < num_landmarks; n++)
        {
            map_uncertain [n] = cvCreateMat(2, 2, CV_64FC1);
        }
        cvmSet(local_uncertain, 0, 0, cvmGet(sigma,0,0));
        cvmSet(local_uncertain, 0, 1, cvmGet(sigma,0,1));
        cvmSet(local_uncertain, 1, 0, cvmGet(sigma,1,0));
        cvmSet(local_uncertain, 1, 1, cvmGet(sigma,1,1));
       
        displayMatrix(local_uncertain, "local_uncertain");       
       
        for (int n = 0; n < num_landmarks; n++)
        {
            // 2x2 covariance block of landmark n (state indices 2*n+4 and 2*n+5)
            cvmSet(map_uncertain[n], 0, 0, cvmGet(sigma, 2*n+4, 2*n+4));
            cvmSet(map_uncertain[n], 0, 1, cvmGet(sigma, 2*n+4, 2*n+5));
            cvmSet(map_uncertain[n], 1, 0, cvmGet(sigma, 2*n+5, 2*n+4));
            cvmSet(map_uncertain[n], 1, 1, cvmGet(sigma, 2*n+5, 2*n+5));

            displayMatrix(map_uncertain[n], "map_uncertain");
        } 
       
        // result in image
        // ground truth of robot position, red       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(cvmGet(rob_ground,1,0))), 2, cvScalar(60, 0, 255), 2);
        // estimated robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(cvmGet(mu,1,0))), 5, cvScalar(255, 0, 100), 2);
        // uncertainty of robot position, purple line
//        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
//               cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
   
        CvPoint local = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
        CvScalar local_center_color = cvScalar(255, 0, 100);
        CvScalar local_curve_color = cvScalarAll(128);
       
        draw2DEllipseFromCovariance(local_uncertain, &local, iplImg, &local_center_color, &local_curve_color, 1);
       
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[2*index]), cvRound(landmark[2*index+1])), 4, cvScalarAll(255), 2);
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,2*index,0)), cvRound(cvmGet(obs,2*index+1,0))), 4, cvScalar(0, 200, 255), 1);
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0))), 3, cvScalar(50, 255, 0), 1);
            // uncertainty of estimation, green line
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
//                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);
       
            CvPoint observed = cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0)));
            CvScalar observed_center_color = cvScalar(50, 255, 0);
            CvScalar observed_curve_color = cvScalar(50, 255, 0);
           
            draw2DEllipseFromCovariance(map_uncertain[index], &observed, iplImg, &observed_center_color, &observed_curve_color, 1);    
        }
       
        for( int p = 1; p <= t; p++ )
        {
            cvLine(iplImg, robot_ground[p-1], robot_ground[p], cvScalar(60, 0, 255), 1);           
            cvLine(iplImg, trajectory[p-1], trajectory[p], cvScalar(255, 0, 100), 1);
        }

        if ( t == frame )
        {
            cvSaveImage("2D SLAM test.bmp", iplImg);
        }
       
        cvShowImage("SLAM-2d", iplImg);
        cvWaitKey(100);   
       
    }
    cout << endl << endl << "process finished" << endl;
    cvWaitKey();   
   
    return 0;
}












posted by maetel
2010. 1. 14. 17:27 Footmarks
RoSEC international summer/winter school
Robotics-Specialized Education Consortium for Graduates sponsored by MKE

Organized by the Robotics-Specialized Graduate School Consortium
2010 RoSEC International Winter School
Monday, January 11 - Saturday, January 16, 2010
Seminar Room 1 (Room 606), 6th floor, HIT (Hanyang Institute of Technology), Hanyang University



Robot mechanism
Byung-Ju Yi (Hanyang University, Korea)
Prof. Byung-Ju Yi, Human Robotics Lab, Hanyang University - bj@hanyang.ac.kr
- Classification of robotic mechanism and Design consideration of robotic mechanism
- Design Issue and application examples of master slave robotic system
- Trend of robotic mechanism research

Actuator and Practical PID Control
Youngjin Choi (Hanyang University, Korea)
Prof. Youngjin Choi, Humanoid Robotics Lab, Hanyang University - cyj@hanyang.ac.kr
- Operation Principle of DC/RC/Stepping Motors & Its Practice
- PID Control and Tuning
- Stability of PID Control and Application Examples

Coordination of Robots and Humans
Kazuhiro Kosuge (Tohoku University, Japan)
Prof. Kazuhiro Kosuge, System Robotics Lab, Tohoku University, Japan
- Robotics as systems integration
- Multiple Robots Coordination
- Human Robot Coordination and Interaction

Robot Control
Rolf Johansson (Lund University, Sweden)
Robotics Lab, Lund University, Sweden - Rolf.Johansson@control.lth.se
- Robot motion and force control
- Stability of motion
- Robot obstacle avoidance

Lecture from Industry or Government
(S. -R. Oh, KIST)

Special Talk from Government
(Y. J. Weon, MKE)

Mobile Robot Navigation
Jae-Bok Song (Korea University, Korea)
Prof. Jae-Bok Song, Intelligent Robotics Lab, Korea University - jbsong@korea.ac.kr
- Mapping
- Localization
- SLAM

3D Perception for Robot Vision
In Kyu Park (Inha University, Korea)
Prof. In Kyu Park, Visual Media Lab, Inha University - pik@inha.ac.kr
- Camera Model and Calibration
- Shape from Stereo Views
- Shape from Multiple Views

Lecture from Industry or Government
(H. S. Kim, KITECH)

Roboid Studio
Kwang Hyun Park (Kwangwoon University, Korea)
Prof. Kwang Hyun Park, Department of Information and Control Engineering, Kwangwoon University - akaii@kw.ac.kr
- Robot Contents
- Roboid Framework
- Roboid Component

Software Framework for LEGO NXT
Sanghoon Lee (Hanyang University, Korea)
Prof. Sanghoon Lee, Robotics Lab, Hanyang University
- Development Environments for LEGO NXT
- Programming Issues for LEGO NXT under RPF of OPRoS
- Programming Issues for LEGO NXT under Roboid Framework

Lecture from Industry or Government
(Robomation/Mobiletalk/Robotis)

Robot Intelligence : From Reactive AI to Semantic AI
Il Hong Suh (Hanyang University, Korea)
Prof. Il Hong Suh, Robot Intelligence / Communication Lab, Hanyang University
- Issues in Robot Intelligence
- Behavior Control: From Reactivity to Proactivity
- Use of Semantics for Robot Intelligence

AI-Robotics
Henrik I. Christensen (Georgia Tech., USA)

- Semantic Mapping
- Physical Interaction with Robots
- Efficient object recognition for robots

Lecture from Industry or Government
(M. S. Kim, Director of CIR, 21C Frontier Program)

HRI
Dongsoo Kwon (KAIST, Korea)

- Introduction to human-robot interaction
- Perception technologies of HRI
- Cognitive and emotional interaction

Robot Swarm for Environmental Monitoring
Nak Young Chong (JAIST, Japan)

- Self-organizing Mobile Robot Swarms: Models
- Self-organizing Mobile Robot Swarms: Algorithms
- Self-organizing Mobile Robot Swarms: Implementation


posted by maetel
2009. 12. 22. 13:38 Hardware

posted by maetel
2009. 12. 2. 21:33 Computer Vision
Joan Solà

6DOF SLAM toolbox for Matlab http://homepages.laas.fr/jsola/JoanSola/eng/toolbox.html

References

[1] J. Civera, A.J. Davison, and J.M.M Montiel. Inverse depth parametrization for monocular SLAM. IEEE Trans. on Robotics, 24(5), 2008.

[2] J. Civera, Andrew Davison, and J. Montiel. Inverse Depth to Depth Conversion for Monocular SLAM. In IEEE Int. Conf. on Robotics and Automation, pages 2778 –2783, April 2007.

[3] A. J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Int. Conf. on Computer Vision, volume 2, pages 1403–1410, Nice, October 2003.

[4] Andrew J. Davison. Active search for real-time vision. Int. Conf. on Computer Vision, 1:66–73, 2005.

[5] Andrew J. Davison, Ian D. Reid, Nicholas D. Molton, and Olivier Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, June 2007.

[6] Ethan Eade and Tom Drummond. Scalable monocular SLAM. IEEE Int. Conf. on Computer Vision and Pattern Recognition, 1:469–476, 2006.

[7] Thomas Lemaire and Simon Lacroix. Monocular-vision based SLAM using line segments. In IEEE Int. Conf. on Robotics and Automation, pages 2791–2796, Rome, Italy, 2007.

[8] Nicholas Molton, Andrew J. Davison, and Ian Reid. Locally planar patch features for real-time structure from motion. In British Machine Vision Conference, 2004.

[9] J. Montiel, J. Civera, and A. J. Davison. Unified inverse depth parametrization for monocular SLAM. In Robotics: Science and Systems, Philadelphia, USA, August 2006.

[10] L. M. Paz, P. Piniés, J. Tardós, and J. Neira. Large scale 6DOF SLAM with stereo-in-hand. IEEE Trans. on Robotics, 24(5), 2008.

[11] J. Solà, André Monin, Michel Devy, and T. Vidal-Calleja. Fusing monocular information in multi-camera SLAM. IEEE Trans. on Robotics, 24(5):958–968, 2008.

[12] Joan Solà. Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach. PhD thesis, Institut National Polytechnique de Toulouse, 2007.

[13] Joan Solà, André Monin, and Michel Devy. BiCamSLAM: Two times mono is more than stereo. In IEEE Int. Conf. on Robotics and Automation, pages 4795–4800, Rome, Italy, April 2007.

[14] Joan Solà, André Monin, Michel Devy, and Thomas Lemaire. Undelayed initialization in bearing only SLAM. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 2499–2504, Edmonton, Canada, 2005.

[15] Joan Solà, Teresa Vidal-Calleja, and Michel Devy. Undelayed initialization of line segments in monocular SLAM. In IEEE Int. Conf. on Intelligent Robots and Systems, Saint Louis, USA, 2009. To appear.



slamtb.m



Plucker line (HighLevel/userDataLin.m) http://en.wikipedia.org/wiki/Pl%C3%BCcker_coordinates http://www.cgafaq.info/wiki/Plucker_line_coordinates


direct observation model
http://vismod.media.mit.edu/tech-reports/TR-451/node8.html
inverse observation model
http://vismod.media.mit.edu/tech-reports/TR-451/node9.html
( source: MIT Media Laboratory's Vision and Modeling group )
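Roughly speaking (my own summary, not the toolbox's notation): the direct observation model predicts a measurement from the current robot/camera pose and a landmark, while the inverse observation model maps a measurement back into landmark space, which is how newly observed landmarks are initialized in the map:

z = h(\mathcal{R}, m) + v \quad \text{(direct)}, \qquad \hat{m} = g(\mathcal{R}, z) \quad \text{(inverse)}

where \mathcal{R} is the robot/camera pose, m a landmark, and v the measurement noise.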
posted by maetel
2009. 11. 30. 16:06 Computer Vision
http://bayesclasses.sourceforge.net/

The contents are as follows:
http://bayesclasses.sourceforge.net/Bayesian%20Filtering%20Classes.html


test log 2009-12-02


0. Shared lab laptop specs
OS: 32-bit Windows Vista Home Basic K, Service Pack 1
Processor: Intel(R) Core(TM)2 Duo CPU T5750 @ 2.00GHz
RAM: 2.00 GB

Microsoft Visual Studio 2005, version 8.0.50727.867 (vsvista.050727-8600)
Microsoft .NET Framework, version 2.0.50727, Service Pack



1. Download
1-1. Download and install Boost
You can download it directly from http://www.boost.org/ and install it yourself, but that is said to be cumbersome.
Instead, download the BoostPro 1.40.0 Installer from http://www.boostpro.com/ and run it;
it installs automatically (select all options that match VS version 8).

1-2. Download Bayes++
bf-C++source-2003.8-6.zip   141.5 KB 2006-10-04
Open the Bayes++.sln file; Visual Studio loads it automatically; because of the version difference, back it up and upgrade the solution.

1-3. Add paths to the Bayes++ solution
VS menu > Tools > Options >


2. Debugging



posted by maetel
2009. 11. 23. 11:48 Computer Vision
Applying a particle filter to object tracking:
an exercise in tracking an object (a ball) by its contour with a 3-D particle filter
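In outline, the filter below keeps N = 100 particles over the 3-D state (center x, center y, radius). Each frame it propagates every particle with Gaussian transition noise, scores it by the average distance from its predicted circle to the nearest Canny edge along 8 search directions, and then forms the state estimate and resamples using the normalized weights:

w^{[n]} = \frac{p(z_t \mid x_t^{[n]})}{\sum_{m=1}^{N} p(z_t \mid x_t^{[m]})}, \qquad \hat{x}_t = \sum_{n=1}^{N} w^{[n]} x_t^{[n]}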


IplImage* cvRetrieveFrame(CvCapture* capture)

Gets the image grabbed with cvGrabFrame.

Parameter: capture – video capturing structure.

The function cvRetrieveFrame() returns the pointer to the image grabbed with the GrabFrame function. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.
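A minimal grab/retrieve loop for reference (my own sketch, assumed to sit inside main(); "movie.avi" is a placeholder file name):

// minimal capture loop (legacy OpenCV C API); the retrieved frame must not be released by the caller
CvCapture* capture = cvCaptureFromAVI("movie.avi"); // placeholder file name
if (!capture) { printf("cannot open the movie file\n"); return -1; }
cvNamedWindow("frame");
while (cvGrabFrame(capture))                        // grab the next frame
{
    IplImage* frame = cvRetrieveFrame(capture);     // decode the grabbed frame
    if (!frame) break;
    cvShowImage("frame", frame);
    if (cvWaitKey(10) == 27) break;                 // stop on ESC
}
cvReleaseCapture(&capture);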



Canny edge detection

Canny edge detection in OpenCV image processing functions
void cvCanny(const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3)

Implements the Canny algorithm for edge detection.

Parameters:
  • image – Single-channel input image
  • edges – Single-channel image to store the edges found by the function
  • threshold1 – The first threshold
  • threshold2 – The second threshold
  • aperture_size – Aperture parameter for the Sobel operator (see cvSobel())

The function cvCanny() finds the edges on the input image image and marks them in the output image edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking, the largest value is used to find the initial segments of strong edges.
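Continuing the sketch above, a single Canny call on one colour frame might look like this (the threshold and aperture values are only examples, mirroring the ones used further below; the edge image must be single-channel and the same size as the input):

// grey conversion + Canny edge map
IplImage* grey  = cvCreateImage(cvGetSize(frame), 8, 1);
IplImage* edges = cvCreateImage(cvGetSize(frame), 8, 1);
cvCvtColor(frame, grey, CV_RGB2GRAY); // colour frame -> grey
cvCanny(grey, edges, 50, 110, 3);     // threshold1 = 50, threshold2 = 110, 3x3 Sobel aperture
cvShowImage("edges", edges);
cvWaitKey(0);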



source code
// 3-D Particle filter algorithm + Computer Vision exercise
// : object tracking - contour tracking
// lym, VIP Lab, Sogang Univ.
// 2009-11-23
// ref. Probabilistic Robotics: 98p

#include <OpenCV/OpenCV.h> // matrix operations & Canny edge detection
#include <iostream>
#include <cstdlib> // RAND_MAX
#include <ctime> // time as a random seed
#include <cmath>
#include <algorithm>
using namespace std;

#define PI 3.14159265
#define N 100 //number of particles
#define D 3 // dimension of the state

// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}

double normal_distribution(double mean, double standardDeviation, double state) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * (state - mean) * (state - mean) / variance ) / sqrt(2 * PI * variance);
}
////////////////////////////////////////////////////////////////////////////

// distance between measurement and prediction
double distance(CvMat* measurement, CvMat* prediction)
{
    double distance2 = 0;
    double differance = 0;
    for (int u = 0; u < 3; u++)
    {
        differance = cvmGet(measurement,u,0) - cvmGet(prediction,u,0);
        distance2 += differance * differance;
    }
    return sqrt(distance2);
}

double distanceEuclidean(CvPoint2D64f a, CvPoint2D64f b)
{
    double d2 = (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
    return sqrt(d2);
}

// likelihood based on multivariative normal distribution (ref. 15p eqn(2.4))
double likelihood(CvMat *mean, CvMat *covariance, CvMat *sample) {
   
    CvMat* diff = cvCreateMat(3, 1, CV_64FC1);
    cvSub(sample, mean, diff); // sample - mean -> diff
    CvMat* diff_t = cvCreateMat(1, 3, CV_64FC1);
    cvTranspose(diff, diff_t); // transpose(diff) -> diff_t
    CvMat* cov_inv = cvCreateMat(3, 3, CV_64FC1);
    cvInvert(covariance, cov_inv); // inverse(covariance) -> cov_inv
    CvMat* tmp = cvCreateMat(3, 1, CV_64FC1);
    CvMat* dist = cvCreateMat(1, 1, CV_64FC1);
    cvMatMul(cov_inv, diff, tmp); // cov_inv * diff -> tmp   
    cvMatMul(diff_t, tmp, dist); // diff_t * tmp -> dist
   
    double likeliness = exp( -0.5 * cvmGet(dist, 0, 0) );
    double bound = 0.0000001;
    if ( likeliness < bound )
    {
        likeliness = bound;
    }
    return likeliness;
//    return exp( -0.5 * cvmGet(dist, 0, 0) );
//    return max(0.0000001, exp(-0.5 * cvmGet(dist, 0, 0)));   
}

// likelihood based on normal distribution (ref. 14p eqn(2.3))
double likelihood(double distance, double standardDeviation) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * distance*distance / variance ) / sqrt(2 * PI * variance);
}

int main (int argc, char * const argv[]) {
   
    srand(time(NULL));
   
    IplImage *iplOriginalColor; // image to be captured
    IplImage *iplOriginalGrey; // grey-scale image of "iplOriginalColor"
    IplImage *iplEdge; // image detected by Canny edge algorithm
    IplImage *iplImg; // resulting image to show tracking process   
    IplImage *iplEdgeClone;

    int hours, minutes, seconds;
    double frame_rate, Codec, frame_count, duration;
    char fnVideo[200], titleOriginal[200], titleEdge[200], titleResult[200];
   
    sprintf(titleOriginal, "original");
    sprintf(titleEdge, "Edges by Canny detector");
//    sprintf(fnVideo, "E:/AVCHD/BDMV/STREAM/00092.avi");   
    sprintf(fnVideo, "/Users/lym/Documents/VIP/2009/Nov/volleyBall.mov");
    sprintf(titleResult, "3D Particle filter for contour tracking");
   
    CvCapture *capture = cvCaptureFromAVI(fnVideo);
   
    // stop the process if capture is failed
    if(!capture) { printf("Can NOT read the movie file\n"); return -1; }
   
    frame_rate = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
//    Codec = cvGetCaptureProperty( capture, CV_CAP_PROP_FOURCC );
    frame_count = cvGetCaptureProperty( capture, CV_CAP_PROP_FRAME_COUNT);
   
    duration = frame_count/frame_rate;
    hours = duration/3600;
    minutes = (duration-hours*3600)/60;
    seconds = duration-hours*3600-minutes*60;
   
    //  stop the process if grabbing is failed
    //    if(cvGrabFrame(capture) == 0) { printf("Can NOT grab a frame\n"); return -1; }
   
    cvSetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES, 0); // go to frame #0
    iplOriginalColor = cvRetrieveFrame(capture);
    iplOriginalGrey = cvCreateImage(cvGetSize(iplOriginalColor), 8, 1);
    iplEdge = cvCloneImage(iplOriginalGrey);
    iplEdgeClone = cvCreateImage(cvSize(iplOriginalColor->width, iplOriginalColor->height), 8, 3);
    iplImg = cvCreateImage(cvSize(iplOriginalColor->width, iplOriginalColor->height), 8, 3);   
   
    int width = iplOriginalColor->width;
    int height = iplOriginalColor->height;
   
    cvNamedWindow(titleOriginal);
    cvNamedWindow(titleEdge);
   
    cout << "image width : height = " << width << "  " << height << endl;
    cout << "# of frames = " << frame_count << endl;   
    cout << "capture finished" << endl;   
   
   
    // set the system   
   
    // set the process noise
    // covariance of Gaussian noise to control
    CvMat* transition_noise = cvCreateMat(D, D, CV_64FC1);
    // assume the transition noise
    for (int row = 0; row < D; row++)
    {   
        for (int col = 0; col < D; col++)
        {
            cvmSet(transition_noise, row, col, 0.0);
        }
    }
    cvmSet(transition_noise, 0, 0, 3.0);
    cvmSet(transition_noise, 1, 1, 3.0);
    cvmSet(transition_noise, 2, 2, 0.3);
   
    // set the measurement noise
/*
    // covariance of Gaussian noise to measurement
     CvMat* measurement_noise = cvCreateMat(D, D, CV_64FC1);
     // initialize the measurement noise
     for (int row = 0; row < D; row++)
     {   
        for (int col = 0; col < D; col++)
        {
            cvmSet(measurement_noise, row, col, 0.0);
        }
     }
     cvmSet(measurement_noise, 0, 0, 5.0);
     cvmSet(measurement_noise, 1, 1, 5.0);
     cvmSet(measurement_noise, 2, 2, 5.0); 
 */
    double measurement_noise = 2.0; // standard deviation of Gaussian noise in the measurement
   
    CvMat* state = cvCreateMat(D, 1, CV_64FC1);    // state of the system to be estimated   
//    CvMat* measurement = cvCreateMat(2, 1, CV_64FC1); // measurement of states
   
    // declare particles
    CvMat* pb [N]; // estimated particles
    CvMat* pp [N]; // predicted particles
    CvMat* pu [N]; // temporary variables to update a particle
    CvMat* v[N]; // estimated velocity of each particle
    CvMat* vu[N]; // temporary varialbe to update the velocity   
    double w[N]; // weight of each particle
    for (int n = 0; n < N; n++)
    {
        pb[n] = cvCreateMat(D, 1, CV_64FC1);
        pp[n] = cvCreateMat(D, 1, CV_64FC1);
        pu[n] = cvCreateMat(D, 1, CV_64FC1);   
        v[n] = cvCreateMat(D, 1, CV_64FC1);   
        vu[n] = cvCreateMat(D, 1, CV_64FC1);           
    }   
   
    // initialize the state and particles    
    for (int n = 0; n < N; n++)
    {
        cvmSet(state, 0, 0, 258.0); // center-x
        cvmSet(state, 1, 0, 406.0); // center-y       
        cvmSet(state, 2, 0, 38.0); // radius   
       
//        cvmSet(state, 0, 0, 300.0); // center-x
//        cvmSet(state, 1, 0, 300.0); // center-y       
//        cvmSet(state, 2, 0, 38.0); // radius       
       
        cvmSet(pb[n], 0, 0, cvmGet(state,0,0)); // center-x
        cvmSet(pb[n], 1, 0, cvmGet(state,1,0)); // center-y
        cvmSet(pb[n], 2, 0, cvmGet(state,2,0)); // radius
       
        cvmSet(v[n], 0, 0, 2 * uniform_random()); // center-x
        cvmSet(v[n], 1, 0, 2 * uniform_random()); // center-y
        cvmSet(v[n], 2, 0, 0.1 * uniform_random()); // radius       
       
        w[n] = (double) 1 / (double) N; // equally weighted particle
    }
   
    // initialize the image window
    cvZero(iplImg);   
    cvNamedWindow(titleResult);
   
    cout << "start filtering... " << endl << endl;
   
    float aperture = 3,     thresLow = 50,     thresHigh = 110;   
//    float aperture = 3,     thresLow = 80,     thresHigh = 110;   
    // for each frame
    int frameNo = 0;   
    while(frameNo < frame_count && cvGrabFrame(capture)) {
        // retrieve color frame from the movie "capture"
        iplOriginalColor = cvRetrieveFrame(capture);        
        // convert color pixel values of "iplOriginalColor" to grey scales of "iplOriginalGrey"
        cvCvtColor(iplOriginalColor, iplOriginalGrey, CV_RGB2GRAY);               
        // extract edges with Canny detector from "iplOriginalGrey" to save the results in the image "iplEdge" 
        cvCanny(iplOriginalGrey, iplEdge, thresLow, thresHigh, aperture);

        cvCvtColor(iplEdge, iplEdgeClone, CV_GRAY2BGR);
       
        cvShowImage(titleOriginal, iplOriginalColor);
        cvShowImage(titleEdge, iplEdge);

//        cvZero(iplImg);
       
        cout << "frame # " << frameNo << endl;
       
        double like[N]; // likelihood between measurement and prediction
        double like_sum = 0; // sum of likelihoods
       
        for (int n = 0; n < N; n++) // for "N" particles
        {
            // predict
            double prediction;
            for (int row = 0; row < D; row++)
            {
                prediction = cvmGet(pb[n],row,0) + cvmGet(v[n],row,0)
                            + cvmGet(transition_noise,row,row) * gaussian_random();
                cvmSet(pp[n], row, 0, prediction);
            }
            if ( cvmGet(pp[n],2,0) < 2) { cvmSet(pp[n],2,0,0.0); }
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))),
//             cvPoint(cvRound(cvmGet(pb[n],0,0)), cvRound(cvmGet(pb[n],1,0))), CV_RGB(100,100,0), 1);           
            cvCircle(iplEdgeClone, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), cvRound(cvmGet(pp[n],2,0)), CV_RGB(255, 255, 0));
//            cvCircle(iplImg, cvPoint(iplImg->width *0.5, iplImg->height * 0.5), 100, CV_RGB(255, 255, 0), -1);
//            cvSaveImage("a.bmp", iplImg);

            double cX = cvmGet(pp[n], 0, 0); // predicted center-y of the object
            double cY = cvmGet(pp[n], 1, 0); // predicted center-x of the object
            double cR = cvmGet(pp[n], 2, 0); // predicted radius of the object           

            if ( cR < 0 ) { cR = 0; }
           
            // measure
            // search measurements
            CvPoint2D64f direction [8]; // 8 searching directions
            // define 8 starting points in each direction
            direction[0].x = cX + cR;    direction[0].y = cY;      // East
            direction[2].x = cX;        direction[2].y = cY - cR; // North
            direction[4].x = cX - cR;    direction[4].y = cY;      // West
            direction[6].x = cX;        direction[6].y = cY + cR; // South
            int cD = cvRound( cR/sqrt(2.0) );
            direction[1].x = cX + cD;    direction[1].y = cY - cD; // NE
            direction[3].x = cX - cD;    direction[3].y = cY - cD; // NW
            direction[5].x = cX - cD;    direction[5].y = cY + cD; // SW
            direction[7].x = cX + cD;    direction[7].y = cY + cD; // SE       
           
            CvPoint2D64f search [8];    // searched point in each direction         
            double scale = 0.4;
            double scope [8]; // scope of searching
   
            for ( int i = 0; i < 8; i++ )
            {
//                scope[2*i] = cR * scale;
//                scope[2*i+1] = cD * scale;
                scope[i] = 6.0;
            }
           
            CvPoint d[8];
            d[0].x = 1;        d[0].y = 0; // E
            d[1].x = 1;        d[1].y = -1; // NE
            d[2].x = 0;        d[2].y = 1; // N
            d[3].x = 1;        d[3].y = 1; // NW
            d[4].x = 1;        d[4].y = 0; // W
            d[5].x = 1;        d[5].y = -1; // SW
            d[6].x = 0;        d[6].y = 1; // S
            d[7].x = 1;        d[7].y = 1; // SE           
           
            int count = 0; // number of measurements
            double dist_sum = 0;
           
            for (int i = 0; i < 8; i++) // for 8 directions
            {
                double dist = scope[i] * 1.5;
                for ( int range = 0; range < scope[i]; range++ )
                {
                    int flag = 0;
                    for (int turn = -1; turn <= 1; turn += 2) // reverse the searching direction
                    {
                        search[i].x = direction[i].x + turn * range * d[i].x;
                        search[i].y = direction[i].y + turn * range * d[i].y;
                       
//                        cvCircle(iplImg, cvPoint(cvRound(search[i].x), cvRound(search[i].y)), 2, CV_RGB(0, 255, 0), -1);
//                        cvShowImage(titleResult, iplImg);
//                        cvWaitKey(100);

                        // detect measurements   
//                        CvScalar s = cvGet2D(iplEdge, cvRound(search[i].y), cvRound(search[i].x));
                        unsigned char s = CV_IMAGE_ELEM(iplEdge, unsigned char, cvRound(search[i].y), cvRound(search[i].x));
//                        if ( s.val[0] > 200 && s.val[1] > 200 && s.val[2] > 200 ) // bgr color               
                        if (s > 250) // bgr color                           
                        { // when the pixel value is white, that means a measurement is detected
                            flag = 1;
                            count++;
//                            cvCircle(iplEdgeClone, cvPoint(cvRound(search[i].x), cvRound(search[i].y)), 3, CV_RGB(200, 0, 255));
//                            cvShowImage("3D Particle filter", iplEdgeClone);
//                            cvWaitKey(1);
/*                            // get measurement
                            cvmSet(measurement, 0, 0, search[i].x);
                            cvmSet(measurement, 1, 0, search[i].y);   
                            double dist = distance(measurement, pp[n]);
*/                            // evaluate the difference between predictions of the particle and measurements
                            dist = distanceEuclidean(search[i], direction[i]);
                            break; // break for "turn"
                        } // end if
                    } // for turn
                    if ( flag == 1 )
                    { break; } // break for "range"
                } // for range
               
                dist_sum += dist; // for all searching directions of one particle 

            } // for i direction
           
            double dist_avg; // average distance of measurements from the one particle "n"
//            cout << "count = " << count << endl;
            dist_avg = dist_sum / 8;
//            cout << "dist_avg = " << dist_avg << endl;
           
//            estimate likelihood with "dist_avg"
            like[n] = likelihood(dist_avg, measurement_noise);
//            cout << "likelihood of particle-#" << n << " = " << like[n] << endl;
            like_sum += like[n];   
        } // for n particle
//        cout << "sum of likelihoods of N particles = " << like_sum << endl;
       
        // estimate states       
        double state_x = 0.0;
        double state_y = 0.0;
        double state_r = 0.0;
        // estimate the state with predicted particles
        for (int n = 0; n < N; n++) // for "N" particles
        {
            w[n] = like[n] / like_sum; // update normalized weights of particles           
//            cout << "w" << n << "= " << w[n] << "  ";               
            state_x += w[n] * cvmGet(pp[n], 0, 0); // center-x of the object
            state_y += w[n] * cvmGet(pp[n], 1, 0); // center-y of the object
            state_r += w[n] * cvmGet(pp[n], 2, 0); // radius of the object           
        }
        if (state_r < 0) { state_r = 0; }
        cvmSet(state, 0, 0, state_x);
        cvmSet(state, 1, 0, state_y);       
        cvmSet(state, 2, 0, state_r);
       
        cout << endl << "* * * * * *" << endl;       
        cout << "estimation: (x,y,r) = " << cvmGet(state,0,0) << ",  " << cvmGet(state,1,0)
        << ",  " << cvmGet(state,2,0) << endl;
        cvCircle(iplEdgeClone, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0)) ),
                 cvRound(cvmGet(state,2,0)), CV_RGB(255, 0, 0), 1);

        cvShowImage(titleResult, iplEdgeClone);
        cvWaitKey(1);

   
        // update particles       
        cout << endl << "updating particles" << endl;
        double a[N]; // portion between particles
       
        // define the cumulative portions of the particles; 0 < a[0] < a[1] < ... < a[N-1] = 1
        a[0] = w[0];
        for (int n = 1; n < N; n++)
        {
            a[n] = a[n - 1] + w[n];
//            cout << "a" << n << "= " << a[n] << "  ";           
        }
//        cout << "a" << N << "= " << a[N] << "  " << endl;           
       
        for (int n = 0; n < N; n++)
        {   
            // select a particle from the distribution
            int pselected = N - 1; // fall-back in case a[N-1] < 1 due to round-off
            double urn = uniform_random(); // draw once per new particle
            for (int k = 0; k < N; k++)
            {
                if ( urn < a[k] )
                {
                    pselected = k;
                    break;
                }
            }
//            cout << "p " << n << " => " << pselected << "  ";       
           
            // retain the selection
            for (int row = 0; row < D; row++)
            {
                cvmSet(pu[n], row, 0, cvmGet(pp[pselected],row,0));
            }
            cvSub(pp[pselected], pb[pselected], vu[n]); // pp - pb -> vu (velocity of the selected particle)
        }
       
        // updated each particle and its velocity
        for (int n = 0; n < N; n++)
        {
            for (int row = 0; row < D; row++)
            {
                cvmSet(pb[n], row, 0, cvmGet(pu[n],row,0));
                cvmSet(v[n], row , 0, cvmGet(vu[n],row,0));
            }
        }
        cout << endl << endl ;
       
//      cvShowImage(titleResult, iplImg);  
//        cvWaitKey(1000);       
        cvWaitKey(1);       
        frameNo++;
    }
   
    cvWaitKey();   
   
    return 0;
}








posted by maetel
2009. 11. 19. 15:52

This is a protected post.
Enter the password to view its content.

2009. 11. 17. 15:48 Computer Vision
Kalman Filtering
http://academic.csuohio.edu/simond/courses/eec644/kalman.pdf

72-79p, Embedded Systems Programming feature, June 2001
http://www.embedded.com/9900168?_requestid=49635

The Kalman filter update equations in C
http://www.embedded.com/9900168?pgno=2
matrix algebra reference
ftp://ftp.embedded.com/pub/2001/simon06


Dan Simon 
http://academic.csuohio.edu/simond/


Kalman filter
: estimates system states that can only be observed indirectly or inaccurately by the system itself.
: estimates the variables of a wide range of processes.
: estimates the states of a linear system.
: minimizes the variance of the estimation error

Linear system
x: state of the system
u: known input to the system
y: measured output
w: process noise
z: measurement noise
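
The state-space model these variables refer to, written in the standard form (my restatement; I believe it matches the article's A, B, C notation):

x_{k+1} = A x_k + B u_k + w_k    (state / process equation)
y_k     = C x_k + z_k            (measurement equation)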


http://wiki.answers.com/Q/What_is_a_feedback_system
A feedback system, in general engineering terms, is a system whose output is fed back to the input, and depending on the output, your input is adjusted so as to reach a steady-state. In colloquial language, you adjust your input based on the output of your system so as to achieve a certain end, like minimizing disturbance, cancelling echo (in a speech system) and so on.


Criteria of an Estimator
1) The expected value of the estimate should be equal to the expected value of the state.
2) The estimator should be with the smallest possible error variance.


Requirement of Kalman filter
1) The average value of w is zero and average value of z is zero.
2) No correlation exists between w and z. w_k and z_k are independent random variables.


Kalman filter equations

K matrix: Kalman gain
P matrix: estimation error covariance
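
For my own reference, the predict/update form of the equations (standard textbook form; the article's presentation may combine these differently, so treat this as a generic restatement rather than a quote). S_w and S_z are the process and measurement noise covariances:

predict:  x_p = A x + B u
          P_p = A P A^T + S_w
update:   K   = P_p C^T (C P_p C^T + S_z)^{-1}
          x   = x_p + K (y - C x_p)
          P   = (I - K C) P_p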



http://en.wikipedia.org/wiki/Three_sigma_rule
In statistics, the 68-95-99.7 rule, or three-sigma rule, or empirical rule, states that for a normal distribution, nearly all values lie within 3 standard deviations of the mean.
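
In symbols (standard values, not from the linked page): for x ~ N(μ, σ²),
P(|x − μ| ≤ σ) ≈ 0.6827,  P(|x − μ| ≤ 2σ) ≈ 0.9545,  P(|x − μ| ≤ 3σ) ≈ 0.9973.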


"steady state Kalman filter"
 - K matrix & P matrix are constant

"extended Kalman filter"
: an extension of linear Kalman filter theory to nonlinear systems
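
(In the usual formulation, not specific to this article: the system becomes x_{k+1} = f(x_k, u_k) + w_k and y_k = h(x_k) + z_k, and the EKF reuses the linear equations with the Jacobians A = ∂f/∂x and C = ∂h/∂x evaluated at the current estimate.)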

"Kalman smoother"
: to estimate the state as a function of time so to reconstruct the trajectory after the fact


H infinity filter
=> correlated noise problem
=> unknown noise covariances problem

http://academic.csuohio.edu/simond/estimation/


Rudolph Kalman

Peter Swerling, 1958

Karl Gauss's method of least squares, 1795

spacecraft navigation for the Apollo space program


> applications
all forms of navigation (aerospace, land, and marine)
nuclear power plant instrumentation
demographic modeling
manufacturing
the detection of underground radioactivity
fuzzy logic and neural network training



Gelb, A. Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974.

Anderson, B. and J. Moore. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.

Grewal, M. and A. Andrews. Kalman Filtering Theory and Practice. Englewood Cliffs, NJ: Prentice-Hall, 1993.

Sorenson, H. Kalman Filtering: Theory and Application. Los Alamitos, CA: IEEE Press, 1985.

Peter Joseph’s Web site @http://ourworld.compuserve.com/homepages/PDJoseph/

posted by maetel
2009. 11. 16. 16:31 Computer Vision
Haug, A.J. (2005). "A Tutorial on Bayesian Estimation and Tracking Techniques Applicable to Nonlinear and Non-Gaussian Processes". The MITRE Corporation, USA, Tech. Rep., Feb.
download@http://www.mitre-corporation.net/work/tech_papers/tech_papers_05/05_0211/05_0211.pdf


posted by maetel
2009. 11. 10. 15:14 Computer Vision
A simple 1-D particle filter example
// 1-D Particle filter algorithm exercise
// 2009-11-06
// ref. Probabilistic Robotics: 98p

#include <iostream>
#include <cstdlib> //defining RAND_MAX
#include <ctime> //time as a random seed
#include <cmath>
using namespace std;

#define PI 3.14159265
#define N 10 //number of particles

////////////////////////////////////////////////////////////////////////////
// random number generators written by kyu
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}

double normal_distribution(double mean, double standardDeviation, double state) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * (state - mean) * (state - mean) / variance ) / sqrt(2 * PI * variance);
}

////////////////////////////////////////////////////////////////////////////


int main (int argc, char * const argv[]) {
   
    double groundtruth[] = {0.5, 2.0, 3.5, 5.0, 7.0, 8.0, 10.0};
    double measurement[] = {0.4, 2.1, 3.2, 5.3, 7.4, 8.1, 9.6};
    double transition_noise = 0.3; // covariance of Gaussian noise to control
    double measurement_noise = 0.3; // covariance of Gaussian noise to measurement
   
    double x[N]; // N particles
    double x_p[N]; // predicted particles
    double state; // estimated state with particles
    double x_u[N]; // temporary variables for updating particles
    double v[N]; // velocity
    double v_u[N]; // temporary variables for updating velocity   
    double m[N]; // measurement
    double l[N]; // likelihood
    double lsum; // sum of likelihoods
    double w[N]; // weight of each particle
    double a[N]; // portion between particles
   
    double grn[N]; // Gaussian random number
    double urn[N]; // uniform random number
   
    srand(time(NULL));        
   
    // initialize particles
    for (int n = 0; n < N; n++)
    {
        x[n] = 0.0;
        v[n] = 0.0;
        w[n] = (double)1/(double)N;           
    }
   
    int step = 7;
    for (int t = 0; t < step; t++)
    {
        cout << "step " << t << endl;       
        // measure
        m[t] = measurement[t];
       
        cout << "groundtruth = " << groundtruth[t] << endl;
        cout << "measurement = " << measurement[t] << endl;       
       
        lsum = 0;
        for (int n = 0; n < N; n++)
        {
            // predict
            grn[n] = gaussian_random();
            x_p[n] = x[n] + v[n] + transition_noise * grn[n];
//            cout << grn[n] << endl;
           
            // estimate likelihood between measurement and each prediction
            l[n] = normal_distribution(m[t], measurement_noise, x_p[n]); // ref. 14p eqn(2.3)
            lsum += l[n];
        }
//            cout << "lsum = " << lsum << endl;       
       
        // estimate states       
        state = 0;
        for (int n = 0; n < N; n++)
        {
            w[n] = l[n] / lsum; // update normalized weights of particles           
//            cout << "w" << n << "= " << w[n] << "  ";               
            state += w[n] * x_p[n]; // estimate the state with particles
        }   
        cout << "estimation = " << state << endl;
       
        // update       
        // define the cumulative portions of the particles; 0 < a[0] < a[1] < ... < a[N-1] = 1
        a[0] = w[0];
        for (int n = 1; n < N; n++)
        {
            a[n] = a[n - 1] + w[n];
//            cout << "a" << n << "= " << a[n] << "  ";           
        }
        for (int n = 0; n < N; n++)
        {   
            // select a particle from the distribution
            urn[n] = uniform_random();
            int select;
            for (int k = 0; k < N; k++)
            {
                if (urn[n] < a[k] )
                {
                    select = k;
                    break;
                }
            }
            cout << "select" << n << "= " << select << "  ";       
            // retain the selection 
            x_u[n] = x_p[select];
            v_u[n] = x_p[select] - x[select];
        }
        cout << endl << endl;
        // updated each particle and its velocity
        for (int n = 0; n < N; n++)
        {
            x[n] = x_u[n];
            v[n] = v_u[n];
//            cout << "v" << n << "= " << v[n] << "  ";   
        }
    }
   
    return 0;
}



Execution result:




A simple 2-D particle filter example

// 2-D Particle filter algorithm exercise
// 2009-11-10
// ref. Probabilistic Robotics: 98p

#include <OpenCV/OpenCV.h> // matrix operations
#include <iostream>
#include <cstdlib> // RAND_MAX
#include <ctime> // time as a random seed
#include <cmath>
#include <algorithm>
using namespace std;

#define PI 3.14159265
#define N 100 //number of particles

// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}

double normal_distribution(double mean, double standardDeviation, double state) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * (state - mean) * (state - mean) / variance ) / sqrt(2 * PI * variance);
}
////////////////////////////////////////////////////////////////////////////

// set groundtruth
void set_groundtruth (CvMat* groundtruth, double x, double y)
{
    cvmSet(groundtruth, 0, 0, x); // x-value
    cvmSet(groundtruth, 1, 0, y); // y-value
   
    cout << "groundtruth = " << cvmGet(groundtruth,0,0) << "  " << cvmGet(groundtruth,1,0) << endl;
}


// count the number of detections in measurement process
int count_detections (void)
{
    // set cases of measurement results
    double mtype[4];
    mtype [0] = 0.0;
    mtype [1] = 0.5;
    mtype [2] = mtype[1] + 0.3;
    mtype [3] = mtype[2] + 0.2;
//    cout << "check measurement type3 = " << mtype[3] << endl; // just to check

    // select a measurement case
    double mrn = uniform_random();       
    int type = 1;
    for ( int k = 0; k < 3; k++ )
    {   
        if ( mrn < mtype[k] )
        {
            type = k;
            break;
        }
    }
    return type;
}

// distance between measurement and prediction
double distance(CvMat* measurement, CvMat* prediction)
{
    double distance2 = 0;
    double differance = 0;
    for (int u = 0; u < 2; u++)
    {
        differance = cvmGet(measurement,u,0) - cvmGet(prediction,u,0);
        distance2 += differance * differance;
    }
    return sqrt(distance2);
}


// likelihood based on multivariative normal distribution (ref. 15p eqn(2.4))
double likelihood(CvMat *mean, CvMat *covariance, CvMat *sample) {
   
    CvMat* diff = cvCreateMat(2, 1, CV_64FC1);
    cvSub(sample, mean, diff); // sample - mean -> diff
    CvMat* diff_t = cvCreateMat(1, 2, CV_64FC1);
    cvTranspose(diff, diff_t); // transpose(diff) -> diff_t
    CvMat* cov_inv = cvCreateMat(2, 2, CV_64FC1);
    cvInvert(covariance, cov_inv); // transpose(covariance) -> cov_inv
    CvMat* tmp = cvCreateMat(2, 1, CV_64FC1);
    CvMat* dist = cvCreateMat(1, 1, CV_64FC1);
    cvMatMul(cov_inv, diff, tmp); // cov_inv * diff -> tmp   
    cvMatMul(diff_t, tmp, dist); // diff_t * tmp -> dist
   
    double likeliness = exp( -0.5 * cvmGet(dist, 0, 0) );
    double bound = 0.0000001;
    if ( likeliness < bound )
    {
        likeliness = bound;
    }
    return likeliness;
//    return exp( -0.5 * cvmGet(dist, 0, 0) );
//    return max(0.0000001, exp(-0.5 * cvmGet(dist, 0, 0)));   
   
}



/*
struct particle
{
    double weight; // weight of a particle
    CvMat* loc_p = cvCreateMat(2, 1, CV_64FC1); // previously estimated position of a particle
    CvMat* loc = cvCreateMat(2, 1, CV_64FC1); // currently estimated position of a particle
    CvMat* velocity = cvCreateMat(2, 1, CV_64FC1); // estimated velocity of a particle
    cvSub(loc, loc_p, velocity); // loc - loc_p -> velocity
};
*/


int main (int argc, char * const argv[]) {
   
    srand(time(NULL));

    int width = 400; // width of image window
    int height = 400; // height of image window   
   
    IplImage *iplImg = cvCreateImage(cvSize(width, height), 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("ParticleFilter-2d", 0);
   
   
     // covariance of Gaussian noise to control
    CvMat* transition_noise = cvCreateMat(2, 2, CV_64FC1);
    cvmSet(transition_noise, 0, 0, 3); //set transition_noise(0,0) to 0.1
    cvmSet(transition_noise, 0, 1, 0.0);
    cvmSet(transition_noise, 1, 0, 0.0);
    cvmSet(transition_noise, 1, 1, 6);      
   
    // covariance of Gaussian noise to measurement
    CvMat* measurement_noise = cvCreateMat(2, 2, CV_64FC1);
    cvmSet(measurement_noise, 0, 0, 2); //set measurement_noise(0,0) to 0.3
    cvmSet(measurement_noise, 0, 1, 0.0);
    cvmSet(measurement_noise, 1, 0, 0.0);
    cvmSet(measurement_noise, 1, 1, 2);  
   
    CvMat* state = cvCreateMat(2, 1, CV_64FC1);
    // declare particles
/*    particle pb[N]; // N estimated particles
    particle pp[N]; // N predicted particles       
    particle pu[N]; // temporary variables for updating particles       
*/
    CvMat* pb [N]; // estimated particles
    CvMat* pp [N]; // predicted particles
    CvMat* pu [N]; // temporary variables to update a particle
    CvMat* v[N]; // estimated velocity of each particle
    CvMat* vu[N]; // temporary variable to update the velocity
    double w[N]; // weight of each particle
    for (int n = 0; n < N; n++)
    {
        pb[n] = cvCreateMat(2, 1, CV_64FC1);
        pp[n] = cvCreateMat(2, 1, CV_64FC1);
        pu[n] = cvCreateMat(2, 1, CV_64FC1);   
        v[n] = cvCreateMat(2, 1, CV_64FC1);   
        vu[n] = cvCreateMat(2, 1, CV_64FC1);           
    }   
   
    // initialize particles and the state
    for (int n = 0; n < N; n++)
    {
        w[n] = (double) 1 / (double) N; // equally weighted
        for (int row=0; row < 2; row++)
        {
            cvmSet(pb[n], row, 0, 0.0);
            cvmSet(v[n], row, 0, 15 * uniform_random());
            cvmSet(state, row, 0, 0.0);           
        }
    }
   
    // set the system   
    CvMat* groundtruth = cvCreateMat(2, 1, CV_64FC1); // groundtruth of states   
    CvMat* measurement [3]; // measurement of states
    for (int k = 0; k < 3; k++) // 3 types of measurement
    {
        measurement[k] = cvCreateMat(2, 1, CV_64FC1);
    }
   
    cout << "start filtering... " << endl << endl;
    int step = 30; //30; // timestep
   
    for (int t = 0; t < step; t++) // for "step" steps
    {
//        cvZero(iplImg);
        cout << "step " << t << endl;
       
        // set groundtruth
        double gx = 10 * t;
        double gy = (-1.0 / width ) * (gx - width) * (gx - width) + height;
        set_groundtruth(groundtruth, gx, gy);

        // set measurement types
        double c1 = 1.0, c2 = 4.0;   
        // measured point 1
        cvmSet(measurement[0], 0, 0, gx + (c1 * cvmGet(measurement_noise,0,0) * gaussian_random())); // x-value
        cvmSet(measurement[0], 1, 0, gy + (c1 * cvmGet(measurement_noise,1,1) * gaussian_random())); // y-value
        // measured point 2
        cvmSet(measurement[1], 0, 0, gx + (c2 * cvmGet(measurement_noise,0,0) * gaussian_random())); // x-value
        cvmSet(measurement[1], 1, 0, gy + (c2 * cvmGet(measurement_noise,1,1) * gaussian_random())); // y-value
        // measured point 3 // clutter noise
        cvmSet(measurement[2], 0, 0, width*uniform_random()); // x-value
        cvmSet(measurement[2], 1, 0, height*uniform_random()); // y-value       

        // count the number of measurements       
        int count = count_detections(); // number of detections
        cout << "# of measured points = " << count << endl;
   
        // get measurement           
        for (int index = 0; index < count; index++)
        {
            cout << "measurement #" << index << " : "
                << cvmGet(measurement[index],0,0) << "  " << cvmGet(measurement[index],1,0) << endl;
           
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(measurement[index],0,0)), cvRound(cvmGet(measurement[index],1,0))), 4, CV_RGB(200, 0, 255), 1);
           
        }
       
       
        double like[N]; // likelihood between measurement and prediction
        double like_sum = 0; // sum of likelihoods
       
        for (int n = 0; n < N; n++) // for "N" particles
        {
            // predict
            double prediction;
            for (int row = 0; row < 2; row++)
            {
               
                prediction = cvmGet(pb[n],row,0) + cvmGet(v[n],row,0) + cvmGet(transition_noise,row,row) * gaussian_random();
                cvmSet(pp[n], row, 0, prediction);
            }
           
            cvLine(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), cvPoint(cvRound(cvmGet(pb[n],0,0)), cvRound(cvmGet(pb[n],1,0))), CV_RGB(100,100,0), 1);           
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), 1, CV_RGB(255, 255, 0));

           
           
            // evaluate measurements
            double range = (double) width; // range to search measurements for each particle
//            cout << "range of distances = " << range << endl;
            int mselected;
            for (int index = 0; index < count; index++)
            {
                double d = distance(measurement[index], pp[n]);
               
                if ( d < range )
                {
                    range = d;
                    mselected = index; // selected measurement
                }
            }
///            cout << "selected measurement # = " << mselected << endl;
            like[n] = likelihood(measurement[mselected], measurement_noise, pp[n]);   
       
///            cout << "likelihood of #" << n << " = " << like[n] << endl;
           
            like_sum += like[n];
        }
       
///        cout << "sum of likelihoods = " << like_sum << endl;
       
        // estimate states       
        double state_x = 0.0;
        double state_y = 0.0;
   
        // estimate the state with predicted particles
        for (int n = 0; n < N; n++) // for "N" particles
        {
            w[n] = like[n] / like_sum; // update normalized weights of particles           
///            cout << "w" << n << "= " << w[n] << "  ";               
            state_x += w[n] * cvmGet(pp[n], 0, 0); // x-value
            state_y += w[n] * cvmGet(pp[n], 1, 0); // y-value
        }
        cvmSet(state, 0, 0, state_x);
        cvmSet(state, 1, 0, state_y);       
       
        cout << endl << "* * * * * *" << endl;       
        cout << "estimation = " << cvmGet(state,0,0) << "  " << cvmGet(state,1,0) << endl;
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0))), 4, CV_RGB(0, 255, 200), 2);
   
        // update particles       
        cout << endl << "updating particles" << endl;
        double urn[N]; // uniform random number
        double a[N]; // portion between particles

        // define the cumulative portions of the particles; 0 < a[0] < a[1] < ... < a[N-1] = 1
        a[0] = w[0];
        for (int n = 1; n < N; n++)
        {
            a[n] = a[n - 1] + w[n];
//            cout << "a" << n << "= " << a[n] << "  ";           
        }
//        cout << "a" << N << "= " << a[N] << "  " << endl;           
       
        for (int n = 0; n < N; n++)
        {   
            // select a particle from the distribution
            urn[n] = uniform_random();
            int pselected;
            for (int k = 0; k < N; k++)
            {
                if (urn[n] < a[k] )
                {
                    pselected = k;
                    break;
                }
            }
///            cout << "particle " << n << " => " << pselected << "  ";       
            // retain the selection 
            cvmSet(pu[n], 0, 0, cvmGet(pp[pselected],0,0)); // x-value
            cvmSet(pu[n], 1, 0, cvmGet(pp[pselected],1,0)); // y-value
           
            cvSub(pp[pselected], pb[pselected], vu[n]); // pp - pb -> vu
   
        }
        // updated each particle and its velocity
        for (int n = 0; n < N; n++)
        {
            for (int row = 0; row < 2; row++)
            {
                cvmSet(pb[n], row, 0, cvmGet(pu[n],row,0));
                cvmSet(v[n], row , 0, cvmGet(vu[n],row,0));
            }
        }
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(groundtruth,0,0)), cvRound(cvmGet(groundtruth,1,0))), 4, cvScalarAll(255), 1);
       
        cout << endl << endl ;
       
        cvShowImage("ParticleFilter-2d", iplImg);
        cvWaitKey(1000);   
       
       
       
       
    } // for "t"
   
   
    cvWaitKey();   
   
    return 0;
}




Execution result:

Console window:



Lessons:
1. I learned that the key is to set the transition noise and the measurement noise (their covariances), as well as the initial position and state of each particle, appropriately. I also learned that this is called tuning.
1-1. In the system assumed in this code, whether tracking succeeds depends especially on how the initial velocities of the particles are given.
2. The true state is a nonlinear system undergoing uniformly accelerated motion, but here a linear model is applied that approximates the motion between frames as constant velocity.
2-1. So a Kalman filter could be applied to the same problem and the results compared. In that case a 3-dimensional Kalman gain has to be computed.
2-2. If the pink-highlighted part were rewritten as a nonlinear model before running the particle filter, wouldn't the result be better? Wouldn't tuning also become easier? (See the sketch after this list.)
3. Clean up the code. It is too messy.
4. But wait, why does it wander around completely and then suddenly catch up??? (results below)
white: groundtruth / pink: measurements / green: estimation



console:



// 2-D Particle filter algorithm exercise
// lym, VIP Lab, Sogang Univ.
// 2009-11-23
// ref. Probabilistic Robotics: 98p

#include <OpenCV/OpenCV.h> // matrix operations
#include <iostream>
#include <cstdlib> // RAND_MAX
#include <ctime> // time as a random seed
#include <cmath>
#include <algorithm>
using namespace std;

#define PI 3.14159265
#define N 100 //number of particles

int width = 400; // width of image window
int height = 400; // height of image window   
IplImage *iplImg = cvCreateImage(cvSize(width, height), 8, 3);

// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}

double normal_distribution(double mean, double standardDeviation, double state) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * (state - mean) * (state - mean) / variance ) / sqrt(2 * PI * variance);
}
////////////////////////////////////////////////////////////////////////////

// set groundtruth
void get_groundtruth (CvMat* groundtruth, double x, double y)
{
    cvmSet(groundtruth, 0, 0, x); // x-value
    cvmSet(groundtruth, 1, 0, y); // y-value
   
    cout << "groundtruth = " << cvmGet(groundtruth,0,0) << "  " << cvmGet(groundtruth,1,0) << endl;
    cvCircle(iplImg, cvPoint(cvRound(cvmGet(groundtruth,0,0)), cvRound(cvmGet(groundtruth,1,0))), 2, cvScalarAll(255), 2);   
}


// count the number of detections in measurement process
int count_detections (void)
{
    // set cases of measurement results
    double mtype[4];
    mtype [0] = 0.0;
    mtype [1] = 0.5;
    mtype [2] = mtype[1] + 0.3;
    mtype [3] = mtype[2] + 0.2;
    //    cout << "check measurement type3 = " << mtype[3] << endl; // just to check
   
    // select a measurement case
    double mrn = uniform_random();       
    int type = 1;
    for ( int k = 0; k < 3; k++ )
    {   
        if ( mrn < mtype[k] )
        {
            type = k;
            break;
        }
    }
    return type;
}

// get measurements
int get_measurement (CvMat* measurement[], CvMat* measurement_noise, double x, double y)
{
    // set measurement types
    double c1 = 1.0, c2 = 4.0;   
    // measured point 1
    cvmSet(measurement[0], 0, 0, x + (c1 * cvmGet(measurement_noise,0,0) * gaussian_random())); // x-value
    cvmSet(measurement[0], 1, 0, y + (c1 * cvmGet(measurement_noise,1,1) * gaussian_random())); // y-value
    // measured point 2
    cvmSet(measurement[1], 0, 0, x + (c2 * cvmGet(measurement_noise,0,0) * gaussian_random())); // x-value
    cvmSet(measurement[1], 1, 0, y + (c2 * cvmGet(measurement_noise,1,1) * gaussian_random())); // y-value
    // measured point 3 // clutter noise
    cvmSet(measurement[2], 0, 0, width*uniform_random()); // x-value
    cvmSet(measurement[2], 1, 0, height*uniform_random()); // y-value       

    // count the number of measured points   
    int number = count_detections(); // number of detections
    cout << "# of measured points = " << number << endl;

    // get measurement           
    for (int index = 0; index < number; index++)
    {
        cout << "measurement #" << index << " : "
        << cvmGet(measurement[index],0,0) << "  " << cvmGet(measurement[index],1,0) << endl;
       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(measurement[index],0,0)), cvRound(cvmGet(measurement[index],1,0))), 4, CV_RGB(255, 0, 255), 2);           
    }

    return number;
}


// distance between measurement and prediction
double distance(CvMat* measurement, CvMat* prediction)
{
    double distance2 = 0;
    double differance = 0;
    for (int u = 0; u < 2; u++)
    {
        differance = cvmGet(measurement,u,0) - cvmGet(prediction,u,0);
        distance2 += differance * differance;
    }
    return sqrt(distance2);
}


// likelihood based on multivariative normal distribution (ref. 15p eqn(2.4))
double likelihood(CvMat *mean, CvMat *covariance, CvMat *sample) {
   
    CvMat* diff = cvCreateMat(2, 1, CV_64FC1);
    cvSub(sample, mean, diff); // sample - mean -> diff
    CvMat* diff_t = cvCreateMat(1, 2, CV_64FC1);
    cvTranspose(diff, diff_t); // transpose(diff) -> diff_t
    CvMat* cov_inv = cvCreateMat(2, 2, CV_64FC1);
    cvInvert(covariance, cov_inv); // transpose(covariance) -> cov_inv
    CvMat* tmp = cvCreateMat(2, 1, CV_64FC1);
    CvMat* dist = cvCreateMat(1, 1, CV_64FC1);
    cvMatMul(cov_inv, diff, tmp); // cov_inv * diff -> tmp   
    cvMatMul(diff_t, tmp, dist); // diff_t * tmp -> dist
   
    double likeliness = exp( -0.5 * cvmGet(dist, 0, 0) );
    double bound = 0.0000001;
    if ( likeliness < bound )
    {
        likeliness = bound;
    }
    return likeliness;
    //    return exp( -0.5 * cvmGet(dist, 0, 0) );
    //    return max(0.0000001, exp(-0.5 * cvmGet(dist, 0, 0)));   
}


int main (int argc, char * const argv[]) {
   
    srand(time(NULL));

    // set the system   
    CvMat* state = cvCreateMat(2, 1, CV_64FC1);    // state of the system to be estimated
    CvMat* groundtruth = cvCreateMat(2, 1, CV_64FC1); // groundtruth of states   
    CvMat* measurement [3]; // measurement of states
    for (int k = 0; k < 3; k++) // 3 types of measurement
    {
        measurement[k] = cvCreateMat(2, 1, CV_64FC1);
    }   

    // declare particles
    CvMat* pb [N]; // estimated particles
    CvMat* pp [N]; // predicted particles
    CvMat* pu [N]; // temporary variables to update a particle
    CvMat* v[N]; // estimated velocity of each particle
    CvMat* vu[N]; // temporary variable to update the velocity
    double w[N]; // weight of each particle
    for (int n = 0; n < N; n++)
    {
        pb[n] = cvCreateMat(2, 1, CV_64FC1);
        pp[n] = cvCreateMat(2, 1, CV_64FC1);
        pu[n] = cvCreateMat(2, 1, CV_64FC1);   
        v[n] = cvCreateMat(2, 1, CV_64FC1);   
        vu[n] = cvCreateMat(2, 1, CV_64FC1);           
    }   
   
    // initialize the state and particles
    for (int n = 0; n < N; n++)
    {
        w[n] = (double) 1 / (double) N; // equally weighted
        for (int row=0; row < 2; row++)
        {
            cvmSet(state, row, 0, 0.0);   
            cvmSet(pb[n], row, 0, 0.0);
            cvmSet(v[n], row, 0, 15 * uniform_random());
        }
    }
   
    // set the process noise
    // covariance of Gaussian noise to control
    CvMat* transition_noise = cvCreateMat(2, 2, CV_64FC1);
    cvmSet(transition_noise, 0, 0, 3); //set transition_noise(0,0) to 3.0
    cvmSet(transition_noise, 0, 1, 0.0);
    cvmSet(transition_noise, 1, 0, 0.0);
    cvmSet(transition_noise, 1, 1, 6);      
   
    // set the measurement noise
    // covariance of Gaussian noise to measurement
    CvMat* measurement_noise = cvCreateMat(2, 2, CV_64FC1);
    cvmSet(measurement_noise, 0, 0, 2); //set measurement_noise(0,0) to 2.0
    cvmSet(measurement_noise, 0, 1, 0.0);
    cvmSet(measurement_noise, 1, 0, 0.0);
    cvmSet(measurement_noise, 1, 1, 2);  
   
    // initialize the image window
    cvZero(iplImg);   
    cvNamedWindow("ParticleFilter-3d", 0);

    cout << "start filtering... " << endl << endl;
    int step = 30; //30; // timestep
   
    for (int t = 0; t < step; t++) // for "step" steps
    {
//        cvZero(iplImg);
        cout << "step " << t << endl;
       
        // get the groundtruth
        double gx = 10 * t;
        double gy = (-1.0 / width ) * (gx - width) * (gx - width) + height;
        get_groundtruth(groundtruth, gx, gy);
        // get measurements
        int count = get_measurement(measurement, measurement_noise, gx, gy);
       
        double like[N]; // likelihood between measurement and prediction
        double like_sum = 0; // sum of likelihoods
       
        for (int n = 0; n < N; n++) // for "N" particles
        {
            // predict
            double prediction;
            for (int row = 0; row < 2; row++)
            {
                prediction = cvmGet(pb[n],row,0) + cvmGet(v[n],row,0) + cvmGet(transition_noise,row,row) * gaussian_random();
                cvmSet(pp[n], row, 0, prediction);
            }
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), cvPoint(cvRound(cvmGet(pb[n],0,0)), cvRound(cvmGet(pb[n],1,0))), CV_RGB(100,100,0), 1);           
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), 1, CV_RGB(255, 255, 0));
           
            // evaluate measurements
            double range = (double) width; // range to search measurements for each particle
//            cout << "range of distances = " << range << endl;
            int mselected;
            for (int index = 0; index < count; index++)
            {
                double d = distance(measurement[index], pp[n]);
               
                if ( d < range )
                {
                    range = d;
                    mselected = index; // selected measurement
                }
            }
//            cout << "selected measurement # = " << mselected << endl;
            like[n] = likelihood(measurement[mselected], measurement_noise, pp[n]);   
//            cout << "likelihood of #" << n << " = " << like[n] << endl;           
            like_sum += like[n];
        }
//        cout << "sum of likelihoods = " << like_sum << endl;
       
        // estimate states       
        double state_x = 0.0;
        double state_y = 0.0;
        // estimate the state with predicted particles
        for (int n = 0; n < N; n++) // for "N" particles
        {
            w[n] = like[n] / like_sum; // update normalized weights of particles           
//            cout << "w" << n << "= " << w[n] << "  ";               
            state_x += w[n] * cvmGet(pp[n], 0, 0); // x-value
            state_y += w[n] * cvmGet(pp[n], 1, 0); // y-value
        }
        cvmSet(state, 0, 0, state_x);
        cvmSet(state, 1, 0, state_y);       
       
        cout << endl << "* * * * * *" << endl;       
        cout << "estimation = " << cvmGet(state,0,0) << "  " << cvmGet(state,1,0) << endl;
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0))), 4, CV_RGB(0, 255, 200), 2);
       
        // update particles       
        cout << endl << "updating particles" << endl;
        double a[N]; // portion between particles
       
        // define the cumulative portions of the particles; 0 < a[0] < a[1] < ... < a[N-1] = 1
        a[0] = w[0];
        for (int n = 1; n < N; n++)
        {
            a[n] = a[n - 1] + w[n];
//            cout << "a" << n << "= " << a[n] << "  ";           
        }
//        cout << "a" << N << "= " << a[N] << "  " << endl;           
       
        for (int n = 0; n < N; n++)
        {   
            // select a particle from the distribution
            int pselected = N - 1; // fall-back in case a[N-1] < 1 due to round-off
            double urn = uniform_random(); // draw once per new particle
            for (int k = 0; k < N; k++)
            {
                if ( urn < a[k] )
                {
                    pselected = k;
                    break;
                }
            }
//            cout << "p " << n << " => " << pselected << "  ";       

            // retain the selection 
            cvmSet(pu[n], 0, 0, cvmGet(pp[pselected],0,0)); // x-value
            cvmSet(pu[n], 1, 0, cvmGet(pp[pselected],1,0)); // y-value               
            cvSub(pp[pselected], pb[pselected], vu[n]); // pp - pb -> vu
        }
       
        // updated each particle and its velocity
        for (int n = 0; n < N; n++)
        {
            for (int row = 0; row < 2; row++)
            {
                cvmSet(pb[n], row, 0, cvmGet(pu[n],row,0));
                cvmSet(v[n], row , 0, cvmGet(vu[n],row,0));
            }
        }
        cout << endl << endl ;
       
        cvShowImage("ParticleFilter-3d", iplImg);
        cvWaitKey(1000);   
       
    } // for "t"
   
    cvWaitKey();   
   
    return 0;
}


posted by maetel
2009. 11. 8. 16:31 Computer Vision
Branislav Kisačanin & Vladimir Pavlović & Thomas S. Huang
Real-Time Vision for Human-Computer Interaction
(RTV4HCI)
Springer, 2005
(google book's overview)

2004 IEEE CVPR Workshop on RTV4HCI - Papers
http://rtv4hci.rutgers.edu/04/


Computer vision and pattern recognition continue to play a dominant role in the HCI realm. However, computer vision methods often fail to become pervasive in the field due to the lack of real-time, robust algorithms, and novel and convincing applications.

Keywords:
head and face modeling
map building
pervasive computing
real-time detection

Contents:
RTV4HCI: A Historical Overview.
- Real-Time Algorithms: From Signal Processing to Computer Vision.
- Recognition of Isolated Fingerspelling Gestures Using Depth Edges.
- Appearance-Based Real-Time Understanding of Gestures Using Projected Euler Angles.
- Flocks of Features for Tracking Articulated Objects.
- Static Hand Posture Recognition Based on Okapi-Chamfer Matching.
- Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features.
- Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters.
- A Real-Time Vision Interface Based on Gaze Detection -- EyeKeys.
- Map Building from Human-Computer Interactions.
- Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures.
- Epipolar Constrained User Pushbutton Selection in Projected Interfaces.
- Vision-Based HCI Applications.
- The Office of the Past.
- MPEG-4 Face and Body Animation Coding Applied to HCI.
- Multimodal Human-Computer Interaction.
- Smart Camera Systems Technology Roadmap.
- Index.




RTV4HCI: A Historical Overview
Matthew Turk (mturk@cs.ucsb.edu)
University of California, Santa Barbara
http://www.stanford.edu/~mturk/
http://www.cs.ucsb.edu/~mturk/

The goal of research in real-time vision for human-computer interaction is to develop algorithms and systems that sense and perceive humans and human activity, in order to enable more natural, powerful, and effective computer interfaces.

Computers in the Human Interaction Loop (CHIL)

perceptual interfaces
multimodal interfaces
post-WIMP(windows, icons, menus, pointer) interfaces

implicit user awareness or explicit user control

The user interface
- the software and devices that implement a particular model (or set of models) of HCI

Computer vision technologies must ultimately deliver a better "user experience".

B Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Third Edition, Addison-Wesley, 1998.
: 1) time to learn 2) speed of performance 3) user error rates 4) retention over time 5) subjective satisfaction

- Presence and location (Face and body detection, head and body tracking)
- Identity (Face recognition, gait recognition)
- Expression (Facial feature tracking, expression modeling and analysis)
- Focus of attention (Head/face tracking, eye gaze tracking)
- Body posture and movement (Body modeling and tracking)
- Gesture (Gesture recognition, hand tracking)
- Activity (Analysis of body movement)

eg.
VIDEOPLACE (M W Krueger, Artificial Reality II, Addison-Wesley, 1991)
Magic Morphin Mirror / Mass Hallucinations (T Darrell et al., SIGGRAPH Visual Proc, 1997)

Principal Component Analysis (PCA)
Linear Discriminant Analysis (LDA)
Gabor Wavelet Networks (GWNs)
Active Appearance Models (AAMs)
Hidden Markov Models (HMMs)

Identix Inc.
Viisage Technology Inc.
Cognitec Systems


- MIT Medial Lab
ALIVE system (P Maes et al., The ALIVE system: wireless, full-body interaction with autonomous agents, ACM Multimedia Systems, 1996)
PFinder system (C R Wren et al., Pfinder: Real-time tracking of the human body, IEEE Trans PAMI, pp 780-785, 1997)
KidsRoom project (A Bobick et al., The KidsRoom: A perceptually-based interactive and immersive story environment, PRESENCE: Teleoperators and Virtual Environments, pp 367-391, 1999)




Flocks of Features for Tracking Articulated Objects
Mathias Kolsch (kolsch@nps.edu)
Computer Science Department, Naval Postgraduate School, Monterey
Matthew Turk (mturk@cs.ucsb.edu)
Computer Science Department, University of California, Santa Barbara




Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features
Guangqi Ye (grant@cs.jhu.edu), Jason J. Corso, Gregory D. Hager
Computational Interaction and Robotics Laboratory
The Johns Hopkins University



Map Building from Human-Computer Interactions
http://groups.csail.mit.edu/lbr/mars/pubs/pubs.html#publications
Artur M. Arsenio (arsenio@csail.mit.edu)
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology



Vision-Based HCI Applications
Eric Petajan (eric@f2f-inc.com)
face2face animation, inc.



The Office of the Past
Jiwon Kim (jwkim@cs.washington.edu), Steven M. Seitz (seitz@cs.washington.edu)
University of Washington
Maneesh Agrawala (maneesh@microsoft.com)
Microsoft Research
Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04) Volume 10 - Volume 10  Page: 157   Year of Publication: 2004
http://desktop.google.com
http://grail.cs.washington.edu/projects/office/
http://www.realvnc.com/



Smart Camera Systems Technology Roadmap
Bruce Flinchbaugh (b-flinchbaugh@ti.com)
Texas Instruments

posted by maetel
2009. 11. 5. 16:12 Computer Vision
Kalman filter practice coding

A simple 1-D example
// 1-D Kalman filter algorithm exercise
// VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <iostream>
using namespace std;

int main (int argc, char * const argv[]) {
   
    double groundtruth[] = {1.0, 2.0, 3.5, 5.0, 7.0, 8.0, 10.0};
    double measurement[] = {1.0, 2.1, 3.2, 5.3, 7.4, 8.1, 9.6};
    double transition_noise = 0.1; // covariance of Gaussian noise to control
    double measurement_noise = 0.3; // covariance of Gaussian noise to measurement
   
    double x = 0.0, v = 1.0;    double cov = 0.5;
   
    double x_p, c_p; // prediction of x and cov
    double gain; // Kalman gain
    double x_pre, m;
   
    for (int t=0; t<7; t++)
    {
        // prediction
        x_pre = x;
        x_p = x + v;
        c_p = cov + transition_noise;
        m = measurement[t];
        // update
        gain = c_p / (c_p + measurement_noise);
        x = x_p + gain * (m - x_p);
        cov = ( 1 - gain ) * c_p;
        v = x - x_pre;

        cout << t << endl;
        cout << "estimation  = " << x << endl;
        cout << "measurement = " << measurement[t] << endl;   
        cout << "groundtruth = " << groundtruth[t] << endl;
    }
    return 0;
}

Execution result:
0
estimation  = 1
measurement = 1
groundtruth = 1
1
estimation  = 2.05
measurement = 2.1
groundtruth = 2
2
estimation  = 3.14545
measurement = 3.2
groundtruth = 3.5
3
estimation  = 4.70763
measurement = 5.3
groundtruth = 5
4
estimation  = 6.76291
measurement = 7.4
groundtruth = 7
5
estimation  = 8.50584
measurement = 8.1
groundtruth = 8
6
estimation  = 9.9669
measurement = 9.6
groundtruth = 10
logout

[Process completed]


A 2-D exercise
// 2-D Kalman filter algorithm exercise
// lym, VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <OpenCV/OpenCV.h> // matrix operations

#include <iostream>
#include <iomanip>
using namespace std;

int main (int argc, char * const argv[]) {
    int step = 7;
   
    IplImage *iplImg = cvCreateImage(cvSize(150, 150), 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("Kalman-2d", 0);
   
    //ground truth of real states
    double groundtruth[] = {10.0, 20.0, 35, 50.0, 70.0, 80.0, 100.0, //x-value
                            10.0, 20.0, 40.0, 55, 65, 80.0, 90.0}; //y-value
    //measurement of observed states
    double measurement_set[] = {10.0, 21, 32, 53, 74, 81, 96,  //x-value
                            10.0, 19, 42, 56, 66, 78, 88};    //y-value
    //covariance of Gaussian noise to control
//    double transition_noise[] = { 0.1, 0.0, 
//                                  0.0, 0.1 }; 
    CvMat* transition_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(transition_noise, 0, 0, 0.1); //set transition_noise(0,0) to 0.1
    cvmSet(transition_noise, 0, 1, 0.0);
    cvmSet(transition_noise, 1, 0, 0.0);
    cvmSet(transition_noise, 1, 1, 0.1);    
    //covariance of Gaussian noise to measurement
//    double measurement_noise[] = { 0.3, 0.0, 
//                                   0.0, 0.2 };
    CvMat* measurement_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(measurement_noise, 0, 0, 0.3); //set measurement_noise(0,0) to 0.3
    cvmSet(measurement_noise, 0, 1, 0.0);
    cvmSet(measurement_noise, 1, 0, 0.0);
    cvmSet(measurement_noise, 1, 1, 0.2);        
   
    CvMat* state = cvCreateMat(2, 1, CV_64FC1);    //states to be estimated   
    CvMat* state_p = cvCreateMat(2, 1, CV_64FC1);  //states to be predicted
    CvMat* velocity = cvCreateMat(2, 1, CV_64FC1); //motion controls to change states
    CvMat* measurement = cvCreateMat(2, 1, CV_64FC1); //measurement of states
   
    CvMat* cov = cvCreateMat(2, 2, CV_64FC1);     //covariance to be updated
    CvMat* cov_p = cvCreateMat(2, 2, CV_64FC1); //covariance to be predicted
    CvMat* gain = cvCreateMat(2, 2, CV_64FC1);     //Kalman gain to be updated
   
    // temporary matrices to be used for estimation
    CvMat* Kalman = cvCreateMat(2, 2, CV_64FC1); //
    CvMat* invKalman = cvCreateMat(2, 2, CV_64FC1); //

    CvMat* I = cvCreateMat(2,2,CV_64FC1);
    cvSetIdentity(I); // does not seem to be working properly   
//  cvSetIdentity (I, cvRealScalar (1));   
    // check matrix
    for(int i=0; i<2; i++)
    {
        for(int j=0; j<2; j++)
        {
            cout << cvmGet(I, i, j) << "\t";           
        }
        cout << endl;
    }
 
    // set the initial state
    cvmSet(state, 0, 0, 0.0); //x-value //set state(0,0) to 0.0
    cvmSet(state, 1, 0, 0.0); //y-value //set state(1,0) to 0.0
    // set the initital covariance of state
    cvmSet(cov, 0, 0, 0.5); //set cov(0,0) to 0.5
    cvmSet(cov, 0, 1, 0.0); //set cov(0,1) to 0.0
    cvmSet(cov, 1, 0, 0.0); //set cov(1,0) to 0.0
    cvmSet(cov, 1, 1, 0.4); //set cov(1,1) to 0.4
    // set the initial control
    cvmSet(velocity, 0, 0, 10.0); //x-direction //set velocity(0,0) to 1.0
    cvmSet(velocity, 1, 0, 10.0); //y-direction //set velocity(0,0) to 1.0
   
    for (int t=0; t<step; t++)
    {
        // retain the current state
        CvMat* state_out = cvCreateMat(2, 1, CV_64FC1); // temporary vector   
        cvmSet(state_out, 0, 0, cvmGet(state,0,0)); 
        cvmSet(state_out, 1, 0, cvmGet(state,1,0));        
        // predict
        cvAdd(state, velocity, state_p); // state + velocity -> state_p
        cvAdd(cov, transition_noise, cov_p); // cov + transition_noise -> cov_p
        // measure
        cvmSet(measurement, 0, 0, measurement_set[t]); //x-value
        cvmSet(measurement, 1, 0, measurement_set[step+t]); //y-value
        // estimate Kalman gain
        cvAdd(cov_p, measurement_noise, Kalman); // cov_p + measure_noise -> Kalman
        cvInvert(Kalman, invKalman); // inv(Kalman) -> invKalman
        cvMatMul(cov_p, invKalman, gain); // cov_p * invKalman -> gain       
        // update the state
        CvMat* err = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvSub(measurement, state_p, err); // measurement - state_p -> err
        CvMat* adjust = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvMatMul(gain, err, adjust); // gain*err -> adjust
        cvAdd(state_p, adjust, state); // state_p + adjust -> state
        // update the covariance of states
        CvMat* cov_up = cvCreateMat(2, 2, CV_64FC1); // temporary matrix   
        cvSub(I, gain, cov_up); // I - gain -> cov_up       
        cvMatMul(cov_up, cov_p, cov); // cov_up *cov_p -> cov
        // update the control
        cvSub(state, state_out, velocity); // state - state_p -> velocity
       
        // result in colsole
        cout << "step " << t << endl;
        cout << "estimation  = " << cvmGet(state,0,0) << setw(10) << cvmGet(state,1,0) << endl;
        cout << "measurement = " << cvmGet(measurement,0,0) << setw(10) << cvmGet(measurement,1,0) << endl;   
        cout << "groundtruth = " << groundtruth[t] << setw(10) << groundtruth[t+step] << endl;
        // result in image
        cvCircle(iplImg, cvPoint(cvRound(groundtruth[t]), cvRound(groundtruth[t + step])), 3, cvScalarAll(255));
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(measurement,0,0)), cvRound(cvmGet(measurement,1,0))), 2, cvScalar(255, 0, 0));
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0))), 2, cvScalar(0, 0, 255));
       
        cvShowImage("Kalman-2d", iplImg);
        cvWaitKey(500);   
   
    }
    cvWaitKey();   
   
    return 0;
}


Execution result:


step 0
estimation  = 10        10
measurement = 10        10
groundtruth = 10        10
step 1
estimation  = 20.5   19.6263
measurement = 21        19
groundtruth = 20        20
step 2
estimation  = 31.4545   35.5006
measurement = 32        42
groundtruth = 35        40
step 3
estimation  = 47.0763   53.7411
measurement = 53        56
groundtruth = 50        55
step 4
estimation  = 67.6291   69.0154
measurement = 74        66
groundtruth = 70        65
step 5
estimation  = 85.0584   81.1424
measurement = 81        78
groundtruth = 80        80
step 6
estimation  = 99.669    90.634
measurement = 96        88
groundtruth = 100        90



posted by maetel
2009. 10. 29. 19:03 Computer Vision
M. Armstrong and A. Zisserman,
“Robust object tracking,”
in Proc 2nd Asian Conference on Computer Vision, 1995, vol. I.
Springer, 1996, pp. 58–62.


Abstract
We describe an object tracker robust to a number of ambient conditions which often severely degrade performance, for example partial occlusion. The robustness is achieved by describing the object as a set of related geometric primitives (lines, conics, etc.), and using redundant measurements to facilitate the detection of outliers. This improves the overall tracking performance. Results are given for frame rate tracking on image sequences.


posted by maetel
2009. 10. 27. 23:31 Computer Vision
R. L. Thompson, I. D. Reid, L. A. Munoz, and D. W. Murray,
“Providing synthetic views for teleoperation using visual pose tracking in multiple cameras,”
IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 31, no. 1, pp. 43–54, 2001.

Abstract - This paper describes a visual tool for teleoperative experimentation involving remote manipulation and contact tasks. Using modest hardware, it recovers in real-time the pose of moving polyhedral objects, and presents a synthetic view of the scene to the teleoperator using any chosen viewpoint and viewing direction. The method of line tracking introduced by Harris is extended to multiple calibrated cameras, and afforced by robust methods and iterative filtering. Experiments are reported which determine the static and dynamic performance of the vision system, and its use in teleoperation is illustrated in two experiments, a peg in hole manipulation task and an impact control task.


Line tracking 
http://en.wikipedia.org/wiki/Passive_radar#Line_tracking
The line-tracking step refers to the tracking of target returns from individual targets, over time, in the range-Doppler space produced by the cross-correlation processing. A standard Kalman filter is typically used. Most false alarms are rejected during this stage of the processing.


- Three difficulties using the Harris tracker
First it was found to be easily broken by occlusions and changing lighting. Robust methods to mitigate this problem have been investigated monocularly by Armstrong and Zisserman [20], [21]. Although this has a marked effect on tracking performance, the second problem found is that the accuracy of the pose recovered in a single camera was poor, with evident correlation between depth and rotation about axes parallel to the image plane. Maitland and Harris [22] had already noted as much when recovering the pose of a pointing device destined for neurosurgical application [23].
They reported much improved accuracy using two cameras; but the object was stationary, had an elaborate pattern drawn on it and was visible at all times to both cameras. The third difficulty, or rather uncertainty, was that the convergence properties and dynamic performances of the monocular and multicamera methods were largely unreported.


"Harris' RAPiD tracker included a constant velocity Kalman filter."


posted by maetel
2009. 10. 27. 14:40 Computer Vision
Harris' RAPiD
C. Harris and C. Stennett, “Rapid - a video rate object tracker,” in Proc 1st British Machine Vision Conference, Sep 1990, pp. 73–77.


ref.
C. Harris, “Tracking with rigid models,” in Active Vision, A. Blake and A. Yuille, Eds. MIT Press, 1992, pp. 59–73.

RAPID (Real-time Attitude and Position Determination) is a real-time model-based tracking algorithm for a known three dimensional object executing arbitrary motion, and viewed by a single video-camera. The 3D object model consists of selected control points on high contrast edges, which can be surface markings, folds or profile edges.
The use of either an alpha-beta tracker or a Kalman filter permits large object motion to be tracked and produces more stable tracking results. The RAPID tracker runs at video-rate on a standard minicomputer equipped with an image capture board.
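
A minimal sketch of the 1-D search idea as I read it (my own reconstruction in the OpenCV style used elsewhere on this blog, not code from the paper): from each projected control point, step outward along the edge normal and take the first strong edge pixel as the measurement, giving one signed 1-D residual per control point.

// reconstruction sketch (not from the paper): 1-D search along the edge normal
// from a predicted control point p in a binary edge map; returns the signed
// distance to the first edge pixel found, or +range if none is found
double searchAlongNormal(IplImage* edgeMap, CvPoint2D64f p, CvPoint2D64f normal, int range)
{
    for (int d = 0; d <= range; d++)                  // grow the search outward from the prediction
    {
        for (int sign = -1; sign <= 1; sign += 2)     // look on both sides of the predicted point
        {
            int x = cvRound(p.x + sign * d * normal.x);
            int y = cvRound(p.y + sign * d * normal.y);
            if (x < 0 || y < 0 || x >= edgeMap->width || y >= edgeMap->height)
                continue;
            if (CV_IMAGE_ELEM(edgeMap, unsigned char, y, x) > 250) // edge pixel (white)
                return (double)(sign * d);            // signed 1-D residual
        }
    }
    return (double) range;                            // nothing found within the search range
}

In RAPiD these per-point residuals are then stacked into a linearized least-squares problem for the small frame-to-frame pose change, which is what the alpha-beta or Kalman filtering then smooths over time.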

alpha-beta tracker
http://en.wikipedia.org/wiki/Alpha_beta_filter

Kalman filter
http://en.wikipedia.org/wiki/Kalman_filter




posted by maetel
2009. 10. 26. 21:35 Computer Vision

Avoiding moving outliers in visual SLAM by tracking moving objects


Wangsiripitak, S.   Murray, D.W.  
Dept. of Eng. Sci., Univ. of Oxford, Oxford, UK;

This paper appears in: Robotics and Automation, 2009. ICRA '09. IEEE International Conference on
Publication Date: 12-17 May 2009
On page(s): 375-380
ISSN: 1050-4729
ISBN: 978-1-4244-2788-8
INSPEC Accession Number: 10748966
Digital Object Identifier: 10.1109/ROBOT.2009.5152290
Current Version Published: 2009-07-06


http://www.robots.ox.ac.uk/~lav//Research/Projects/2009somkiat_slamobj/project.html

Abstract

parallel implementation of monoSLAM with a 3D object tracker
information to register objects to the map's frame
the recovered geometry

I. Introduction

approaches to handling movement in the environment
segmentation between static and moving features
outlying moving points

1) active search -> sparse maps
2) robust methods -> multifocal tensors
3-1) tracking known 3D objects in the scene
  -2) determining whether they are moving
  -3) using their convex hulls to mask out features

"Knowledge that they are occluded rather than unreliable avoids the need to invoke the somewhat cumbersome process of feature deletion, followed later perhaps by unnecessary reinitialization."

[15] H. Zhou and S. Sakane, “Localizing objects during robot SLAM in semi-dynamic environments,” in Proc of the 2008 IEEE/ASME Int Conf on Advanced Intelligent Mechatronics, 2008, pp. 595–601.

"[15] noted that movement is likely to associated with objects in the scene, and classified them according to the likelihood that they would move."

the use of 3D objects for reasoning about motion segmentation and occlusion

occlusion masks

II. Underlying Processes
A. Visual SLAM

Monocular visual SLAM - EKF

idempotent 멱등(冪等)
http://en.wikipedia.org/wiki/Idempotence
Idempotence describes the property of operations in mathematics and computer science that means that multiple applications of the operation do not change the result.
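
A quick illustration: clamping (or any projection) is idempotent, since applying it twice gives the same result as applying it once.

def clamp01(x):
    return min(max(x, 0.0), 1.0)

assert clamp01(clamp01(1.7)) == clamp01(1.7)   # both equal 1.0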

http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
http://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
http://en.wikipedia.org/wiki/Quaternion
http://en.wikipedia.org/wiki/Euler_Angles
Berthold K.P. Horn, "Some Notes on Unit Quaternions and Rotation"
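
A small sketch of the standard unit-quaternion to rotation-matrix conversion covered in those references, with the quaternion ordered (w, x, y, z) and assumed already normalized:

import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

print(quat_to_rotmat((1.0, 0.0, 0.0, 0.0)))   # identity quaternion -> identity rotation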

"Standard monocular SLAM takes no account of occlusion."

B. Object pose tracking

Harris' RAPiD
[17] C. Harris and C. Stennett, “Rapid - a video rate object tracker,” in Proc 1st British Machine Vision Conference, Sep 1990, pp. 73–77
[20] C. Harris, “Tracking with rigid models,” in Active Vision, A. Blake and A. Yuille, Eds. MIT Press, 1992, pp. 59–73.

"(RAPiD makes the assumption that the pose change required between current and new estimates is sufficiently small, first, to allow a linearization of the solution and, second, to make trivial the problem of inter-image correspondence.) The correspondences used are between predicted point to measured image edge, allowing search in 1D rather than 2D within the image. This makes very sparing use of image data — typically only several hundred pixels per image are addressed."

aperture problem
http://en.wikipedia.org/wiki/Motion_perception
http://focus.hms.harvard.edu/2001/Mar9_2001/research_briefs.html

[21] R. L. Thompson, I. D. Reid, L. A. Munoz, and D. W. Murray, “Providing synthetic views for teleoperation using visual pose tracking in multiple cameras,” IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 31, no. 1, pp. 43–54, 2001.
- "Three difficulties using the Harris tracker":
(1) First it was found to be easily broken by occlusions and changing lighting. Robust methods to mitigate this problem have been investigated monocularly by Armstrong and Zisserman. (2) Although this has a marked effect on tracking performance, the second problem found is that the accuracy of the pose recovered in a single camera was poor, with evident correlation between depth and rotation about axes parallel to the image plane. Maitland and Harris had already noted as much when recovering the pose of a pointing device destined for neurosurgical application. They reported much improved accuracy using two cameras; but the object was stationary, had an elaborate pattern drawn on it and was visible at all times to both cameras. (3) The third difficulty, or rather uncertainty, was that the convergence properties and dynamic performances of the monocular and multicamera methods were largely unreported.
(3) : little solution
(2) => [21] "recovered pose using 3 iterations of the pose update cycle per image"
(1) => [21], [22] : search -> matching -> weighting

[22] M. Armstrong and A. Zisserman, “Robust object tracking,” in Proc 2nd Asian Conference on Computer Vision, 1995, vol. I. Springer, 1996, pp. 58–62.

RANSAC
[23] M. Fischler and R. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, June 1981.
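
A bare-bones RANSAC sketch in the spirit of [23], here for 2D line fitting; the iteration count and inlier threshold are illustrative choices.

import numpy as np

def ransac_line(points, n_iters=100, thresh=1.0, rng=np.random.default_rng(0)):
    """Fit a line to points (N x 2) by random 2-point sampling; return the best inlier mask."""
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        n = np.array([-(q - p)[1], (q - p)[0]])      # line normal from the 2-point sample
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((points - p) @ n)              # point-to-line distances
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers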

Least median of squares as the underlying standard deviation is unknown
[24] P. J. Rousseeuw, “Least median of squares regression,” Journal of the American Statistical Association, vol. 79, no. 388, pp. 871–880, 1984.
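
A matching least-median-of-squares sketch [24]: instead of counting inliers against a fixed threshold, keep the sample whose fit minimizes the median squared residual, so no prior noise level is needed. Again, line fitting is used only as an example.

import numpy as np

def lmeds_line(points, n_iters=100, rng=np.random.default_rng(0)):
    """2D line fit by least median of squares over random 2-point samples."""
    best_med, best_model = np.inf, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        n = np.array([-(q - p)[1], (q - p)[0]])
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        med = np.median(((points - p) @ n) ** 2)     # median squared distance to the line
        if med < best_med:
            best_med, best_model = med, (p, n)
    return best_model, best_med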



III. MonoSLAM with Tracked Objects
A. Information from SLAM to the object tracker


B. Information from the object tracker to SLAM


"The convex hull is uniformly dilated by an amount that corresponds to the projection of the typical change in pose."




posted by maetel
2009. 10. 22. 16:53 Computer Vision
Probabilistic Robotics
Sebastian Thrun, Wolfram Burgard and Dieter Fox
MIT Press, September 2005



Preface     xvii    
Acknowledgments    xix
I    Basics    1
1    Introduction     3
2    Recursive State Estimation    13
3    Gaussian Filters    39
4    Nonparametric Filters    85
5    Robot Motion    117
6    Robot Perception    149
II    Localization    189
7    Mobile Robot Localization: Markov and Gaussian    191
8    Mobile Robot Localization: Grid And Monte Carlo    237
III    Mapping    279
9    Occupancy Grid Mapping    281
10    Simultaneous Localization and Mapping    309
11    The GraphSLAM Algorithm    337
12    The Sparse Extended Information Filter    385
13    The FastSLAM Algorithm    437
IV    Planning and Control    485
14    Markov Decision Processes    487
15    Partially Observable Markov Decision Processes    513
16    Approximate POMDP Techniques    547
17    Exploration    569    
Bibliography    607   
Index     639


Probabilistic robotics is a subfield of robotics concerned with perception and control.

Introduction

probabilistic robotics
: explicit representation of uncertainty using the calculus of probability theory

perception
action

Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
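
A toy discrete Bayes filter in that spirit: a belief over a few grid cells, pushed through a predict step (motion model) and a correct step (measurement likelihood). The motion and measurement models below are made up purely for illustration.

import numpy as np

belief = np.ones(5) / 5.0                         # uniform prior over 5 cells
motion = np.array([[0.8, 0.2, 0.0, 0.0, 0.0],     # row i: P(x_t = j | x_{t-1} = i)
                   [0.0, 0.8, 0.2, 0.0, 0.0],
                   [0.0, 0.0, 0.8, 0.2, 0.0],
                   [0.0, 0.0, 0.0, 0.8, 0.2],
                   [0.0, 0.0, 0.0, 0.0, 1.0]])
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])  # P(z | x) for one observation

belief = belief @ motion                          # prediction (total probability)
belief = belief * likelihood                      # correction (Bayes rule, unnormalized)
belief /= belief.sum()                            # normalization
print(belief)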





Bayes Filters are Familiar!
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)


Kalman filter

Gaussian filter

discrete Kalman filter


Kalman filter update in 1-D

correction

prediction



Kalman filter algorithm
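
A minimal 1-D version of that predict/correct cycle, just to make the two steps concrete; the control, measurement and noise values are made-up numbers, not an example from the book.

# 1-D Kalman filter: prediction then correction (illustrative values)
mu, sigma2 = 0.0, 1.0        # belief: mean and variance
u, q = 1.0, 0.25             # control input and process noise variance
z, r = 1.3, 0.5              # measurement and measurement noise variance

# prediction
mu_bar = mu + u
sigma2_bar = sigma2 + q

# correction
k = sigma2_bar / (sigma2_bar + r)     # Kalman gain
mu = mu_bar + k * (z - mu_bar)
sigma2 = (1.0 - k) * sigma2_bar
print(mu, sigma2)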


EKF = extended Kalman filter
: calculates a Gaussian approximation to the true belief.

Taylor series expansion
"Linearization approximates the nonlinear function g by a linear function that is tangent to g at the mean of the Gaussian."











SLAM





Techniques for Generating Consistent Maps
• Scan matching
• EKF SLAM
• Fast-SLAM
• Probabilistic mapping with a single map and a posterior about poses (Mapping + Localization)
• Graph-SLAM, SEIFs

Approximations for SLAM
• Local submaps
[Leonard et al.99, Bosse et al. 02, Newman et al. 03]
• Sparse links (correlations)
[Lu & Milios 97, Guivant & Nebot 01]
• Sparse extended information filters
[Frese et al. 01, Thrun et al. 02]
• Thin junction tree filters
[Paskin 03]
• Rao-Blackwellisation (FastSLAM)
[Murphy 99, Montemerlo et al. 02, Eliazar et al. 03, Haehnel et al. 03]

EKF-SLAM Summary
• Quadratic in the number of landmarks: O(n²)
• Convergence results for the linear case.
• Can diverge if nonlinearities are large!
• Have been applied successfully in large-scale environments.
• Approximations reduce the computational complexity.


ch8

e.g. Xavier - Localization in a topological map
ref.  Probabilistic Robot Navigation in Partially Observable Environments 
Reid Simmons and Sven Koenig
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '95), July, 1995, pp. 1080 - 1087.
posted by maetel
2009. 10. 21. 12:29 Computer Vision
Machine Learning Summer Schools
http://www.mlss.cc/
posted by maetel
2009. 10. 21. 11:53 Computer Vision
University of Cambridge: Augmented Maps

posted by maetel
2009. 10. 21. 11:31 Computer Vision
University of Cambridge: Natural Feature Tracking for Mobile Phones

Daniel Wagner, Gerhard Reitmayr, Alessandro Mulloni, Tom Drummond and Dieter Schmalstieg
Pose Tracking from Natural Features on Mobile Phones
In Proc. IEEE ISMAR'08, 2008, Cambridge, UK.

posted by maetel
2009. 10. 12. 22:24 @GSMC/이재준: Media Aesthetics
Aesthetic Computing, MIT Press

Chapter 3
A Forty-Year Perspective on Aesthetic Computing in the Leonardo Journal
Roger F. Malina





notes

1. Leonardo/ISAST is a professional organization that seeks to document and promote the work of artists whose work involves contemporary science and technology, and to stimulate collaboration among artists, scientists, and engineers. The Leonardo publications can be accessed at http://www.leonardo.info. These publications include the Leonardo journal, Leonardo Music Journal, the Leonardo Book Series, and the electronic publications LEA and Leonardo On Line.

2. See, for example, the New Media Dictionary project, at http://www.comm.uqam.ca/GRAM/Accueil.html. A number of researchers have been documenting the rapid mutation of terminology. No good comprehensive cross-linguistic thesauruses exist.

3. See http://portal.unesco.org/digiarts. The UNESCO DIGIARTS program supports a number of regional initiatives, as well as work in schools, through the young digital creators program.

4. See Intel Art and Entertainment Research Committee at http://www.intel.com/research/university/aim/arts.htm.



references

Ascott, R. 1968. "The Cybernetic Stance: My Process and Purpose." Leonardo 1(2): 105–12.

Ascott, R., and Loeffler, C. 1991. "Connectivity: Art and Interactive Telecommunications." Leonardo 24(2): 1–85.

Bolter, J., and Gromala, D. 2003. Windows and Mirrors. Cambridge, MA: MIT Press.

Fishwick, P., et al. 2003. "Aesthetic Computing Manifesto." Leonardo 36(4): 255.

Galloway, A. 2004. Protocol. Cambridge, MA: MIT Press.

Goldberg, K. 2000. The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet. Cambridge, MA: MIT Press.

Harris, C., ed. 1999. Art and Innovation. Cambridge, MA: MIT Press.

Malina, F., ed. 1979. Visual Art, Mathematics and Computers: Selections from the Journal Leonardo. Oxford, UK: Pergamon Press.

Mallary, R. 1969. "Notes on Jack Burnham's Concepts of a Software Exhibition." Leonardo 3(2): 189–90.

Malloy, J. 2003. Women, Art and Technology. Cambridge, MA: MIT Press.

Manovich, L. 2001. The Language of New Media. Cambridge, MA: MIT Press.

Mitchell, W., ed. 2003. Beyond Productivity: Information Technology, Innovation and Creativity. Washington, DC: National Academy Press.

Naimark, M. 2003. Truth, Beauty, Freedom, and Money: Technology-Based Art and the Dynamics of Sustainability. Accessed at http://www.artslab.net.

Rabinowitz, S., and Galloway, E. 1984. Café Manifesto. Accessed at http://main.ecafe.com.

Reichardt, J., ed. 1968. "Cybernetic Serendipity: The Computer and the Arts." Studio International special issue. London: Studio International.

Rheingold, H. 2002. Smart Mobs: The Next Social Revolution. Cambridge, MA: Perseus Publishing.

Snow, C. P. 1993. The Two Cultures. Cambridge: Cambridge University Press.

Wilson, S. 2002. Information Arts: Intersections of Art, Science, and Technology. Cambridge, MA: MIT Press.


posted by maetel