26 posts in the 'SLAM' category

  1. 2010.02.09 Sola & Monin & Devy & Lemaire, "Undelayed initialization in bearing only SLAM"
  2. 2010.01.25 Kragic & Vincze <Vision for Robotics>
  3. 2010.01.22 Z. Wang, S. Huang and G. Dissanayake "D-SLAM: A Decoupled Solution to Simultaneous Localization and Mapping"
  4. 2010.01.22 Paul Michael Newman "On the Structure and Solution of the Simultaneous Localisation and Map Building Problem"
  5. 2010.01.21 Randall C. Smith and Peter Cheeseman "On the representation and estimation of spatial uncertainty"
  6. 2010.01.15 Kalman filtering for SLAM practice
  7. 2010.01.14 RoSEC 2010 winter school
  8. 2009.12.02 Joan Solà - 6DOF SLAM toolbox for Matlab
  9. 2009.10.26 Somkiat Wangsiripitak & David W Murray <Avoiding moving outliers in visual SLAM by tracking moving objects>
  10. 2009.10.22 Sebastian Thrun & Wolfram Burgard & Dieter Fox <Probabilistic Robotics>
  11. 2009.09.16 Chekhlov et al < Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM>
  12. 2009.08.24 Andrew J. Davison, Ian Reid, Nicholas Molton & Olivier Stasse <MonoSLAM: Real-Time Single Camera SLAM>
  13. 2009.08.17 PTAM to be dissected on OS X
  14. 2009.08.05 PTAM test log on Mac OS X
  15. 2009.08.04 SLAM related generally
  16. 2009.07.23 Brian Williams, Georg Klein and Ian Reid <Real-Time SLAM Relocalisation>
  17. 2009.07.22 Durrant-Whyte & Bailey "Simultaneous localization and mapping"
  18. 2009.07.21 임현, 이영삼 <이동로봇의 동시간 위치인식 및 지도작성(SLAM)>
  19. 2009.07.15 Georg Klein & David Murrayt <Parallel Tracking and Mapping for Small AR Workspaces>
  20. 2009.04.09 Montemerlo & Thrun & Koller & Wegbreit <FastSLAM: A factored solution to the simultaneous localization and mapping problem>
  21. 2009.03.31 Montemerlo & Thrun <Simultaneous localization and mapping with unknown data association using FastSLAM>
  22. 2009.03.31 A. J. Davison <Real-time simultaneous localisation and mapping with a single camera>
  23. 2009.03.27 Ethan Eade & Tom Drummond <Scalable Monocular SLAM>
  24. 2009.03.27 people in SLAM
  25. 2009.03.26 Civera, Davison & Montiel <Inverse Depth Parametrization for Monocular SLAM>
2010. 2. 9. 17:50 Computer Vision

Undelayed initialization in bearing only SLAM


Sola, J.   Monin, A.   Devy, M.   Lemaire, T.  
CNRS, Toulouse, France;

This paper appears in: Intelligent Robots and Systems, 2005. (IROS 2005). 2005 IEEE/RSJ International Conference on
Publication Date: 2-6 Aug. 2005
On page(s): 2499- 2504
ISBN: 0-7803-8912-3
INSPEC Accession Number: 8750433
Digital Object Identifier: 10.1109/IROS.2005.1545392
Current Version Published: 2005-12-05


ref. http://homepages.laas.fr/jsola/JoanSola/eng/bearingonly.html




If, instead of the range-and-bearing sensors used in conventional SLAM (e.g. laser range scanners), we use a camera, which gives rich information about the scene, we lose one dimension (the depth, i.e. the distance to the recognized object) and the problem becomes bearing-only SLAM.

EKF requires Gaussian representations for all the involved random variables that form the map (the robot pose and all landmark's positions). Moreover, their variances need to be small to be able to approximate all the non linear functions with their linearized forms.

Since a landmark's position can only be determined once there is enough viewpoint difference between two input image frames to form a baseline, time is needed before the landmark can be initialized.
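A minimal triangulation sketch (my own note, not from the paper) of why the baseline matters: if the same landmark is seen from two camera positions separated by a baseline b, and the bearings measured from the baseline at the two cameras are \alpha and \beta, the law of sines gives the range from the first camera as

r_1 = \frac{b \, \sin \beta}{\sin(\alpha + \beta)}

With a near-zero baseline the two rays are almost parallel, so small errors in the measured bearings blow up the estimate of r_1; the filter therefore either has to wait until enough parallax has accumulated, or represent the unknown depth explicitly from the first observation, which is what undelayed initialization addresses.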

http://en.wikipedia.org/wiki/Structure_from_motion
  1. Extract features from images
  2. Find an initial solution for the structure of the scene and the motion of the cameras
  3. Extend the solution and optimise it
  4. Calibrate the cameras
  5. Find a dense representation of the scene
  6. Infer geometric, textural and reflective properties of the scene.

sequential probability ratio test
http://en.wikipedia.org/wiki/Sequential_probability_ratio_test
http://www.agrsci.dk/plb/bembi/africa/sampling/samp_spr.html
http://eom.springer.de/S/s130240.htm

EKF (extended Kalman filter) - inconsistency and divergence
GSF (Gaussian sum filter) - computation load
FIS (Federated Information Sharing)


posted by maetel
2010. 1. 25. 02:50 Computer Vision

Foundations and Trends® in
Robotics

Vol. 1, No. 1 (2010) 1–78
© 2009 D. Kragic and M. Vincze
DOI: 10.1561/2300000001

Vision for Robotics

Danica Kragic1 and Markus Vincze2
1 Centre for Autonomous Systems, Computational Vision and Active Perception Lab, School of Computer Science and Communication, KTH, Stockholm, 10044, Sweden, dani@kth.se
2 Vision for Robotics Lab, Automation and Control Institute, Technische Universitat Wien, Vienna, Austria, vincze@acin.tuwien.ac.at

SUGGESTED CITATION:
Danica Kragic and Markus Vincze (2010) “Vision for Robotics”,
Foundations and Trends® in Robotics: Vol. 1: No. 1, pp 1–78.
http://dx.doi.org/10.1561/2300000001


Abstract

Robot vision refers to the capability of a robot to visually perceive the environment and use this information for execution of various tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In the recent years, there are also examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond of using artificial landmarks and fiducial markers for the purpose of implementing vision-based control in robots. We discuss different application areas, both from the systems perspective and individual problems such as object tracking and recognition.


1 Introduction 2
1.1 Scope and Outline 4

2 Historical Perspective 7
2.1 Early Start and Industrial Applications 7
2.2 Biological Influences and Affordances 9
2.3 Vision Systems 12

3 What Works 17
3.1 Object Tracking and Pose Estimation 18
3.2 Visual Servoing–Arms and Platforms 27
3.3 Reconstruction, Localization, Navigation, and Visual SLAM 32
3.4 Object Recognition 35
3.5 Action Recognition, Detecting, and Tracking Humans 42
3.6 Search and Attention 44

4 Open Challenges 48
4.1 Shape and Structure for Object Detection 49
4.2 Object Categorization 52
4.3 Semantics and Symbol Grounding: From Robot Task to Grasping and HRI 54
4.4 Competitions and Benchmarking 56

5 Discussion and Conclusion 59

Acknowledgments 64
References 65


posted by maetel
2010. 1. 22. 00:20 Computer Vision
D-SLAM: A Decoupled Solution to Simultaneous Localization and Mapping  
Z. Wang, S. Huang and G. Dissanayake
ARC Centre of Excellence for Autonomous Systems (CAS), Faculty of Engineering, University of Technology, Sydney, Australia
International Journal of Robotics Research Volume 26 Issue 2 - Publication Date: 1 February 2007 (Special Issue on the Fifth International Conference on Field and Service Robotics, 2005)
http://dx.doi.org/10.1177/0278364906075173


posted by maetel
2010. 1. 22. 00:10 Computer Vision
On the Structure and Solution of the Simultaneous Localisation and Map Building Problem.
Paul Michael Newman.
1999. Ph. D. thesis, Australian Centre for Field Robotics - The University of Sydney


Source: http://cogvis.nada.kth.se/~hic/SLAM/

posted by maetel
2010. 1. 21. 23:39 Computer Vision
(Sola: "the first consistent SLAM algorithm")

Randall C. Smith and Peter Cheeseman. 1986. On the representation and estimation of spatial uncertainty. Int. J. Rob. Res. 5, 4 (December 1986), 56-68.
DOI=10.1177/027836498600500404 http://dx.doi.org/10.1177/027836498600500404


posted by maetel
2010. 1. 15. 11:55 Computer Vision
A simple Kalman filter example for 1-D SLAM




void cvGEMM(const CvArr* src1, const CvArr* src2, double alpha, const CvArr* src3, double beta, CvArr* dst, int tABC=0)

\texttt{dst} = \texttt{alpha} \, op(\texttt{src1}) \, op(\texttt{src2}) + \texttt{beta} \, op(\texttt{src3}) \quad \text{where $op(X)$ is $X$ or $X^T$}


#define cvMatMulAdd(src1, src2, src3, dst) cvGEMM(src1, src2, 1, src3, 1, dst, 0)
#define cvMatMul(src1, src2, dst) cvMatMulAdd(src1, src2, 0, dst)
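A minimal usage sketch (assuming the same legacy OpenCV C API header as the listings below), computing dst = 1*A*B + 1*C, which is exactly what cvMatMulAdd expands to:

#include <OpenCV/OpenCV.h> // legacy C API
#include <cstdio>

int main()
{
    // 2x2 matrices: dst = alpha*A*B + beta*C with alpha = beta = 1
    CvMat* A = cvCreateMat(2, 2, CV_64FC1);
    CvMat* B = cvCreateMat(2, 2, CV_64FC1);
    CvMat* C = cvCreateMat(2, 2, CV_64FC1);
    CvMat* dst = cvCreateMat(2, 2, CV_64FC1);
    cvSetIdentity(A, cvRealScalar(2.0)); // A = 2*I
    cvSetIdentity(B, cvRealScalar(1.0)); // B = I
    cvSetIdentity(C, cvRealScalar(3.0)); // C = 3*I
    cvGEMM(A, B, 1.0, C, 1.0, dst, 0);   // dst = A*B + C = 5*I
    printf("dst(0,0) = %f\n", cvmGet(dst, 0, 0)); // prints 5.000000
    cvReleaseMat(&A); cvReleaseMat(&B); cvReleaseMat(&C); cvReleaseMat(&dst);
    return 0;
}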




// 1-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-14
// ref. Probabilistic Robotics 42p

#include <OpenCV/OpenCV.h> // matrix operations (legacy C API)

#include <iostream>
#include <iomanip>
#include <cmath>
#include <ctime>   // time() for seeding rand()
using namespace std;

// NOTE: this listing uses the helper functions uniform_random(), gaussian_random()
// and displayMatrix(), which are defined in the 2-D example further below.

#define num_landmarks 10
#define num_dim (num_landmarks + 2)

#define step 100
#define width 1000
#define height 200

int main (int argc, char * const argv[]) {
   
    srand(time(NULL));
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[num_landmarks];//    = { 200, 600, 400 }; // x-position
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[n] = width * uniform_random();
    }
   
    // set the initial state
    double rob_pos = 25.0; // initial robot position
    double rob_vel = 10.0; // initial robot velocity
    // set the initial covariance of the state
    double rob_pos_cov = 0.01; // covariance of noise to robot position
    double rob_vel_cov = 0.01; // covariance of noise to robot velocity
    double obs_cov = 900; // covariance of noise in the measurement of landmarks
    double xGroundTruth, vGroundTruth;
    xGroundTruth = rob_pos;
    vGroundTruth = rob_vel;
   

   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-1d", 0);
   
    // H matrix
    int Hrow = num_landmarks;
    int Hcol = num_landmarks + 2;
    CvMat* H = cvCreateMat(Hrow, Hcol, CV_64FC1);
    cvZero(H); // initialize H matrix   
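    // each row of H encodes the measurement model z_n = m_n - x : the landmark position relative to the robot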
    // set H matrix
    for (int row = 0; row < Hrow; row++)
    {
        cvmSet(H, row, 0, -1.0);
        cvmSet(H, row, row+2, 1.0);
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix ; covariance of the measurement noise (added to H*sigma*Ht when computing the gain)
    CvMat* Q = cvCreateMat(Hrow, Hrow, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    int Grow = num_landmarks + 2;
    int Gcol = Grow;
    CvMat* G = cvCreateMat(Grow, Gcol, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous position
    cvmSet(G, 0, 1, 1.0); // velocity   
    for (int row = 1; row < Grow; row++)
    {
        cvmSet(G, row, row, 1.0); // constant velocity
    }
    displayMatrix(G, "G matrix");
   
    // R matrix ; covariance of the motion (process) noise added in the prediction step
    CvMat* R = cvCreateMat(Grow, Grow, CV_64FC1); // (num_landmarks+2) x (num_landmarks+2)
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_pos_cov);
    cvmSet(R, 1, 1, rob_vel_cov);   
    displayMatrix(R, "R matrix");   
   
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state to be predicted
    CvMat* u = cvCreateMat(1, 1, CV_64FC1); // control vector    
    cvmSet(u, 0, 0, 1.0); // set u(0,0) to 1.0, the constant velocity here
    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))   
    CvMat* obs = cvCreateMat(num_landmarks, 1, CV_64FC1); // observation for each landmark
   
    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_pos + sqrt(rob_pos_cov)*gaussian_random()); // set mu(0,0) to "rob_pos"
    cvmSet(mu, 1, 0, rob_vel + sqrt(rob_vel_cov)*gaussian_random()); // set mu(1,0) to "rob_vel"
    for(int n = 0; n < num_landmarks; n++)
    {
//        cvmSet(mu, n+2, 0, landmark[n] + sqrt(obs_cov)*gaussian_random());
        cvmSet(mu, n+2, 0, landmark[n]);       
    }   
    displayMatrix(mu, "mu vector");
   
    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1); // 5x5
    cvTranspose(G, Gt); // transpose(G) -> Gt   
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1); // 5x5 * 5x5
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 5x5
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 5xnum_landmarks
    cvTranspose(H, Ht); // transpose(H) -> Ht
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 5x5 * 5xnum_landmarks
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 5x5 * 5xnum_landmarks    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // num_landmarksx1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 5x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 5xnum_landmarks * num_landmarksx5
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 5x5 identity matrix
    cvSetIdentity(I);       
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 5x5

   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;       
        cvZero(iplImg);
   
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
//        displayMatrix(mu_p, "mu_p vector");   
       
        // predict the covariance of the state (ref. L3, KF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
//        displayMatrix(sigma_p, "sigma_p matrix");
       
        // estimate Kalman gain (ref. L4, KF algorithm, 42p)
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
    //    displayMatrix(HsigmaHtplusQ, "H*sigma*Ht + Q matrix");
       
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        displayMatrix(invGain, "invGain matrix");
       
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
    //    displayMatrix(K, "K matrix");       

        // measure
        xGroundTruth += vGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, vGroundTruth);
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(rob_ground, n + 2, 0, landmark[n]);
        }

        for(int n = 0; n < num_landmarks; n++)
        {
            double rn = sqrt(obs_cov) * gaussian_random();
            cvmSet(delta, n, 0, rn);
        }
    //    displayMatrix(delta, "delta vector; measurement noise");
        displayMatrix(rob_ground, "rob_ground");

        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // update the state with Kalman gain (ref. L5, KF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        // update the covariance of the state (ref. L6, KF algorithm, 42p)
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");

        // result in console
        cout << "landmarks  = " << landmark[0] << setw(10) << landmark[1] << setw(10) << landmark[2] << setw(10) << endl;
        cout << "robot position = " << cvmGet(mu, 0, 0) << endl;
//        cout << "measurement = " << cvmGet(z,0,0) << setw(10) << cvmGet(z,1,0) << setw(10) << cvmGet(z,2,0) << endl;   
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(obs, n, 0, cvmGet(mu,0,0) + cvmGet(z,n,0));
        }
        cout << "observation = " << cvmGet(obs,0,0) << setw(10) << cvmGet(obs,1,0) << setw(10) << cvmGet(obs,2,0) << endl;
        cout<< "estimation = " << cvmGet(mu,2,0) << setw(10) << cvmGet(mu,3,0) << setw(10) << cvmGet(mu,4,0) << endl;

        // result in image
        // ground truth of robot position       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(height/2)), 1, cvScalar(100, 0, 255));
        // robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(height/2)), 3, cvScalar(255, 0, 100));
        // uncertainty of robot position, purple line
        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
                       cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[index]), cvRound(height/2)), 3, cvScalarAll(255));
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,index,0)), cvRound(height/2)), 2, cvScalar(0, 200, 255));
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0)), cvRound(height/2)), 2, cvScalar(50, 255, 0));
            // uncertainty of estimation, green line
            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);

        }
   
        cvShowImage("SLAM-1d", iplImg);
        cvWaitKey(0);
       
    }
    cvWaitKey();   
   
    return 0;
}



console:



A simple Kalman filter example for 2-D SLAM

// 2-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-18
// ref. Probabilistic Robotics 42p

#include <OpenCV/OpenCV.h> // matrix operations (legacy C API)

#include <iostream>
#include <iomanip>
#include <cmath>   // sqrt(), log(), atan2()
#include <ctime>   // time() for seeding rand()
using namespace std;

#define num_landmarks 20
#define num_dim ( 2 * (num_landmarks + 2) )

#define step 200
#define width 800
#define height 600


// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
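// (Marsaglia's polar variant of the Box-Muller transform; produces standard normal samples in pairs)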
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}


void displayMatrix(CvMat *mat, char *title = NULL) {
    if(title) cout << title << endl;
    for(int iR = 0; iR < mat->rows; iR++) {
        for(int iC = 0; iC < mat->cols; iC++) {
            printf("%.2f ", cvmGet(mat, iR, iC));
        }
        printf("\n");
    }
    printf("\n");
    return;
}


void draw2DEllipseFromCovariance
(CvMat* cov, CvPoint* cnt, IplImage *iplImg, CvScalar* curveColor /*= cvScalarAll(255)*/, CvScalar* centerColor /*= cvScalarAll(128) */, int thickness /*= 1*/)
{
   
    if(NULL == cov || 2 != cov->rows || 2 != cov->cols) {
        printf("covariance matrix is not 2x2 !! \n");
        exit(0);
    }
    double eigenvalues[6], eigenvectors[36]; 
    float ev1, ev2, vx, vy, angle;
   
    CvSize axes;
    CvMat evals = cvMat(1, 2, CV_64F, eigenvalues), evecs = cvMat(2, 2, CV_64F, eigenvectors);
   
    cvSVD(cov, &evals, &evecs, 0, CV_SVD_MODIFY_A + CV_SVD_U_T ); 
   
    ev1 = cvmGet(&evals, 0, 0);        ev2 = cvmGet(&evals, 0, 1);
   
    if( ev1 < 0 && ev2 < 0 ) {
        ev1 = -ev1;
        ev2 = -ev2;
    }
    if( ev1 < ev2 ) {
        float tmp = ev1;
        ev1 = ev2;
        ev2 = tmp;
    }
    if( ev1 <= 0 || ev2 <= 0 ) {
        printf("COV Eigenvalue is negativ or zero(!)\n");
        exit(0);
    }
   
    // calc angle 
    angle = (float)(180 - atan2(eigenvectors[2], eigenvectors[3]) * 180 / CV_PI); 
   
    axes = cvSize(cvRound(sqrt(ev1)), cvRound(sqrt(ev2)));
    cvEllipse(iplImg, *cnt, axes, angle, 0, 360, *curveColor, thickness);
   
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y - 1), cvPoint(cnt->x + 2, cnt->y + 1), *centerColor, 1);
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y + 1), cvPoint(cnt->x + 2, cnt->y - 1), *centerColor, 1);
   
}


int main (int argc, char * const argv[]) {

    srand(time(NULL));
   
    // set the initial state
    double rob_x = width * 0.1; // robot's initial x-position
    double rob_y = height * 0.4; // robot's initial y-position   
    double rob_vx = 10.0; // robot's initial x-velocity
    double rob_vy = 10.0; // robot's initial y-velocity   
   
    // set the initial covariance of the state uncertainty
    double rob_px_cov = 0.01; // covariance of noise to robot's x-position
    double rob_py_cov = 0.01; // covariance of noise to robot's y-position   
    double rob_vx_cov = 0.01; // covariance of noise to robot's x-velocity
    double rob_vy_cov = 0.01; // covariance of noise to robot's y-velocity   
   
    // set the initial covariance of the measurement noise
    double obs_x_cov = 900; // covariance of noise in the x-measurement of landmarks
    double obs_y_cov = 900; // covariance of noise in the y-measurement of landmarks
   
    // ground truth of state
    double xGroundTruth = rob_x;
    double yGroundTruth = rob_y;
    double vxGroundTruth = rob_vx;
    double vyGroundTruth = rob_vy;
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[2*num_landmarks];  
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[2*n] = width * uniform_random();
        landmark[2*n+1] = height * uniform_random();
    }   
   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-2d");
   
    // H matrix, measurement matrix
    CvMat* H = cvCreateMat(2*num_landmarks, num_dim, CV_64FC1);
    cvZero(H); // initialize H matrix   
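    // each pair of rows encodes z_n = (m_n_x - x_rob, m_n_y - y_rob) : the landmark position relative to the robot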
    // set H matrix
    for (int r = 0; r < num_landmarks; r++)
    {
        cvmSet(H, 2*r, 0, -1.0); // robot's x-position
        cvmSet(H, 2*r, 2*r+4, 1.0); // landmark's x-position
        cvmSet(H, 2*r+1, 1, -1.0); // robot's y-position
        cvmSet(H, 2*r+1, 2*r+5, 1.0); // landmarks's y-position        
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix ; covariance of the measurement noise (added to H*sigma*Ht when computing the gain)
    CvMat* Q = cvCreateMat(2*num_landmarks, 2*num_landmarks, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_x_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    CvMat* G = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous x-position
    cvmSet(G, 0, 2, 1.0); // x-velocity
    cvmSet(G, 1, 1, 1.0); // previous y-position
    cvmSet(G, 1, 3, 1.0); // y-velocity   
    for (int row = 2; row < G->rows; row++)
    {
        cvmSet(G, row, row, 1.0); // constant velocity
    }
    displayMatrix(G, "G matrix");
   
    // R matrix ; covariance of the motion (process) noise added in the prediction step
    CvMat* R = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_px_cov);
    cvmSet(R, 1, 1, rob_py_cov);
    cvmSet(R, 2, 2, rob_vx_cov);
    cvmSet(R, 3, 3, rob_vy_cov);   
    displayMatrix(R, "R matrix");   
       
   
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // ground truth of state        
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be predicted

    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, 2*num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))  
    CvMat* obs = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // observation for each landmark

    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_x); // set mu(0,0) to "rob_x"
    cvmSet(mu, 1, 0, rob_y); // set mu(1,0) to "rob_y"
    cvmSet(mu, 2, 0, rob_vx); // set mu(2,0) to "rob_vx"
    cvmSet(mu, 3, 0, rob_vy); // set mu(3,0) to "rob_vy"
    for (int n = 0; n < 2*num_landmarks; n++)
    {
        cvmSet(mu, n+4, 0, landmark[n]); // set mu(4,0) to "landmark[0]", ...
    }
    displayMatrix(mu, "mu vector");
   
/*    // initialize "sigma" matrix
    cvmSet(sigma, 0, 0, rob_px_cov);
    cvmSet(sigma, 1, 1, rob_py_cov);
    cvmSet(sigma, 2, 2, rob_vx_cov);
    cvmSet(sigma, 3, 3, rob_vy_cov);
    for (int r = 4; r < sigma->rows; r=r*2)
    {
        cvmSet(sigma, r, r, obs_x_cov);
        cvmSet(sigma, r+1, r+1, obs_y_cov);        
    }
*/    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_x_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1);
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1);
    cvTranspose(G, Gt); // transpose(G) -> Gt
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1);
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 10x10
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 10x6
    cvTranspose(H, Ht); // transpose(H) -> Ht       
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 10x10 * 10x6
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 10x10 * 10x6    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // 6x10 * 10x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // 6x1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 10x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 10x6 * 6x10
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 10x10 identity matrix
    cvSetIdentity(I); // does not seem to be working properly      
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 10x10
   
    CvPoint trajectory[step];
    CvPoint robot_ground[step];
   
    int frame = int(0.9*step);
   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;      
        cvZero(iplImg);
       
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
       
        // predict the covariance of the state (ref. L3, EKF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
               
        // estimate Kalman gain (ref. L4, EKF algorithm, 42p)   
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
        displayMatrix(K, "K matrix");  
   
       
        // measure
        // set ground truths
        if ( xGroundTruth >= width || xGroundTruth <= 0)
        {
            vxGroundTruth = - vxGroundTruth;
        }   
        if ( yGroundTruth >= height || yGroundTruth <= 0 )
        {
            vyGroundTruth = - vyGroundTruth;
        }   
        xGroundTruth += vxGroundTruth;
        yGroundTruth += vyGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, yGroundTruth);
        cvmSet(rob_ground, 2, 0, vxGroundTruth);
        cvmSet(rob_ground, 3, 0, vyGroundTruth);
       
        robot_ground[t] = cvPoint(cvRound(xGroundTruth),cvRound(yGroundTruth)); 
       
        for (int dim = 0; dim < 2*num_landmarks; dim++)
        {
            cvmSet(rob_ground, dim+4, 0, landmark[dim]);
        }
        displayMatrix(rob_ground, "rob_ground");
        // set measurement noise
        for(int n = 0; n < num_landmarks; n++)
        {
            double rn_x = sqrt(obs_x_cov) * gaussian_random();
            double rn_y = sqrt(obs_y_cov) * gaussian_random();           
            cvmSet(delta, 2*n, 0, rn_x);
            cvmSet(delta, 2*n+1, 0, rn_y);
           
        }
//      displayMatrix(delta, "delta vector; measurement noise");
       
        // define z, measurement, vector
        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // observation relative to robot's position
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(obs, 2*n,   0, cvmGet(mu,0,0) + cvmGet(z,2*n,0));   // x in the world frame
            cvmSet(obs, 2*n+1, 0, cvmGet(mu,1,0) + cvmGet(z,2*n+1,0)); // y in the world frame
        }
       
        // update the state with Kalman gain (ref. L5, EKF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu (innovation is measured against the predicted state)
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        trajectory[t] = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
       
       
        // update the covariance of the state
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");
       
        // result in console
        cout << "robot position: " << "px = " << cvmGet(mu, 0, 0) << "  py = " << cvmGet(mu, 1, 0) << endl;
        for (int n = 0; n < num_landmarks; n++)
        {
            cout << setw(10) << "landmark" << n+1 << " (" << landmark[2*n] << ", " << landmark[2*n+1] << ") "
            << setw(10) << "observation" << n+1 << " (" << cvmGet(obs,2*n,0) << ", " << cvmGet(obs,2*n+1,0) << ") "
            << setw(10) << "estimation" << n+1 << " (" << cvmGet(mu,4+2*n,0) << ", " << cvmGet(mu,4+2*n+1,0) << ") " << endl;
        }       
       
       
        CvMat* local_uncertain = cvCreateMat(2, 2, CV_64FC1);
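        // note: local_uncertain and map_uncertain[] are re-created with cvCreateMat on every iteration;
        // strictly they should be created once before the loop and released at the end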
        CvMat* map_uncertain [num_landmarks];
        for (int n = 0; n < num_landmarks; n++)
        {
            map_uncertain [n] = cvCreateMat(2, 2, CV_64FC1);
        }
        cvmSet(local_uncertain, 0, 0, cvmGet(sigma,0,0));
        cvmSet(local_uncertain, 0, 1, cvmGet(sigma,0,1));
        cvmSet(local_uncertain, 1, 0, cvmGet(sigma,1,0));
        cvmSet(local_uncertain, 1, 1, cvmGet(sigma,1,1));
       
        displayMatrix(local_uncertain, "local_uncertain");       
       
        for (int n = 0; n < num_landmarks; n++)
        {
            cvmSet(map_uncertain[n], 0, 0, cvmGet(sigma, 4+2*n,   4+2*n));
            cvmSet(map_uncertain[n], 0, 1, cvmGet(sigma, 4+2*n,   4+2*n+1));
            cvmSet(map_uncertain[n], 1, 0, cvmGet(sigma, 4+2*n+1, 4+2*n));
            cvmSet(map_uncertain[n], 1, 1, cvmGet(sigma, 4+2*n+1, 4+2*n+1));

            displayMatrix(map_uncertain[n], "map_uncertain");
        } 
       
        // result in image
        // ground truth of robot position, red       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(cvmGet(rob_ground,1,0))), 2, cvScalar(60, 0, 255), 2);
        // estimated robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(cvmGet(mu,1,0))), 5, cvScalar(255, 0, 100), 2);
        // uncertainty of robot position, purple line
//        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
//               cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
   
        CvPoint local = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
        CvScalar local_center_color = cvScalar(255, 0, 100);
        CvScalar local_curve_color = cvScalarAll(128);
       
        draw2DEllipseFromCovariance(local_uncertain, &local, iplImg, &local_center_color, &local_curve_color, 1);
       
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[2*index]), cvRound(landmark[2*index+1])), 4, cvScalarAll(255), 2);
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,2*index,0)), cvRound(cvmGet(obs,2*index+1,0))), 4, cvScalar(0, 200, 255), 1);
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0))), 3, cvScalar(50, 255, 0), 1);
            // uncertainty of estimation, green line
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
//                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);
       
            CvPoint observed = cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0)));
            CvScalar observed_center_color = cvScalar(50, 255, 0);
            CvScalar observed_curve_color = cvScalar(50, 255, 0);
           
            draw2DEllipseFromCovariance(map_uncertain[index], &observed, iplImg, &observed_center_color, &observed_curve_color, 1);    
        }
       
        for( int p = 1; p <= t; p++ )
        {
            cvLine(iplImg, robot_ground[p-1], robot_ground[p], cvScalar(60, 0, 255), 1);           
            cvLine(iplImg, trajectory[p-1], trajectory[p], cvScalar(255, 0, 100), 1);
        }

        if ( t == frame )
        {
            cvSaveImage("2D SLAM test.bmp", iplImg);
        }
       
        cvShowImage("SLAM-2d", iplImg);
        cvWaitKey(100);   
       
    }
    cout << endl << endl << "process finished" << endl;
    cvWaitKey();   
   
    return 0;
}












posted by maetel
2010. 1. 14. 17:27 Footmarks
RoSEC international summer/winter school
Robotics-Specialized Education Consortium for Graduates sponsored by MKE

Hosted by the Robotics-Specialized Graduate School Consortium
2010 RoSEC International Winter School
Monday, January 11 to Saturday, January 16, 2010
Seminar Room 1 (Room 606), 6th floor, HIT (Hanyang Institute of Technology), Hanyang University



Robot mechanism
Byung-Ju Yi (Hanyang University, Korea)
Prof. Byung-Ju Yi, Human Robotics Lab, Hanyang University, bj@hanyang.ac.kr
- Classification of robotic mechanism and Design consideration of robotic mechanism
- Design Issue and application examples of master slave robotic system
- Trend of robotic mechanism research

Actuator and Practical PID Control
Youngjin Choi (Hanyang University, Korea)
Prof. Youngjin Choi, Humanoid Robotics Lab, Hanyang University, cyj@hanyang.ac.kr
- Operation Principle of DC/RC/Stepping Motors & Its Practice
- PID Control and Tuning
- Stability of PID Control and Application Examples

Coordination of Robots and Humans
Kazuhiro Kosuge (Tohoku University, Japan)
Prof. Kazuhiro Kosuge, System Robotics Lab, Tohoku University, Japan
- Robotics as systems integration
- Multiple Robots Coordination
- Human Robot Coordination and Interaction

Robot Control
Rolf Johansson (Lund University, Sweden)
Robotics Lab, Lund University, Sweden, Rolf.Johansson@control.lth.se
- Robot motion and force control
- Stability of motion
- Robot obstacle avoidance

Lecture from Industry or Government
(S. -R. Oh, KIST)

Special Talk from Government
(Y. J. Weon, MKE)

Mobile Robot Navigation
Jae-Bok Song (Korea University, Korea)
Prof. Jae-Bok Song, Intelligent Robotics Lab, Korea University, jbsong@korea.ac.kr
- Mapping
- Localization
- SLAM

3D Perception for Robot Vision
In Kyu Park (Inha University, Korea)
Prof. In Kyu Park, Image Media Lab, Inha University, pik@inha.ac.kr
- Camera Model and Calibration
- Shape from Stereo Views
- Shape from Multiple Views

Lecture from Industry or Government
(H. S. Kim, KITECH)

Roboid Studio
Kwang Hyun Park (Kwangwoon University, Korea)
Prof. Kwang Hyun Park, Dept. of Information and Control Engineering, Kwangwoon University, akaii@kw.ac.kr
- Robot Contents
- Roboid Framework
- Roboid Component

Software Framework for LEGO NXT
Sanghoon Lee (Hanyang University, Korea)
Prof. Sanghoon Lee, Robotics Lab, Hanyang University
- Development Environments for LEGO NXT
- Programming Issues for LEGO NXT under RPF of OPRoS
- Programming Issues for LEGO NXT under Roboid Framework

Lecture from Industry or Government
(Robomation/Mobiletalk/Robotis)

Robot Intelligence : From Reactive AI to Semantic AI
Il Hong Suh (Hanyang University, Korea)
Prof. Il Hong Suh, Robot Intelligence/Communication Lab, Hanyang University
- Issues in Robot Intelligence
- Behavior Control: From Reactivity to Proactivity
- Use of Semantics for Robot Intelligence

AI-Robotics
Henrik I. Christensen (Georgia Tech., USA)

- Semantic Mapping
- Physical Interaction with Robots
- Efficient object recognition for robots

Lecture from Industry or Government
(M. S. Kim, Director of CIR, 21C Frontier Program)

HRI
Dongsoo Kwon (KAIST, Korea)

- Introduction to human-robot interaction
- Perception technologies of HRI
- Cognitive and emotional interaction

Robot Swarm for Environmental Monitoring
Nak Young Chong (JAIST, Japan)

- Self-organizing Mobile Robot Swarms: Models
- Self-organizing Mobile Robot Swarms: Algorithms
- Self-organizing Mobile Robot Swarms: Implementation


posted by maetel
2009. 12. 2. 21:33 Computer Vision
Joan Solà

6DOF SLAM toolbox for Matlab http://homepages.laas.fr/jsola/JoanSola/eng/toolbox.html

References

[1] J. Civera, A.J. Davison, and J.M.M Montiel. Inverse depth parametrization for monocular SLAM. IEEE Trans. on Robotics, 24(5), 2008.

[2] J. Civera, Andrew Davison, and J. Montiel. Inverse Depth to Depth Conversion for Monocular SLAM. In IEEE Int. Conf. on Robotics and Automation, pages 2778 –2783, April 2007.

[3] A. J. Davison. Real-time simultaneous localisation and mapping with a single camera. In Int. Conf. on Computer Vision, volume 2, pages 1403–1410, Nice, October 2003.

[4] Andrew J. Davison. Active search for real-time vision. Int. Conf. on Computer Vision, 1:66–73, 2005.

[5] Andrew J. Davison, Ian D. Reid, Nicholas D. Molton, and Olivier Stasse. MonoSLAM: Real-time single camera SLAM. Trans. on Pattern Analysis and Machine Intelligence, 29(6):1052–1067, June 2007.

[6] Ethan Eade and Tom Drummond. Scalable monocular SLAM. IEEE Int. Conf. on Computer Vision and Pattern Recognition, 1:469–476, 2006.

[7] Thomas Lemaire and Simon Lacroix. Monocular-vision based SLAM using line segments. In IEEE Int. Conf. on Robotics and Automation, pages 2791–2796, Rome, Italy, 2007.

[8] Nicholas Molton, Andrew J. Davison, and Ian Reid. Locally planar patch features for real-time structure from motion. In British Machine Vision Conference, 2004.

[9] J. Montiel, J. Civera, and A. J. Davison. Unified inverse depth parametrization for monocular SLAM. In Robotics: Science and Systems, Philadelphia, USA, August 2006.

[10] L. M. Paz, P. Piniés, J. Tardós, and J. Neira. Large scale 6DOF SLAM with stereo-in-hand. IEEE Trans. on Robotics, 24(5), 2008.

[11] J. Solà, André Monin, Michel Devy, and T. Vidal-Calleja. Fusing monocular information in multi-camera SLAM. IEEE Trans. on Robotics, 24(5):958–968, 2008.

[12] Joan Solà. Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach. PhD thesis, Institut National Polytechnique de Toulouse, 2007.

[13] Joan Solà, André Monin, and Michel Devy. BiCamSLAM: Two times mono is more than stereo. In IEEE Int. Conf. on Robotics and Automation, pages 4795–4800, Rome, Italy, April 2007.

[14] Joan Solà, André Monin, Michel Devy, and Thomas Lemaire. Undelayed initialization in bearing only SLAM. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pages 2499–2504, Edmonton, Canada, 2005.

[15] Joan Solà, Teresa Vidal-Calleja, and Michel Devy. Undelayed initialization of line segments in monocular SLAM. In IEEE Int. Conf. on Intelligent Robots and Systems, Saint Louis, USA, 2009. To appear.



slamtb.m



Plucker line (HighLevel/userDataLin.m) http://en.wikipedia.org/wiki/Pl%C3%BCcker_coordinates http://www.cgafaq.info/wiki/Plucker_line_coordinates


direct observation model
http://vismod.media.mit.edu/tech-reports/TR-451/node8.html
inverse observation model
http://vismod.media.mit.edu/tech-reports/TR-451/node9.html
( source: MIT Media Laboratory's Vision and Modeling group )
posted by maetel
2009. 10. 26. 21:35 Computer Vision

Avoiding moving outliers in visual SLAM by tracking moving objects


Wangsiripitak, S.   Murray, D.W.  
Dept. of Eng. Sci., Univ. of Oxford, Oxford, UK;

This paper appears in: Robotics and Automation, 2009. ICRA '09. IEEE International Conference on
Publication Date: 12-17 May 2009
On page(s): 375-380
ISSN: 1050-4729
ISBN: 978-1-4244-2788-8
INSPEC Accession Number: 10748966
Digital Object Identifier: 10.1109/ROBOT.2009.5152290
Current Version Published: 2009-07-06


http://www.robots.ox.ac.uk/~lav//Research/Projects/2009somkiat_slamobj/project.html

Abstract

parallel implementation of monoSLAM with a 3D object tracker
information to register objects to the map's frame
the recovered geometry

I. Introduction

approaches to handling movement in the environment
segmentation between static and moving features
outlying moving points

1) active search -> sparse maps
2) robust methods -> multifocal tensors
3-1) tracking known 3D objects in the scene
  -2) determining whether they are moving
  -3) using their convex hulls to mask out features

"Knowledge that they are occluded rather than unreliable avoids the need to invoke the somewhat cumbersome process of feature deletion, followed later perhaps by unnecessary reinitialization."

[15] H. Zhou and S. Sakane, “Localizing objects during robot SLAM in semi-dynamic environments,” in Proc of the 2008 IEEE/ASME Int Conf on Advanced Intelligent Mechatronics, 2008, pp. 595–601.

"[15] noted that movement is likely to associated with objects in the scene, and classified them according to the likelihood that they would move."

the use of 3D objects for reasoning about motion segmentation and occlusion

occlusion masks

II. Underlying Processes
A. Visual SLAM

Monocular visual SLAM - EKF

idempotent 멱등(冪等)
http://en.wikipedia.org/wiki/Idempotence
Idempotence describes the property of operations in mathematics and computer science that means that multiple applications of the operation do not change the result.

http://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation
http://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles
http://en.wikipedia.org/wiki/Quaternion
http://en.wikipedia.org/wiki/Euler_Angles
Berthold K.P. Horn, "Some Notes on Unit Quaternions and Rotation"

"Standard monocular SLAM takes no account of occlusion."

B. Object pose tracking

Harris' RAPiD
[17] C. Harris and C. Stennett, “Rapid - a video rate object tracker,” in Proc 1st British Machine Vision Conference, Sep 1990, pp. 73–77
[20] C. Harris, “Tracking with rigid models,” in Active Vision, A. Blake and A. Yuille, Eds. MIT Press, 1992, pp. 59–73.

"(RAPiD makes the assumption that the pose change required between current and new estimates is sufficiently small, first, to allow a linearization of the solution and, second, to make trivial the problem of inter-image correspondence.) The correspondences used are between predicted point to measured image edge, allowing search in 1D rather than 2D within the image. This makes very sparing use of image data — typically only several hundred pixels per image are addressed."

aperture problem
http://en.wikipedia.org/wiki/Motion_perception
http://focus.hms.harvard.edu/2001/Mar9_2001/research_briefs.html

[21] R. L. Thompson, I. D. Reid, L. A. Munoz, and D. W. Murray, “Providing synthetic views for teleoperation using visual pose tracking in multiple cameras,” IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 31, no. 1, pp. 43–54, 2001.
- "Three difficulties using the Harris tracker":
(1) First it was found to be easily broken by occlusions and changing lighting. Robust methods to mitigate this problem have been investigated monocularly by Armstrong and Zisserman.
(2) Although this has a marked effect on tracking performance, the second problem found is that the accuracy of the pose recovered in a single camera was poor, with evident correlation between depth and rotation about axes parallel to the image plane. Maitland and Harris had already noted as much when recovering the pose of a pointing device destined for neurosurgical application. They reported much improved accuracy using two cameras; but the object was stationary, had an elaborate pattern drawn on it and was visible at all times to both cameras.
(3) The third difficulty, or rather uncertainty, was that the convergence properties and dynamic performances of the monocular and multicamera methods were largely unreported.
(3) : little solution
(2) => [21] "recovered pose using 3 iterations of the pose update cycle per image"
(1) => [21], [22] : search -> matching -> weighting

[22] M. Armstrong and A. Zisserman, “Robust object tracking,” in Proc 2nd Asian Conference on Computer Vision, 1995, vol. I. Springer, 1996, pp. 58–62.

RANSAC
[23] M. Fischler and R. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, June 1981.

Least median of squares as the underlying standard deviation is unknown
[24] P. J. Rousseeuw, “Least median of squares regression,” Journal of the American Statistical Association, vol. 79, no. 388, pp. 871–880, 1984.



III. MonoSLAM with Tracked Objects
A. Information from SLAM to the object tracker


B. Information from the object tracker to SLAM


"The convex hull is uniformly dilated by an amount that corresponds to the projection of the typical change in pose."




posted by maetel
2009. 10. 22. 16:53 Computer Vision
Probabilistic Robotics
Sebastian Thrun, Wolfram Burgard and Dieter Fox
MIT Press, September 2005



Preface     xvii    
Acknowledgments    xix
I    Basics    1
1    Introduction     3
2    Recursive State Estimation    13
3    Gaussian Filters    39
4    Nonparametric Filters    85
5    Robot Motion    117
6    Robot Perception    149
II    Localization    189
7    Mobile Robot Localization: Markov and Gaussian    191
8    Mobile Robot Localization: Grid And Monte Carlo    237
III    Mapping    279
9    Occupancy Grid Mapping    281
10    Simultaneous Localization and Mapping    309
11    The GraphSLAM Algorithm    337
12    The Sparse Extended Information Filter    385
13    The FastSLAM Algorithm    437
IV    Planning and Control    485
14    Markov Decision Processes    487
15    Partially Observable Markov Decision Processes    513
16    Approximate POMDP Techniques    547
17    Exploration    569    
Bibliography    607   
Index     639


Probabilistic robotics is a subfield of robotics concerned with perception and control.

Introduction

probabilistic robotics
: explicit representation of uncertainty using the calculus of probability theory

perception
action

Bayes filters are a probabilistic tool for estimating the state of dynamic systems.
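In the book's notation, the belief update that a Bayes filter computes at every step is

bel(x_t) = \eta \; p(z_t \mid x_t) \int p(x_t \mid u_t, x_{t-1}) \; bel(x_{t-1}) \; dx_{t-1}

where the integral is the prediction (control update) and the multiplication by p(z_t \mid x_t) is the measurement update.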





Bayes Filters are Familiar!
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)


Kalman filter

Gaussian filter

discrete Kalman filter


Kalman filter update in 1-D

correction

prediction



Kalman filter algorithm
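As a reference for the C++ examples earlier in this archive (their comments cite lines L2-L6 of the KF algorithm, 42p), a sketch of the linear Kalman filter in the book's notation:

\bar{\mu}_t = A_t \mu_{t-1} + B_t u_t
\bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
\mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)
\Sigma_t = (I - K_t C_t) \bar{\Sigma}_t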


EKF = extended Kalman filter
: calculates a Gaussian approximation to the true belief.

Taylor series expansion
"Linearization approximates the nonlinear function g by a linear function that is tangent to g at the mean of the Gaussian."











SLAM





Techniques for Generating Consistent Maps
• Scan matching
• EKF SLAM
• Fast-SLAM
• Probabilistic mapping with a single map and a posterior about poses Mapping + Localization
• Graph-SLAM, SEIFs

Approximations for SLAM
• Local submaps
[Leonard et al.99, Bosse et al. 02, Newman et al. 03]
• Sparse links (correlations)
[Lu & Milios 97, Guivant & Nebot 01]
• Sparse extended information filters
[Frese et al. 01, Thrun et al. 02]
• Thin junction tree filters
[Paskin 03]
• Rao-Blackwellisation (FastSLAM)
[Murphy 99, Montemerlo et al. 02, Eliazar et al. 03, Haehnel et al. 03]

EKF-SLAM Summary
• Quadratic in the number of landmarks: O(n^2)
• Convergence results for the linear case.
• Can diverge if nonlinearities are large!
• Have been applied successfully in large-scale environments.
• Approximations reduce the computational complexity.


ch8

eg. Xavier - Localization in a topological map
ref.  Probabilistic Robot Navigation in Partially Observable Environments 
Reid Simmons and Sven Koenig
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '95), July, 1995, pp. 1080 - 1087.
posted by maetel
2009. 9. 16. 22:04 Computer Vision
Denis Chekhlov, Andrew Gee, Andrew Calway, Walterio Mayol-Cuevas
Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM
http://dx.doi.org/10.1109/ISMAR.2007.4538840

demo: Ninja on A Plane: Discovering Planes in SLAM for AR

http://www.cs.bris.ac.uk/Publications/pub_master.jsp?id=2000745
posted by maetel
2009. 8. 24. 16:47 Computer Vision
Davison, A. J. and Molton, N. D. 2007.
MonoSLAM: Real-Time Single Camera SLAM.
IEEE Trans. Pattern Anal. Mach. Intell. 29, 6 (Jun. 2007), 1052-1067.
DOI= http://dx.doi.org/10.1109/TPAMI.2007.1049

posted by maetel
2009. 8. 17. 20:15 Computer Vision
Developed by the Active Vision Group at Oxford University:
PTAM (Parallel Tracking and Mapping for Small AR Workspaces)
Questions? E-mail ptam@robots.ox.ac.uk
README

After finishing the installation on a Mac mini (test log on mac), testing shows the performance is not good enough to use it as is.


0. Video Input
The software requires a video camera with a wide-angle lens, capable of 640x480x30Hz video capture and an appropriate driver installation (which is supported by libCVD.)




1. Camera Parameters
 
Running CameraCalibrator reads the calibrator_settings.cfg file.
This is where the gvars settings (for GVars3, the configuration library behind PTAM) are supposed to be set, but the file was shipped empty.

The camera parameters computed by CameraCalibrator are saved to the camera.cfg file.
Opening it after a run shows a line such as
Camera.Parameters=[ 3.02629 6.17916 0.524049 0.291111 2.1234 ]
exactly as shown in the CameraCalibrator window; this is what gets written when the save button is pressed.

Running PTAM reads the settings.cfg file. Opening it, there is a comment saying that gvars settings can be added here too, and the following statement loads the camera.cfg saved above:
exec camera.cfg
In other words, this is where the Camera.Parameters variable gets its value assigned.

To summarize:
calibrator_settings.cfg -> CameraCalibrator -> camera.cfg -> settings.cfg -> PTAM







fast feature detection
http://mi.eng.cam.ac.uk/~er258/work/fast.html

ref.
http://en.wikipedia.org/wiki/Feature_detection_%28computer_vision%29



main.cc
1) load the settings.cfg file: GUI.LoadFile("settings.cfg");

2) parse user input: GUI.StartParserThread();

3) run the System class (system.h): s.Run();

atexit
Set function to be executed on exit
The function pointed by the function pointer argument is called when the program terminates normally.
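A minimal sketch (the hook name on_exit_hook is mine):

#include <cstdlib>
#include <cstdio>

void on_exit_hook(void)
{
    std::printf("called on normal program termination\n");
}

int main()
{
    std::atexit(on_exit_hook); // register the hook; it runs when main returns or exit() is called
    std::printf("main body\n");
    return 0;                  // on_exit_hook() runs after this point
}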

try-catch construct
1) Deitel, p.823: "catch handler"
2) theuhm@naver: when an exception is thrown, every object inside the try block goes out of scope and can no longer be referenced, so while the exception is being handled you are fundamentally prevented from touching objects inside the try block that may have caused it; this makes for safer and cleaner exception handling. Also, if you design the exception class appropriately when throwing, the exception object can carry the way to handle the exception, which gives very clean code.
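A small sketch of the scoping behaviour described above (the Resource type is made up for illustration):

#include <iostream>
#include <stdexcept>

struct Resource {
    ~Resource() { std::cout << "Resource destroyed during stack unwinding" << std::endl; }
};

int main()
{
    try {
        Resource r;                          // local to the try block
        throw std::runtime_error("failure"); // r is destroyed before the handler runs
    }
    catch (const std::runtime_error& e) {    // the handler can no longer refer to r
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}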

set
Sets are a kind of associative containers that stores unique elements, and in which the elements themselves are the keys. Sets are typically implemented as binary search trees.
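A minimal sketch:

#include <set>
#include <iostream>

int main()
{
    std::set<int> s;  // elements are their own keys, kept unique and sorted
    s.insert(3);
    s.insert(1);
    s.insert(3);      // duplicate: ignored, size stays 2
    for (std::set<int>::iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it << ' ';             // prints: 1 3
    std::cout << std::endl;
    return 0;
}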

namespace

system.h
1) declares the classes that carry the core functionality of PTAM, including the class "System"
// Defines the System class
// This stores the main functional classes of the system
class ATANCamera;
class Map;
class MapMaker;
class Tracker;
class ARDriver;
class MapViewer;
class System;



system.cc



ATANCamera.h
FOV distortion model of Devernay and Faugeras


posted by maetel
2009. 8. 5. 14:36 Computer Vision
Developed by the Active Vision Group at Oxford University:
PTAM (Parallel Tracking and Mapping for Small AR Workspaces)
http://www.robots.ox.ac.uk/~gk/PTAM/

Questions? E-mail ptam@robots.ox.ac.uk


0. Check requirements

Checking the processor and graphics card as the readme file suggests:

Specs of the machine I'm installing on:
Model Name:    Mac mini
  Model Identifier:    Macmini3,1
  Processor Name:    Intel Core 2 Duo
  Processor Speed:    2 GHz
  Number Of Processors:    1
  Total Number Of Cores:    2
  L2 Cache:    3 MB
  Memory:    1 GB
  Bus Speed:    1.07 GHz
  Boot ROM Version:    MM31.0081.B00

Graphics card:
NVIDIA GeForce 9400

"Intel Core 2 Duo processors 2.4GHz+ are fine."이라고 했는데, 2.0이면 되지 않을까? 그래픽 카드는 동일한 것이니 문제 없고.


1. Check library dependencies

1. TooN - a header library for linear algebra
2. libCVD - a library for image handling, video capture and computer vision
3. Gvars3 - a run-time configuration/scripting library, this is a sub-project of libCVD.
None of the three are installed, so:

1-1. Download TooN

TooN (Tom's object oriented Numerics) is a C++ library for linear algebra (vector and matrix operations) developed by the Cambridge Machine Intelligence lab.

ref. TooN Documentation (<- better organized than the official home page.)

Download it with the following command:
%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/toon co TooN

Result:

Go into the generated TooN folder and run
%%% ./configure

Result:


1-1-1. To get a more stable(?) version

%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/toon co -D "Mon May 11 16:29:26 BST 2009" TooN

Result:


1-2. Download libCVD

libCVD (Cambridge Video Dynamics) is a C++ library for image handling and computer vision from the same lab.

ref. CVD documentation

%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/libcvd co -D "Mon May 11 16:29:26 BST 2009" libcvd

Result:



1-3. Download Gvars3

Gvars3 (configuration system library)

%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/libcvd co -D "Mon May 11 16:29:26 BST 2009" gvars3

Result:


2. Install the downloaded base libraries

2-1. Install TooN

2-1-1. Run the configure script

A configure script prepares the source code so that it can be compiled and run.

Go into the generated TooN folder and run
%%% ./configure

Result:

2-1-2. Install

(TooN is a collection of header files, so no compilation is needed.)

%%% sudo make install

Result:
mkdir -p //usr/local/include/TooN
cp *.h //usr/local/include/TooN
cp -r optimization //usr/local/include/TooN/
cp -r internal //usr/local/include/TooN/


2-2. Install libCVD

2-2-1. Run the configure script

Go into the generated libCVD folder and run
%%% export CXXFLAGS=-D_REENTRANT
%%% ./configure --without-ffmpeg

Result:

2-2-2. Generate the documentation (can probably be skipped)

Trying again with
%%% make docs

make: *** No rule to make target `docs'.  Stop.
Still doesn't seem to work... Ah, it's probably because doxygen was installed through MacPorts (the databases are said to differ).

2-2-3. Compile

%%% make

Result:


2-2-4. Install

%%% sudo make install

Result:


2-3. Install Gvars3

2-3-1. Run the configure script

Go into the Gvars3 folder and run
%%% ./configure --disable-widgets

Result:

2-3-2. Compile

%%% make

Result:

2-3-3. Install

%%% sudo make install

mkdir -p //usr/local/lib
cp libGVars3.a libGVars3_headless.a //usr/local/lib
mkdir -p //usr/local/lib
cp libGVars3-0.6.dylib //usr/local/lib
ln -fs  //usr/local/lib/libGVars3-0.6.dylib //usr/local/lib/libGVars3-0.dylib
ln -fs  //usr/local/lib/libGVars3-0.dylib //usr/local/lib/libGVars3.dylib
mkdir -p //usr/local/lib
cp libGVars3_headless-0.6.dylib //usr/local/lib
ln -fs  //usr/local/lib/libGVars3_headless-0.6.dylib //usr/local/lib/libGVars3_headless-0.dylib
ln -fs  //usr/local/lib/libGVars3_headless-0.dylib //usr/local/lib/libGVars3_headless.dylib
mkdir -p //usr/local/include
cp -r gvars3 //usr/local/include


2-4. Notes on compiling on OS X

ref. Compiling under UNIX
Porting UNIX/Linux Applications to Mac OS X: Compiling Your Code in Mac OS X



3. Compiling PTAM

3-1. Copy the build files for your platform into the PTAM source directory

In my case (OS X), I copied both files in PTAM/Build/OS X, Makefile and VideoSource_OSX.cc, into the PTAM folder.

3-2. Setting up the video source

The Makefile has to be edited so that the video input file matching your camera is compiled. On the Mac there is only one source file (apparently written for the Logitech Quickcam Pro 5000), so it can probably be left as is.

3-3. Adding a video source

Other video sources are available as classes in libCVD. If yours is not among them, you have to write a file named along the lines of VideoSource_XYZ.cc and add it yourself.

3-4. compile

Go into the PTAM folder and run
%% make

Output:
g++ -g -O3 main.cc -o main.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
g++ -g -O3 VideoSource_OSX.cc -o VideoSource_OSX.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
g++ -g -O3 GLWindow2.cc -o GLWindow2.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
In file included from OpenGL.h:20,
                 from GLWindow2.cc:1:
/usr/local/include/cvd/gl_helpers.h:38:19: error: GL/gl.h: No such file or directory
/usr/local/include/cvd/gl_helpers.h:39:20: error: GL/glu.h: No such file or directory
/usr/local/include/cvd/gl_helpers.h: In function 'void CVD::glPrintErrors()':
/usr/local/include/cvd/gl_helpers.h:569: error: 'gluGetString' was not declared in this scope
make: *** [GLWindow2.o] Error 1

This error message looks like the same kind of problem discussed at the following link:
http://en.allexperts.com/q/Unix-Linux-OS-1064/Compiling-OpenGL-unix-linux.htm


3-4-1. OpenGL on UNIX

PTAM uses OpenGL. Since OpenGL ships with the Mac I had not given this part any thought, and it is indeed there as a public system framework. But can a UNIX-style build access it? (Searching the web turns up no separate download link or installation method.)

An accurate diagnosis of the error message ->
philphys: OpenGL is certainly there; the error occurs because the locations of its header files and libraries are never specified. These are normally set in the Makefile, but judging from the output nothing is specified at all. The -I /MY_CUSTOM_INCLUDE_PATH/ part is where the header path goes, and the libraries are specified later at link time, but the build never even got that far.
In short, "the linker is not the problem; the compiler options need to point at the directory containing the OpenGL headers."

Looking into the Makefile in question,

edit the Makefile as follows (the include path at the end is the addition):
COMPILEFLAGS = -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT -I/usr/X11R6/include/

philphys: Under /usr/X11R6/include there is a GL folder containing all the required headers, which is why the code can find them explicitly as "GL/gl.h".

Come to think of it, the readme had already said as much:
Since the Linux code compiles directly against the nVidia driver's GL headers, use of a different GL driver may require some modifications to the code.

Compiling again, the output:

Installation complete!
The two executables PTAM and CameraCalibrator were produced.


3-5. About X11R6

X11R6 = X Window System Version 11 Release 6

Xwindow
X.org



4. camera calibration

Running the CameraCalibrator executable to try camera calibration, a GUI window appears but no input arrives from the connected webcam (Logitech QuickCam Pro 4000).

4-0. Symptoms

Opening the CameraCalibrator executable brings up a new terminal window like this:
Last login: Fri Aug  7 01:14:05 on ttys001
%% /Users/lym/PTAM/CameraCalibrator ; exit;
  Welcome to CameraCalibrator
  --------------------------------------
  Parallel tracking and mapping for Small AR workspaces
  Copyright (C) Isis Innovation Limited 2008

  Parsing calibrator_settings.cfg ....
! GUI_impl::Loadfile: Failed to load script file "calibrator_settings.cfg".
  VideoSource_OSX: Creating QTBuffer....
  IMPORTANT
  This will open a quicktime settings planel.
  You should use this settings dialog to turn the camera's
  sharpness to a minimum, or at least so small that no sharpening
  artefacts appear! In-camera sharpening will seriously degrade the
  performance of both the camera calibrator and the tracking system.

Then a GUI window named Video opens; pressing OK without changing any settings makes the terminal above continue with the following messages, after which the program quits on its own.
  .. created QTBuffer of size [640 480]
2009-08-07 01:20:57.231 CameraCalibrator[40836:10b] ***_NSAutoreleaseNoPool(): Object 0xf70e2c0 of class NSThread autoreleasedwith no pool in place - just leaking
Stack: (0x96827f0f 0x96734442 0x9673a1b4 0xbc2db7 0xbc7e9a 0xbc69d30xbcacbd 0xbca130 0x964879c9 0x90f8dfb8 0x90e69618 0x90e699840x964879c9 0x90f9037c 0x90e7249c 0x90e69984 0x964879c9 0x90f8ec800x90e55e05 0x90e5acd5 0x90e5530f 0x964879c9 0x94179eb9 0x282b48 0xd9f40xd6a6 0x2f16b 0x2fea4 0x26b6)
! Code for converting from format "Raw RGB data"
  not implemented yet, check VideoSource_OSX.cc.

logout

[Process completed]

So I have to go back to the problem of 3-3, setting up the video source.
That is, VideoSource_OSX.cc has to be modified, recompiled, and run again.

Other video source classes are available with libCVD. Finally, if a custom video source not supported by libCVD is required, the code for it will have to be put into some VideoSource_XYZ.cc file (the interface for this file is very simple.)

A lot of fumbling around...



4-1. Modifying VideoSource_OSX.cc

The modified VideoSource file
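(The full modified file is not reproduced in this log.) As I understand it, the core of the change is a branch that handles the "Raw RGB data" pixel format: the interleaved RGB buffer coming from the QuickTime grabber has to be copied into the greyscale and RGB images that VideoSource::GetAndFillFrameBWandRGB() is supposed to fill for the tracker. The helper below is only a sketch of that conversion; the function name ConvertRawRGB, the assumption of tightly packed R,G,B bytes, and the integer luminance weights are my own, not taken from the original source.

// Hedged sketch of a "Raw RGB data" conversion for VideoSource_OSX.cc.
// Assumes pBuffer holds irSize.x * irSize.y * 3 bytes, ordered R,G,B per pixel.
#include <cvd/image.h>
#include <cvd/byte.h>
#include <cvd/rgb.h>

static void ConvertRawRGB(const unsigned char *pBuffer,
                          CVD::ImageRef irSize,
                          CVD::Image<CVD::byte> &imBW,
                          CVD::Image<CVD::Rgb<CVD::byte> > &imRGB)
{
  imBW.resize(irSize);
  imRGB.resize(irSize);
  for(int y = 0; y < irSize.y; y++)
  {
    CVD::byte *pBW = imBW[y];                 // row pointer into the greyscale image
    CVD::Rgb<CVD::byte> *pRGB = imRGB[y];     // row pointer into the colour image
    const unsigned char *pSrc = pBuffer + y * irSize.x * 3;
    for(int x = 0; x < irSize.x; x++)
    {
      unsigned char r = pSrc[3*x + 0];
      unsigned char g = pSrc[3*x + 1];
      unsigned char b = pSrc[3*x + 2];
      pRGB[x].red = r;  pRGB[x].green = g;  pRGB[x].blue = b;
      // integer approximation of luminance for the tracker's greyscale image
      pBW[x] = (CVD::byte)((r*77 + g*150 + b*29) >> 8);
    }
  }
}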

Terminal window:
Welcome to CameraCalibrator
  --------------------------------------
  Parallel tracking and mapping for Small AR workspaces
  Copyright (C) Isis Innovation Limited 2008

  Parsing calibrator_settings.cfg ....
  VideoSource_OSX: Creating QTBuffer....
  IMPORTANT
  This will open a quicktime settings planel.
  You should use this settings dialog to turn the camera's
  sharpness to a minimum, or at least so small that no sharpening
  artefacts appear! In-camera sharpening will seriously degrade the
  performance of both the camera calibrator and the tracking system.
>   .. created QTBuffer of size [640 480]
2009-08-13 04:02:50.464 CameraCalibrator[6251:10b] *** _NSAutoreleaseNoPool(): Object 0x9df180 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x96670f4f 0x9657d432 0x965831a4 0xbc2db7 0xbc7e9a 0xbc69d3 0xbcacbd 0xbca130 0x924b09c9 0x958e8fb8 0x957c4618 0x957c4984 0x924b09c9 0x958eb37c 0x957cd49c 0x957c4984 0x924b09c9 0x958e9c80 0x957b0e05 0x957b5cd5 0x957b030f 0x924b09c9 0x90bd4eb9 0x282b48 0xd414 0xcfd6 0x2f06b 0x2fda4)



4-2. Running the Camera Calibrator


Camera calib is [ 1.51994 2.03006 0.499577 0.536311 -0.0005 ]
  Saving camera calib to camera.cfg...
  .. saved.



5. Running PTAM


  Welcome to PTAM
  ---------------
  Parallel tracking and mapping for Small AR workspaces
  Copyright (C) Isis Innovation Limited 2008

  Parsing settings.cfg ....
  VideoSource_OSX: Creating QTBuffer....
  IMPORTANT
  This will open a quicktime settings planel.
  You should use this settings dialog to turn the camera's
  sharpness to a minimum, or at least so small that no sharpening
  artefacts appear! In-camera sharpening will seriously degrade the
  performance of both the camera calibrator and the tracking system.
>   .. created QTBuffer of size [640 480]
2009-08-13 20:17:54.162 ptam[1374:10b] *** _NSAutoreleaseNoPool(): Object 0x8f5850 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x96670f4f 0x9657d432 0x965831a4 0xbb9db7 0xbbee9a 0xbbd9d3 0xbc1cbd 0xbc1130 0x924b09c9 0x958e8fb8 0x957c4618 0x957c4984 0x924b09c9 0x958eb37c 0x957cd49c 0x957c4984 0x924b09c9 0x958e9c80 0x957b0e05 0x957b5cd5 0x957b030f 0x924b09c9 0x90bd4eb9 0x282b48 0x6504 0x60a6 0x11af2 0x28da 0x2766)
  ARDriver: Creating FBO...  .. created FBO.
  MapMaker: made initial map with 135 points.
  MapMaker: made initial map with 227 points.


The software was developed with a Unibrain Fire-i colour camera, using a 2.1mm M12 (board-mount) wide-angle lens. It also runs well with a Logitech Quickcam Pro 5000 camera, modified to use the same 2.1mm M12 lens.

Connecting an iSight to the Mac mini with a Netmate 1394B 9P Bilingual to 6P cable works even better.



posted by maetel
2009. 8. 4. 23:07 Computer Vision
References on SLAM in general / the basics

Durrant-Whyte & Bailey "Simultaneous localization and mapping"
http://leeway.tistory.com/667


Søren Riisgaard and Morten Rufus Blas
SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping
http://leeway.tistory.com/688


Joan Solà Ortega (de l’Institut National Polytechnique de Toulouse, 2007)
Towards visual localization, mapping and moving objects tracking by a mobile robot: A geometric and probabilistic approach
ch3@ http://leeway.tistory.com/628


SLAM summer school
2009 Australian Centre for Field Robotics, University of Sydney
http://www.acfr.usyd.edu.au/education/summerschool.shtml
2006 Department of Engineering Science and Robotics Research Group, Oxford
http://www.robots.ox.ac.uk/~SSS06/Website/index.html
2004 Laboratory for Analysis and Architecture of Systems  (LAAS-CNRS) located in Toulouse
http://www.laas.fr/SLAM/
2002 Centre for Autonomous Systems
Numerical Analysis and Computer Science
Royal Institute of Technology
, Stockholm
http://www.cas.kth.se/SLAM/


http://www.doc.ic.ac.uk/%7Eajd/Scene/Release/monoslamtutorial.pdf
A SceneLib tutorial from the Active Vision Lab / Visual Information Processing (VIP) Research Group at Oxford University; it summarizes the basic concepts of SLAM using a monocular (single) camera.

posted by maetel
2009. 7. 23. 18:53 Computer Vision
Brian Williams, Georg Klein and Ian Reid
(Department of Engineering Science, University of Oxford, UK)
Real-Time SLAM Relocalisation
In Proceedings of the International Conference on Computer Vision, Rio de Janeiro, Brazil, 2007
demo 1
demo 2


• real-time, high-accuracy localisation and mapping during tracking
• real-time (re-)localisation when tracking fails
• on-line learning of image patch appearance so that no prior training or map structure is required and features are added and removed during operation.


Lepetit's image patch classifier (feature appearance learning)
=> integrating the classifier more closely into the process of map-building
(by using classification results to aid in the selection of new points to add to the map)


> recovery from tracking failure: local vs. global
local -  particle filter -> rich feature descriptor
global - proximity using previous key frames


- based on SceneLib (Extended Kalman Filter)
- rotational (and a degree of perspective) invariance via local patch warping
- assuming the patch is fronto-parallel when first seen
http://freshmeat.net/projects/scenelib/

active search

innovation covariance

joint compatibility test

randomized lists key-point recognition algorithm
1. randomized: (2^D  - 1) tests -> D tests
2. independent treatment of classes
3. binary leaf scores (2^D * C * N bits for all scores)
4. intensity offset
5. explicit noise handling

training the classifier

The RANSAC (Random Sample Consensus) Algorithm




ref.
Davison, A. J. and Molton, N. D. 2007.
MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 6 (Jun. 2007), 1052-1067. DOI= http://dx.doi.org/10.1109/TPAMI.2007.1049

Vision-based global localization and mapping for mobile robots
Se, S.   Lowe, D.G.   Little, J.J.   (MD Robotics, Brampton, Ont., Canada)

Lepetit, V. 2006.
Keypoint Recognition Using Randomized Trees. IEEE Trans. Pattern Anal. Mach. Intell. 28, 9 (Sep. 2006), 1465-1479. DOI= http://dx.doi.org/10.1109/TPAMI.2006.188

Lepetit, V., Lagger, P., and Fua, P. 2005.
Randomized Trees for Real-Time Keypoint Recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cvpr'05) - Volume 2 - Volume 02 (June 20 - 26, 2005). CVPR. IEEE Computer Society, Washington, DC, 775-781. DOI= http://dx.doi.org/10.1109/CVPR.2005.288

Fischler, M. A. and Bolles, R. C. 1981.
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 6 (Jun. 1981), 381-395. DOI= http://doi.acm.org/10.1145/358669.358692
posted by maetel
2009. 7. 22. 16:55 Computer Vision
Durrant-Whyte, H.; Bailey, T.
(Australian Centre for Field Robotics, Sydney Univ., NSW)
Simultaneous localization and mapping
Robotics & Automation Magazine, 2006

Simultaneous localization and mapping: part I
Robotics & Automation Magazine, IEEE, Volume: 13,  Issue: 2 (June 2006)

Simultaneous localization and mapping (SLAM): part II
Robotics & Automation Magazine, IEEE, Volume: 13,  Issue: 3
(Sept. 2006)


Table 1. Open-source SLAM software.
Kai Arras
The CAS Robot Navigation Toolbox, a MATLAB simulation toolbox for robot localization and mapping
http://www.cas.kth.se/toolbox/index.html

Tim Bailey
MATLAB simulators for EKF-SLAM, UKF-SLAM, and FastSLAM 1.0 and 2.0. http://www.acfr.usyd.edu.au/homepages/academic/tbailey/software/index.html

Mark Paskin
Java library with several SLAM variants, including Kalman filter, information filter, and thin junction tree forms. Includes a MATLAB interface.
http://www.stanford.edu/~paskin/slam/

Andrew Davison
Scene, a C++ library for map-building and localization. Facilitates real-time single camera SLAM.
http://www.doc.ic.ac.uk/~ajd/Scene/index.html

José Neira
MATLAB EKF-SLAM simulator that demonstrates joint compatibility branch-and-bound data association.
http://webdiis.unizar.es/~neira/software/slam/slamsim.htm

Dirk Hähnel
C language grid-based version of FastSLAM.
http://www.informatik.uni-freiburg.de/~haehnel/old/download.html

Various
MATLAB code from the 2002 SLAM summer school.
http://www.cas.kth.se/slam/toc.html

Table 2. Online datasets.
Eduardo Nebot
Numerous large-scale outdoor datasets, notably the popular Victoria Park data.
http://www.acfr.usyd.edu.au/homepages/academic/enebot/dataset.htm

Chieh-Chih Wang
Three large-scale outdoor datasets collected by the Navlab11 testbed.
http://www.cs.cmu.edu/~bobwang/datasets.html

Radish (The Robotics Data Set Repository)
Many and varied indoor datasets, including large-area data from the CSU Stanislaus Library, the Intel Research Lab in Seattle, the Edmonton Convention Centre, and more.
http://radish.sourceforge.net/

IJRR (The International Journal of Robotics Research)
IJRR maintains a Web page for each article, often containing data and video of results. A good paper example is by Bosse et al. [3], which has data from Killian Court at MIT.
http://www.ijrr.org/contents/23_12/abstract/1113.html


IEEE Robotics and Automation Society http://www.ieee-ras.org/
IEEE ICRA (International Conference on Robotics and Automation) http://www.icra2009.org/
http://icra2010.grasp.upenn.edu/

International Foundation of Robotics Research http://www.ifrr.org/
ISRR 2009 - 14th International Symposium on Robotics Research http://www.isrr2009.ethz.ch/

IROS 2009: The 2009 IEEE/RSJ International Conference on Intelligent RObots and Systems www.iros09.mtu.edu/
http://www.iros2010.org.tw/

ICARCV 2010 - The 11th International Conference on Control, Automation, Robotics and Vision
http://www.icarcv.org/2010/



History

- 1986, probabilistic SLAM problem (IEEE Robotics and Automation Conference)
Peter Cheeseman, Jim Crowley, Hugh Durrant-Whyte, Raja Chatila, Olivier Faugeras, and Randall Smith
> estimation-theoretic methods, consistent mapping

- consistent probabilistic mapping
Smith and Cheeseman [39] and Durrant-Whyte [17]
> statistical basis
"There must be a high degree of correlation between estimates of the location of different landmarks in a map"

- visual navigation & sonar-based navigation
Ayache and Faugeras [1],  Crowley [9] and Chatila and Laumond [6]
> Kalman filter-type algorithms

Smith et al. [40] "The estimations of the landmarks are all necessarily correlated with each other because of the common error in estimated vehicle location."

> joint state (of the vehicle pose and every landmark position) to be updated following each landmark observation & estimator (state vector)

- random-walk behavior with unbounded error growth (without knowledge of the convergence behavior of the map)

> single estimation problem: "The combined mapping and localization problem is convergent."
"The more the correlations between landmarks grew, the better the solution."

- 1995, coining of SLAM (a paper at the International Symposium on Robotics Research), also called CLM (concurrent mapping and localization)
Csorba [10], [11]. the Massachusetts Institute of Technology [29], Zaragoza [4], [5], the ACFR at Sydney [20], [45], and others [7], [13]
> computational efficiency, addressing in data association, loop closure

- 1999 ISRR, convergence between the Kalman-filter-based SLAM methods and the probabilistic localisation and mapping methods introduced by Thrun

- 2000 IEEE ICRA
> algorithmic complexity, data association, implementation



Formulation

SLAM = process by which a mobile robot can build a map of an environment and at the same time use this map to deduce its location
(In SLAM, both the trajectory of the platform and the location of all landmarks are estimated online without the need for any a priori knowledge of location.)

Probabilistic SLAM
The SLAM probability distribution = the joint posterior density of the landmark locations and vehicle state (at time k) given the recorded observations and control inputs up to and including time k together with the initial state of the vehicle

recursive solution
: observation model + motion (state transition) model == Bayes theorem ==> estimate of SLAM distribution

   motion model -> prediction (time update)
+ observation model -> correction (measurement update)
+ Markov process
=> map building problem + localization problem
: joint posterior density of the landmark locations and vehicle state
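Written out, the recursion has the familiar two-step Bayes form (notation follows the survey: x_k vehicle state, m map, u_k control input, z_k observation):

$$P(\mathbf{x}_k, m \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \int P(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbf{u}_k)\, P(\mathbf{x}_{k-1}, m \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k-1}, \mathbf{x}_0)\, d\mathbf{x}_{k-1}$$

$$P(\mathbf{x}_k, m \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \frac{P(\mathbf{z}_k \mid \mathbf{x}_k, m)\, P(\mathbf{x}_k, m \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0)}{P(\mathbf{z}_k \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k})}$$

The motion model P(x_k | x_{k-1}, u_k) drives the prediction; the observation model P(z_k | x_k, m) drives the correction.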

As the map is built, the location accuracy of the robot measured relative to the map is bounded only by the quality of the map and relative measurement sensor.
: Robot relative location accuracy becomes equal to the localization accuracy achievable with a given map.


Solutions

1)
state-space model + additive Gaussian noise
EKF = extended Kalman filter

2)
a set of samples of a more general non-Gaussian probability distribution to describe vehicle motion
Rao-Blackwellized particle filter or FastSLAM algorithm

3)
information-state form

ref. Sebastian Thrun, Yufeng Liu, Daphne Koller, Andrew Y. Ng, Zoubin Ghahramani, Hugh Durrant-Whyte
Simultaneous Localization and Mapping With Sparse Extended Information Filters



1) EKF-SLAM
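As a reminder (my summary, not quoted from the survey): with joint state x = (vehicle pose, landmark positions), motion model f, observation model h, their Jacobians F and H, and noise covariances Q and R, the EKF recursion is

$$\hat{\mathbf{x}}_{k|k-1} = f(\hat{\mathbf{x}}_{k-1|k-1}, \mathbf{u}_k), \qquad \mathbf{P}_{k|k-1} = \mathbf{F}_k \mathbf{P}_{k-1|k-1} \mathbf{F}_k^{\top} + \mathbf{Q}_k$$

$$\mathbf{K}_k = \mathbf{P}_{k|k-1} \mathbf{H}_k^{\top} \big( \mathbf{H}_k \mathbf{P}_{k|k-1} \mathbf{H}_k^{\top} + \mathbf{R}_k \big)^{-1}$$

$$\hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k \big( \mathbf{z}_k - h(\hat{\mathbf{x}}_{k|k-1}) \big), \qquad \mathbf{P}_{k|k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k)\, \mathbf{P}_{k|k-1}$$

The dense covariance P over the vehicle and all landmarks is what makes each update quadratic in the number of landmarks.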



2) Rao-Blackwellized filter




posted by maetel
2009. 7. 21. 16:16 Computer Vision
임현, 이영삼 (School of Electrical Engineering, Inha University)
Simultaneous Localization and Mapping (SLAM) for Mobile Robots
Journal of the Institute of Control, Robotics and Systems, Vol. 15, No. 2 (June 2009)
from kyu


> definition
mapping: converting the environment into information the robot can recognize
localization: estimating the robot's own position from that information

> issues
- uncertainty <= sensor
- data association: distilling two- or three-dimensional information out of high-dimensional sensor data and matching it consistently over time
- how to manage the observed feature data efficiently


> localization
: estimating the robot's own position from observations of landmarks whose positions are known in advance
: estimating the robot pose at every time k from the initial value x0, the control inputs up to time k-1, the observation vectors, and the landmarks with known positions
- the uncertainty of the pose estimate stems from sensor error.

> mapping
: modeling the environment the robot is in by accumulating observations given relative to a reference point
: estimating the set of landmarks from the poses, the observations, and the control inputs
- the inaccuracy of the map stems from sensor error.

> Simultaneous Localization and Mapping (SLAM)
: estimating the robot's position within the environment while building the map of that environment
: the joint probability of the landmark positions and the robot state vector xk at time k, given the landmark observation vectors, the initial value, and all applied control inputs
- recursive method + Bayes theorem
- observation model + motion model (state-space model, model of the robot's movement)
- the motion model means the state transition is a Markov process (the current state is described only by the previous state and the input vector, independently of the landmark set and the observations)
- prediction (time update) + correction (measurement update)
- the uncertainty originates from robot odometry and sensor errors.


conditional Bayes rule
http://en.wikipedia.org/wiki/Bayes%27_theorem
$$P(A \mid B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} = \frac{P(B \mid A \cap C)\, P(A \mid C)\, P(C)}{P(C)\, P(B \mid C)} = \frac{P(B \mid A \cap C)\, P(A \mid C)}{P(B \mid C)}$$

Markov process

total probability theorem: "law of alternatives"
http://en.wikipedia.org/wiki/Total_probability_theorem
$$\Pr(A)=\sum_{n} \Pr(A \cap B_n) = \sum_{n} \Pr(A \mid B_n)\,\Pr(B_n)$$

> Extended Kalman filter (EKF)


http://en.wikipedia.org/wiki/Ground_truth

posted by maetel
2009. 7. 15. 16:49 Computer Vision
Klein, G. and Murray, D. 2007.
Parallel Tracking and Mapping for Small AR Workspaces
In Proceedings of the 2007 6th IEEE and ACM international Symposium on Mixed and Augmented Reality - Volume 00 (November 13 - 16, 2007). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 1-10. DOI= http://dx.doi.org/10.1109/ISMAR.2007.4538852

Georg Klein
David Murray
Active Vision Laboratory, Department of Engineering Science, University of Oxford

Source Code / Usage Example


1. parallel threads of tracking and mapping
2. mapping from smaller keyframes: batch techniques (Bundle Adjustment)
3. Initializing the map from 5-point Algorithm
4. Initializing new points with epipolar search
5. mapping thousands of points


Joint Compatibility Branch and Bound (JCBB)
http://en.wikipedia.org/wiki/JCBB

RANdom SAmple Consensus (RANSAC)
http://en.wikipedia.org/wiki/RANSAC

coarse-to-fine approach

batch method
bundle adjustment
http://en.wikipedia.org/wiki/Bundle_adjustment

Structure-from-Motion (SfM)

five-point stereo
http://en.wikipedia.org/wiki/Eight-point_algorithm

5-point algorithm
http://portal.acm.org/citation.cfm?id=987623

Henrik Stewénius, Christopher Engels, David Nistér
Recent Developments on Direct Relative Orientation


epipolar feature search

intensity-patch descriptor

(feature-to-feature or camera-to-feature) correlation-based search

NCC (normalized cross correlation) search
http://en.wikipedia.org/wiki/Cross-correlation#Normalized_cross-correlation
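As a reminder (standard definition, not taken from the paper), the NCC score between a template T and an image patch I over a window is

$$\mathrm{NCC} = \frac{\sum_{x,y} \big(I(x,y) - \bar{I}\big)\big(T(x,y) - \bar{T}\big)}{\sqrt{\sum_{x,y} \big(I(x,y) - \bar{I}\big)^2 \; \sum_{x,y} \big(T(x,y) - \bar{T}\big)^2}}$$

which is insensitive to gain and offset changes in patch intensity.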


Unibrain Fire-i digital camera

http://en.wikipedia.org/wiki/YUV411

FAST-10 corner detection
http://wapedia.mobi/en/Corner_detection
http://en.wikipedia.org/wiki/Corner_detection

decaying velocity model

barrel radial distortion
http://en.wikipedia.org/wiki/Distortion_(optics)

Lie group SE(3)

affine warp
warping matrix <- (1) back-projecting unit pixel displacements in the source keyframe pyramid level onto the patch's plane and then (2) projecting these into the current (target) frame
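In matrix form (my transcription of the construction described above, with (u_s, v_s) pixel coordinates in the source keyframe level and (u_c, v_c) in the current frame):

$$\mathbf{A} = \begin{bmatrix} \partial u_c / \partial u_s & \partial u_c / \partial v_s \\ \partial v_c / \partial u_s & \partial v_c / \partial v_s \end{bmatrix}$$

where each column comes from back-projecting a unit pixel displacement onto the patch's plane and projecting it into the current frame.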

inverse compositional image alignment 

Tukey biweight objective function
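One common form of the Tukey biweight objective (a textbook definition, not quoted from the paper), for residual e and tuning constant c:

$$\rho(e) = \begin{cases} \dfrac{c^2}{6}\left[1 - \left(1 - (e/c)^2\right)^3\right], & |e| \le c \\[2mm] \dfrac{c^2}{6}, & |e| > c \end{cases}$$

so residuals beyond c contribute a constant cost and gross outliers stop pulling on the pose estimate.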

M-estimator
http://en.wikipedia.org/wiki/M-estimator
Zhengyou Zhang, M-estimators

Shi-Tomasi corner detector

http://en.wikipedia.org/wiki/Levenberg-Marquardt

cubic-cost matrix factorization
http://en.wikipedia.org/wiki/Cubic_function



posted by maetel
2009. 4. 9. 21:24 Computer Vision
M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit.
FastSLAM: A factored solution to the simultaneous localization and mapping problem.
In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002. AAAI.



posted by maetel
2009. 3. 31. 22:25 Computer Vision

Simultaneous localization and mapping with unknown data association using FastSLAM
Montemerlo, M.   Thrun, S.  
Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, USA


This paper appears in: Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on
Publication Date: 14-19 Sept. 2003
Volume: 2
On page(s): 1985 - 1991 vol.2
Number of Pages: 3 vol.lii+4450
ISSN: 1050-4729
ISBN: 0-7803-7736-2
INSPEC Accession Number:7877180
Digital Object Identifier: 10.1109/ROBOT.2003.1241885
Current Version Published: 2003-11-10


Michael Montemerlo @ Field Robotics Center, Carnegie Mellon University 
http://en.wikipedia.org/wiki/Michael_Montemerlo

Sebastian Thrun @ Stanford Artificial Intelligence Laboratory, Stanford University
http://en.wikipedia.org/wiki/Sebastian_Thrun

http://www.probabilistic-robotics.org/


FastSLAM
http://robots.stanford.edu/probabilistic-robotics/ppt/fastslam.ppt

Rao-Blackwellized Particle Filter
http://en.wikipedia.org/wiki/Particle_filter


I. Introduction



SLAM

mobile robotics

the problem of building a map of an unknown environment from a sequence of noisy landmark measurements obtained from a moving robot + a robot localization problem => SLAM

autonomous robots operating in environments where precise maps and positioning are not available


 

Extended Kalman Filter (EKF)
: used for incrementally estimating the joint posterior distribution over robot pose and landmark positions

limitations of EKF
1) Quadratic complexity (: sensor updates require time quadratic in the total number of landmarks in the map)
=> limiting the number of landmarks to only a few hundred (whereas natural environment models frequently contain millions of features)
2) Data association / correspondence (: mapping of observations to landmarks)
=> associating even a small number of observations with incorrect landmarks can cause the filter to diverge



 

FastSLAM decomposes the SLAM problem into a robot localization problem, and a collection of landmark estimation problems that are conditioned on the robot pose estimate.

ref. Montemerlo & Thrun & Koller & Wegbreit <FastSLAM: A factored solution to the simultaneous localization and mapping problem>   In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002. AAAI.

 

FastSLAM factors the SLAM posterior into a localization problem and K independent landmark estimation problems conditioned on the robot pose estimate.

> a modified particle filter to estimate the posterior over robot paths
> each particle possessing K independent Kalman filters that estimate the landmark locations conditioned on the particle's path
=> an instance of the Rao-Blackwellized particle filter
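The factorization itself (using the notation of the FastSLAM papers: s^t robot path, θ_k landmark k, z^t observations, u^t controls, n^t data associations):

$$p(s^t, \theta \mid z^t, u^t, n^t) = p(s^t \mid z^t, u^t, n^t)\, \prod_{k=1}^{K} p(\theta_k \mid s^t, z^t, u^t, n^t)$$

The first factor is carried by the particle filter over paths; each of the K factors on the right is a small independent EKF attached to every particle.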

Representing particles as binary trees of Kalman Filters
-> Incorporating observations into FastSLAM in O(MlogK) time  (M, # of particles; K, # of landmarks)

Each particle represents a different robot pose hypothesis.
=> Data association can be considered separately for every particle.
=> 1) The noise of robot motion does not affect the accuracy of data association.
2) Incorrectly associated particles will receive lower probabilities and will be removed in future resampling steps.


 

On real-world data with ambiguous data association
Adding extra odometric noise
( Odometry is the use of data from the movement of actuators to estimate change in position over time. )
Estimating an accurate map without any odometry in the environment in which the Kalman Filter inevitably diverges.
How to incorporate negative information resulting in a measurable increase in the accuracy of the resulting map



 

II. SLAM Problem Definition


probabilistic Markov chain
http://en.wikipedia.org/wiki/Markov_chain

robot's position & heading orientation, s

K landmarks' locations, θ

i) The robot's current pose is a probabilistic function of the robot control and the previous pose at time t.

ii) The sensor measurement, range and bearing to landmarks, is a probabilistic function of the robot's current pose and the landmark being observed at time t.

=> SLAM is the problem of determining the locations of all landmarks and robot poses from measurements and controls.
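In probabilistic form (my shorthand for the two statements above, with n_t indexing the landmark observed at time t):

$$s_t \sim p(s_t \mid u_t, s_{t-1}), \qquad z_t \sim p(z_t \mid s_t, \theta_{n_t}, n_t)$$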


III. Data Association


uncertainty in the SLAM posterior, mapping between observations and landmarks
=> ambiguity in data association

i) measurement noise: uncertain landmark positions
<= 1) measurement ambiguity
2) confusion between nearby landmarks

ii) motion noise: robot pose uncertainty after incorporating a control
=> 1) adding a large amount of error to the robot's pose
2) causing a filter to diverge


IV. FastSLAM with Known Data Association


dynamic Bayes network
http://en.wikipedia.org/wiki/Dynamic_Bayesian_network

conditional independence
: The problem of determining the landmark locations could be decoupled into K independent estimation problems, one for each landmark.


FastSLAM estimates the factored SLAM posterior using a modified particle filter, with K independent Kalman Filters for each particle to estimate the landmark positions conditioned on the hypothesized robot paths. The resulting algorithm is an instance of the Rao-Blackwellized particle filter.



A. Particle Filter Path Estimation

Monte Carlo Localization (MCL) algorithm
http://en.wikipedia.org/wiki/Monte_Carlo_localization
 
particle set, representing the posterior ("guess") of a robot path

proposal distribution of particle filtering
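In the simplest (FastSLAM 1.0) form with known data association, each particle m samples its new pose from the motion model and is weighted by the observation likelihood (my shorthand, not a quotation from the paper):

$$s_t^{[m]} \sim p\big(s_t \mid u_t, s_{t-1}^{[m]}\big), \qquad w_t^{[m]} \propto p\big(z_t \mid s_t^{[m]}, \theta_{n_t}^{[m]}, n_t\big)$$

after which the particle set is resampled in proportion to the weights.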








posted by maetel
2009. 3. 31. 21:10 Computer Vision

Real-time simultaneous localisation and mapping with a single camera

Davison, A.J.  
Dept. of Eng. Sci., Oxford Univ., UK;

This paper appears in: Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on
Publication Date: 13-16 Oct. 2003
On page(s): 1403-1410 vol.2
ISBN: 0-7695-1950-4
INSPEC Accession Number: 7971070
Digital Object Identifier: 10.1109/ICCV.2003.1238654
Current Version Published: 2008-04-03


 

posted by maetel
2009. 3. 27. 21:33 Computer Vision

Scalable Monocular SLAM
Eade, E.   Drummond, T.  
Cambridge University;

This paper appears in: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on
Publication Date: 17-22 June 2006
Volume: 1,  On page(s): 469- 476
ISSN: 1063-6919
ISBN: 0-7695-2597-0
Digital Object Identifier: 10.1109/CVPR.2006.263
Current Version Published: 2006-07-05

 
Ethan Eade & Tom Drummond
Machine Intelligence Laboratory
the Division of Information Engineering at Cambridge University Engineering Department




monocular SLAM
particle filter + top-down search => real-time, large number  of landmarks

the first to apply this FastSLAM-type particle filter to single-camera SLAM


1. Introduction


SLAM = Simultaneous Localization and Mapping
: process of causally estimating both egomotion and structure in an online system

 SLAM using visual data in computer vision

SFM (= structure from motion): reconstructing scene geometry
+ causal or recursive estimation techniques

perspective-projection cameras

filtering methods to allow indirect observation models

Kalman filtering framework

Extended Kalman filter = EKF (-> to linearize the observation and dynamics models of the system)

causal estimation with recursive algorithms (cp. estimation depending only on observations up to the current time)
=> online operation (cp. SFM on global nonlinear optimization)


Davision's SLAM with a single camera
> EKF estimation framework
> top-down Bayesian estimation approach searching for landmarks in image regions constrained by estimate uncertainty (instead of performing extensive bottom-up image processing and feature matching)
> Bayesian partial-initialization scheme for incorporating new landmarks
- cannot scale to large environment


EKF = the Extended Kalman filter
-  N*N covariance matrix for N landmarks
- updated with N*N computation cost

> SLAM system using a single camera as the only sensor
> frame-rate operation with many landmarks
> FastSLAM-style particle filter (the first use of such an approach in a monocular SLAM setting)
> top-down active search
> an efficient algorithm for discovering the depth of new landmarks that avoids linearization errors
> a novel method for using partially initialized landmarks to help constrain camera pose


FastSLAM
: based on the Rao-Blackwellized Particle Filter

2. Background

2.1 Scalable SLAM

> submap
bounded complexity -> bounded computation and space requirements

Montemerlo & Thrun
If the entire camera motion is known then the estimates of the positions of different landmarks become independent of each other.







Rao-Blackwellized Particle Filter



ZNCC = the Zero mean Normalized Cross-Correlation function


epipolar constraint

http://en.wikipedia.org/wiki/Epipolar_geometry
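For reference, the constraint itself: corresponding homogeneous image points x and x' in two views satisfy

$$\mathbf{x}'^{\top} \mathbf{F}\, \mathbf{x} = 0 \qquad \text{(for calibrated cameras, } \mathbf{x}'^{\top} \mathbf{E}\, \mathbf{x} = 0 \text{ with } \mathbf{E} = [\mathbf{t}]_{\times}\mathbf{R}\text{)}$$

so a candidate match for a feature only has to be searched for along a line in the second image, not over the whole image.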


posted by maetel
2009. 3. 27. 21:05 Computer Vision

Hugh F. Durrant-Whyte, Australian Centre for Field Robotics
http://en.wikipedia.org/wiki/Hugh_F._Durrant-Whyte

John J. Leonard, Center for Ocean Engineering, MIT

Sebastian Thrun, Stanford Artificial Intelligence Laboratory, Stanford University
http://en.wikipedia.org/wiki/Sebastian_Thrun

David Nistér, Center for Visualization and Virtual Environments, University of Kentucky

Ethan Eade, Machine Intelligence lab, Engineering Department, Cambridge University

Tom Drummond, Machine Intelligence Laboratory, Engineering Department, Cambridge University

Javier Civera, Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza

Andrew J. Davison, Reader in Robot Vision at the Department of Computing, Imperial College London

Jose Maria Martinez Montiel, Robotics and Real Time Group, Universidad de Zaragoza

Robert Castle, Active Vision Laboratory, Robotics Research Group, Oxford University

임현, Embedded Control System Lab, School of Electrical Engineering, Inha University

김정호, Robotics and Computer Vision Lab (권인소), KAIST

labs
 
Active Vision Goup, Robotics Research Group, Engineering Department, Oxford University

Computer Vision & Robotics Group, Machine Intelligence Laboratory, Department of Engineering, University of Cambridge

Image Information Processing Lab (홍기상), POSTECH

Intelligent Control and Systems Lab (김상우), POSTECH


posted by maetel
2009. 3. 26. 19:56 Computer Vision

Inverse Depth Parametrization for Monocular SLAM
Civera, J.   Davison, A.J.   Montiel, J. 


This paper appears in: Robotics, IEEE Transactions on
Publication Date: Oct. 2008
Volume: 24,  Issue: 5
On page(s): 932-945
ISSN: 1552-3098
INSPEC Accession Number: 10301459
Digital Object Identifier: 10.1109/TRO.2008.2003276
First Published: 2008-10-03
Current Version Published: 2008-10-31

Javier Civera, Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza

Andrew J. Davison, Reader in Robot Vision at the Department of Computing, Imperial College London

Jose Maria Martinez Montiel, Robotics and Real Time Group, Universidad de Zaragoza




monocular simultaneous localization and mapping  (SLAM)

representation of uncertainty

the standard extended Kalman filter (EKF)

direct parametrization of the inverse depth of features

feature initialization

camera motion estimates

6-D state vector --> converted to the Euclidean XYZ form

linearity index => automatic detection and conversion to maintain maximum efficiency



I. Introduction


monocular camera
: projective sensor measuring the beairng of image features

monocular (adj): with a single eye; using one eye or one lens

A stereo camera is a type of camera with two or more lenses. This allows the camera to simulate human binocular vision.

structure from motion = SFM
1) feature matching
2) global camera location & scene feature position estimates

sliding window processing

Sliding Window Protocol is a bi-directional data transmission protocol used in the data link layer (OSI model) as well as in TCP (transport layer of the OSI model). It is used to keep a record of the frame sequences sent and their respective acknowledgements received by both the users.

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images.

Odometry is the use of data from the movement of actuators to estimate change in position over time. Odometry is used by some robots, whether they be legged or wheeled, to estimate (not determine) their position relative to a starting location.

visual SLAM

probabilistic filtering approach

initializing uncertain depth estimates for distance features

Gaussian distributions implicit in the EKF

a new feature parametrization that is able to smoothly cope with initialization of features at all depth - even up to "infinity" - within the standard EKF framework: direct parametrization of inverse depth relative to the camera position from which a feature was first observed


A. Delayed and Undelayed Initialization

main map; main probabilistic state; main state vector

test for inclusion

delayed initialization
> treating newly detected features separately from the main map to reduce depth uncertainty before insertion into the full filter (with a standard XYZ representation)
- Features that retain low parallax over many frames (those very far from the camera or close to the motion epipole) are usually rejected completely because they never pass the test for inclusion
> (in 2-D and simulation) Initialization is delayed until the measurement equation is approximately Gaussian and the point can be safely triangulated.
> 3-D monocular vision with inertial sensing + auxiliary particle filter (in high frame rate sequence)

undelayed initialization
> While features with highly uncertain depths provide little information on camera translation, they are extremely useful as bearing references for orientation estimation.
: a multiple hypothesis scheme, initializing features at various depths and pruning those not reobserved in subsequent images
> Gaussian sum filter approximated by a federated information sharing method to keep the computational overhead low
-> to spread the Gaussian depth hypotheses along the ray according to inverse depth

Davison's particle method --> (Sola et al.) Gaussian sum filter --> (Civera et al.) new inverse depth scheme

 

A Gaussian sum is a more efficient representation than particles (efficient enough that the separate Gaussians can all be put into the main state vector), but not as efficient as the single Gaussian representation that the inverse depth parametrization allows.



B. Points at Infinity

efficient undelayed initialization + features at all depths (in outdoor scenes)


Point at infinity: a feature that exhibits no parallax during camera motion due to its extreme depth
-> not used for estimating camera translation, but for estimating rotation

The homogeneous coordinate systems of visual projective geometry used normally in SFM allow explicit representation of points at infinity, and they have proven to play an important role during offline structure and motion estimation.

sequential SLAM system

Montiel and Davison: In special case where all features are known to be infinite -- in very-large-scale outdoor scenes or when the camera rotates on a tripod -- SLAM in pure angular coordinates turns the camera into a real-time visual compass.


Our probabilistic SLAM algorithm must be able to represent the uncertainty in depth of seemingly infinite features. Observing no parallax for a feature after 10 units of camera translation does tell us something about its depth -- it gives a reliable lower bound, which depends on the amount of motion made by the camera (if the feature had been closer than this, we would have observed parallax).

The explicit consideration of uncertainty in the locations of points has not been previously required in offline computer vision algorithms, but is very important in a more difficult online case.



C. Inverse Depth Representation

There is a unified and straightforward parametrization for feature locations that can handle both initialization and standard tracking of both close and very distant features within the standard EKF framework.


standard tracking

An explicit parametrization of the inverse depth of a feature along a semi-infinite ray from the position from which it was first viewed allows a Gaussian distribution to cover uncertainty in depth that spans a depth range from nearby to infinity, and permits seamless crossing over to finite depth estimates of features that have been apparently infinite for long periods of time.

linearity index + inverse depth parametrization

The projective nature of a camera means that the image measurement process is nearly linear in this inverse depth coordinate.
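As I read the paper's parametrization, a feature first observed from camera position (x_i, y_i, z_i), along the ray given by azimuth-elevation angles (θ_i, φ_i), and with inverse depth ρ_i along that ray, is coded as a 6-D vector, and the corresponding Euclidean point is

$$\mathbf{y}_i = (x_i,\, y_i,\, z_i,\, \theta_i,\, \phi_i,\, \rho_i)^{\top}, \qquad \mathbf{X}_i = \begin{pmatrix} x_i \\ y_i \\ z_i \end{pmatrix} + \frac{1}{\rho_i}\, \mathbf{m}(\theta_i, \phi_i)$$

with m(θ_i, φ_i) the unit ray direction; ρ_i → 0 smoothly represents a point at infinity.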


Inverse depth appears in the relation between image disparity and point depth in stereo vision; it is interpreted as the parallax with respect to the plane at infinity. (Hartley and Zisserman)

Inverse depth is used to relate the motion field induced by scene points with the camera velocity in optical flow analysis. 

modified polar coordinates

target motion analysis = TMA

EKF-based sequential depth estimation from camera-known motion

multibaseline stereo

matching robustness for scene symmetries

sequential EKF process using inverse depth
( ref. Stochastic Approximation and Rate-Distortion Analysis for Robust Structure and Motion Estimation )

undelayed initialization for 2-D monocular SLAM 
( ref. A unified framework for nearby and distant landmarks in bearing-only SLAM )

FastSLAM-based system for monocular SLAM
( ref. Ethan Eade &  Tom Drummond,  Scalable Monocular SLAM )

special epipolar update step

FastSLAM

( ref. Civera, J.   Davison, A.J.   Montiel, J.M.M., Inverse Depth to Depth Conversion for Monocular SLAM 
J. Montiel and A. J. Davison “A visual compass based on SLAM,” )

loop-closing



II. State Vector Definition


handheld camera motion
> constant angular and linear velocity model

quaternion
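The camera state used in this family of papers (my summary; W is the world frame, C the camera frame) stacks position, orientation quaternion, and linear and angular velocity, and the constant velocity model applies zero-mean Gaussian impulses V, Ω over each step Δt:

$$\mathbf{x}_v = \begin{pmatrix} \mathbf{r}^{WC} \\ \mathbf{q}^{WC} \\ \mathbf{v}^{W} \\ \boldsymbol{\omega}^{C} \end{pmatrix}, \qquad \mathbf{f}_v = \begin{pmatrix} \mathbf{r}^{WC} + (\mathbf{v}^{W} + \mathbf{V}^{W})\,\Delta t \\ \mathbf{q}^{WC} \times \mathbf{q}\big((\boldsymbol{\omega}^{C} + \boldsymbol{\Omega}^{C})\,\Delta t\big) \\ \mathbf{v}^{W} + \mathbf{V}^{W} \\ \boldsymbol{\omega}^{C} + \boldsymbol{\Omega}^{C} \end{pmatrix}$$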








posted by maetel