Leeway is... the freedom that someone has to take the action they want to or to change their plans.
maetel

21 posts in the 'robotics' category

  1. 2010.12.11 Michael I. Jordan, "Generic constraints on underspecified target trajectories"
  2. 2010.02.09 Sola & Monin & Devy & Lemaire, "Undelayed initialization in bearing only SLAM"
  3. 2010.01.25 Kragic & Vincze <Vision for Robotics>
  4. 2010.01.22 Z. Wang, S. Huang and G. Dissanayake "D-SLAM: A Decoupled Solution to Simultaneous Localization and Mapping"
  5. 2010.01.22 Paul Michael Newman "On the Structure and Solution of the Simultaneous Localisation and Map Building Problem"
  6. 2010.01.21 Randall C. Smith and Peter Cheeseman "On the representation and estimation of spatial uncertainty"
  7. 2010.01.15 Kalman filtering for SLAM 연습
  8. 2010.01.14 RoSEC 2010 winter school
  9. 2009.11.17 Dan Simon "Kalman Filtering"
  10. 2009.10.27 R. L. Thompson et al. <Providing synthetic views for teleoperation using visual pose tracking in multiple cameras> 1
  11. 2009.10.22 Sebastian Thrun & Wolfram Burgard & Dieter Fox <Probabilistic Robotics>
  12. 2009.08.04 SLAM related generally
  13. 2009.07.30 Kalman Filter
  14. 2009.07.22 Durrant-Whyte & Bailey "Simultaneous localization and mapping"
  15. 2009.07.21 임현, 이영삼 <Simultaneous Localization and Mapping (SLAM) for Mobile Robots> 3
  16. 2009.07.15 Georg Klein & David Murrayt <Parallel Tracking and Mapping for Small AR Workspaces>
  17. 2009.04.09 Montemerlo & Thrun & Koller & Wegbreit <FastSLAM: A factored solution to the simultaneous localization and mapping problem>
  18. 2009.03.31 A. J. Davison <Real-time simultaneous localisation and mapping with a single camera>
  19. 2009.03.27 Ethan Eade & Tom Drummond <Scalable Monocular SLAM>
  20. 2009.03.27 people in SLAM
  21. 2009.02.14 Special Issue on Visual SLAM (IEEE Transactions on Robotics, Vol. 24, No. 5)
2010. 12. 11. 01:58 Computer Vision
Michael I. Jordan, "Generic constraints on underspecified target trajectories," Proceedings of the International Joint Conference on Neural Networks (IJCNN), 1989, pp. 217-225
http://dx.doi.org/10.1109/IJCNN.1989.118584


informed by Prof. Ham

cf.

Michael I. Jordan



Introduction

connectionist networks

feedforward controller

forward model


activation patterns (in a network)

motor learning

interpretation


visible units & hidden units

The output units of such a network are hidden with respect to learning and yet are visible given their direct connection to the environment.

task space 
articulatory space



Forward models of the environment


The forward modeling approach assumes that the solution to this minimization problem is based on the computation of a gradient.


posted by maetel
2010. 2. 9. 17:50 Computer Vision

Undelayed initialization in bearing only SLAM


Sola, J.   Monin, A.   Devy, M.   Lemaire, T.  
CNRS, Toulouse, France;

This paper appears in: Intelligent Robots and Systems, 2005. (IROS 2005). 2005 IEEE/RSJ International Conference on
Publication Date: 2-6 Aug. 2005
On page(s): 2499- 2504
ISBN: 0-7803-8912-3
INSPEC Accession Number: 8750433
Digital Object Identifier: 10.1109/IROS.2005.1545392
Current Version Published: 2005-12-05


ref. http://homepages.laas.fr/jsola/JoanSola/eng/bearingonly.html




If, instead of the range-and-bearing sensors used in conventional SLAM (e.g., laser range scanners), we use a camera, which gives rich information about the scene, one dimension is lost (the depth, i.e., the distance to the recognized object) and the problem becomes bearing-only SLAM.

EKF requires Gaussian representations for all the involved random variables that form the map (the robot pose and all landmarks' positions). Moreover, their variances need to be small to be able to approximate all the nonlinear functions with their linearized forms.

Since a landmark's position can only be determined once two input image frames are separated by a viewpoint difference large enough to provide a baseline, time is needed to accumulate that baseline.

http://en.wikipedia.org/wiki/Structure_from_motion
  1. Extract features from images
  2. Find an initial solution for the structure of the scene and the motion of the cameras
  3. Extend the solution and optimise it
  4. Calibrate the cameras
  5. Find a dense representation of the scene
  6. Infer geometric, textural and reflective properties of the scene.

sequential probability ratio test
http://en.wikipedia.org/wiki/Sequential_probability_ratio_test
http://www.agrsci.dk/plb/bembi/africa/sampling/samp_spr.html
http://eom.springer.de/S/s130240.htm
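
For quick reference, Wald's SPRT decision rule (my own summary, not taken from the paper above): after each observation the cumulative log-likelihood ratio is compared against two thresholds set by the target error rates \alpha and \beta.

\Lambda_n = \sum_{i=1}^{n} \log \frac{p(z_i \mid H_1)}{p(z_i \mid H_0)}

accept H_1 if \Lambda_n \ge \log\frac{1-\beta}{\alpha} ; accept H_0 if \Lambda_n \le \log\frac{\beta}{1-\alpha} ; otherwise take another sample.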

EKF (extended Kalman filter) - inconsistency and divergence
GSF (Gaussian sum filter) - computation load
FIS (Federated Information Sharing)


posted by maetel
2010. 1. 25. 02:50 Computer Vision

Foundations and Trends® in
Robotics

Vol. 1, No. 1 (2010) 1–78
© 2009 D. Kragic and M. Vincze
DOI: 10.1561/2300000001

Vision for Robotics

Danica Kragic1 and Markus Vincze2
1 Centre for Autonomous Systems, Computational Vision and Active Perception Lab, School of Computer Science and Communication, KTH, Stockholm, 10044, Sweden, dani@kth.se
2 Vision for Robotics Lab, Automation and Control Institute, Technische Universitat Wien, Vienna, Austria, vincze@acin.tuwien.ac.at

SUGGESTED CITATION:
Danica Kragic and Markus Vincze (2010) “Vision for Robotics”,
Foundations and Trends® in Robotics: Vol. 1: No. 1, pp 1–78.
http://dx.doi.org/10.1561/2300000001


Abstract

Robot vision refers to the capability of a robot to visually perceive the environment and use this information for execution of various tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In the recent years, there are also examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond of using artificial landmarks and fiducial markers for the purpose of implementing vision-based control in robots. We discuss different application areas, both from the systems perspective and individual problems such as object tracking and recognition.


1 Introduction 2
1.1 Scope and Outline 4

2 Historical Perspective 7
2.1 Early Start and Industrial Applications 7
2.2 Biological Influences and Affordances 9
2.3 Vision Systems 12

3 What Works 17
3.1 Object Tracking and Pose Estimation 18
3.2 Visual Servoing–Arms and Platforms 27
3.3 Reconstruction, Localization, Navigation, and Visual SLAM 32
3.4 Object Recognition 35
3.5 Action Recognition, Detecting, and Tracking Humans 42
3.6 Search and Attention 44

4 Open Challenges 48
4.1 Shape and Structure for Object Detection 49
4.2 Object Categorization 52
4.3 Semantics and Symbol Grounding: From Robot Task to Grasping and HRI 54
4.4 Competitions and Benchmarking 56

5 Discussion and Conclusion 59

Acknowledgments 64
References 65


posted by maetel
2010. 1. 22. 00:20 Computer Vision
D-SLAM: A Decoupled Solution to Simultaneous Localization and Mapping  
Z. Wang, S. Huang and G. Dissanayake
ARC Centre of Excellence for Autonomous Systems (CAS), Faculty of Engineering, University of Technology, Sydney, Australia
International Journal of Robotics Research Volume 26 Issue 2 - Publication Date: 1 February 2007 (Special Issue on the Fifth International Conference on Field and Service Robotics, 2005)
http://dx.doi.org/10.1177/0278364906075173


posted by maetel
2010. 1. 22. 00:10 Computer Vision
On the Structure and Solution of the Simultaneous Localisation and Map Building Problem.
Paul Michael Newman.
1999. Ph. D. thesis, Australian Centre for Field Robotics - The University of Sydney


Source: http://cogvis.nada.kth.se/~hic/SLAM/

posted by maetel
2010. 1. 21. 23:39 Computer Vision
(Sola: "the first consistent SLAM algorithm")

Randall C. Smith and Peter Cheeseman. 1986. On the representation and estimation of spatial uncertainty. Int. J. Rob. Res. 5, 4 (December 1986), 56-68.
DOI=10.1177/027836498600500404 http://dx.doi.org/10.1177/027836498600500404


posted by maetel
2010. 1. 15. 11:55 Computer Vision
A simple Kalman filter example for 1-D SLAM




void cvGEMM(const CvArr* src1, const CvArr* src2, double alpha, const CvArr* src3, double beta, CvArr* dst, int tABC=0)

dst = alpha * op(src1) * op(src2) + beta * op(src3),   where op(X) is X or X^T

#define cvMatMulAdd(src1, src2, src3, dst)  cvGEMM(src1, src2, 1, src3, 1, dst, 0)
#define cvMatMul(src1, src2, dst)           cvMatMulAdd(src1, src2, 0, dst)


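For reference, and to decode the "ref. L2 ... L6, KF algorithm, 42p" comments in the code below, the Kalman filter algorithm on p. 42 of Probabilistic Robotics is, as I remember it and in the book's notation, the five-line recursion

L2:  \bar{\mu}_t = A_t \mu_{t-1} + B_t u_t
L3:  \bar{\Sigma}_t = A_t \Sigma_{t-1} A_t^T + R_t
L4:  K_t = \bar{\Sigma}_t C_t^T (C_t \bar{\Sigma}_t C_t^T + Q_t)^{-1}
L5:  \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t)
L6:  \Sigma_t = (I - K_t C_t) \bar{\Sigma}_t

where R_t is the motion (process) noise covariance and Q_t the measurement noise covariance; in the code below G plays the role of A_t and H the role of C_t.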


// 1-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-14
// ref. Probabilistic Robotics 42p
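// NOTE: uniform_random(), gaussian_random(), and displayMatrix() used below are the helper
// functions defined in the 2-D example later in this post; copy them above main() for this
// file to compile on its own.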

#include <OpenCV/OpenCV.h> // matrix operations

#include <iostream>
#include <iomanip>
#include <cmath>
#include <ctime>   // time() used to seed the random number generator
using namespace std;

#define num_landmarks 10
#define num_dim (num_landmarks + 2)
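// state vector layout: [ x  v  m_1 ... m_num_landmarks ]
// (robot position, robot velocity, then the 1-D positions of the landmarks)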

#define step 100
#define width 1000
#define height 200

int main (int argc, char * const argv[]) {
   
    srand(time(NULL));
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[num_landmarks];//    = { 200, 600, 400 }; // x-position
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[n] = width * uniform_random();
    }
   
    // set the initial state
    double rob_pos = 25.0; // initial robot position
    double rob_vel = 10.0; // initial robot velocity
    // set the initial covariance of the state
    double rob_pos_cov = 0.01; // covariance of noise to robot position
    double rob_vel_cov = 0.01; // covariance of noise to robot velocity
    double obs_cov = 900; // covariance of noise in the measurement of landmarks
    double xGroundTruth, vGroundTruth;
    xGroundTruth = rob_pos;
    vGroundTruth = rob_vel;
   

   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-1d", 0);
   
    // H matrix
    int Hrow = num_landmarks;
    int Hcol = num_landmarks + 2;
    CvMat* H = cvCreateMat(Hrow, Hcol, CV_64FC1);
    cvZero(H); // initialize H matrix   
    // set H matrix
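    // row i encodes the measurement z_i = m_i - x : landmark i observed relative to the robot position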
    for (int row = 0; row < Hrow; row++)
    {
        cvmSet(H, row, 0, -1.0);
        cvmSet(H, row, row+2, 1.0);
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix ; covariance of noise to H ; uncertainty of control
    CvMat* Q = cvCreateMat(Hrow, Hrow, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    int Grow = num_landmarks + 2;
    int Gcol = Grow;
    CvMat* G = cvCreateMat(Grow, Gcol, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous position
    cvmSet(G, 0, 1, 1.0); // velocity   
    for (int row = 1; row < Grow; row++)
    {
        cvmSet(G, row, row, 1.0); // constant velocity and static landmarks
    }
    displayMatrix(G, "G matrix");
   
    // R matrix ; covariance of noise to H ; uncertainty of observation
    CvMat* R = cvCreateMat(Grow, Grow, CV_64FC1); // 5x5
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_pos_cov);
    cvmSet(R, 1, 1, rob_vel_cov);   
    displayMatrix(R, "R matrix");   
   
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // ground truth of the state
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state to be predicted
    CvMat* u = cvCreateMat(1, 1, CV_64FC1); // control vector    
    cvmSet(u, 0, 0, 1.0); // set u(0,0) to 1.0, the constant velocity here
    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))   
    CvMat* obs = cvCreateMat(num_landmarks, 1, CV_64FC1); // observation for each landmark
   
    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_pos + sqrt(rob_pos_cov)*gaussian_random()); // set mu(0,0) to "rob_pos"
    cvmSet(mu, 1, 0, rob_vel + sqrt(rob_vel_cov)*gaussian_random()); // set mu(0,0) to "rob_vel"   
    for(int n = 0; n < num_landmarks; n++)
    {
//        cvmSet(mu, n+2, 0, landmark[n] + sqrt(obs_cov)*gaussian_random());
        cvmSet(mu, n+2, 0, landmark[n]);       
    }   
    displayMatrix(mu, "mu vector");
   
    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1); // 5x5
    cvTranspose(G, Gt); // transpose(G) -> Gt   
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1); // 5x5 * 5x5
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 5x5
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 5xnum_landmarks
    cvTranspose(H, Ht); // transpose(H) -> Ht
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 5x5 * 5xnum_landmarks
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // num_landmarksxnum_landmarks   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 5x5 * 5xnum_landmarks    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // num_landmarksx5 * 5x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // num_landmarksx1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 5x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 5xnum_landmarks * num_landmarksx5
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 5x5 identity matrix
    cvSetIdentity(I);       
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 5x5

   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;       
        cvZero(iplImg);
   
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
//        displayMatrix(mu_p, "mu_p vector");   
       
        // predict the covariance of the state (ref. L3, KF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
//        displayMatrix(sigma_p, "sigma_p matrix");
       
        // estimate Kalman gain (ref. L4, KF algorithm, 42p)
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
    //    displayMatrix(HsigmaHtplusQ, "H*sigma*Ht + Q matrix");
       
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        displayMatrix(invGain, "invGain matrix");
       
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
    //    displayMatrix(K, "K matrix");       

        // measure
        xGroundTruth += vGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, vGroundTruth);
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(rob_ground, n + 2, 0, landmark[n]);
        }

        for(int n = 0; n < num_landmarks; n++)
        {
            double rn = sqrt(obs_cov) * gaussian_random();
            cvmSet(delta, n, 0, rn);
        }
    //    displayMatrix(delta, "delta vector; measurement noise");
        displayMatrix(rob_ground, "rob_ground");

        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // update the state with Kalman gain (ref. L5, KF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        // update the covariance of the state (ref. L6, KF algorithm, 42p)
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");

        // result in console
        cout << "landmarks  = " << landmark[0] << setw(10) << landmark[1] << setw(10) << landmark[2] << setw(10) << endl;
        cout << "robot position = " << cvmGet(mu, 0, 0) << endl;
//        cout << "measurement = " << cvmGet(z,0,0) << setw(10) << cvmGet(z,1,0) << setw(10) << cvmGet(z,2,0) << endl;   
        for( int n = 0; n < num_landmarks; n++ )
        {
            cvmSet(obs, n, 0, cvmGet(mu,0,0) + cvmGet(z,n,0));
        }
        cout << "observation = " << cvmGet(obs,0,0) << setw(10) << cvmGet(obs,1,0) << setw(10) << cvmGet(obs,2,0) << endl;
        cout<< "estimation = " << cvmGet(mu,2,0) << setw(10) << cvmGet(mu,3,0) << setw(10) << cvmGet(mu,4,0) << endl;

        // result in image
        // ground truth of robot position       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(height/2)), 1, cvScalar(100, 0, 255));
        // robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(height/2)), 3, cvScalar(255, 0, 100));
        // uncertainty of robot position, purple line
        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
                       cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[index]), cvRound(height/2)), 3, cvScalarAll(255));
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,index,0)), cvRound(height/2)), 2, cvScalar(0, 200, 255));
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0)), cvRound(height/2)), 2, cvScalar(50, 255, 0));
            // uncertainty of estimation, green line
            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);

        }
   
        cvShowImage("SLAM-1d", iplImg);
        cvWaitKey(0);
       
    }
    cvWaitKey();   
   
    return 0;
}



console output: (screenshot omitted)



A simple Kalman filter example for 2-D SLAM

// 2-D SLAM with Kalman Filter
// VIP lab, Sogang University
// 2010-01-18
// ref. Probabilistic Robotics 42p

#include <OpenCV/OpenCV.h> // matrix operations

#include <iostream>
#include <iomanip>
#include <cmath>   // sqrt, log, atan2
#include <ctime>   // time() used to seed the random number generator
using namespace std;

#define num_landmarks 20
#define num_dim ( 2 * (num_landmarks + 2) )
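// state vector layout: [ x  y  vx  vy  m1x  m1y ... ]
// (robot position and velocity, then the 2-D positions of the landmarks)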

#define step 200
#define width 800
#define height 600


// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}


void displayMatrix(CvMat *mat, char *title = NULL) {
    if(title) cout << title << endl;
    for(int iR = 0; iR < mat->rows; iR++) {
        for(int iC = 0; iC < mat->cols; iC++) {
            printf("%.2f ", cvmGet(mat, iR, iC));
        }
        printf("\n");
    }
    printf("\n");
    return;
}



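// Draws an uncertainty ellipse for a 2x2 covariance matrix: the covariance is decomposed
// with cvSVD and the ellipse axes are set to the square roots of its eigenvalues.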
void draw2DEllipseFromCovariance
(CvMat* cov, CvPoint* cnt, IplImage *iplImg, CvScalar* curveColor /*= cvScalarAll(255)*/, CvScalar* centerColor /*= cvScalarAll(128) */, int thickness /*= 1*/)
{
   
    if(NULL == cov || 2 != cov->rows || 2 != cov->cols) {
        printf("covariance matrix is not 2x2 !! \n");
        exit(0);
    }
    double eigenvalues[6], eigenvectors[36]; 
    float ev1, ev2, vx, vy, angle;
   
    CvSize axes;
    CvMat evals = cvMat(1, 2, CV_64F, eigenvalues), evecs = cvMat(2, 2, CV_64F, eigenvectors);
   
    cvSVD(cov, &evals, &evecs, 0, CV_SVD_MODIFY_A + CV_SVD_U_T ); 
   
    ev1 = cvmGet(&evals, 0, 0);        ev2 = cvmGet(&evals, 0, 1);
   
    if( ev1 < 0 && ev2 < 0 ) {
        ev1 = -ev1;
        ev2 = -ev2;
    }
    if( ev1 < ev2 ) {
        float tmp = ev1;
        ev1 = ev2;
        ev2 = tmp;
    }
    if( ev1 <= 0 || ev2 <= 0 ) {
        printf("COV Eigenvalue is negativ or zero(!)\n");
        exit(0);
    }
   
    // calc angle 
    angle = (float)(180 - atan2(eigenvectors[2], eigenvectors[3]) * 180 / CV_PI); 
   
    axes = cvSize(cvRound(sqrt(ev1)), cvRound(sqrt(ev2)));
    cvEllipse(iplImg, *cnt, axes, angle, 0, 360, *curveColor, thickness);
   
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y - 1), cvPoint(cnt->x + 2, cnt->y + 1), *centerColor, 1);
    cvLine(iplImg, cvPoint(cnt->x - 1, cnt->y + 1), cvPoint(cnt->x + 2, cnt->y - 1), *centerColor, 1);
   
}


int main (int argc, char * const argv[]) {

    srand(time(NULL));
   
    // set the initial state
    double rob_x = width * 0.1; // robot's initial x-position
    double rob_y = height * 0.4; // robot's initial y-position   
    double rob_vx = 10.0; // robot's initial x-velocity
    double rob_vy = 10.0; // robot's initial y-velocity   
   
    // set the initial covariance of the state uncertainty
    double rob_px_cov = 0.01; // covariance of noise to robot's x-position
    double rob_py_cov = 0.01; // covariance of noise to robot's y-position   
    double rob_vx_cov = 0.01; // covariance of noise to robot's x-velocity
    double rob_vy_cov = 0.01; // covariance of noise to robot's y-velocity   
   
    // set the initial covariance of the measurement noise
    double obs_x_cov = 900; // covariance of noise in the x-measurement of landmarks
    double obs_y_cov = 900; // covariance of noise in the y-measurement of landmarks
   
    // ground truth of state
    double xGroundTruth = rob_x;
    double yGroundTruth = rob_y;
    double vxGroundTruth = rob_vx;
    double vyGroundTruth = rob_vy;
   
    // ground truth of num_landmarks landmarks in the world coordinate
    double landmark[2*num_landmarks];  
    for( int n = 0; n < num_landmarks; n++ )
    {
        landmark[2*n] = width * uniform_random();
        landmark[2*n+1] = height * uniform_random();
    }   
   
    IplImage *iplImg = cvCreateImage(cvSize(width, height) , 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("SLAM-2d");
   
    // H matrix, measurement matrix
    CvMat* H = cvCreateMat(2*num_landmarks, num_dim, CV_64FC1);
    cvZero(H); // initialize H matrix   
    // set H matrix
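    // rows 2r and 2r+1 encode the measurement ( m_x - x, m_y - y ): landmark r observed relative to the robot position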
    for (int r = 0; r < num_landmarks; r++)
    {
        cvmSet(H, 2*r, 0, -1.0); // robot's x-position
        cvmSet(H, 2*r, 2*r+4, 1.0); // landmark's x-position
        cvmSet(H, 2*r+1, 1, -1.0); // robot's y-position
        cvmSet(H, 2*r+1, 2*r+5, 1.0); // landmarks's y-position        
    }   
    displayMatrix(H, "H matrix");
   
    // Q matrix ; covariance of noise to H; uncertainty of control
    CvMat* Q = cvCreateMat(2*num_landmarks, 2*num_landmarks, CV_64FC1);    
    cvZero(Q); // initialize Q matrix
    // set Q matrix
    for (int row = 0; row < Q->rows; row++)
    {
        cvmSet(Q, row, row, obs_x_cov);
    }
    displayMatrix(Q, "Q matrix");   
   
    // G matrix // transition
    CvMat* G = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(G); // initialize G matrix
    // set G matrix
    cvmSet(G, 0, 0, 1.0); // previous x-position
    cvmSet(G, 0, 2, 1.0); // x-velocity
    cvmSet(G, 1, 1, 1.0); // previous y-position
    cvmSet(G, 1, 3, 1.0); // y-velocity   
    for (int row = 2; row < G->rows; row++)
    {
        cvmSet(G, row, row, 1.0); // constant velocity and static landmarks
    }
    displayMatrix(G, "G matrix");
   
    // R matrix ; covariance of noise to G; uncertainty of observation
    CvMat* R = cvCreateMat(num_dim, num_dim, CV_64FC1);
    cvZero(R); // initialize R matrix
    // set R matrix
    cvmSet(R, 0, 0, rob_px_cov);
    cvmSet(R, 1, 1, rob_py_cov);
    cvmSet(R, 2, 2, rob_vx_cov);
    cvmSet(R, 3, 3, rob_vy_cov);   
    displayMatrix(R, "R matrix");   
       
   
    CvMat* rob_ground = cvCreateMat(num_dim, 1, CV_64FC1); // ground truth of state        
    CvMat* mu = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be estimated
    CvMat* mu_p = cvCreateMat(num_dim, 1, CV_64FC1); // state vector to be predicted

    CvMat* sigma = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* sigma_p = cvCreateMat(num_dim, num_dim, CV_64FC1); // covariance to be updated
    CvMat* z = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // measurement vector
    CvMat* K = cvCreateMat(num_dim, 2*num_landmarks, CV_64FC1); // K matrix // Kalman gain
   
    CvMat* delta = cvCreateMat(z->rows, 1, CV_64FC1); // measurement noise (ref. 42p: (3.5))  
    CvMat* obs = cvCreateMat(2*num_landmarks, 1, CV_64FC1); // observation for each landmark

    // initialize "mu" vector
    cvmSet(mu, 0, 0, rob_x); // set mu(0,0) to "rob_x"
    cvmSet(mu, 1, 0, rob_y); // set mu(1,0) to "rob_y"
    cvmSet(mu, 2, 0, rob_vx); // set mu(2,0) to "rob_vx"
    cvmSet(mu, 3, 0, rob_vy); // set mu(3,0) to "rob_vy"
    for (int n = 0; n < 2*num_landmarks; n++)
    {
        cvmSet(mu, n+4, 0, landmark[n]); // set mu(4,0) to "landmark[0]", ...
    }
    displayMatrix(mu, "mu vector");
   
/*    // initialize "sigma" matrix
    cvmSet(sigma, 0, 0, rob_px_cov);
    cvmSet(sigma, 1, 1, rob_py_cov);
    cvmSet(sigma, 2, 2, rob_vx_cov);
    cvmSet(sigma, 3, 3, rob_vy_cov);
    for (int r = 4; r < sigma->rows; r=r*2)
    {
        cvmSet(sigma, r, r, obs_x_cov);
        cvmSet(sigma, r+1, r+1, obs_y_cov);        
    }
*/    // initialize "sigma" matrix <-- This is the most critical point in tuning
    cvSetIdentity(sigma, cvRealScalar(obs_x_cov));       
    displayMatrix(sigma, "sigma matrix");
   
    // matrices to be used in calculation
    CvMat* Hx = cvCreateMat(H->rows, mu->cols, CV_64FC1);
    CvMat* Gt = cvCreateMat(G->cols, G->rows, CV_64FC1);
    cvTranspose(G, Gt); // transpose(G) -> Gt
    CvMat* sigmaGt = cvCreateMat(sigma->rows, G->rows, CV_64FC1);
    CvMat* GsigmaGt = cvCreateMat(G->rows, G->rows, CV_64FC1); // 10x10
   
    CvMat* Ht = cvCreateMat(H->cols, H->rows, CV_64FC1); // 10x6
    cvTranspose(H, Ht); // transpose(H) -> Ht       
    CvMat* sigmaHt = cvCreateMat(sigma->rows, H->rows, CV_64FC1);    // 10x10 * 10x6
    CvMat* HsigmaHt = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* HsigmaHtplusQ = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
   
    CvMat* invGain = cvCreateMat(H->rows, H->rows, CV_64FC1); // 6x6   
    CvMat* sigmapHt = cvCreateMat(sigma_p->rows, Ht->cols, CV_64FC1); // 10x10 * 10x6    
   
    CvMat* Hmu = cvCreateMat(H->rows, mu->cols, CV_64FC1); // 6x10 * 10x1
    CvMat* miss = cvCreateMat(Hmu->rows, 1, CV_64FC1); // 6x1
    CvMat* adjust = cvCreateMat(mu->rows, 1, CV_64FC1); // 10x1
   
    CvMat* KH = cvCreateMat(K->rows, H->cols, CV_64FC1); // 10x6 * 6x10
    CvMat* I = cvCreateMat(KH->rows, KH->cols, CV_64FC1); // 10x10 identity matrix
    cvSetIdentity(I); // does not seem to be working properly      
    CvMat* change = cvCreateMat(I->rows, I->cols, CV_64FC1); // 10x10
   
    CvPoint trajectory[step];
    CvPoint robot_ground[step];
   
    int frame = int(0.9*step);
   
    for (int t = 0; t < step; t++)
    {
        cout << endl << "step " << t << endl;      
        cvZero(iplImg);
       
        // predict
        // predict the state (ref. L2, KF algorithm, 42p)
        cvMatMul(G, mu, mu_p); // G * mu -> mu_p
       
        // predict the covariance of the state (ref. L3, EKF algorithm, 42p)
        cvMatMul(sigma, Gt, sigmaGt); // sigma * Gt -> sigmaGt
        cvMatMul(G, sigmaGt, GsigmaGt); // G * sigmaGt -> GsigmaGt
        cvAdd(GsigmaGt, R, sigma_p); // GsigmaGt + R -> sigma_p
               
        // estimate Kalman gain (ref. L4, EKF algorithm, 42p)   
        cvMatMul(sigma_p, Ht, sigmaHt); // sigma_p * Ht -> sigmaHt
        cvMatMul(H, sigmaHt, HsigmaHt); // H * sigmaHt -> HsigmaHt
        cvAdd(HsigmaHt, Q, HsigmaHtplusQ); // HsigmaHt + Q -> HsigmaHtplusQ
        cvInvert(HsigmaHtplusQ, invGain); // inv(HsigmaHtplusQ) -> invGain
        cvMatMul(sigma_p, Ht, sigmapHt); // sigma_p * Ht -> sigmapHt
        cvMatMul(sigmapHt, invGain, K); // sigmapHt * invGain -> K
        displayMatrix(K, "K matrix");  
   
       
        // measure
        // set ground truths
        if ( xGroundTruth >= width || xGroundTruth <= 0)
        {
            vxGroundTruth = - vxGroundTruth;
        }   
        if ( yGroundTruth >= height || yGroundTruth <= 0 )
        {
            vyGroundTruth = - vyGroundTruth;
        }   
        xGroundTruth += vxGroundTruth;
        yGroundTruth += vyGroundTruth;
        cvZero(rob_ground);
        cvmSet(rob_ground, 0, 0, xGroundTruth);
        cvmSet(rob_ground, 1, 0, yGroundTruth);
        cvmSet(rob_ground, 2, 0, vxGroundTruth);
        cvmSet(rob_ground, 3, 0, vyGroundTruth);
       
        robot_ground[t] = cvPoint(cvRound(xGroundTruth),cvRound(yGroundTruth)); 
       
        for (int dim = 0; dim < 2*num_landmarks; dim++)
        {
            cvmSet(rob_ground, dim+4, 0, landmark[dim]);
        }
        displayMatrix(rob_ground, "rob_ground");
        // set measurement noise
        for(int n = 0; n < num_landmarks; n++)
        {
            double rn_x = sqrt(obs_x_cov) * gaussian_random();
            double rn_y = sqrt(obs_y_cov) * gaussian_random();           
            cvmSet(delta, 2*n, 0, rn_x);
            cvmSet(delta, 2*n+1, 0, rn_y);
           
        }
//      displayMatrix(delta, "delta vector; measurement noise");
       
        // define z, measurement, vector
        cvMatMul(H, rob_ground, Hx); // H * rob_ground -> Hx
        cvAdd(Hx, delta, z); // Hx + delta -> z
        displayMatrix(z, "z vector");
       
        // observation relative to robot's position
        for( int n = 0; n < 2*num_landmarks; n++ )
        {
            cvmSet(obs, n, 0, cvmGet(mu,0,0) + cvmGet(z,n,0));
        }
       
        // update the state with Kalman gain (ref. L5, EKF algorithm, 42p)
        cvMatMul(H, mu_p, Hmu); // H * mu_p -> Hmu (use the predicted state, as in the 1-D example)
        cvSub(z, Hmu, miss); // z - Hmu -> miss
        cvMatMul(K, miss, adjust); // K * miss -> adjust
        cvAdd(mu_p, adjust, mu); // mu_p + adjust -> mu
        displayMatrix(mu, "mu vector");
       
        trajectory[t] = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
       
       
        // update the covariance of the state
        cvMatMul(K, H, KH); // K * H -> KH
        cvSub(I, KH, change); // I - KH -> change
        cvMatMul(change, sigma_p, sigma); // change * sigma_p -> sigma
        displayMatrix(sigma, "sigma matrix");
       
        // result in console
        cout << "robot position: " << "px = " << cvmGet(mu, 0, 0) << "  py = " << cvmGet(mu, 1, 0) << endl;
        for (int n = 0; n < num_landmarks; n++)
        {
            cout << setw(10) << "landmark" << n+1 << " (" << landmark[2*n] << ", " << landmark[2*n+1] << ") "
            << setw(10) << "observation" << n+1 << " (" << cvmGet(obs,2*n,0) << ", " << cvmGet(obs,2*n+1,0) << ") "
            << setw(10) << "estimation" << n+1 << " (" << cvmGet(mu,4+2*n,0) << ", " << cvmGet(mu,4+2*n+1,0) << ") " << endl;
        }       
       
       
        CvMat* local_uncertain = cvCreateMat(2, 2, CV_64FC1);
        CvMat* map_uncertain [num_landmarks];
        for (int n = 0; n < num_landmarks; n++)
        {
            map_uncertain [n] = cvCreateMat(2, 2, CV_64FC1);
        }
        cvmSet(local_uncertain, 0, 0, cvmGet(sigma,0,0));
        cvmSet(local_uncertain, 0, 1, cvmGet(sigma,0,1));
        cvmSet(local_uncertain, 1, 0, cvmGet(sigma,1,0));
        cvmSet(local_uncertain, 1, 1, cvmGet(sigma,1,1));
       
        displayMatrix(local_uncertain, "local_uncertain");       
       
        for (int n = 0; n < num_landmarks; n++)
        {
            // extract the 2x2 covariance block of landmark n (state indices 2n+4, 2n+5)
            cvmSet(map_uncertain[n], 0, 0, cvmGet(sigma,2*n+4,2*n+4));
            cvmSet(map_uncertain[n], 0, 1, cvmGet(sigma,2*n+4,2*n+5));
            cvmSet(map_uncertain[n], 1, 0, cvmGet(sigma,2*n+5,2*n+4));
            cvmSet(map_uncertain[n], 1, 1, cvmGet(sigma,2*n+5,2*n+5));

            displayMatrix(map_uncertain[n], "map_uncertain");
        } 
       
        // result in image
        // ground truth of robot position, red       
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(rob_ground,0,0)), cvRound(cvmGet(rob_ground,1,0))), 2, cvScalar(60, 0, 255), 2);
        // estimated robot position, purple
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,0,0)), cvRound(cvmGet(mu,1,0))), 5, cvScalar(255, 0, 100), 2);
        // uncertainty of robot position, purple line
//        cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,0,0))-sqrt(cvmGet(sigma,0,0)), cvRound(height/2)),
//               cvPoint(cvRound(cvmGet(mu,0,0))+sqrt(cvmGet(sigma,0,0)), cvRound(height/2)), cvScalar(255, 0, 100), 1);
   
        CvPoint local = cvPoint(cvRound(cvmGet(mu,0,0)),cvRound(cvmGet(mu,1,0)));
        CvScalar local_center_color = cvScalar(255, 0, 100);
        CvScalar local_curve_color = cvScalarAll(128);
       
        draw2DEllipseFromCovariance(local_uncertain, &local, iplImg, &local_center_color, &local_curve_color, 1);
       
       
        for( int index = 0; index < num_landmarks; index++    )
        { 
            // landmarks, white
            cvCircle(iplImg, cvPoint(cvRound(landmark[2*index]), cvRound(landmark[2*index+1])), 4, cvScalarAll(255), 2);
            // observation, yellow
//            cvCircle(iplImg, cvPoint(cvRound(cvmGet(obs,2*index,0)), cvRound(cvmGet(obs,2*index+1,0))), 4, cvScalar(0, 200, 255), 1);
            // estimation, green
            cvCircle(iplImg, cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0))), 3, cvScalar(50, 255, 0), 1);
            // uncertainty of estimation, green line
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(mu,index+2,0))-sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)),
//                   cvPoint(cvRound(cvmGet(mu,index+2,0))+sqrt(cvmGet(sigma,index+2,0)), cvRound(height/2)), cvScalar(50, 255, 0), 1);
       
            CvPoint observed = cvPoint(cvRound(cvmGet(mu,4+2*index,0)), cvRound(cvmGet(mu,4+2*index+1,0)));
            CvScalar observed_center_color = cvScalar(50, 255, 0);
            CvScalar observed_curve_color = cvScalar(50, 255, 0);
           
            draw2DEllipseFromCovariance(map_uncertain[index], &observed, iplImg, &observed_center_color, &observed_curve_color, 1);    
        }
       
        for( int p = 1; p <= t; p++ )
        {
            cvLine(iplImg, robot_ground[p-1], robot_ground[p], cvScalar(60, 0, 255), 1);           
            cvLine(iplImg, trajectory[p-1], trajectory[p], cvScalar(255, 0, 100), 1);
        }

        if ( t == frame )
        {
            cvSaveImage("2D SLAM test.bmp", iplImg);
        }
       
        cvShowImage("SLAM-2d", iplImg);
        cvWaitKey(100);   
       
    }
    cout << endl << endl << "process finished" << endl;
    cvWaitKey();   
   
    return 0;
}












posted by maetel
2010. 1. 14. 17:27 Footmarks
RoSEC international summer/winter school
Robotics-Specialized Education Consortium for Graduates sponsored by MKE

Organized by the Robotics-Specialized Graduate Education Consortium
2010 RoSEC International Winter School
Monday 11 January - Saturday 16 January 2010
Seminar Room 1 (Room 606), 6th floor, HIT (Hanyang Institute of Technology), Hanyang University



Robot mechanism
Byung-Ju Yi (Hanyang University, Korea)
Prof. Byung-Ju Yi, Human Robotics Lab, Hanyang University, bj@hanyang.ac.kr
- Classification of robotic mechanism and Design consideration of robotic mechanism
- Design Issue and application examples of master slave robotic system
- Trend of robotic mechanism research

Actuator and Practical PID Control
Youngjin Choi (Hanyang University, Korea)
Prof. Youngjin Choi, Humanoid Robotics Lab, Hanyang University, cyj@hanyang.ac.kr
- Operation Principle of DC/RC/Stepping Motors & Its Practice
- PID Control and Tuning
- Stability of PID Control and Application Examples

Coordination of Robots and Humans
Kazuhiro Kosuge (Tohoku University, Japan)
Prof. Kazuhiro Kosuge, System Robotics Lab, Tohoku University, Japan
- Robotics as systems integration
- Multiple Robots Coordination
- Human Robot Coordination and Interaction

Robot Control
Rolf Johansson (Lund University, Sweden)
Robotics Lab, Lund University, Sweden, Rolf.Johansson@control.lth.se
- Robot motion and force control
- Stability of motion
- Robot obstacle avoidance

Lecture from Industry or Government
(S. -R. Oh, KIST)

Special Talk from Government
(Y. J. Weon, MKE)

Mobile Robot Navigation
Jae-Bok Song (Korea University, Korea)
Prof. Jae-Bok Song, Intelligent Robotics Lab, Korea University, jbsong@korea.ac.kr
- Mapping
- Localization
- SLAM

3D Perception for Robot Vision
In Kyu Park (Inha University, Korea)
Prof. In Kyu Park, Visual Media Lab, Inha University, pik@inha.ac.kr
- Camera Model and Calibration
- Shape from Stereo Views
- Shape from Multiple Views

Lecture from Industry or Government
(H. S. Kim, KITECH)

Roboid Studio
Kwang Hyun Park (Kwangwoon University, Korea)
Prof. Kwang Hyun Park, Dept. of Information and Control Engineering, Kwangwoon University, akaii@kw.ac.kr
- Robot Contents
- Roboid Framework
- Roboid Component

Software Framework for LEGO NXT
Sanghoon Lee (Hanyang University, Korea)
Prof. Sanghoon Lee, Robotics Lab, Hanyang University
- Development Environments for LEGO NXT
- Programming Issues for LEGO NXT under RPF of OPRoS
- Programming Issues for LEGO NXT under Roboid Framework

Lecture from Industry or Government
(Robomation/Mobiletalk/Robotis)

Robot Intelligence : From Reactive AI to Semantic AI
Il Hong Suh (Hanyang University, Korea)
Prof. Il Hong Suh, Robot Intelligence/Communication Lab, Hanyang University
- Issues in Robot Intelligence
- Behavior Control: From Reactivity to Proactivity
- Use of Semantics for Robot Intelligence

AI-Robotics
Henrik I. Christensen (Georgia Tech., USA)

-
Semantic Mapping
- Physical Interaction with Robots
- Efficient object recognition for robots

Lecture from Industry or Government
(M. S. Kim, Director of CIR, 21C Frontier Program)

HRI
Dongsoo Kwon (KAIST, Korea)

- Introduction to human-robot interaction
- Perception technologies of HRI
- Cognitive and emotional interaction

Robot Swarm for Environmental Monitoring
Nak Young Chong (JAIST, Japan)

- Self-organizing Mobile Robot Swarms: Models
- Self-organizing Mobile Robot Swarms: Algorithms
- Self-organizing Mobile Robot Swarms: Implementation


posted by maetel
2009. 11. 17. 15:48 Computer Vision
Kalman Filtering
http://academic.csuohio.edu/simond/courses/eec644/kalman.pdf

72-79p, Embedded Systems Programming feature, June 2001
http://www.embedded.com/9900168?_requestid=49635

The Kalman filter update equations in C
http://www.embedded.com/9900168?pgno=2
matrix algebra reference
ftp://ftp.embedded.com/pub/2001/simon06


Dan Simon 
http://academic.csuohio.edu/simond/


Kalman filter
: estimates system states that can only be observed indirectly or inaccurately by the system itself.
: estimates the variables of a wide range of processes.
: estimates the states of a linear system.
: minimizes the variance of the estimation error

Linear system
x: state of the system
u: known input to the system
y: measured output
w: process noise
z: measurement noise


http://wiki.answers.com/Q/What_is_a_feedback_system
A feedback system, in general engineering terms, is a system whose output is fed back to the input, and depending on the output, your input is adjusted so as to reach a steady-state. In colloquial language, you adjust your input based on the output of your system so as to achieve a certain end, like minimizing disturbance, cancelling echo (in a speech system) and so on.


Criteria of an Estimator
1) The expected value of the estimate should be equal to the expected value of the state.
2) The estimator should have the smallest possible error variance.


Requirement of Kalman filter
1) The average value of w is zero and average value of z is zero.
2) No correlation exists between w and z. w_k and z_k are independent random variables.


Kalman filter equations

K matrix: Kalman gain
P matrix: estimation error covariance
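
(The equations themselves did not survive the copy; a standard statement of the discrete-time recursion in this notation, with S_w and S_z the covariances of w and z, is the following.)

\hat{x}_k^- = A \hat{x}_{k-1} + B u_{k-1}
P_k^- = A P_{k-1} A^T + S_w
K_k = P_k^- C^T (C P_k^- C^T + S_z)^{-1}
\hat{x}_k = \hat{x}_k^- + K_k (y_k - C \hat{x}_k^-)
P_k = (I - K_k C) P_k^-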



http://en.wikipedia.org/wiki/Three_sigma_rule
In statistics, the 68-95-99.7 rule, or three-sigma rule, or empirical rule, states that for a normal distribution, nearly all values lie within 3 standard deviations of the mean.


"steady state Kalman filter"
 - K matrix & P matrix are constant

"extended Kalman filter"
: an extension of linear Kalman filter theory to nonlinear systems

"Kalman smoother"
: to estimate the state as a function of time so to reconstruct the trajectory after the fact


H infinity filter
=> correlated noise problem
=> unknown noise covariances problem

http://academic.csuohio.edu/simond/estimation/


Rudolph Kalman

Peter Swerling, 1958

Karl Gauss's method of least squares, 1795

spacecraft navigation for the Apollo space program


> applications
all forms of navigation (aerospace, land, and marine)
nuclear power plant instrumentation
demographic modeling
manufacturing
the detection of underground radioactivity
fuzzy logic and neural network training



Gelb, A. Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974.

Anderson, B. and J. Moore. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall, 1979.

Grewal, M. and A. Andrews. Kalman Filtering Theory and Practice. Englewood Cliffs, NJ: Prentice-Hall, 1993.

Sorenson, H. Kalman Filtering: Theory and Application. Los Alamitos, CA: IEEE Press, 1985.

Peter Joseph’s Web site @http://ourworld.compuserve.com/homepages/PDJoseph/

posted by maetel
2009. 10. 27. 23:31 Computer Vision
R. L. Thompson, I. D. Reid, L. A. Munoz, and D. W. Murray,
“Providing synthetic views for teleoperation using visual pose tracking in multiple cameras,”
IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 31, no. 1, pp. 43–54, 2001.

Abstract - This paper describes a visual tool for teleoperative experimentation involving remote manipulation and contact tasks. Using modest hardware, it recovers in real-time the pose of moving polyhedral objects, and presents a synthetic view of the scene to the teleoperator using any chosen viewpoint and viewing direction. The method of line tracking introduced by Harris is extended to multiple calibrated cameras, and afforced by robust methods and iterative filtering. Experiments are reported which determine the static and dynamic performance of the vision system, and its use in teleoperation is illustrated in two experiments, a peg in hole manipulation task and an impact control task.


Line tracking 
http://en.wikipedia.org/wiki/Passive_radar#Line_tracking
The line-tracking step refers to the tracking of target returns from individual targets, over time, in the range-Doppler space produced by the cross-correlation processing. A standard Kalman filter is typically used. Most false alarms are rejected during this stage of the processing.


- Three difficulties using the Harris tracker
First it was found to be easily broken by occlusions and changing lighting. Robust methods to mitigate this problem have been investigated monocularly by Armstrong and Zisserman [20], [21]. Although this has a marked effect on tracking performance, the second problem found is that the accuracy of the pose recovered in a single camera was poor, with evident correlation between depth and rotation about axes parallel to the image plane. Maitland and Harris [22] had already noted as much when recovering the pose of a pointing device destined for neurosurgical application [23].
They reported much improved accuracy using two cameras; but the object was stationary, had an elaborate pattern drawn on it and was visible at all times to both cameras. The third difficulty, or rather uncertainty, was that the convergence properties and dynamic performances of the monocular and multicamera methods were largely unreported.


"Harris' RAPiD tracker included a constant velocity Kalman filter."


posted by maetel
2009. 10. 22. 16:53 Computer Vision
Probabilistic Robotics
Sebastian Thrun, Wolfram Burgard and Dieter Fox
MIT Press, September 2005



Preface     xvii    
Acknowledgments    xix
I    Basics    1
1    Introduction     3
2    Recursive State Estimation    13
3    Gaussian Filters    39
4    Nonparametric Filters    85
5    Robot Motion    117
6    Robot Perception    149
II    Localization    189
7    Mobile Robot Localization: Markov and Gaussian    191
8    Mobile Robot Localization: Grid And Monte Carlo    237
III    Mapping    279
9    Occupancy Grid Mapping    281
10    Simultaneous Localization and Mapping    309
11    The GraphSLAM Algorithm    337
12    The Sparse Extended Information Filter    385
13    The FastSLAM Algorithm    437
IV    Planning and Control    485
14    Markov Decision Processes    487
15    Partially Observable Markov Decision Processes    513
16    Approximate POMDP Techniques    547
17    Exploration    569    
Bibliography    607   
Index     639


Probabilistic robotics is a subfield of robotics concerned with perception and control.

Introduction

probabilistic robotics
: explicit representation of uncertainty using the calculus of probability theory

perception
action

Bayes filters are a probabilistic tool for estimating the state of dynamic systems.





Bayes Filters are Familiar!
• Kalman filters
• Particle filters
• Hidden Markov models
• Dynamic Bayesian networks
• Partially Observable Markov Decision Processes (POMDPs)


Kalman filter

Gaussian filter

discrete Kalman filter


Kalman filter update in 1-D

correction

prediction
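
(The figures are gone here; in one dimension the two steps reduce to the following, where \bar{\mu}, \bar{\sigma}^2 are the predicted mean and variance, u the motion with noise variance \sigma_u^2, and z the measurement with noise variance \sigma_z^2.)

prediction:  \bar{\mu} = \mu + u , \quad \bar{\sigma}^2 = \sigma^2 + \sigma_u^2
correction:  \mu' = \bar{\mu} + \frac{\bar{\sigma}^2}{\bar{\sigma}^2 + \sigma_z^2}\,(z - \bar{\mu}) , \quad \sigma'^2 = \Big(1 - \frac{\bar{\sigma}^2}{\bar{\sigma}^2 + \sigma_z^2}\Big)\,\bar{\sigma}^2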



Kalman filter algorithm


EKF = extended Kalman filter
: calculates a Gaussian approximation to the true belief.

Taylor series expansion
"Linearization approximates the nonlinear function g by a linear function that is tangent to g at the mean of the Gaussian."

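In symbols, this is the first-order Taylor expansion of a nonlinear motion model g about the previous mean \mu_{t-1} (standard form, not copied from the book):

g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1}) , \qquad G_t = \frac{\partial g(u_t, x_{t-1})}{\partial x_{t-1}}\Big|_{x_{t-1} = \mu_{t-1}}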










SLAM





Techniques for Generating Consistent Maps
• Scan matching
• EKF SLAM
• Fast-SLAM
• Probabilistic mapping with a single map and a posterior about poses (mapping + localization)
• Graph-SLAM, SEIFs

Approximations for SLAM
• Local submaps
[Leonard et al.99, Bosse et al. 02, Newman et al. 03]
• Sparse links (correlations)
[Lu & Milios 97, Guivant & Nebot 01]
• Sparse extended information filters
[Frese et al. 01, Thrun et al. 02]
• Thin junction tree filters
[Paskin 03]
• Rao-Blackwellisation (FastSLAM)
[Murphy 99, Montemerlo et al. 02, Eliazar et al. 03, Haehnel et al. 03]

EKF-SLAM Summary
• Quadratic in the number of landmarks: O(n^2)
• Convergence results for the linear case.
• Can diverge if nonlinearities are large!
• Have been applied successfully in large-scale environments.
• Approximations reduce the computational complexity.


ch8

eg. Xavier - Localization in a topological map
ref.  Probabilistic Robot Navigation in Partially Observable Environments 
Reid Simmons and Sven Koenig
Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI '95), July, 1995, pp. 1080 - 1087.
posted by maetel
2009. 8. 4. 23:07 Computer Vision
General and introductory materials on SLAM

Durrant-Whyte & Bailey "Simultaneous localization and mapping"
http://leeway.tistory.com/667


Søren Riisgaard and Morten Rufus Blas
SLAM for Dummies: A Tutorial Approach to Simultaneous Localization and Mapping
http://leeway.tistory.com/688


Joan Solà Ortega (de l’Institut National Polytechnique de Toulouse, 2007)
Towards visual localization, mapping and moving objects tracking by a mobile robot: A geometric and probabilistic approach
ch3@ http://leeway.tistory.com/628


SLAM summer school
2009 Australian Centre for Field Robotics, University of Sydney
http://www.acfr.usyd.edu.au/education/summerschool.shtml
2006 Department of Engineering Science and Robotics Research Group, Oxford
http://www.robots.ox.ac.uk/~SSS06/Website/index.html
2004 Laboratory for Analysis and Architecture of Systems  (LAAS-CNRS) located in Toulouse
http://www.laas.fr/SLAM/
2002 Centre for Autonomous Systems
Numerical Analysis and Computer Science
Royal Institute of Technology
, Stockholm
http://www.cas.kth.se/SLAM/


http://www.doc.ic.ac.uk/%7Eajd/Scene/Release/monoslamtutorial.pdf
A SceneLib tutorial from the Active Vision Lab / Visual Information Processing (VIP) Research Group at Oxford; it lays out the basic concepts of SLAM with a single monocular camera.

posted by maetel
2009. 7. 30. 21:12 Computer Vision
Dan Simon 
http://academic.csuohio.edu/simond/
- Kalman Filtering, 72-79p, Embedded Systems Programming feature, June 2001
http://leeway.tistory.com/728 : 가장 쉽고 간략한 설명 (매트랩 예제와 C로 구현한 칼만 필터 소스 포함)
- Kalman Filtering with State Constraints: A Survey of Linear and Nonlinear Algorithms
http://academic.csuohio.edu/simond/ConstrKF/
- Kalman Filtering
http://www.innovatia.com/software/papers/kalman.htm
- book: Optimal state estimation: Kalman, H [infinity] and nonlinear approaches
http://academic.csuohio.edu/simond/estimation/


Greg Welch and Gary Bishop
Kalman filter
http://www.cs.unc.edu/~welch/kalman/index.html
- SIGGRAPH 2001 Courses Course 8 - An Introduction to the Kalman Filter
http://www.cs.unc.edu/~tracker/ref/s2001/kalman/index.html


Kalman Filters in the MRPT
http://babel.isa.uma.es/mrpt/index.php/Kalman_Filters


Taygeta Scientific
Kalman filter information
http://www.taygeta.com/kalman.html


http://en.wikipedia.org/wiki/Kalman_filter

A New Approach to Linear Filtering and Prediction Problems, by R. E. Kalman, 1960


Rudy Negenborn
Robot Localization and Kalman Filters: On finding your position in a noisy world
http://leeway.tistory.com/696


용가리@Naver: Kalman filter - part 1 & part 2


posted by maetel
2009. 7. 22. 16:55 Computer Vision
Durrant-Whyte, H.Bailey, T.
(Australian Centre for Field Robotics, Sydney Univ., NSW)
Simultaneous localization and mapping
Robotics & Automation Magazine, 2006

Simultaneous localization and mapping: part I
Robotics & Automation Magazine, IEEE, Volume: 13,  Issue: 2 (June 2006)

Simultaneous localization and mapping (SLAM): part II
Robotics & Automation Magazine, IEEE, Volume: 13,  Issue: 3
(Sept. 2006)


Table 1. Open-source SLAM software.
Kai Arras
The CAS Robot Navigation Toolbox, a MATLAB simulation toolbox for robot localization and mapping
http://www.cas.kth.se/toolbox/index.html

Tim Bailey
MATLAB simulators for EKF-SLAM, UKF-SLAM, and FastSLAM 1.0 and 2.0. http://www.acfr.usyd.edu.au/homepages/academic/tbailey/software/index.html

Mark Paskin
Java library with several SLAM variants, including Kalman filter, information filter, and thin junction tree forms. Includes a MATLAB interface.
http://www.stanford.edu/~paskin/slam/

Andrew Davison
Scene, a C++ library for map-building and localization. Facilitates real-time single camera SLAM.
http://www.doc.ic.ac.uk/~ajd/Scene/index.html

José Neira
MATLAB EKF-SLAM simulator that demonstrates joint compatibility branch-and-bound data association.
http://webdiis.unizar.es/~neira/software/slam/slamsim.htm

Dirk Hähnel
C language grid-based version of FastSLAM.
http://www.informatik.uni-freiburg.de/~haehnel/old/download.html

Various
MATLAB code from the 2002 SLAM summer school.
http://www.cas.kth.se/slam/toc.html

Table 2. Online datasets.
Eduardo Nebot
Numerous large-scale outdoor datasets, notably the popular Victoria Park data.
http://www.acfr.usyd.edu.au/homepages/academic/enebot/dataset.htm

Chieh-Chih Wang
Three large-scale outdoor datasets collected by the Navlab11 testbed.
http://www.cs.cmu.edu/~bobwang/datasets.html

Radish (The Robotics Data Set Repository)
Many and varied indoor datasets, including large-area data from the CSU Stanislaus Library, the Intel Research Lab in Seattle, the Edmonton Convention Centre, and more.
http://radish.sourceforge.net/

IJRR (The International Journal of Robotics Research)
IJRR maintains a Web page for each article, often containing data and video of results. A good paper example is by Bosse et al. [3], which has data from Killian Court at MIT.
http://www.ijrr.org/contents/23_12/abstract/1113.html


IEEE Robotics and Automation Society http://www.ieee-ras.org/
IEEE ICRA (International Conference on Robotics and Automation) http://www.icra2009.org/
http://icra2010.grasp.upenn.edu/

International Foundation of Robotics Research http://www.ifrr.org/
ISRR 2009 - 14th International Symposium on Robotics Research http://www.isrr2009.ethz.ch/

IROS 2009: The 2009 IEEE/RSJ International Conference on Intelligent RObots and Systems www.iros09.mtu.edu/
http://www.iros2010.org.tw/

ICARCV 2010 - The 11th International Conference on Control, Automation, Robotics and Vision
http://www.icarcv.org/2010/



History

- 1986, probabilistic SLAM problem (IEEE Robotics and Automation Conference)
Peter Cheeseman, Jim Crowley, and Hugh Durrant-Whyte, Raja Chatila, Oliver Faugeras, Randal Smith
> estimation-theoretic methods, consistent mapping

- consistent probabilistic mapping
Smith and Cheesman [39] and Durrant-Whyte [17]
> statistical basis
"There must be a high degree of correlation between estimates of the location of different landmarks in a map"

- visual navigation & sonar-based navigation
Ayache and Faugeras [1],  Crowley [9] and Chatila and Laumond [6]
> Kalman filter-type algorithms

Smith et al. [40] "The estimations of the landmarks are all necessarily correlated with each other because of the common error in estimated vehicle location."

> joint state (of the vehicle pose and every landmark position) to be updated following each landmark observation & estimator (state vector)

- random-walk behavior with unbounded error growth (without knowledge of the convergence behavior of the map)

> single estimation problem: "The combined mapping and localization problem is convergent."
"The more the correlations between landmarks grew, the better the solution."

- 1995, coining of the term SLAM (in a paper at the International Symposium on Robotics Research), also called CLM (concurrent mapping and localization)
Csorba [10], [11]. the Massachusetts Institute of Technology [29], Zaragoza [4], [5], the ACFR at Sydney [20], [45], and others [7], [13]
> computational efficiency, addressing in data association, loop closure

- 1999 ISRR, convergence between the Kalman-filter-based SLAM methods and the probabilistic localisation and mapping methods introduced by Thrun

- 2000 IEEE ICRA
> algorithmic complexity, data association, implementation



Formulation

SLAM = process by which a mobile robot can build a map of an environment and at the same time use this map to deduce its location
(In SLAM, both the trajectory of the platform and the location of all landmarks are estimated online without the need for any a priori knowledge of location.)

Probabilistic SLAM
The SLAM probability distribution = the joint posterior density of the landmark locations and vehicle state (at time k) given the recorded observations and control inputs up to and including time k together with the initial state of the vehicle
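
In symbols (with x_k the vehicle state, m the set of landmark locations, Z_{0:k} the observation history, U_{0:k} the control history, and x_0 the initial state) this is the distribution

P(x_k, m \mid Z_{0:k}, U_{0:k}, x_0)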

recursive solution
: observation model + motion (state transition) model == Bayes theorem ==> estimate of SLAM distribution

   motion model -> prediction (; time update)
+ observation model -> correction (; measurement update)
+ Markov process
=> map building problem + localization problem
: joint posterior density of the landmark locations and vehicle state

As the map is built, the accuracy of the robot's location, measured relative to the map, is bounded only by the quality of the map and of the relative measurement sensor.
: the robot's relative location accuracy becomes equal to the localization accuracy achievable with a given map.
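Written out, the recursion has the standard two-step form (x_k the vehicle state, m the set of landmarks, Z_{0:k} the observations, U_{0:k} the control inputs, x_0 the initial state). Time update, driven by the motion model:

P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \int P(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbf{u}_k)\, P(\mathbf{x}_{k-1}, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k-1}, \mathbf{x}_0)\, d\mathbf{x}_{k-1}

Measurement update, driven by the observation model:

P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \frac{P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{m})\, P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0)}{P(\mathbf{z}_k \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k})}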


Solutions

1)
state-space model + additive Gaussian noise
EKF = extended Kalman filter

2)
a set of samples of a more general non-Gaussian probability distribution to describe vehicle motion
Rao-Blackwellized particle filter or FastSLAM algorithm

3)
information-state form

ref. Sebastian Thrun, Yufeng Liu, Daphne Koller, Andrew Y. Ng, Zoubin Ghahramani, Hugh Durrant-Whyte
Simultaneous Localization and Mapping With Sparse Extended Information Filters



1) EKF-SLAM
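(No detailed notes here yet.) A minimal Python sketch of one EKF-SLAM cycle, assuming a planar robot with state [x, y, theta], static 2-D point landmarks already stacked in the joint state vector, and a range-bearing sensor; this is an illustrative reduction, not the exact formulation of any of the cited papers.

import numpy as np

def ekf_slam_predict(mu, P, u, Q, dt=1.0):
    """Time update: propagate the robot part of the joint state with a simple
    velocity/turn-rate motion model; landmarks are static."""
    v, w = u
    x, y, th = mu[0], mu[1], mu[2]
    mu = mu.copy()
    mu[0] = x + v * dt * np.cos(th)
    mu[1] = y + v * dt * np.sin(th)
    mu[2] = th + w * dt
    F = np.eye(len(mu))                    # motion Jacobian (identity on landmarks)
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    P = F @ P @ F.T
    P[:3, :3] += Q                         # process noise acts on the robot only
    return mu, P

def ekf_slam_update(mu, P, z, R, idx):
    """Measurement update with one range-bearing observation z = [r, phi] of the
    landmark stored at state indices idx (x) and idx+1 (y)."""
    dx, dy = mu[idx] - mu[0], mu[idx + 1] - mu[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.zeros((2, len(mu)))             # observation Jacobian (sparse)
    H[0, 0], H[0, 1] = -dx / np.sqrt(q), -dy / np.sqrt(q)
    H[0, idx], H[0, idx + 1] = dx / np.sqrt(q), dy / np.sqrt(q)
    H[1, 0], H[1, 1], H[1, 2] = dy / q, -dx / q, -1.0
    H[1, idx], H[1, idx + 1] = -dy / q, dx / q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain (touches the full joint P)
    innovation = z - z_hat
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing
    mu = mu + K @ innovation
    P = (np.eye(len(mu)) - K @ H) @ P
    return mu, P

The quadratic cost mentioned above shows up in the last lines: every observation update multiplies against the full N*N joint covariance.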



2) Rao-Blackwellized filter
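(No detailed notes here yet either.) The key data structure, sketched in Python: each particle carries a sampled robot pose plus one small independent EKF per landmark, which is exactly what the Rao-Blackwellization exploits. Here sample_motion and landmark_ekf_update are hypothetical callables standing in for the motion model and the per-landmark measurement update.

import copy
import numpy as np

class Particle:
    def __init__(self, pose):
        self.pose = np.asarray(pose, dtype=float)   # sampled robot pose hypothesis
        self.weight = 1.0
        self.landmarks = {}   # id -> (mean, cov): independent per-landmark EKFs

def rbpf_step(particles, u, z, sample_motion, landmark_ekf_update):
    """One Rao-Blackwellized particle filter cycle: sample motion per particle,
    run each observed landmark's EKF conditioned on that particle's pose,
    reweight by the measurement likelihood, then resample."""
    for p in particles:
        p.pose = sample_motion(p.pose, u)            # proposal from the motion model
        for lm_id, meas in z.items():
            mean, cov, lik = landmark_ekf_update(p.pose, p.landmarks.get(lm_id), meas)
            p.landmarks[lm_id] = (mean, cov)
            p.weight *= lik                          # importance weight
    w = np.array([p.weight for p in particles])
    w /= w.sum()                                     # normalize weights
    idx = np.random.choice(len(particles), size=len(particles), p=w)
    new = [copy.deepcopy(particles[i]) for i in idx] # multinomial resampling
    for p in new:
        p.weight = 1.0
    return new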




posted by maetel
2009. 7. 21. 16:16 Computer Vision
임현, 이영삼 (School of Electrical Engineering, Inha University)
Simultaneous Localization and Mapping (SLAM) for Mobile Robots
Journal of the Institute of Control, Robotics and Systems (제어 로봇 시스템 학회지), Vol. 15, No. 2 (June 2009)
from kyu


> definition
mapping: converting the environment into recognizable information
localization: estimating the robot's own position from that information

> issues
- uncertainty <= sensor
- data association: extracting roughly 2- to 3-dimensional information from high-dimensional sensor data and matching it consistently over time
- how to manage the observed feature-point data efficiently


> localization
: estimating the robot's own position based on observations of landmarks whose locations are known in advance
: estimating the robot's position at every time step k from the initial state x0, the control inputs up to time k-1, the observation vectors, and the landmarks whose positions are known a priori
- the uncertainty of the robot's position estimate stems from sensor errors.

> mapping
: modeling the environment the robot occupies by accumulating observations made as coordinates relative to a reference point
: estimating the set of landmarks from the robot poses, the observations, and the control inputs
- the inaccuracy of the map stems from sensor errors.

> Simultaneous Localization and Mapping (SLAM)
: estimating the robot's position within the environment it occupies (while building the map of that environment)
: the joint probability of the landmark positions and the robot state vector xk at time k, given the landmark observation vectors, the initial value, and all applied control inputs
- recursive method + Bayes theorem
- observation model + motion model (state-space model of the robot's motion)
- the motion model means the state transition is a Markov process (the current state is described only by the previous state and the input vector, and is independent of the landmark set and the observations).
- prediction (time update) + correction (measurement update)
- the uncertainty comes from robot odometry and sensor errors.


conditional Bayes rule
http://en.wikipedia.org/wiki/Bayes%27_theorem
 P(A|B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} = \frac{P(B|A \cap C) \, P(A|C) \, P(C)}{P(C) \, P(B|C)} = \frac{P(B|A \cap C) \, P(A|C)}{P(B|C)}\,.

Markov process

total probability theorem: "law of alternatives"
http://en.wikipedia.org/wiki/Total_probability_theorem
\Pr(A)=\sum_{n} \Pr(A\cap B_n)\,
\Pr(A)=\sum_{n} \Pr(A\mid B_n)\Pr(B_n).\,

> Extended Kalman filter (EKF)


http://en.wikipedia.org/wiki/Ground_truth

posted by maetel
2009. 7. 15. 16:49 Computer Vision
Klein, G. and Murray, D. 2007.
Parallel Tracking and Mapping for Small AR Workspaces
In Proceedings of the 2007 6th IEEE and ACM international Symposium on Mixed and Augmented Reality - Volume 00 (November 13 - 16, 2007). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 1-10. DOI= http://dx.doi.org/10.1109/ISMAR.2007.4538852

Georg Klein
David Murray
Active Vision Laboratory, Department of Engineering Science, University of Oxford

Source Code / Usage Example


1. parallel threads of tracking and mapping
2. mapping from smaller keyframes: batch techniques (Bundle Adjustment)
3. Initializing the map from 5-point Algorithm
4. Initializing new points with epipolar search
5. mapping thousands of points


Joint Compatibility Branch and Bound (JCBB)
http://en.wikipedia.org/wiki/JCBB

RANdom SAmple Consensus (RANSAC)
http://en.wikipedia.org/wiki/RANSAC

coarse-to-fine approach

batch method
bundle adjustment
http://en.wikipedia.org/wiki/Bundle_adjustment

Structure-from-Motion (SfM)

five-point stereo
http://en.wikipedia.org/wiki/Eight-point_algorithm

5-point algorithm
http://portal.acm.org/citation.cfm?id=987623

Henrik Stewénius, Christopher Engels, David Nistér
Recent Developments on Direct Relative Orientation


epipolar feature search

intensity-patch descriptor

(feature-to-feature or camera-to-feature) correlation-based search

NCC (normalized cross correlation) search
http://en.wikipedia.org/wiki/Cross-correlation#Normalized_cross-correlation
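A hedged sketch of the patch score itself (plain NumPy, mean-removed form, i.e. what is usually written as ZNCC; not PTAM's actual implementation):

import numpy as np

def ncc(patch, candidate):
    """Mean-removed normalized cross-correlation of two equally sized patches.
    Returns a score in [-1, 1]; 1 means a perfect match up to brightness/contrast."""
    a = patch.astype(float) - patch.mean()
    b = candidate.astype(float) - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

Matching then scores the keyframe patch against candidate positions (e.g. corners near the predicted location) and keeps the best score above a threshold.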


Unibrain Fire-i digital camera

http://en.wikipedia.org/wiki/YUV411

FAST-10 corner detection
http://wapedia.mobi/en/Corner_detection
http://en.wikipedia.org/wiki/Corner_detection

decaying velocity model
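Hedged reading of the term (the exact damping PTAM uses is not reproduced here): the estimated camera velocity is carried forward between frames but damped, something like

\mathbf{v}_{k+1} = \alpha\,\mathbf{v}_k, \qquad 0 < \alpha < 1,

so that in the absence of new information the predicted motion decays to rest instead of drifting.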

barrel radial distortion
http://en.wikipedia.org/wiki/Distortion_(optics)
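A common polynomial form of radial distortion, for reference (the lens model PTAM actually uses is not asserted here):

r_d = r_u \left( 1 + k_1 r_u^2 + k_2 r_u^4 + \cdots \right)

with barrel distortion corresponding to k_1 < 0, i.e. magnification falling off toward the image border.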

Lie group SE(3)

affine warp
warping matrix <- (1) back-projecting unit pixel displacements in the source keyframe pyramid level onto the patch's plane and then (2) projecting these into the current (target) frame

inverse compositional image alignment 

Tukey biweight objective function

M-estimator
http://en.wikipedia.org/wiki/M-estimator
Zhengyou Zhang, M-estimators
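A sketch of the Tukey biweight as a weight function for iteratively reweighted least squares (c = 4.685 is the usual 95%-efficiency constant; treat the constant and scaling as assumptions, since in practice the residual scale is estimated robustly from the data, which is not shown here):

import numpy as np

def tukey_weight(residual, c=4.685):
    """Tukey biweight IRLS weight: smoothly down-weights large residuals and
    assigns exactly zero weight to residuals beyond the cutoff c."""
    r = np.abs(np.asarray(residual, dtype=float)) / c
    return np.where(r <= 1.0, (1.0 - r**2) ** 2, 0.0)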

Shi-Tomasi corner detector

http://en.wikipedia.org/wiki/Levenberg-Marquardt

cubic-cost matrix factorization
http://en.wikipedia.org/wiki/Cubic_function



posted by maetel
2009. 4. 9. 21:24 Computer Vision
M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit.
FastSLAM: A factored solution to the simultaneous localization and mapping problem.
In Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002. AAAI.



posted by maetel
2009. 3. 31. 21:10 Computer Vision

Real-time simultaneous localisation and mapping with a single camera

Davison, A.J.  
Dept. of Eng. Sci., Oxford Univ., UK;

This paper appears in: Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on
Publication Date: 13-16 Oct. 2003
On page(s): 1403-1410 vol.2
ISBN: 0-7695-1950-4
INSPEC Accession Number: 7971070
Digital Object Identifier: 10.1109/ICCV.2003.1238654
Current Version Published: 2008-04-03


 

posted by maetel
2009. 3. 27. 21:33 Computer Vision

Scalable Monocular SLAM
Eade, E.   Drummond, T.  
Cambridge University;

This paper appears in: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on
Publication Date: 17-22 June 2006
Volume: 1,  On page(s): 469- 476
ISSN: 1063-6919
ISBN: 0-7695-2597-0
Digital Object Identifier: 10.1109/CVPR.2006.263
Current Version Published: 2006-07-05

 
Ethan Eade & Tom Drummond
Machine Intelligence Laboratory
the Division of Information Engineering at Cambridge University Engineering Department




monocular SLAM
particle filter + top-down search => real-time, large number  of landmarks

the first to apply this FastSLAM-type particle filter to single-camera SLAM


1. Introduction


SLAM = Simultaneous Localization and Mapping
: process of causally estimating both egomotion and structure in an online system

 SLAM using visual data in computer vision

SFM (= structure from motion): reconstructing scene geometry
+ causal or recursive estimation techniques

perspective-projection cameras

filtering methods to allow indirect observation models

Kalman filtering framework

Extended Kalman filter = EKF (-> to linearize the observation and dynamics models of the system)

causal estimation with recursive algorithms (i.e., estimation depending only on observations up to the current time)
=> online operation (cp. SFM on global nonlinear optimization)


Davision's SLAM with a single camera
> EKF estimation framework
> top-down Bayesian estimation approach searching for landmarks in image regions constrained by estimate uncertainty (instead of performing extensive bottom-up image processing and feature matching)
> Bayesian partial-initialization scheme for incorporating new landmarks
- cannot scale to large environment


EKF = the Extended Kalman filter
- N*N covariance matrix for N landmarks
- updated with N*N computation cost

> SLAM system using a single camera as the only sensor
> frame-rate operation with many landmarks
> FastSLAM-style particle filter (the first use of such an approach in a monocular SLAM setting)
> top-down active search
> an efficient algorithm for discovering the depth of new landmarks that avoids linearization errors
> a novel method for using partially initialized landmarks to help constrain camera pose


FastSLAM
: based on the Rao-Blackwellized Particle Filter

2. Background

2.1 Scalable SLAM

> submap
bounded complexity -> bounded computation and space requirements

Montemerlo & Thrun
If the entire camera motion is known then the estimates of the positions of different landmarks become independent of each other.
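This is the conditional independence that the factored (FastSLAM) posterior exploits; in its standard form, with x_{1:t} the trajectory and m_i the i-th landmark:

p(\mathbf{x}_{1:t}, \mathbf{m} \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t}) = p(\mathbf{x}_{1:t} \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t}) \prod_i p(\mathbf{m}_i \mid \mathbf{x}_{1:t}, \mathbf{z}_{1:t})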







Rao-Blackwellized Particle Filter



ZNCC = the Zero mean Normalized Cross-Correlation function


epipolar constraint

http://en.wikipedia.org/wiki/Epipolar_geometry
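For reference, the constraint exploited in epipolar search, in its standard form (x and x' the homogeneous image points of the same landmark in the two views, F the fundamental matrix):

\mathbf{x}'^{\top} \mathbf{F}\, \mathbf{x} = 0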


posted by maetel
2009. 3. 27. 21:05 Computer Vision

Hugh F. Durrant-Whyte, Australian Centre for Field Robotics
http://en.wikipedia.org/wiki/Hugh_F._Durrant-Whyte

John J. Leonard, Center for Ocean Engineering, MIT

Sebastian Thrun, Stanford Artificial Intelligence Laboratory, Stanford University
http://en.wikipedia.org/wiki/Sebastian_Thrun

David Nistér, Center for Visualization and Virtual Environments, University of Kentucky

Ethan Eade, Machine Intelligence lab, Engineering Department, Cambridge University

Tom Drummond, Machine Intelligence Laboratory, Engineering Department, Cambridge University

Javier Civera, Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza

Andrew J. Davison, Reader in Robot Vision at the Department of Computing, Imperial College London

Jose Maria Martinez Montiel, Robotics and Real Time Group, Universidad de Zaragoza

Robert Castle, Active Vision Laboratory, Robotics Research Group, Oxford University

임현, Embedded Control System Lab, School of Electrical Engineering, Inha University

김정호, Robotics and Computer Vision Lab (권인소), KAIST

labs
 
Active Vision Goup, Robotics Research Group, Engineering Department, Oxford University

Computer Vision & Robotics Group, Machine Intelligence Laboratory, Department of Engineering, University of Cambridge

Image Information Processing Lab (홍기상), POSTECH

Intelligent Control and Systems Lab (김상우), POSTECH


posted by maetel
2009. 2. 14. 17:27 Computer Vision
IEEE Transactions on Robotics, Volume 24, Number 5, October 2008
: Visual SLAM Special Issue

Guest Editorial: Special Issue on Visual SLAM


simultaneous localization and mapping (SLAM)
in autonomous mobile robotics
using laser range-finder sensors
to build 2-D maps of planar environments

SLAM with standard cameras:
feature detection
data association
large-scale state estimation

SICK laser scanner


Kalman filter
Particle filter
submapping

http://en.wikipedia.org/wiki/Particle_filter
particle filter = sequential Monte Carlo methods (SMC)


http://en.wikipedia.org/wiki/Image_registration
the process of transforming the different sets of data into one coordinate system


http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping
the process of creating geometrically accurate maps of the environment
to build up a map within an unknown environment while at the same time keeping track of their current position

R.C. Smith and P. Cheeseman (1986)

Hugh F. Durrant-Whyte (early 1990s)

Sebastian Thrun

mobile robotics
autonomous vehicle

한국로봇산업협회 (Korea Association of Robot Industry)  http://www.korearobot.or.kr/




posted by maetel