Open Lab is finally back, for the fourth time already.
We hope you will visit, experience Zenitum's recent work, and have a good time.
Zenitum’s Open Lab is back! Our Open Lab 4 will showcase our latest
work in the field of augmented reality, including 3D reconstruction and
various techniques for recognizing and tracking images.
Compositing camera input images, captured with OpenCV functions and stored as IplImage, with graphics drawn by OpenGL functions
Way #1.
Turn each image frame captured from OpenCV's camera input into a texture, place it in the OpenGL display window as the background (texture-mapped onto a plane), and draw the graphics over it.
ref. http://cafe.naver.com/opencv/12266
Problem found while testing: if cvRetrieveFrame() is called from glutDisplayFunc() and driven by glutMainLoop(), instead of being run under a while loop and displayed with cvShowImage(), it takes much longer. (cvGrabFrame() is fine either way.)
Textures are simply rectangular arrays of data - for example, color data, luminance data, or color and alpha data. The individual values in a texture array are often called texels.
The data describing a texture may consist of one, two, three, or four elements per texel, representing anything from a modulation constant to an (R, G, B, A) quadruple.
A texture object stores texture data and makes it readily available. You can now control many textures and go back to textures that have been previously loaded into your texture resources.
2. Rectification using the lens distortion coefficients (kappa1, kappa2)
When pattern recognition succeeds, the camera calibration results are naturally accurate, and from them the object (graphic) coordinates needed to composite a virtual object can be computed in real time. The current cause of failure in our pattern recognition is error in line detection. This error has several sources, but the largest is lens distortion, which we are not yet accounting for. In practice we detect several (2-3) lines for what is really a single line (NMS alone has shown its limits in reducing this error), and the resulting error in the positions of the computed intersections is decisive in the cross ratio computation. Since the current pattern generation and pattern recognition depend entirely on cross ratio values, this problem must be solved. So: take lens distortion into account, straighten (rectify) the input image, and then apply the existing pattern recognition algorithm.
Learning OpenCV, p. 396:
"OpenCV provides us with a ready-to-use undistortion algorithm that
takes a raw image and the distortion coefficients from
cvCalibrateCamera2() and produces a corrected image (see Figure 11-12).
We can access this algorithm either through the function cvUndistort2(),
which does everything we need in one shot, or through the pair of
routines cvInitUndistortMap()
and cvRemap(),
which allow us to handle things a little more efficiently for video or
other situations where we have many images from the same camera. ( * We
should take a moment to clearly make a distinction here between
undistortion, which mathematically removes lens distortion, and
rectification, which mathematically aligns the images with respect to each
other. )
We decided to use the Tsai algorithm to (1) compute the intrinsic parameters of the camera, which we assume to be fixed, and (2) compute the camera's rotation and translation for every incoming image frame in real time, and to find and integrate source code or a library implemented in C or C++.
Try #1.
At first I tried to include the parts I needed from the Tsai Camera Calibration code written in C by Reg Willson of CMU, but much of it is written in old-style C that is not valid C++, and fixing it was painful. (If you include a .c file in an Xcode C++ project it compiles, but linking fails, so the .c files have to be renamed to .cpp.) The decisive problem, though, was that lmdif_(), the optimization routine called by the functions in "cal_main.cpp" that produce the final calibration results, is defined in the Fortran file "lmdif.f" and is shipped as "lmdif.c" generated through the Fortran-to-C shim "f2c.h". Turning lmdif.c into a proper lmdif.cpp would require knowing Fortran and how to translate Fortran into C++, so in the end I gave up.
Try #2.
The DRC program ( DRC.zip ), an implementation of Display-Relative Calibration (DRC) by Charles B. Owen of Michigan State University, uses Tsai's algorithm ( libtsai.zip ) for its camera calibration. This library is the C code above rewritten into C++ around a class called "CTsai", with many functions modified, supplemented, and merged; but it was built as a Visual Studio project on Windows and relies on MFC, so it cannot be dropped into my Xcode project on Mac OS X as-is. Its usage is as follows.
When calling "Lmdif", a member function of "lmdif" declared as a class template ( CLmdif ):
min/Lmdif.h:48
template<class T> class CLmdif : private CLmdif_
{
int Lmdif(T *p_user, bool (T::*p_func)(int m, int n, const double *parms, double *err),
int m, int n, double *x, double *fvec, double *diag, int *ipvt, double *qtf)
};
The code passes the member function ncc_compute_exact_f_and_Tz_error() as that argument (highlighted in the original listing), and compiling produces an error saying this argument is of <unknown type>, together with a recommended form.
The form of the function pointer seems to be wrong. Changing the argument to a pointer to a non-static member function of the class, &CTsai::ncc_compute_exact_f_and_Tz_error, changes the error message to:
error: no matching function for call to 'CLmdif<CTsai>::Lmdif(CTsai* const, bool (*)(int, int, const double*, double*), int&, const int&, double [3], NULL, NULL, NULL, NULL)'
Changing it instead to CTsai::ncc_compute_exact_f_and_Tz_error (without the &) gives this error:
error: no matching function for call to 'CLmdif<CTsai>::Lmdif(CTsai* const, bool (&)(int, int, const double*, double*), int&, const int&, double [3], NULL, NULL, NULL, NULL)'
Solution:
As a workaround, declaring CLmdif as a plain class rather than a class template and calling Lmdif with the latter form compiles without errors, so I am moving on with that for now.
Problem #2.
The code uses Windows MFC, which causes errors on Mac OS X.
Solution:
Comment out all of "StdAfx.h", which pulls in MFC.
Problem #3.
Lmdif.h
... After resolving these and other such problems, check whether the calibration results are correct.
source code:
if ( CRimage.size() > 0 ) // if there is a valid point with its cross ratio
{
    correspondPoints( indexI, indexW, p, CRimage, linesYorder.size(), linesXorder.size(),
                      world, CRworld, dxList.size(), dyList.size(), iplMatch, scale );
}
cvShowImage( "match", iplMatch );
cvSaveImage( "match.bmp", iplMatch );
The photos below show, using the computed intrinsic/extrinsic camera parameters: (1) the points on the image frame (image coordinates) corresponding to points of the real pattern, found by reprojection and drawn as purple circles, and (2) a cube drawn with yellow lines in graphic coordinates based on the world coordinates of the real pattern.
16 pairs of corresponding points, matched one to one between the image frame and the real pattern
The points of the real pattern reprojected onto the image frame with the computed camera parameters (purple dots), and a rendering of how graphics based on the real pattern's coordinates appear on the image frame (yellow box)
The coordinates of the 16 corresponding pairs shown in the upper-left photo, printed as "image coordinates (x,y) : pattern coordinates (x,y,z)":
camera parameter
focus = 3724.66
principal axis (x,y) = 168.216, 66.5731
kappa1 (lens distortion) = -6.19473e-07
skew_x = 1
When the correspondences contain no errors, i.e. when pattern recognition works, we can confirm that the Tsai algorithm recovers the camera parameters correctly. However, the code currently runs a full optimization (optimizing over all parameters) and recomputes every parameter for every frame, so it is very slow. The reprojection and the simple test graphics have little effect on speed; it is the camera calibration before them that takes long. It takes far more time than the interval between input frames, so real-time operation is not yet possible.
Therefore the code must be changed so that (1) the intrinsic parameters are computed only once, on the first frame, and (2) only the extrinsic parameters (the camera's rotation and translation) are computed for each subsequent frame.
Test on the correspondences of feature points
Compute the cross ratio at each intersection, then find and match the pattern point whose cross ratio is closest to that value.
Try #1. one-to-all
At each intersection formed by the lines detected in the input image, compute the cross ratio (x,y) for the lines through the next three intersections in the horizontal direction and the next three in the vertical direction. Ideally, the grid point of the pattern whose cross ratio matches the value computed in step 1 is the point that actually corresponds to that intersection in the input image.
When line detection has few errors, the test results below show that for each intersection in the input image the corresponding line of the real pattern is found immediately, one to one. That is, the horizontal-direction cross ratio at a point in the input image is checked against the cross ratios of all horizontal lines of the pattern, and the line with the closest value is matched. (The photo on the right below does the same with the vertical-direction cross ratios.) (point-to-line)
Result of comparing cross ratios only for points on horizontal lines
Result of comparing cross ratios only for points on vertical lines
If we take one intersection in the input image, find the vertical line of the real pattern whose cross ratio matches its x-direction cross ratio, and likewise find the horizontal line for its y-direction cross ratio, then the single pattern point where that vertical line and horizontal line cross emerges. For one point in the input image, the correspondence is made by comparing against all the pattern's lines, (number of horizontal lines + number of vertical lines) times. (point-to-point)
(when pattern recognition succeeds)
(when a wrong correspondence occurs)
source code:
void matchXY ( vector<CvPoint2D32f> &p, vector<CvPoint2D32f> &CRimage, int numIx, int numIy, vector<CvPoint3D32f> &world, vector<CvPoint2D32f> &CRworld, int numPx, int numPy, IplImage* iplMatch, CvPoint2D32f scale )
{
for( int i = 0; i < numIx; i++ ) // points in x-direction on the input image
{
// check if x-component of the point is valid
if( -1 == CRimage[i*numIy+0].x )
{
cout << endl << "could not make matching in x-direction" << endl;
continue;
}
// CvScalar generateRandomColor(unsigned char thR, unsigned char thG, unsigned char thB) defined in "matching.h"
CvScalar colorMatch = generateRandomColor(50,50,50);
for( int j = 0; j < numIy; j++ ) // points in y-direction on the input image
{
// check if y-component of the point is valid
if( -1 == CRimage[i*numIy+j].y )
{
cout << endl << "could not make matching in y-direction" << endl;
continue;
}
// to find the x-index of the corresponding point
int indexPx = 0;
float errX_min = fabs( CRimage[i*numIy+j].x- CRworld[indexPx*numPy+0].x );
// search points in x-direction on the real pattern
for( int wx = 0; wx < numPx; wx++ )
{
float errX = CRimage[i*numIy+j].x - CRworld[wx*numPy+0].x;
if ( fabs(errX) < errX_min )
{
errX_min = fabs(errX);
indexPx = wx;
}
}
// to find the y-index of the corresponding point
int indexPy = 0;
float errY_min = fabs( CRimage[i*numIy+j].y - CRworld[0*numPy+indexPy].y );
// search points in y-direction on the real pattern
for( int wy = 0; wy < numPy; wy++ )
{
float errY = CRimage[i*numIy+j].y - CRworld[0*numPy+wy].y;
if ( fabs(errY) < errY_min )
{
errY_min = fabs(errY);
indexPy = wy;
}
}
// cout << endl << i << ", " << j << " point in the input frame is matched with "
// << indexPy << "-th point in the real pattern" << endl;
// draw the line to connect "world" point and "image" point
CvPoint pointImage = cvPoint(cvRound(p[i*numIy+j].x), cvRound(IMG_HEIGHT + p[i*numIy+j].y));
CvPoint pointPattern = cvPoint(cvRound(world[indexPx*numPy+indexPy].x*scale.x), cvRound(world[indexPx*numPy+indexPy].y*scale.y));
So at present (1) we do not exploit the numerical trend of the cross ratio values along a series of points estimated to lie on one line of the input image, and (2) we do not judge which part (position or extent) of the real pattern has been captured in the input image, but blindly compare cross ratios against every grid point of the entire pattern.
The current code draws white squares on a black background, and errors arise while converting fractional values to integer values (pixel coordinates). Zooming in (left image) on what at a glance looks like a regular grid (right image) reveals gaps or overlaps between the squares.
Changing the drawing method solves it. First draw white vertical bars as tall as the image, then white horizontal bars as wide as the image, then repaint in black the regions where the bars cross (which have effectively been painted white twice). (What is this, anyway... It is not computer vision, not image processing, and calling it computer graphics would be laughable; it feels like solving a mid-to-upper-difficulty middle-school math contest problem.)
Pattern generated with 40 vertical and 30 horizontal lines (400x300 pixels)
The pattern on the right magnified 7x
/* Draw grid pattern
   using kyu's pattern generator with optimal cross ratios
   2010, lym */
// calculate cross ratios in the world coordinate on real pattern
void crossRatioWorld( vector<CvPoint2D32f>& CRworld, vector<CvPoint3D32f>& world, int dxListSize, int dyListSize, CvPoint2D32f scale )
{
// vector<CvPoint2D32f> crossRatioWorld; // cross ratios in the world coordinate on real pattern
float crX = -1.0, crY = -1.0;
for( int i = 0; i < dxListSize; i++ )
{
for( int j = 0; j < dyListSize; j++ )
{
CRworld.push_back(cvPoint2D32f(crX, crY));
}
}
// "cr[iP] = p1 * p3 / ((p1 + p2) * (p2 + p3))" in psoBasic.cpp: 316L
// that is (b-a)(d-c)/(c-a)(d-b) with 4 consecutive points, a, b, c, and d
float a, b, c, d;
// cross ratios in horizontal lines
for( int i = 0; i < dxListSize-3; i++ )
{
a = world[i*dyListSize].x;
b = world[(i+1)*dyListSize].x;
c = world[(i+2)*dyListSize].x;
d = world[(i+3)*dyListSize].x;
swPark_2000rti, p. 439: In the initial identification process, we first extract and identify vertical and horizontal lines of the pattern by comparing their cross-ratios, and then we compute the intersections of the lines. Theoretically with this method, we can identify feature points in every frame automatically, but several situations cause problems in the real experiments.
박승우_1999전자공학회지, p. 94: In the initial recognition process, to recognize an intersection on the pattern, the cross-ratios of the horizontal and vertical lines obtained from the image are compared with the cross-ratios of the pattern's horizontal and vertical lines, as described in the pattern construction, to identify which lines they are. With this method feature points can be found and recognized automatically from the image, but several limitations arise in actual use.
0. Line detection by Hough transform with NMS (Non-Maximum Suppression)
As described before ( http://leeway.tistory.com/801 ), OpenCV's HoughLines2() finds several lines from the edges detected in the image frame for points that in the real pattern lie on a single line. This is because when the computed values of the two parameters defining a line, rho and theta ( x*cos(theta) + y*sin(theta) = rho ), come out similar to one another, HoughLines2() outputs them all as-is, with no step that selects the best value. So instead of using this OpenCV function, I first wrote a separate Hough-transform line finder with NMS (Non-Maximum Suppression) applied; but running it on every frame of live camera input was far too slow to use. I therefore took OpenCV's much faster HoughLines2() as a base, added the NMS step to it, and called this modified function on every input frame; real-time processing then became possible. (-> source code)
Number the resulting vertical lines in the order they appear from left to right in the image frame (red numbers in the figure below), and the horizontal lines from top to bottom (blue numbers in the figure below). The ordering is computed from the x-intercept for vertical lines and the y-intercept for horizontal lines.
In the code below, "line.x0" is the x-intercept of the line "line".
// rearrange lines from left to right
void indexLinesY ( CvSeq* lines, IplImage* image )
{
// retain the values of "rho" & "theta" of found lines
int numLines = lines->total;
// line_param line[numLines]; // declared this way, the variable causes problems later when passed out of the function (though it compiles)
line_param *line = new line_param[numLines];
for( int n = 0; n < numLines; n++ )
{
float* newline = (float*)cvGetSeqElem(lines,n);
line[n].rho = newline[0];
line[n].theta = newline[1];
}
// rearrange "line" array in geometrical order
float temp_rho, temp_theta;
for( int n = 0; n < numLines-1; n++ )
{
for ( int k = n+1; k < numLines; k++ )
{
float x0_here = line[n].rho / cos(line[n].theta);
float x0_next = line[k].rho / cos(line[k].theta);
if( x0_here > x0_next ) {
temp_rho = line[n].rho; temp_theta = line[n].theta;
line[n].rho = line[k].rho; line[n].theta = line[k].theta;
line[k].rho = temp_rho; line[k].theta = temp_theta;
}
}
}
// calculate the other parameters of the rearranged lines
for( int n = 0; n < numLines; n++ )
{
line[n].a = cos(line[n].theta);
line[n].b = sin(line[n].theta);
line[n].x0 = line[n].rho / line[n].a;
line[n].y0 = line[n].rho / line[n].b;
If void indexLinesY( CvSeq* lines, IplImage* image ) is changed to line_param* indexLinesY( CvSeq* lines, IplImage* image ) so that it returns an array of the structure line_param, and that output is fed into the function that computes the intersections, then with the array declared inside the function as
line_param line[numLines];
the values do not survive being passed out of the function into the next function's input. It has to be changed as follows.
Looking at the horizontal lines found in the image frame, the topmost line is numbered starting from 4, not 0, which means lines 0 through 3 lie outside (above) the frame...
Image of detected edges along the vertical direction
Image of detected edges along the horizontal direction
Result of ordering the vertical lines from left to right and the horizontal lines from top to bottom
The left two images are the inputs used for line detection, and the last one is the output of the parameters defining the ordered lines. The y-intercepts of horizontal lines 0 through 3 come out negative.
2. Indexing the intersections
Index the intersections of the grid lines using the line ordering computed in step 1: the intersection of red vertical line 0 and blue horizontal line 0 is point 0-0, and so on.
// index intersection points of lines in X and Y
CvPoint* indexIntersections ( line_param* lineX, line_param* lineY, int numLinesX, int numLinesY, IplImage* image )
// find intersections of lines, "linesX" & "linesY", and draw them in "image"
{
int numPoints = numLinesX * numLinesY; // one intersection per pair of an X-line and a Y-line
CvPoint *p = new CvPoint[numPoints]; // the intersection point of lineX[i] and lineY[j]
char txt[100]; // text to represent the index number of an intersection
Convert the input image "input" to the single-channel "temp" and apply first-order DoG filtering; split the detected edges into horizontal and vertical components via bidirectional intensity comparison and NMS to get the "detected edges" images; use these as input to line detection by Hough transform with NMS applied, drawing the result in the "input" window; then order the detected lines in image frame coordinates, compute the positions and index numbers of the intersections from them, and display those in the "input" window as well.
Current problems: (1) With both the pattern and the camera stationary, so that the input image (top left) is fixed, the DoG-filtered result (middle) is fairly stable, but the per-direction edge images obtained through horizontal/vertical intensity comparison and NMS (bottom) change from frame to frame. Consequently the lines found from these two images (red lines, top left) and the positions and index numbers of the intersections computed from them (light-green circles and sky-blue numbers, top left) are also unstable. (2) The results are also poor when the camera lens is not focused at the distance of the pattern.
/* Test: feature points identification in implementing a virtual studio
1) grid pattern design with cross ratios
2) lines detection by Hough transform with Non Maximum Suppression,
modifying cvHoughLines2() function in OpenCV library */
#include <OpenCV/OpenCV.h> // framework on Mac
//#include <cv.h>
//#include <highgui.h>
//#include <cxmisc.h>
#include <iostream>
#include <vector>
using namespace std;
//#include "nms.h" // Non Maximum Suppression to extract vertical and horizontal edges separately
//#include "nmshough.h" // Hough transform with Non Maximum Suppression to detect lines
struct line_param // structure to contain parameters to define a line
{
// eqn of a line: a*x + b*y = rho, when a = cos(theta) & b = sin(theta)
float rho, theta;
float a, b;
float x0, y0; // x-intercept, y-intercept
};
// rearrange vertical lines from left to right
void indexLinesY (vector<line_param>& line, CvSeq* lines, IplImage* image )
{
int numLines = lines->total; // total number of detected lines
line.resize(numLines); // define the size of "line" vector as "numLines"
char txt[100]; // text to represent the index number of an ordered line
// get "rho" and "theta" values of lines detected by Hough transform and NMS
for( int n = 0; n < numLines; n++ )
{
float* newline = (float*)cvGetSeqElem(lines,n);
line[n].rho = newline[0];
line[n].theta = newline[1];
}
// rearrange "line" array in geometrical order, that is, by values of x-intercept in the image frame coordinate
float temp_rho, temp_theta;
for( int n = 0; n < numLines-1; n++ )
{
for ( int k = n+1; k < numLines; k++ )
{
float x0_here = line[n].rho / cos(line[n].theta);
float x0_next = line[k].rho / cos(line[k].theta);
// rearrange horizontal lines from up to bottom
void indexLinesX (vector<line_param>& line, CvSeq* lines, IplImage* image )
{
int numLines = lines->total; // total number of detected lines
line.resize(numLines); // define the size of "line" vector as "numLines"
char txt[100]; // text to represent the index number of an ordered line
// get "rho" and "theta" values of lines detected by Hough transform and NMS
for( int n = 0; n < numLines; n++ )
{
float* newline = (float*)cvGetSeqElem(lines,n);
line[n].rho = newline[0];
line[n].theta = newline[1];
}
// rearrange "line" array in geometrical order, that is, by values of y-intercept in the image frame coordinate
float temp_rho, temp_theta;
for( int n = 0; n < numLines-1; n++ )
{
for ( int k = n+1; k < numLines; k++ )
{
float y0_here = line[n].rho / sin(line[n].theta);
float y0_next = line[k].rho / sin(line[k].theta);
// index intersection points of lines in X and Y
void indexIntersections (vector<CvPoint>& p, vector<line_param>& lineX, vector<line_param>& lineY, IplImage* image )
{ // find "p", intersections of lines, "linesX" & "linesY", and draw them in "image"
int numLinesX = lineX.size(), numLinesY = lineY.size(); // total number of detected lines
int numPoints = numLinesX * numLinesY; // total number of intersection of the lines
p.resize(numPoints); // define the size of "p" vector as "numPoints"
char txt[100]; // text to represent the index number of an intersection point
// calculate intersection points of vertical and horizontal lines
for( int i = 0; i < numLinesX; i++ )
{
for( int j = 0; j < numLinesY; j++ )
{
int indexP = numLinesY * i + j; // index number of intersection points in geometrical order
float Px = ( lineX[i].rho*lineY[j].b - lineY[j].rho*lineX[i].b ) / ( lineX[i].a*lineY[j].b - lineX[i].b*lineY[j].a ) ;
float Py = ( lineX[i].rho - lineX[i].a*Px ) / lineX[i].b ;
p[indexP].x = cvRound(Px);
p[indexP].y = cvRound(Py);
// display the points in an image
cvCircle( image, p[indexP], 3, CV_RGB(0,255,50) ); // optional line_type/shift arguments left at their defaults
sprintf(txt, "%d-%d", i, j); cvPutText(image, txt, p[indexP], &cvFont(0.7), CV_RGB(50,255,250));
}
}
return;
}
int main()
{
IplImage* iplInput = 0; // input image
IplImage* iplGray = 0; // grey image converted from input image
IplImage *iplTemp = 0; // converted image from input image with a change of bit depth
IplImage* iplDoGx = 0, *iplDoGxClone; // filtered image by DoG in x-direction
IplImage* iplDoGy = 0, *iplDoGyClone; // filtered image by DoG in y-direction
IplImage* iplEdgeX = 0, *iplEdgeY = 0; // edge-detected image by filtering in each direction, to be used as input in line-fitting
double minValx, maxValx; // minimum & maximum of pixel intensity values
double minValy, maxValy;
double minValt, maxValt;
int width, height; // window size of input frame
int kernel = 1; float edgethres; // parameters of NMS function
double rho = 0.8; // distance resolution in pixel-related units
double theta = 0.8; // angle resolution measured in radians
// "A line is returned by the function if the corresponding accumulator value is greater than threshold."
// int threshold = 24, rN = 5, tN = 5; // for grid pattern of 11x7 squares
int threshold = 20, rN = 5, tN = 5; // for grid pattern of lines with cross ratios
double h[] = { -1, -7, -15, 0, 15, 7, 1 }; // 1-D kernel of DoG filter
CvMat DoGx = cvMat( 1, 7, CV_64FC1, h ); // DoG filter in x-direction
CvMat* DoGy = cvCreateMat( 7, 1, CV_64FC1 ); // DoG filter in y-direction
cvTranspose( &DoGx, DoGy ); // transpose(&DoGx) -> DoGy
// output information of lines found by Hough transform with NMS
CvMemStorage* storageX = cvCreateMemStorage(0), *storageY = cvCreateMemStorage(0);
CvSeq* linesX = 0, *linesY = 0;
vector<CvPoint> p; // ordered intersection points on the "linesXorder" & "linesYorder"
vector<line_param> linesXorder, linesYorder; // lines ordered by indexLinesX() / indexLinesY()
// create windows
cvNamedWindow("input");
cvNamedWindow( "temp" );
char title_fx[200], title_fy[200];
sprintf(title_fx, "filtered image by DoGx");
sprintf(title_fy, "filtered image by DoGy");
cvNamedWindow(title_fx);
cvNamedWindow(title_fy);
char title_ex[200], title_ey[200];
sprintf(title_ex, "detected edges in x direction");
sprintf(title_ey, "detected edges in y direction");
cvNamedWindow(title_ex);
cvNamedWindow(title_ey);
// initialize capture from a camera
CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
int frame = 0; // number of grabbed frames
while(1)
{
// get video frames from the camera
if ( !cvGrabFrame(capture) ) { // capture a frame
printf("Could not grab a frame\n\7");
exit(0);
}
else {
// the frame was already grabbed in the condition above, so just retrieve it
iplInput = cvRetrieveFrame(capture); // retrieve the captured frame
cvSaveImage("original.bmp", iplInput);
if(iplInput) {
if(0 == frame) {
// create an image header and allocate the image data
iplGray = cvCreateImage(cvGetSize(iplInput), 8, 1);
iplTemp = cvCreateImage(cvGetSize(iplInput), IPL_DEPTH_32F, 1);
iplDoGx = cvCreateImage(cvGetSize(iplInput), IPL_DEPTH_32F, 1);
iplDoGy = cvCreateImage(cvGetSize(iplInput), IPL_DEPTH_32F, 1);
iplDoGyClone = cvCloneImage(iplDoGy), iplDoGxClone = cvCloneImage(iplDoGx);
iplEdgeX = cvCreateImage(cvGetSize(iplInput), 8, 1);
iplEdgeY = cvCreateImage(cvGetSize(iplInput), 8, 1);
width = iplInput->width, height = iplInput->height;
cvMoveWindow( "temp", 100+width+10, 100 );
cvMoveWindow( title_fx, 100, 100+height+30 );
cvMoveWindow( title_fy, 100+width+10, 100+height+30 );
cvMoveWindow( title_ey, 100, 100+(height+30)*2 );
cvMoveWindow( title_ex, 100+width+10, 100+(height+30)*2 );
}
// convert the input color image to gray one
cvCvtColor(iplInput, iplGray, CV_BGR2GRAY); // convert an image from one color space to another
cvSaveImage("gray.bmp", iplGray);
// convert one array to another with optional linear transformation
cvConvert(iplGray, iplTemp);
// increase the frame number
frame++;
}
// #1. DoG filtering
// convolve an image with the DoG kernel
// void cvFilter2D(const CvArr* src, CvArr* dst, const CvMat* kernel, CvPoint anchor=cvPoint(-1, -1)
cvFilter2D( iplTemp, iplDoGx, &DoGx ); // convolve an image with the DoG kernel in x-direction
cvFilter2D( iplTemp, iplDoGy, DoGy ); // convolve an image with the DoG kernel in y-direction
// convert negative values to positive to filter the image in reverse direction
cvAbs(iplDoGx, iplDoGx); cvAbs(iplDoGy, iplDoGy);
// normalize the pixel values
cvMinMaxLoc( iplDoGx, &minValx, &maxValx ); // find global minimum and maximum in image array
cvMinMaxLoc( iplDoGy, &minValy, &maxValy );
cvMinMaxLoc( iplTemp, &minValt, &maxValt );
cvScale( iplDoGx, iplDoGx, 1.0 / maxValx );
cvScale( iplDoGy, iplDoGy, 1.0 / maxValy );
cvScale( iplTemp, iplTemp, 1.0 / maxValt );
// display windows
cvShowImage( "temp", iplTemp );
cvShowImage( title_fx, iplDoGx ); cvShowImage( title_fy, iplDoGy );
// save images to files
cvSaveImage("temp.bmp", iplTemp);
cvSaveImage("DoGx.bmp", iplDoGx); cvSaveImage("DoGy.bmp", iplDoGy);
// #2. separate selected edges into vertical and horizontal
// arrange vertical lines from left to right
cout << "vertical" << endl;
indexLinesY(linesYorder, linesY, iplInput );
// arrange horizontal lines from up to bottom
cout << "horizontal" << endl;
indexLinesX(linesXorder, linesX, iplInput );
// calculate and index intersection points
indexIntersections(p, linesXorder, linesYorder, iplInput);
DSVideoLib
A DirectShow wrapper supporting concurrent access to framebuffers from
multiple threads. Useful for developing applications that require live
video input from a variety of capture devices (frame grabbers,
IEEE-1394 DV camcorders, USB webcams).
galaxy:~ lym$ port search openvrml
openvrml @0.17.12 (graphics, x11)
a cross-platform VRML and X3D browser and C++ runtime library
galaxy:~ lym$ port info openvrml
openvrml @0.17.12 (graphics, x11)
Variants: js_mozilla, mozilla_plugin, no_opengl, no_x11, player, universal,
xembed
OpenVRML is a free cross-platform runtime for VRML and X3D available under the
GNU Lesser General Public License. The OpenVRML distribution includes libraries
you can use to add VRML/X3D support to an application. On platforms where GTK+
is available, OpenVRML also provides a plug-in to render VRML/X3D worlds in Web
browsers.
Homepage: http://www.openvrml.org/
Build Dependencies: pkgconfig
Library Dependencies: boost, libpng, jpeg, fontconfig, mesa, libsdl
Platforms: darwin
Maintainers: raphael@ira.uka.de openmaintainer@macports.org
galaxy:~ lym$ port deps openvrml
openvrml has build dependencies on:
pkgconfig
openvrml has library dependencies on:
boost
libpng
jpeg
fontconfig
mesa
libsdl
galaxy:~ lym$ port variants openvrml
openvrml has the variants:
js_mozilla: Enable support for JavaScript in the Script node with Mozilla
no_opengl: Do not build the GL renderer
xembed: Build the XEmbed control
player: Build the GNOME openvrml-player
mozilla_plugin: Build the Mozilla plug-in
no_x11: Disable support for X11
universal: Build for multiple architectures
Installing openvrml
galaxy:~ lym$ sudo port install openvrml
Password:
---> Fetching boost-jam
---> Attempting to fetch boost-jam-3.1.17.tgz from http://nchc.dl.sourceforge.net/boost
---> Verifying checksum(s) for boost-jam
---> Extracting boost-jam
---> Applying patches to boost-jam
---> Configuring boost-jam
---> Building boost-jam
---> Staging boost-jam into destroot
---> Installing boost-jam @3.1.17_0
---> Activating boost-jam @3.1.17_0
---> Cleaning boost-jam
---> Fetching boost
---> Attempting to fetch boost_1_39_0.tar.bz2 from http://nchc.dl.sourceforge.net/boost
---> Verifying checksum(s) for boost
---> Extracting boost
---> Applying patches to boost
---> Configuring boost
---> Building boost
---> Staging boost into destroot
---> Installing boost @1.39.0_2
---> Activating boost @1.39.0_2
---> Cleaning boost
---> Fetching libsdl
---> Attempting to fetch SDL-1.2.13.tar.gz from http://distfiles.macports.org/libsdl
---> Verifying checksum(s) for libsdl
---> Extracting libsdl
---> Applying patches to libsdl
---> Configuring libsdl
---> Building libsdl
---> Staging libsdl into destroot
---> Installing libsdl @1.2.13_6
---> Activating libsdl @1.2.13_6
---> Cleaning libsdl
---> Fetching glut
---> Verifying checksum(s) for glut
---> Extracting glut
---> Configuring glut
---> Building glut
---> Staging glut into destroot
---> Installing glut @3.7_3
---> Activating glut @3.7_3
---> Cleaning glut
---> Fetching xorg-dri2proto
---> Attempting to fetch dri2proto-2.1.tar.bz2 from http://distfiles.macports.org/xorg-dri2proto
---> Verifying checksum(s) for xorg-dri2proto
---> Extracting xorg-dri2proto
---> Configuring xorg-dri2proto
---> Building xorg-dri2proto
---> Staging xorg-dri2proto into destroot
---> Installing xorg-dri2proto @2.1_0
---> Activating xorg-dri2proto @2.1_0
---> Cleaning xorg-dri2proto
---> Fetching xorg-glproto
---> Attempting to fetch glproto-1.4.10.tar.bz2 from http://distfiles.macports.org/xorg-glproto
---> Verifying checksum(s) for xorg-glproto
---> Extracting xorg-glproto
---> Configuring xorg-glproto
---> Building xorg-glproto
---> Staging xorg-glproto into destroot
---> Installing xorg-glproto @1.4.10_0
---> Activating xorg-glproto @1.4.10_0
---> Cleaning xorg-glproto
---> Fetching mesa
---> Attempting to fetch MesaLib-7.4.3.tar.bz2 from http://nchc.dl.sourceforge.net/mesa3d
---> Attempting to fetch MesaGLUT-7.4.3.tar.bz2 from http://nchc.dl.sourceforge.net/mesa3d
---> Attempting to fetch AppleSGLX-57.tar.bz2 from http://xquartz.macosforge.org/downloads/src/
---> Verifying checksum(s) for mesa
---> Extracting mesa
---> Applying patches to mesa
---> Configuring mesa
---> Building mesa
---> Staging mesa into destroot
---> Installing mesa @7.4.3_0+hw_render
---> Activating mesa @7.4.3_0+hw_render
---> Cleaning mesa
---> Fetching openvrml
---> Attempting to fetch openvrml-0.17.12.tar.gz from http://nchc.dl.sourceforge.net/openvrml
---> Verifying checksum(s) for openvrml
---> Extracting openvrml
---> Configuring openvrml
---> Building openvrml
---> Staging openvrml into destroot
---> Installing openvrml @0.17.12_0
---> Activating openvrml @0.17.12_0
---> Cleaning openvrml
cd ~/Desktop/ARToolKit/lib/SRC/ARvrml
make
cd ~/Desktop/ARToolKit/examples/simpleVRML
make
cd ~/Desktop/ARToolKit/bin
./simpleVRML
Tests after installing ARToolKit-2.72.1
graphicsTest on the bin directory
-> This test confirms that your camera support ARToolKit graphics module with OpenGL.
videoTest on the bin directory
-> This test confirms that your camera supports ARToolKit video module and ARToolKit graphics module.
simpleTest on the bin directory
-> Note that the closer the capture format is to ARToolKit's tracking format, the faster the
acquisition (RGB is most efficient).
/Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
galaxy:~ lym$ /Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
Camera parameter load error !!
logout
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
*** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 159.250000 131.750000 104.800000 1.012757
350.47574 0.00000 158.25000 0.00000
0.00000 363.04709 120.75000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
Opening Data File Data/object_data2
About to load 2 Models
Read in No.1
Read in No.2
Objectfile num = 2
Printing the pattern's transformation values inside arGetTransMat() gives the following:
camera transformation: 134.438993 63.934746 582.012800
camera transformation: 134.445606 63.981777 582.120969
camera transformation: 134.474482 63.995219 582.242088
camera transformation: 134.599202 63.998890 582.630168
camera transformation: 134.501440 63.963350 582.269908
camera transformation: 134.464995 64.013854 582.242347
camera transformation: 134.490045 63.956372 582.209032
camera transformation: 134.375223 63.789206 581.551681
camera transformation: 133.561691 63.159733 577.815148
camera transformation: 133.063396 62.927971 575.690113
camera transformation: 133.355195 63.043104 577.132167
camera transformation: 134.613795 63.954793 582.183804
camera transformation: 132.159546 64.070513 574.724387
camera transformation: 132.448489 64.937645 575.654565
camera transformation: 130.686699 65.617613 570.876666
camera transformation: 130.650742 65.840462 571.732330
camera transformation: 130.636143 65.874965 573.631585
camera transformation: 129.504212 56.174073 571.607662
camera transformation: 125.830031 48.411508 566.542108
camera transformation: 121.581157 45.285999 569.393613
camera transformation: 123.683377 47.387303 571.546352
camera transformation: 127.458933 44.409366 568.928211
camera transformation: 127.303034 44.345058 568.159484
camera transformation: 127.320462 44.350160 568.224561
camera transformation: 127.317729 44.349189 568.212422
camera transformation: 127.317729 44.349189 568.212422
camera transformation: 125.300218 43.641056 559.530004
camera transformation: 127.269746 44.332084 568.002352
camera transformation: 127.314772 44.348305 568.201544
camera transformation: 127.328986 44.353467 568.264290
camera transformation: 127.328986 44.353467 568.264290
camera transformation: 134.859914 41.818072 563.541940
camera transformation: 135.040310 41.877534 564.294626
camera transformation: 135.043507 41.878547 564.307919
camera transformation: 135.043507 41.878547 564.307919
camera transformation: 135.043507 41.878547 564.307919
camera transformation: 130.805179 40.514050 546.854285
camera transformation: 134.889481 41.829859 563.688319
camera transformation: 135.047962 41.880133 564.327580
camera transformation: 135.047962 41.880133 564.327580
camera transformation: 135.047962 41.880133 564.327580
camera transformation: 145.248889 34.185486 561.683418
camera transformation: 145.056709 34.137696 560.948388
camera transformation: 145.056709 34.137696 560.948388
camera transformation: 145.056709 34.137696 560.948388
camera transformation: 145.056709 34.137696 560.948388
camera transformation: 141.044529 33.130566 545.431075
camera transformation: 144.985976 34.118918 560.662976
camera transformation: 145.057722 34.137896 560.951561
camera transformation: 145.057722 34.137896 560.951561
camera transformation: 145.057722 34.137896 560.951561
camera transformation: 153.656796 18.847826 551.173961
camera transformation: 153.459454 18.820515 550.460694
camera transformation: 153.463400 18.821020 550.474774
camera transformation: 153.463400 18.821020 550.474774
camera transformation: 153.463400 18.821020 550.474774
camera transformation: 150.756045 18.471968 541.053654
camera transformation: 153.457933 18.819963 550.450362
camera transformation: 153.471652 18.822038 550.502303
camera transformation: 153.471652 18.822038 550.502303
camera transformation: 153.471652 18.822038 550.502303
camera transformation: 165.753777 10.789852 542.625784
camera transformation: 165.872430 10.798618 543.003766
camera transformation: 165.861243 10.797709 542.967712
camera transformation: 165.861243 10.797709 542.967712
camera transformation: 165.861243 10.797709 542.967712
camera transformation: 159.707526 10.325843 522.933657
camera transformation: 165.749957 10.789214 542.611588
camera transformation: 165.878724 10.799263 543.027862
camera transformation: 165.858931 10.797578 542.960740
camera transformation: 165.858931 10.797578 542.960740
camera transformation: 172.080657 1.469299 534.761847
camera transformation: 172.099660 1.470041 534.825697
camera transformation: 172.105059 1.470117 534.842380
camera transformation: 172.105059 1.470117 534.842380
camera transformation: 172.105059 1.470117 534.842380
camera transformation: 166.665623 1.366321 518.259388
camera transformation: 171.958367 1.467311 534.398567
camera transformation: 172.100170 1.470062 534.827885
camera transformation: 172.101379 1.469965 534.828683
camera transformation: 172.101379 1.469965 534.828683
camera transformation: 181.319872 -6.361278 526.585438
camera transformation: 181.274748 -6.360755 526.433490
camera transformation: 181.253058 -6.360225 526.371230
camera transformation: 181.253058 -6.360225 526.371230
camera transformation: 181.253058 -6.360225 526.371230
camera transformation: 178.239568 -6.285195 517.597418
camera transformation: 181.243052 -6.360170 526.334529
camera transformation: 181.262355 -6.360482 526.395503
camera transformation: 181.262355 -6.360482 526.395503
camera transformation: 181.262355 -6.360482 526.395503
camera transformation: 187.108940 -10.223686 510.799056
camera transformation: 187.181645 -10.227215 510.978572
camera transformation: 187.181645 -10.227215 510.978572
camera transformation: 187.181645 -10.227215 510.978572
camera transformation: 187.181645 -10.227215 510.978572
camera transformation: 183.952885 -10.095289 502.048962
camera transformation: 187.138129 -10.225454 510.860204
camera transformation: 187.186616 -10.227454 510.990564
camera transformation: 187.186616 -10.227454 510.990564
camera transformation: 187.186616 -10.227454 510.990564
camera transformation: 174.882900 -17.728211 507.700497
camera transformation: 175.151320 -17.750571 508.526338
camera transformation: 175.156303 -17.750970 508.543547
camera transformation: 175.156303 -17.750970 508.543547
camera transformation: 175.156303 -17.750970 508.543547
camera transformation: 173.093356 -17.563969 502.840939
camera transformation: 175.132943 -17.749048 508.472818
camera transformation: 175.147617 -17.750226 508.517538
camera transformation: 175.147617 -17.750226 508.517538
camera transformation: 175.147617 -17.750226 508.517538
camera transformation: 153.570679 -27.610874 523.025575
camera transformation: 154.835853 -27.811536 527.263384
camera transformation: 154.855814 -27.814620 527.336320
camera transformation: 154.855814 -27.814620 527.336320
camera transformation: 154.855814 -27.814620 527.336320
camera transformation: 152.299460 -27.392362 519.749682
camera transformation: 154.752070 -27.798192 526.972641
camera transformation: 154.827218 -27.810047 527.230484
camera transformation: 154.858447 -27.815063 527.341550
camera transformation: 154.840860 -27.812225 527.285658
camera transformation: 135.840483 -41.072535 553.345517
camera transformation: 136.128073 -41.152366 554.504789
camera transformation: 136.136155 -41.154777 554.540872
camera transformation: 136.136155 -41.154777 554.540872
camera transformation: 136.136155 -41.154777 554.540872
camera transformation: 130.527637 -39.583364 532.044687
camera transformation: 135.988306 -41.113532 553.943304
camera transformation: 136.142041 -41.156307 554.559791
camera transformation: 136.142041 -41.156307 554.559791
camera transformation: 136.142041 -41.156307 554.559791
camera transformation: 114.644838 -45.377904 573.264490
camera transformation: 115.199061 -45.580086 576.005279
camera transformation: 115.229438 -45.591515 576.169804
camera transformation: 115.246340 -45.597695 576.252987
camera transformation: 115.246340 -45.597695 576.252987
camera transformation: 113.754556 -45.048098 568.589400
camera transformation: 115.200177 -45.581112 576.034506
camera transformation: 115.235757 -45.593964 576.202107
camera transformation: 115.245920 -45.597558 576.251805
camera transformation: 115.245920 -45.597558 576.251805
camera transformation: 99.671642 -40.181365 582.198352
camera transformation: 100.462758 -40.471438 586.962569
camera transformation: 100.537384 -40.500194 587.472050
camera transformation: 100.549497 -40.504513 587.541289
camera transformation: 100.549497 -40.504513 587.541289
camera transformation: 97.303318 -39.355658 570.579586
camera transformation: 100.336305 -40.427915 586.316769
camera transformation: 100.547949 -40.504219 587.544305
camera transformation: 100.548709 -40.504451 587.544697
camera transformation: 100.548709 -40.504451 587.544697
camera transformation: 89.621585 -31.117271 596.138707
camera transformation: 90.219712 -31.290746 599.985966
camera transformation: 90.329912 -31.322731 600.684517
camera transformation: 90.328693 -31.322303 600.672256
camera transformation: 90.327473 -31.321874 600.659986
camera transformation: 87.759503 -30.579438 586.130418
camera transformation: 90.158155 -31.275121 599.754395
camera transformation: 90.313555 -31.317626 600.555676
camera transformation: 90.312882 -31.317428 600.555444
camera transformation: 90.312882 -31.317428 600.555444
camera transformation: 71.029270 -24.465717 602.374578
camera transformation: 70.940132 -24.442041 601.728877
camera transformation: 70.905596 -24.432504 601.444114
camera transformation: 70.905596 -24.432504 601.444114
camera transformation: 70.905596 -24.432504 601.444114
camera transformation: 68.768061 -23.836713 585.531494
camera transformation: 70.761778 -24.392775 600.274063
camera transformation: 70.901723 -24.431512 601.417396
camera transformation: 70.899923 -24.430914 601.396803
camera transformation: 70.899923 -24.430914 601.396803
camera transformation: 48.950365 -26.084962 601.595042
camera transformation: 48.933292 -26.081172 601.647515
camera transformation: 48.907404 -26.070042 601.356804
camera transformation: 48.907438 -26.070153 601.365086
camera transformation: 48.908143 -26.070507 601.373649
camera transformation: 47.153553 -25.289153 579.698461
camera transformation: 48.752213 -26.003037 599.555228
camera transformation: 48.887848 -26.061948 601.158725
camera transformation: 48.906954 -26.070190 601.376902
camera transformation: 48.906954 -26.070190 601.376902
camera transformation: 36.527862 -27.859885 601.451937
camera transformation: 36.678154 -27.949073 603.625633
camera transformation: 36.699226 -27.961495 603.914756
camera transformation: 36.699226 -27.961495 603.914756
camera transformation: 36.699226 -27.961495 603.914756
camera transformation: 34.828000 -26.821094 576.608529
camera transformation: 36.532837 -27.864475 601.649382
camera transformation: 36.672854 -27.945472 603.510826
camera transformation: 36.696060 -27.959637 603.870801
camera transformation: 36.696060 -27.959637 603.870801
camera transformation: 35.748520 -26.890392 599.448608
camera transformation: 35.952229 -27.020554 603.539403
camera transformation: 35.983429 -27.041974 604.319312
camera transformation: 35.983462 -27.043056 604.402832
camera transformation: 35.983320 -27.043073 604.409701
camera transformation: 33.960297 -25.864785 576.135985
camera transformation: 35.748413 -26.917867 602.033525
camera transformation: 35.951659 -27.027518 604.189188
camera transformation: 35.971207 -27.037715 604.385121
camera transformation: 35.972959 -27.037579 604.303563
camera transformation: 38.696380 -24.720939 614.161095
camera transformation: 38.183800 -24.450617 606.346543
camera transformation: 38.134450 -24.424857 605.615128
camera transformation: 38.135159 -24.425102 605.615481
camera transformation: 38.135159 -24.425102 605.615481
camera transformation: 36.853820 -23.745396 586.509814
camera transformation: 38.056856 -24.382855 604.374995
camera transformation: 38.136416 -24.425786 605.632439
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 36.853820 -23.745396 586.509814
camera transformation: 38.056856 -24.382855 604.374995
camera transformation: 38.136416 -24.425786 605.632439
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 36.853820 -23.745396 586.509814
camera transformation: 38.056856 -24.382855 604.374995
camera transformation: 38.136416 -24.425786 605.632439
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 38.135450 -24.425204 605.617310
camera transformation: 7.229969 -28.426098 487.494944
camera transformation: 1.061351 -37.746923 490.163596
camera transformation: 1.039532 -38.012456 494.388390
camera transformation: -1.890196 -40.741575 489.196950
camera transformation: -1.905179 -40.851093 490.795875
camera transformation: -1.913698 -40.848960 490.823090
camera transformation: -1.913698 -40.848960 490.823090
camera transformation: -1.913698 -40.848960 490.823090
camera transformation: -1.275810 -46.033944 570.939355
camera transformation: -1.360000 -46.027133 571.481545
camera transformation: -1.360001 -46.027141 571.481641
camera transformation: -1.360001 -46.027141 571.481641
camera transformation: -1.360001 -46.027141 571.481641
camera transformation: -4.856311 -45.941600 492.994856
camera transformation: -3.554997 -42.126784 489.440430
camera transformation: -5.463370 -44.468718 490.659087
camera transformation: -9.441343 -43.715187 491.482322
camera transformation: -9.587181 -44.110448 496.792789
camera transformation: -9.609268 -44.160040 497.482136
camera transformation: -9.608472 -44.160158 497.478898
camera transformation: -9.608472 -44.160158 497.478898
camera transformation: -10.481053 -50.965780 590.361852
camera transformation: -10.621224 -50.926162 590.569728
camera transformation: -10.629070 -50.921812 590.551891
camera transformation: -10.629097 -50.921897 590.546394
camera transformation: -10.636885 -50.917353 590.524294
camera transformation: -6.014300 -46.196831 498.526090
camera transformation: -6.009832 -46.082883 497.231266
camera transformation: -6.010114 -46.077167 497.168853
camera transformation: -6.010114 -46.077167 497.168853
camera transformation: -6.010114 -46.077167 497.168853
camera transformation: -3.463019 -44.233782 493.963337
camera transformation: -4.785040 -43.402498 496.156243
camera transformation: -4.783746 -43.479250 496.922893
camera transformation: -4.785978 -43.496124 497.102079
camera transformation: -4.785978 -43.496124 497.102079
camera transformation: -2.223724 -40.852272 492.668115
camera transformation: -2.255911 -40.924862 493.830850
camera transformation: -2.271698 -40.922372 493.898586
camera transformation: -2.272499 -40.926932 493.964230
camera transformation: -2.273289 -40.926828 493.967891
camera transformation: -0.843473 -38.095411 491.335923
camera transformation: -0.931192 -38.413990 495.584437
camera transformation: -0.934646 -38.412155 495.581977
camera transformation: -0.936060 -38.406104 495.520218
camera transformation: -0.936060 -38.406104 495.520218
camera transformation: 1.160322 -40.264248 495.616755
camera transformation: 1.161880 -40.256411 495.494743
camera transformation: 1.161880 -40.256411 495.494743
camera transformation: 1.161880 -40.256411 495.494743
camera transformation: 1.161880 -40.256411 495.494743
camera transformation: 3.723426 -39.508401 504.989547
camera transformation: 2.678958 -37.770843 496.488485
camera transformation: 2.410065 -37.547308 495.340346
camera transformation: 2.410310 -37.557735 495.465695
camera transformation: 2.412940 -37.558864 495.465748
camera transformation: 4.203019 -39.053885 496.367027
camera transformation: 4.204413 -39.092548 496.951603
camera transformation: 4.204475 -39.096509 497.013231
camera transformation: 4.203762 -39.096434 497.017567
camera transformation: 4.203762 -39.096434 497.017567
camera transformation: 5.836132 -34.680737 472.145396
camera transformation: 5.595480 -36.378936 496.482830
camera transformation: 5.600604 -36.497013 497.981744
camera transformation: 5.599892 -36.506204 498.105048
camera transformation: 5.599892 -36.506204 498.105048
camera transformation: 8.653772 -36.053810 496.365853
camera transformation: 8.656689 -36.070246 496.641526
camera transformation: 8.655321 -36.066243 496.579657
camera transformation: 8.655321 -36.066243 496.579657
camera transformation: 8.655321 -36.066243 496.579657
camera transformation: 11.681594 -37.460802 533.449922
camera transformation: 8.750170 -37.668607 528.373758
camera transformation: 8.706002 -36.202737 498.655489
camera transformation: 8.659246 -36.073637 496.682038
camera transformation: 8.656511 -36.065632 496.558303
camera transformation: 10.115806 -34.679194 497.429046
camera transformation: 10.106989 -34.657666 497.079133
camera transformation: 10.103787 -34.649678 496.950432
camera transformation: 10.103787 -34.649678 496.950432
camera transformation: 10.103787 -34.649678 496.950432
camera transformation: 13.357781 -37.088943 550.459166
camera transformation: 10.836519 -37.433833 555.303929
camera transformation: 10.101605 -35.371871 512.485432
camera transformation: 10.113543 -34.698273 497.824009
camera transformation: 10.102475 -34.651602 497.000985
camera transformation: 18.350948 -27.329877 502.107301
camera transformation: 18.321398 -27.294670 501.341088
camera transformation: 18.322018 -27.294771 501.336240
camera transformation: 18.321814 -27.294491 501.330843
camera transformation: 18.321814 -27.294491 501.330843
camera transformation: 22.829063 -30.847622 591.172408
camera transformation: 22.590924 -30.974564 597.758595
camera transformation: 22.590913 -30.974577 597.758539
camera transformation: 22.590902 -30.974590 597.758482
camera transformation: 22.590891 -30.974603 597.758426
camera transformation: 37.103910 -10.551708 515.807167
camera transformation: 47.377631 9.966732 526.726739
camera transformation: 49.596898 16.198552 526.553013
camera transformation: 56.476216 22.342972 528.741435
Feature List
* A simple framework for creating real-time augmented reality applications
* A multiplatform library (Windows, Linux, Mac OS X, SGI)
* Overlays 3D virtual objects on real markers (based on computer vision algorithms)
* A multiplatform video library with:
o multiple input sources (USB, Firewire, capture card) supported
o multiple format (RGB/YUV420P, YUV) supported
o multiple camera tracking supported
o GUI initializing interface
* A fast and cheap 6D marker tracking (real-time planar detection)
* An extensible marker patterns approach (number of markers as a function of efficiency)
* An easy calibration routine
* A simple graphic library (based on GLUT)
* A fast rendering based on OpenGL
* 3D VRML support
* A simple and modular API (in C)
* Other languages supported (Java, Matlab)
* A complete set of samples and utilities
* A good solution for tangible interaction metaphor
* OpenSource with GPL license for non-commercial usage
"ARToolKit is able to perform this camera tracking in real
time, ensuring that the virtual objects always appear overlaid on the tracking markers."
how to
1. Find square shapes in every video frame
2. Compute the camera's position relative to the black square
3. From that position, compute how the computer graphics model should be drawn
4. Draw the model over the marker in the real video
limitations
1. Virtual objects can be composited only while the tracked marker is visible in the image
2. This constrains the size and movement of the virtual objects
3. If part of the marker pattern is occluded, the virtual object cannot be composited
4. Limited range: the larger the marker, the farther away the pattern can be detected, so the trackable volume grows
(the usable range also depends on pattern complexity: the simpler the pattern, the longer the maximum distance)
5. Tracking performance depends on the marker's orientation relative to the camera
: as the marker tilts toward horizontal, less of the pattern is visible, so recognition becomes less reliable
6. Tracking performance depends on lighting conditions
: reflections and glare spots on a paper marker make it hard to find the marker's square
: a less reflective material can be used instead of paper
Development
Initialization
1. Initialize the video capture and read in the marker pattern files and camera parameters. -> init()
Main Loop
2. Grab a video input frame. -> arVideoGetImage()
3. Detect the markers and recognized patterns in the video input frame. -> arDetectMarker()
4. Calculate the camera transformation relative to the detected patterns. -> arGetTransMat()
5. Draw the virtual objects on the detected patterns. -> draw()
Shutdown
6. Close the video capture down. -> cleanup()
Default camera properties are contained in the camera parameter file
camera_para.dat, that is read in each time an application is started.
The program calib_dist is used to measure
the image center point and lens distortion, while calib_param produces the other
camera properties. (Both of these programs can be found in the bin directory and
their source is in the utils/calib_dist and utils/calib_cparam
directories.)
ARToolKit gives the position of the marker in the camera coordinate system, and uses OpenGL matrix system for the
position of the virtual object.
/**
* \brief get the video image.
*
* This function returns a buffer with a captured video image.
* The returned data consists of a tightly-packed array of
* pixels, beginning with the first component of the leftmost
* pixel of the topmost row, and continuing with the remaining
* components of that pixel, followed by the remaining pixels
* in the topmost row, followed by the leftmost pixel of the
* second row, and so on.
* The arrangement of components of the pixels in the buffer is
* determined by the configuration string passed in to the driver
* at the time the video stream was opened. If no pixel format
* was specified in the configuration string, then an operating-
* system dependent default, defined in <AR/config.h> is used.
* The memory occupied by the pixel data is owned by the video
* driver and should not be freed by your program.
* The pixels in the buffer remain valid until the next call to
* arVideoCapNext, or the next call to arVideoGetImage which
* returns a non-NULL pointer, or any call to arVideoCapStop or
* arVideoClose.
* \return A pointer to the pixel data of the captured video frame,
* or NULL if no new pixel data was available at the time of calling.
*/
AR_DLL_API ARUint8* arVideoGetImage(void);
ARParam
param.h
/** \struct ARParam
* \brief camera intrinsic parameters.
*
* This structure contains the main parameters for
* the intrinsic parameters of the camera
* representation. The camera used is a pinhole
* camera with standard parameters. User should
* consult a computer vision reference for more
* information. (e.g. Three-Dimensional Computer Vision
* (Artificial Intelligence) by Olivier Faugeras).
* \param xsize length of the image (in pixels).
* \param ysize height of the image (in pixels).
* \param mat perspective matrix (K).
* \param dist_factor radial distortions factor
* dist_factor[0]=x center of distortion
* dist_factor[1]=y center of distortion
* dist_factor[2]=distortion factor
* dist_factor[3]=scale factor
*/
typedef struct {
    int xsize, ysize;
    double mat[3][4];
    double dist_factor[4];
} ARParam;
/**
* \brief main function to detect the square markers in the video input frame.
*
* This function performs thresholding, labeling, contour extraction and line corner estimation
* (and maintains a history).
* It is one of the main functions of the detection routine, together with arGetTransMat.
* \param dataPtr a pointer to the color image which is to be searched for square markers.
* The pixel format depends on your architecture. Generally ABGR, but the images
* are treated as gray scale, so the order of the BGR components does not matter.
* However the ordering of the alpha component, A, is important.
* \param thresh specifies the threshold value (between 0-255) to be used to convert
* the input image into a binary image.
* \param marker_info a pointer to an array of ARMarkerInfo structures returned
* which contain all the information about the detected squares in the image
* \param marker_num the number of detected markers in the image.
* \return 0 when the function completes normally, -1 otherwise
*/
int arDetectMarker( ARUint8 *dataPtr, int thresh,
ARMarkerInfo **marker_info, int *marker_num );
You need to notice that arGetTransMat gives the position
of the marker in the camera coordinate
system (not the reverse). If you want the position of the
camera in the marker coordinate system you
need to invert this transformation (arMatrixInverse()).
XXXBK: not be sure of this function: this function must just convert 3x4
matrix to classical perspective openGL matrix. But in the code, you
used arParamDecompMat that seem decomposed K and R,t, aren't it ? why do
this decomposition since we want just intrinsic parameters ? and if not
what is arDecomp ?
/**
* \brief compute camera position in function of detected markers.
*
* calculate the transformation between a detected marker and the real camera,
* i.e. the position and orientation of the camera relative to the tracking mark.
* \param marker_info the structure containing the parameters for the marker for
* which the camera position and orientation is to be found relative to.
* This structure is found using arDetectMarker.
* \param center the physical center of the marker. arGetTransMat assumes that the marker
* is in the x-y plane, and the z axis points downward from the marker plane,
* so vertex positions can be represented in 2D coordinates by ignoring the
* z axis information. The marker vertices are specified in clockwise order.
* \param width the size of the marker (in mm).
* \param conv the transformation matrix from the marker coordinates to camera coordinate frame,
* that is the relative position of real camera to the real marker
* \return always 0.
*/
double arGetTransMat( ARMarkerInfo *marker_info,
double center[2], double width, double conv[3][4] );
/**
* \brief Invert a non-square matrix.
*
* Invert a matrix in a non-homogeneous format. The matrix
* needs to be Euclidean.
* \param s matrix input
* \param d resulting inverse matrix.
* \return 0 if the inversion succeeds, -1 otherwise
* \remark the input matrix can also be the output matrix
*/
int arUtilMatInv( double s[3][4], double d[3][4] );
Design Patterns for Augmented Reality Systems - 2004
Asa Macwilliams, Thomas Reicher, Gudrun Klinker, Bernd Brügge
Conference: Workshop on Exploring the Design and Engineering of Mixed Reality Systems - MIXER
Figure 2: Relationships between the individual patterns for augmented reality systems. Several approaches are used in
combination within an augmented reality system. One approach might require the use of another approach or prevent
its usage.
// cout << "cross ratio = " << cross_ratio << endl;
printf("cross ratio = %f\n", cross_ratio);
}
return 0;
}
cross ratio = 1.088889
cross ratio = 2.153846
cross ratio = 1.185185
cross ratio = 1.094737
cross ratio = 2.166667
cross ratio = 1.160714
cross ratio = 1.274510
cross ratio = 1.562500
cross ratio = 1.315789
cross ratio = 1.266667
cross ratio = 1.266667
cross ratio = 1.446429
cross ratio = 1.145455
cross ratio = 1.441176
cross ratio = 1.484848
cross ratio = 1.421875
cross ratio = 1.123457
cross ratio = 1.600000
cross ratio = 1.142857
cross ratio = 1.960784
cross ratio = 1.142857
cross ratio = 1.350000
cross ratio = 1.384615
cross ratio = 1.529412
cross ratio = 1.104575
cross ratio = 1.421875
cross ratio = 1.711111
cross ratio = 1.178571
cross ratio = 1.200000
cross ratio = 1.098039
cross ratio = 2.800000
cross ratio = 1.230769
cross ratio = 1.142857
Applying a different formula:
cross ratio = 0.040000
cross ratio = 0.666667
cross ratio = 0.107143
cross ratio = 0.064935
cross ratio = 0.613636
cross ratio = 0.113636
cross ratio = 0.204545
cross ratio = 0.390625
cross ratio = 0.230769
cross ratio = 0.203620
cross ratio = 0.205882
cross ratio = 0.316406
cross ratio = 0.109375
cross ratio = 0.300000
cross ratio = 0.360000
cross ratio = 0.290909
cross ratio = 0.090909
cross ratio = 0.400000
cross ratio = 0.100000
cross ratio = 0.562500
cross ratio = 0.100000
cross ratio = 0.257143
cross ratio = 0.285714
cross ratio = 0.363636
cross ratio = 0.074380
cross ratio = 0.290909
cross ratio = 0.466667
cross ratio = 0.125000
cross ratio = 0.156250
/* Test: pattern design in implementing a virtual studio
to design grid lines using cross-ratio as represented in "swPark_2000rti": (16)
C(x1,x2,x3,x4) = (x2-x1)(x4-x3) / (x4-x2)(x3-x1)
based on two plots in figure 7, 439p
2010, lym
*/
cout << "# of cross-ratios in vertical lines = " << sizeof(crossratio_vertical) / sizeof(double) << endl;
cout << "# of cross-ratios in horizontal lines = " << sizeof(crossratio_horizontal) / sizeof(double) << endl;

// initialize grid lines of the pattern
double x[40] = { 0.0 };  // 40 vertical lines
double y[20] = { 0.0 };  // 20 horizontal lines
double r = 0.0;          // cross ratio of 4 consecutive lines
double a, b, c, d;       // temporary variables used in calculating a cross ratio

// set the positions of the first three vertical lines
x[0] = 1.0;
x[1] = x[0] + 1.0;
x[2] = x[1] + 2.0;
// set the positions of the first three horizontal lines
y[0] = 1.0;
y[1] = y[0] + 2.0;
y[2] = y[1] + 1.0;

// for vertical lines
int number = 40;
cout << endl << "x[0]=" << x[0] << " x[1]=" << x[1] << " x[2]=" << x[2] << endl;
for( int n = 0; n < number-3; n++ )
{
    // cout << "n = " << n << endl;
    r = crossratio_vertical[n];
    /* a = x[n];
    b = x[n+1];
    c = x[n+2];
    // cout << "a=" << a << " b=" << b << " c=" << c << " ratio =" << r;
Publication: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 10, pp. 1921-1928
Publication date: 2000/10/20
Print ISSN: 0916-8508
Type of manuscript: Special Section PAPER (Special Section on Information Theory and Its Applications)
Category: Image Processing
Keywords: cross ratio, Markov process, error analysis, reliability evaluation, virtual studio
MVA2000 IAPR Workshop on Machine Vision Applications, Nov. 28-30, 2000, The University of Tokyo, Japan, 13-28
Optimal Grid Pattern for Automated Matching Using Cross Ratio
Chikara Matsunaga (Broadcast Division, FOR-A Co. Ltd.)
Kenichi Kanatani (Department of Computer Science, Gunma University)
Summary: With a view to virtual studio applications, we design an optimal grid pattern such that the observed image of a small portion of it can be matched to its corresponding position in the pattern easily. The grid shape is so determined that the cross ratio of adjacent intervals is different everywhere. The cross ratios are generated by an optimal Markov process that maximizes the accuracy of matching. We test our camera calibration system using the resulting grid pattern in a realistic setting and show that the performance is greatly improved by applying techniques derived from the designed properties of the pattern.
Camera calibration is a first step in all vision and media applications.
> pre-calibration (Tsai) vs. self-calibration (Pollefeys)
=> "simultaneous calibration" by placing an easily distinguishable planar pattern in the scene
Introducing a statistic model of image noise, we generate the grid intervals by an optimal Markov process that maximizes the accuracy of matching.
: The pattern is theoretically designed by statistical analysis
If the cross ratios are given, the sequence of positions is determined as follows.
The goal is to find a sequence of cross ratios such that the resulting sequence of numbers increases homogeneously, with an average interval of 1 and the specified minimum width.
=> To generate the sequence of cross ratios stochastically, according to a probability distribution defined in such a way that the resulting sequence of numbers has the desired properties
=> able to optimize the probability distribution so that the matching performance is maximized by analyzing the statistical properties of image noise
Source: C. Matsunaga, Y. Kanazawa, and K. Kanatani, "Optimal grid pattern for automated camera calibration using cross ratio," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 10, pp. 1921-1928, 2000; Fig. 8 on p. 1926, captured at 4x magnification.
camera self-calibration
: the process of recovering the position, orientation, focal length, and other parameters of the cameras used to capture a set of images, when a 3D VRML model of an object is to be built from those images
projective geometric method
1. Augmented reality via 3D model reconstruction
SFM = structure from motion
: computes the camera parameters and the relative pose of the camera for each frame of an image sequence, and uses this information to compute the approximate 3D structure of the objects visible in the sequence
trilinearity
: the algebraic relationship among three perspective views observing an arbitrary 3D structure
trifocal tensor
: a mathematical model of trilinearity
(used to match feature points and lines across images, compute projective cameras, and recover projective structure)
(known to give more accurate results than methods based on epipolar geometry)
The core of an SFM system is computing the cameras accurately from the image sequence.
1) projective reconstruction
Compute the initial relationship between the 2D features observed in the image sequence and the 3D structure we want to reconstruct as a model, by accurately matching the feature points and feature lines extracted from the sequence (that is, by recovering the parameters of the cameras that actually captured the images and their relative poses).
Because the trifocal tensor computation demands very precise values, mismatched feature points or lines must not enter it. LMedS (Least Median of Squares) or RANSAC (Random Sample Consensus) is used to remove such mismatches (outliers).
The trifocal tensors, feature points, and feature lines computed continuously over the sequence, three views at a time, are merged through multi-view registration, which aligns them to an arbitrary common reference frame. The merged values then go through projective bundle adjustment to minimize the error incurred under projective geometry.
2) camera auto-calibration
Needed to convert projective information into Euclidean information: using geometric properties of the 2D image data, it transforms the 3D structure from projective to Euclidean space while accurately computing the real-world camera parameters (focal length, principal point, aspect ratio, skew) and the relative poses between the cameras.
Auto-calibration does not use a pattern designed specifically for camera calibration; as a consequence, real-time operation is lost.
3) (Euclidean) structure reconstruction
Recover the 3D geometry of the model
: the 3D data recovered through the auto-calibration step is first converted into a polygonal mesh model via triangulation, and texture mapping then increases the model's realism
2. Augmented reality via camera calibration
1) The relationships among the coordinate frame fixed to the calibration pattern (W), the camera frame (C), and the graphics frame (G) are established in advance.
2) The relative coordinate transform between the camera and the calibration pattern is obtained every frame through image processing.
3) (Because the graphics frame and the pattern frame are fixed beforehand, the relative positions of the virtual objects to be composited are already known before the calibration step.) The camera image is analyzed to recognize the calibration pattern, the 3D coordinates of the pattern and their image locations are extracted, and the correspondence between the two yields the camera calibration.
cross ratio (anharmonic ratio)
4) Compute the trifocal tensor to obtain the camera's initial information and an initial reconstruction.
5) For each image in the sequence, find the matches against the previous image by image-based matching (using normalized cross correlation, NCC).
6) Using RANSAC, compute the camera matrix for each new image and the 3D coordinates of newly appearing matches (including a step that discards bad matches).
7) Euclidean reconstruction: convert the values defined in projective space into values defined in Euclidean space.
Camera auto-calibration and the equations used in projective space are all nonlinear, so the values produced by the least-squares error method often fail to satisfy the original equations. A nonlinear optimization stage is therefore always necessary, and it must be placed appropriately both at the final step of the conversion into Euclidean space and when recovering values in projective space.
Denis Chekhlov, Andrew Gee, Andrew Calway, Walterio Mayol-Cuevas
Ninja on a Plane: Automatic Discovery of Physical Planes for Augmented Reality Using Visual SLAM
Authors: 김기홍, 김홍기, 정혁, 김종성, 손욱호 / Virtual Reality Research Team
Published: 2007.08.15
Issue: Vol. 22, No. 4 (No. 106 overall)
Page: 96
Paper type: special issue on digital content technologies that will lead the convergence era
Abstract
To implement mixed reality effectively on an easily portable mobile device, a number of component technologies are required: recognizing the position of the camera attached to the device, registering and rendering virtual digital information onto the captured real-world space, letting the user interact realistically with the rendered mixed-reality environment, and authoring mixed-reality content for a variety of application domains. This paper outlines each of these component technologies and surveys related work at home and abroad through concrete examples.
OSGART
http://www.osgart.org/ http://www.artoolworks.com/community/osgart/
C++ cross-platform development library that simplifies the development of Augmented Reality or Mixed Reality applications by combining computer vision based tracking libraries (e.g. ARToolKit, ARToolKitPlus, SSTT and BazAR) with the 3D scene graph library OpenSceneGraph
DART
(The Designer's Augmented Reality Toolkit) http://www.cc.gatech.edu/dart/
a set of software tools that support
rapid design and implementation of augmented reality
experiences and applications
ULTRA
(Ultra portable augmented reality for industrial maintenance applications) http://www.ist-ultra.org/
Specs of the machine I will be installing on:
Model Name: Mac mini
Model Identifier: Macmini3,1
Processor Name: Intel Core 2 Duo
Processor Speed: 2 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 3 MB
Memory: 1 GB
Bus Speed: 1.07 GHz
Boot ROM Version: MM31.0081.B00
Graphics card:
NVIDIA GeForce 9400
The requirements say "Intel Core 2 Duo processors 2.4GHz+ are fine." Wouldn't 2.0 GHz work too? The graphics card is the same one, so no problem there.
1. Check library dependencies
1. TooN - a header library for linear algebra
2. libCVD - a library for image handling, video capture and computer vision
3. Gvars3 - a run-time configuration/scripting library; this is a sub-project of libCVD.
%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/toon co TooN
Output:
cvs checkout: warning: failed to open /Users/lym/.cvspass for reading: No such file or directory
cvs checkout: Updating TooN
U TooN/Authors
U TooN/COPYING
U TooN/Cholesky.h
U TooN/Doxyfile
U TooN/GPL.txt
U TooN/LU.h
U TooN/Lapack_Cholesky.h
U TooN/Makefile.in
U TooN/README
U TooN/SVD.h
U TooN/SymEigen.h
U TooN/TODO
U TooN/TooN.h
U TooN/configure
U TooN/configure.ac
U TooN/determinant.h
U TooN/gauss_jordan.h
U TooN/gaussian_elimination.h
U TooN/generated.h
U TooN/helpers.h
U TooN/irls.h
U TooN/lapack.h
U TooN/make_make_vector.awk
U TooN/make_typeof.awk
U TooN/se2.h
U TooN/se3.h
U TooN/sl.h
U TooN/so2.h
U TooN/so3.h
U TooN/wls.h
cvs checkout: Updating TooN/Documentation
cvs checkout: Updating TooN/benchmark
U TooN/benchmark/generate_solvers.m
U TooN/benchmark/solve_ax_equals_b.cc
U TooN/benchmark/solvers.cc
cvs checkout: Updating TooN/doc
U TooN/doc/COPYING_FDL
U TooN/doc/Makefile
U TooN/doc/documentation.h
U TooN/doc/linoperatorsdoc.h
cvs checkout: Updating TooN/internal
U TooN/internal/allocator.hh
U TooN/internal/builtin_typeof.h
U TooN/internal/comma.hh
U TooN/internal/config.hh
U TooN/internal/config.hh.in
U TooN/internal/dchecktest.hh
U TooN/internal/debug.hh
U TooN/internal/deprecated.hh
U TooN/internal/diagmatrix.h
U TooN/internal/make_vector.hh
U TooN/internal/matrix.hh
U TooN/internal/mbase.hh
U TooN/internal/objects.h
U TooN/internal/operators.hh
U TooN/internal/overfill_error.hh
U TooN/internal/reference.hh
U TooN/internal/size_mismatch.hh
U TooN/internal/slice_error.hh
U TooN/internal/typeof.hh
U TooN/internal/vbase.hh
U TooN/internal/vector.hh
cvs checkout: Updating TooN/optimization
U TooN/optimization/brent.h
U TooN/optimization/conjugate_gradient.h
U TooN/optimization/downhill_simplex.h
U TooN/optimization/golden_section.h
cvs checkout: Updating TooN/test
U TooN/test/SXX_test.cc
U TooN/test/as_foo.cc
U TooN/test/brent_test.cc
U TooN/test/cg_test.cc
U TooN/test/cg_view.gnuplot
U TooN/test/chol.cc
U TooN/test/diagslice.cc
U TooN/test/dynamic_test.cc
U TooN/test/gauss_jordan.cc
U TooN/test/gaussian_elimination_test.cc
U TooN/test/golden_test.cc
U TooN/test/identity_test.cc
U TooN/test/lutest.cc
U TooN/test/make_vector.cc
U TooN/test/makevector.cc
U TooN/test/mat_test.cc
U TooN/test/mat_test2.cc
U TooN/test/mmult_test.cc
U TooN/test/normalize_test.cc
U TooN/test/normalize_test2.cc
U TooN/test/scalars.cc
U TooN/test/simplex_test.cc
U TooN/test/simplex_view.gnuplot
U TooN/test/sl.cc
U TooN/test/svd_test.cc
U TooN/test/sym.cc
U TooN/test/test2.cc
U TooN/test/test3.cc
U TooN/test/test_foreign.cc
U TooN/test/un_project.cc
U TooN/test/vec_test.cc
Go into the generated TooN folder and run:
%% ./configure
Output:
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for dgesvd_ in -llapack... yes
checking for decltype... no
checking for __typeof__... yes
checking for __attribute__((deprecated))... yes
You're on the development branch of TooN 2.0. Everything will probably work, but
the interface is a bit different from TooN-1.x
If you want TooN-1, then get it using:
cvs -z3 -d:pserver:anoncvs@cvs.savannah.nongnu.org:/cvsroot/toon co -r Maintenance_Branch_1_x TooN
or update what you currently have using:
cvs up -r Maintenance_Branch_1_x
or head over to:
http://mi.eng.cam.ac.uk/~er258/cvd/
Otherwise, please report any bugs you come across.
%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/toon co -D "Mon May 11 16:29:26 BST 2009" TooN
Output:
cvs checkout: warning: failed to open /Users/lym/.cvspass for reading: No such file or directory
? TooN/Makefile
? TooN/config.log
? TooN/config.status
cvs checkout: Updating TooN
U TooN/Cholesky.h
U TooN/Doxyfile
U TooN/LU.h
U TooN/Makefile.in
U TooN/SVD.h
U TooN/SymEigen.h
U TooN/TooN.h
U TooN/configure
U TooN/configure.ac
cvs checkout: `TooN/determinant.h' is no longer in the repository
U TooN/gauss_jordan.h
U TooN/gaussian_elimination.h
U TooN/helpers.h
U TooN/irls.h
U TooN/se2.h
U TooN/se3.h
U TooN/sl.h
U TooN/so2.h
U TooN/so3.h
U TooN/util.h
U TooN/wls.h
cvs checkout: Updating TooN/Documentation
cvs checkout: Updating TooN/benchmark
U TooN/benchmark/generate_solvers.m
cvs checkout: Updating TooN/doc
U TooN/doc/documentation.h
U TooN/doc/matrixdoc.h
cvs checkout: Updating TooN/internal
U TooN/internal/allocator.hh
cvs checkout: `TooN/internal/comma.hh' is no longer in the repository
RCS file: /sources/toon/TooN/internal/config.hh,v
retrieving revision 1.12
retrieving revision 1.8
Merging differences between 1.12 and 1.8 into config.hh
TooN/internal/config.hh already contains the differences between 1.12 and 1.8
U TooN/internal/config.hh.in
cvs checkout: `TooN/internal/dchecktest.hh' is no longer in the repository
U TooN/internal/debug.hh
cvs checkout: `TooN/internal/deprecated.hh' is no longer in the repository
U TooN/internal/diagmatrix.h
U TooN/internal/matrix.hh
U TooN/internal/mbase.hh
U TooN/internal/objects.h
U TooN/internal/operators.hh
cvs checkout: `TooN/internal/overfill_error.hh' is no longer in the repository
U TooN/internal/reference.hh
U TooN/internal/slice_error.hh
U TooN/internal/vector.hh
cvs checkout: Updating TooN/optimization
U TooN/optimization/conjugate_gradient.h
U TooN/optimization/downhill_simplex.h
U TooN/optimization/golden_section.h
cvs checkout: Updating TooN/test
U TooN/test/identity_test.cc
cvs checkout: `TooN/test/simplex_test.cc' is no longer in the repository
cvs checkout: `TooN/test/simplex_view.gnuplot' is no longer in the repository
U TooN/test/vec_test.cc
%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/libcvd co -D "Mon May 11 16:29:26 BST 2009" libcvd
Output:
cvs checkout: warning: failed to open /Users/lym/.cvspass for reading: No such file or directory
cvs checkout: Updating libcvd
U libcvd/Authors
U libcvd/Doxyfile
U libcvd/LICENSE
U libcvd/Makefile.in
U libcvd/TODO
U libcvd/config.guess
U libcvd/config.sub
U libcvd/configure
U libcvd/configure.in
U libcvd/generate_dependencies.bash
U libcvd/install-sh
U libcvd/subimage_test.cc
cvs checkout: Updating libcvd/build
cvs checkout: Updating libcvd/build/vc2005
U libcvd/build/vc2005/config.h
U libcvd/build/vc2005/libcvd.sln
U libcvd/build/vc2005/libcvd.vcproj
cvs checkout: Updating libcvd/build/vc2008
U libcvd/build/vc2008/libcvd.sln
U libcvd/build/vc2008/libcvd.vcproj
cvs checkout: Updating libcvd/cvd
U libcvd/cvd/abs.h
U libcvd/cvd/bresenham.h
U libcvd/cvd/brezenham.h
U libcvd/cvd/byte.h
U libcvd/cvd/camera.h
U libcvd/cvd/colourspace.h
U libcvd/cvd/colourspace_convert.h
U libcvd/cvd/colourspace_frame.h
U libcvd/cvd/colourspacebuffer.h
U libcvd/cvd/colourspaces.h
U libcvd/cvd/connected_components.h
U libcvd/cvd/convolution.h
U libcvd/cvd/cpu_hacks.h
U libcvd/cvd/cvd_image.h
U libcvd/cvd/cvd_timer.h
U libcvd/cvd/deinterlacebuffer.h
U libcvd/cvd/deinterlaceframe.h
U libcvd/cvd/diskbuffer2.h
U libcvd/cvd/diskbuffer2_frame.h
U libcvd/cvd/documentation.h
U libcvd/cvd/draw.h
U libcvd/cvd/eventobject.h
U libcvd/cvd/exceptions.h
U libcvd/cvd/fast_corner.h
U libcvd/cvd/gl_helpers.h
U libcvd/cvd/glwindow.h
U libcvd/cvd/haar.h
U libcvd/cvd/harris_corner.h
U libcvd/cvd/helpers.h
U libcvd/cvd/image.h
U libcvd/cvd/image_convert.h
U libcvd/cvd/image_convert_fwd.h
U libcvd/cvd/image_interpolate.h
U libcvd/cvd/image_io.h
U libcvd/cvd/image_ref.h
U libcvd/cvd/integral_image.h
U libcvd/cvd/interpolate.h
U libcvd/cvd/irls.h
U libcvd/cvd/la.h
U libcvd/cvd/localvideobuffer.h
U libcvd/cvd/localvideoframe.h
U libcvd/cvd/message_queue.h
U libcvd/cvd/nonmax_suppression.h
U libcvd/cvd/random.h
U libcvd/cvd/readaheadvideobuffer.h
U libcvd/cvd/rgb.h
U libcvd/cvd/rgb8.h
U libcvd/cvd/rgba.h
U libcvd/cvd/ringbuffer.h
U libcvd/cvd/runnable.h
U libcvd/cvd/runnable_batch.h
U libcvd/cvd/se2.h
U libcvd/cvd/se3.h
U libcvd/cvd/serverpushjpegbuffer.h
U libcvd/cvd/serverpushjpegframe.h
U libcvd/cvd/so2.h
U libcvd/cvd/so3.h
U libcvd/cvd/synchronized.h
U libcvd/cvd/tensor_voting.h
U libcvd/cvd/thread.h
U libcvd/cvd/timeddiskbuffer.h
U libcvd/cvd/timer.h
U libcvd/cvd/utility.h
U libcvd/cvd/vector_image_ref.h
U libcvd/cvd/videobuffer.h
U libcvd/cvd/videobufferflags.h
U libcvd/cvd/videodisplay.h
U libcvd/cvd/videofilebuffer.h
U libcvd/cvd/videofilebuffer_frame.h
U libcvd/cvd/videoframe.h
U libcvd/cvd/videosource.h
U libcvd/cvd/vision.h
U libcvd/cvd/wls.h
U libcvd/cvd/wls_c.h
U libcvd/cvd/wls_cholesky.h
U libcvd/cvd/yc.h
cvs checkout: Updating libcvd/cvd/IRIX
U libcvd/cvd/IRIX/O2buffer.h
U libcvd/cvd/IRIX/O2videoframe.h
U libcvd/cvd/IRIX/sgi-video.h
cvs checkout: Updating libcvd/cvd/Linux
U libcvd/cvd/Linux/capture_logic.cxx
U libcvd/cvd/Linux/dvbuffer.h
U libcvd/cvd/Linux/dvbuffer3.h
U libcvd/cvd/Linux/dvframe.h
U libcvd/cvd/Linux/v4l1buffer.h
U libcvd/cvd/Linux/v4l1frame.h
U libcvd/cvd/Linux/v4l2buffer.h
U libcvd/cvd/Linux/v4l2frame.h
U libcvd/cvd/Linux/v4lbuffer.h
U libcvd/cvd/Linux/v4lcontrol.h
cvs checkout: Updating libcvd/cvd/OSX
U libcvd/cvd/OSX/qtbuffer.h
U libcvd/cvd/OSX/qtframe.h
cvs checkout: Updating libcvd/cvd/internal
U libcvd/cvd/internal/aligned_mem.h
U libcvd/cvd/internal/assembly.h
U libcvd/cvd/internal/builtin_components.h
U libcvd/cvd/internal/convert_pixel_types.h
U libcvd/cvd/internal/disk_image.h
U libcvd/cvd/internal/gl_types.h
U libcvd/cvd/internal/image_ref_implementation.hh
U libcvd/cvd/internal/is_pod.h
U libcvd/cvd/internal/load_and_save.h
U libcvd/cvd/internal/name_CVD_rgb_types.h
U libcvd/cvd/internal/name_builtin_types.h
U libcvd/cvd/internal/pixel_operations.h
U libcvd/cvd/internal/pixel_traits.h
U libcvd/cvd/internal/rgb_components.h
U libcvd/cvd/internal/scalar_convert.h
U libcvd/cvd/internal/simple_vector.h
cvs checkout: Updating libcvd/cvd/internal/io
U libcvd/cvd/internal/io/bmp.h
U libcvd/cvd/internal/io/fits.h
U libcvd/cvd/internal/io/jpeg.h
U libcvd/cvd/internal/io/png.h
U libcvd/cvd/internal/io/pnm_grok.h
U libcvd/cvd/internal/io/save_postscript.h
U libcvd/cvd/internal/io/text.h
U libcvd/cvd/internal/io/tiff.h
cvs checkout: Updating libcvd/cvd/internal/pnm
cvs checkout: Updating libcvd/cvd/lock
cvs checkout: Updating libcvd/cvd/python
U libcvd/cvd/python/interface.h
U libcvd/cvd/python/selector.h
U libcvd/cvd/python/types.h
cvs checkout: Updating libcvd/cvd_src
U libcvd/cvd_src/bayer.cxx
U libcvd/cvd_src/brezenham.cc
U libcvd/cvd_src/colourspace_convert.cxx
U libcvd/cvd_src/connected_components.cc
U libcvd/cvd_src/convolution.cc
U libcvd/cvd_src/corner_10.h
U libcvd/cvd_src/corner_12.h
U libcvd/cvd_src/corner_9.h
U libcvd/cvd_src/cvd_timer.cc
U libcvd/cvd_src/deinterlacebuffer.cc
U libcvd/cvd_src/diskbuffer2.cc
U libcvd/cvd_src/draw.cc
U libcvd/cvd_src/draw_toon.cc
U libcvd/cvd_src/eventobject.cpp
U libcvd/cvd_src/exceptions.cc
U libcvd/cvd_src/fast_corner.cxx
U libcvd/cvd_src/fast_corner_9_nonmax.cxx
U libcvd/cvd_src/faster_corner_10.cxx
U libcvd/cvd_src/faster_corner_12.cxx
U libcvd/cvd_src/faster_corner_9.cxx
U libcvd/cvd_src/faster_corner_utilities.h
U libcvd/cvd_src/globlist.cxx
U libcvd/cvd_src/gltext.cpp
U libcvd/cvd_src/glwindow.cc
U libcvd/cvd_src/half_sample.cc
U libcvd/cvd_src/image_io.cc
U libcvd/cvd_src/mono.h
U libcvd/cvd_src/nonmax_suppression.cxx
U libcvd/cvd_src/sans.h
U libcvd/cvd_src/serif.h
U libcvd/cvd_src/slower_corner_10.cxx
U libcvd/cvd_src/slower_corner_11.cxx
U libcvd/cvd_src/slower_corner_12.cxx
U libcvd/cvd_src/slower_corner_7.cxx
U libcvd/cvd_src/slower_corner_8.cxx
U libcvd/cvd_src/slower_corner_9.cxx
U libcvd/cvd_src/synchronized.cpp
U libcvd/cvd_src/tensor_voting.cc
U libcvd/cvd_src/thread.cpp
U libcvd/cvd_src/timeddiskbuffer.cc
U libcvd/cvd_src/utility_helpers.h
U libcvd/cvd_src/videodisplay.cc
U libcvd/cvd_src/videofilebuffer.cc
U libcvd/cvd_src/videosource.cpp
U libcvd/cvd_src/yuv411_to_stuff.cxx
U libcvd/cvd_src/yuv420.cpp
U libcvd/cvd_src/yuv422.cpp
U libcvd/cvd_src/yuv422.h
cvs checkout: Updating libcvd/cvd_src/IRIX
U libcvd/cvd_src/IRIX/O2buffer.cxx
U libcvd/cvd_src/IRIX/sgi-video.cxx
cvs checkout: Updating libcvd/cvd_src/Linux
U libcvd/cvd_src/Linux/dvbuffer.cc
U libcvd/cvd_src/Linux/dvbuffer3_dc1394v1.cc
U libcvd/cvd_src/Linux/dvbuffer3_dc1394v2.cc
U libcvd/cvd_src/Linux/kernel-video1394.h
U libcvd/cvd_src/Linux/v4l1buffer.cc
U libcvd/cvd_src/Linux/v4l2buffer.cc
U libcvd/cvd_src/Linux/v4lbuffer.cc
U libcvd/cvd_src/Linux/v4lcontrol.cc
cvs checkout: Updating libcvd/cvd_src/OSX
U libcvd/cvd_src/OSX/qtbuffer.cpp
cvs checkout: Updating libcvd/cvd_src/SSE2
cvs checkout: Updating libcvd/cvd_src/Win32
U libcvd/cvd_src/Win32/glwindow.cpp
U libcvd/cvd_src/Win32/win32.cpp
U libcvd/cvd_src/Win32/win32.h
cvs checkout: Updating libcvd/cvd_src/fast
U libcvd/cvd_src/fast/fast_10_detect.cxx
U libcvd/cvd_src/fast/fast_10_score.cxx
U libcvd/cvd_src/fast/fast_11_detect.cxx
U libcvd/cvd_src/fast/fast_11_score.cxx
U libcvd/cvd_src/fast/fast_12_detect.cxx
U libcvd/cvd_src/fast/fast_12_score.cxx
U libcvd/cvd_src/fast/fast_7_detect.cxx
U libcvd/cvd_src/fast/fast_7_score.cxx
U libcvd/cvd_src/fast/fast_8_detect.cxx
U libcvd/cvd_src/fast/fast_8_score.cxx
U libcvd/cvd_src/fast/fast_9_detect.cxx
U libcvd/cvd_src/fast/fast_9_score.cxx
U libcvd/cvd_src/fast/prototypes.h
cvs checkout: Updating libcvd/cvd_src/i686
U libcvd/cvd_src/i686/byte_to_double_gradient.s
U libcvd/cvd_src/i686/byte_to_float_gradient.s
U libcvd/cvd_src/i686/byte_to_short_difference.s
U libcvd/cvd_src/i686/convert_rgb_to_y.cc
U libcvd/cvd_src/i686/convolve_float.s
U libcvd/cvd_src/i686/convolve_float4.s
U libcvd/cvd_src/i686/convolve_gaussian.cc
U libcvd/cvd_src/i686/float_add_mul_add.s
U libcvd/cvd_src/i686/float_add_mul_add_unaligned.s
U libcvd/cvd_src/i686/float_assign_mul.s
U libcvd/cvd_src/i686/float_difference.s
U libcvd/cvd_src/i686/float_innerproduct.s
U libcvd/cvd_src/i686/gradient.cc
U libcvd/cvd_src/i686/halfsample.s
U libcvd/cvd_src/i686/int_difference.s
U libcvd/cvd_src/i686/median_3x3.cc
U libcvd/cvd_src/i686/rgb_to_gray.s
U libcvd/cvd_src/i686/short_difference.s
U libcvd/cvd_src/i686/testconf
U libcvd/cvd_src/i686/utility_byte_differences.cc
U libcvd/cvd_src/i686/utility_double_int.cc
U libcvd/cvd_src/i686/utility_float.cc
U libcvd/cvd_src/i686/yuv411_to_stuff_MMX.C
U libcvd/cvd_src/i686/yuv411_to_stuff_MMX_64.C
U libcvd/cvd_src/i686/yuv420p_to_rgb.s
U libcvd/cvd_src/i686/yuv422_to_grey.s
U libcvd/cvd_src/i686/yuv422_to_rgb.s
U libcvd/cvd_src/i686/yuv422_wrapper.cc
cvs checkout: Updating libcvd/cvd_src/noarch
U libcvd/cvd_src/noarch/convert_rgb_to_y.cc
U libcvd/cvd_src/noarch/convolve_gaussian.cc
U libcvd/cvd_src/noarch/default_memalign.cpp
U libcvd/cvd_src/noarch/gradient.cc
U libcvd/cvd_src/noarch/median_3x3.cc
U libcvd/cvd_src/noarch/posix_memalign.cpp
U libcvd/cvd_src/noarch/utility_byte_differences.cc
U libcvd/cvd_src/noarch/utility_double_int.cc
U libcvd/cvd_src/noarch/utility_float.cc
U libcvd/cvd_src/noarch/yuv422_wrapper.cc
cvs checkout: Updating libcvd/cvd_src/nothread
U libcvd/cvd_src/nothread/runnable_batch.cc
cvs checkout: Updating libcvd/cvd_src/posix
cvs checkout: Updating libcvd/cvd_src/thread
U libcvd/cvd_src/thread/runnable_batch.cc
cvs checkout: Updating libcvd/doc
U libcvd/doc/cameracalib2cm.pdf
U libcvd/doc/tutorial.h
cvs checkout: Updating libcvd/make
U libcvd/make/compile_deps.awk
U libcvd/make/log_to_changelog.awk
U libcvd/make/march_flags
cvs checkout: Updating libcvd/pnm_src
U libcvd/pnm_src/bmp.cxx
U libcvd/pnm_src/fits.cc
U libcvd/pnm_src/jpeg.cxx
U libcvd/pnm_src/png.cc
U libcvd/pnm_src/pnm_grok.cxx
U libcvd/pnm_src/save_postscript.cxx
U libcvd/pnm_src/text.cxx
U libcvd/pnm_src/text_write.cc
U libcvd/pnm_src/tiff.cxx
U libcvd/pnm_src/tiffwrite.cc
cvs checkout: Updating libcvd/progs
U libcvd/progs/calibrate.cxx
U libcvd/progs/cvd_display_image.cxx
U libcvd/progs/cvd_image_viewer.cxx
U libcvd/progs/img_play.cxx
U libcvd/progs/img_play_bw.cxx
U libcvd/progs/img_play_deinterlace.cxx
U libcvd/progs/img_play_generic.cxx
U libcvd/progs/se3_exp.cxx
U libcvd/progs/se3_inv.cxx
U libcvd/progs/se3_ln.cxx
U libcvd/progs/se3_post_mul.cxx
U libcvd/progs/se3_pre_mul.cxx
U libcvd/progs/video_play.cc
U libcvd/progs/video_play_bw.cc
U libcvd/progs/video_play_source.cc
cvs checkout: Updating libcvd/python
U libcvd/python/setup.py
cvs checkout: Updating libcvd/python/CVD
U libcvd/python/CVD/cvd.cpp
cvs checkout: Updating libcvd/test
U libcvd/test/diskbuffer2.cxx
U libcvd/test/dvbuffer3_bayerrgb.cxx
U libcvd/test/dvbuffer3_mono.cxx
U libcvd/test/dvbuffer_controls.cxx
U libcvd/test/dvbuffer_mono.cxx
U libcvd/test/dvbuffer_rgb.cxx
U libcvd/test/dvbuffer_yuvrgb.cxx
U libcvd/test/fast_test.cxx
U libcvd/test/floodfill_test.cc
U libcvd/test/font.cpp
U libcvd/test/o2buffer.cxx
U libcvd/test/test_images.cxx
U libcvd/test/v4l1buffer_bayer.cxx
U libcvd/test/v4l1buffer_mono.cxx
U libcvd/test/v4l1buffer_rgb.cxx
U libcvd/test/v4l2buffer.cxx
U libcvd/test/v4lbuffer_bayerrgb.cxx
U libcvd/test/v4lbuffer_mono.cxx
U libcvd/test/videoprog.cxx
cvs checkout: Updating libcvd/test/fast_test_image
U libcvd/test/fast_test_image/noise.pgm
cvs checkout: Updating libcvd/test/images
U libcvd/test/images/1-byte-bin.pgm
U libcvd/test/images/1-byte-bin.ppm
U libcvd/test/images/1-byte-txt.pgm
U libcvd/test/images/1-byte-txt.ppm
U libcvd/test/images/2-byte-bin.pgm
U libcvd/test/images/2-byte-bin.ppm
U libcvd/test/images/2-byte-txt.pgm
U libcvd/test/images/2-byte-txt.ppm
U libcvd/test/images/colour.jpg
U libcvd/test/images/grey.jpg
cvs checkout: Updating libcvd/test/images/tiff
U libcvd/test/images/tiff/grey-bool-inverted.tiff
U libcvd/test/images/tiff/grey-uint16-normal.tiff
U libcvd/test/images/tiff/grey-uint8-normal.tiff
U libcvd/test/images/tiff/rgb-uint16.tiff
U libcvd/test/images/tiff/rgb-uint8.tiff
cvs checkout: Updating libcvd/util
%% cvs -z3 -d:pserver:anonymous@cvs.savannah.nongnu.org:/sources/libcvd co -D "Mon May 11 16:29:26 BST 2009" gvars3
Output:
cvs checkout: warning: failed to open /Users/lym/.cvspass for reading: No such file or directory
cvs checkout: Updating gvars3
U gvars3/Authors
U gvars3/GVars2.h.historic
U gvars3/LICENSE
U gvars3/Makefile.in
U gvars3/config.guess
U gvars3/config.sub
U gvars3/configure
U gvars3/configure.ac
U gvars3/fltk2_test
U gvars3/fltk_test
U gvars3/install-sh
U gvars3/main.cc
cvs checkout: Updating gvars3/build
cvs checkout: Updating gvars3/build/vc2005
U gvars3/build/vc2005/gvars3-headless.vcproj
U gvars3/build/vc2005/gvars3.sln
U gvars3/build/vc2005/gvars3.vcproj
cvs checkout: Updating gvars3/build/vc2008
U gvars3/build/vc2008/gvars3-headless.vcproj
U gvars3/build/vc2008/gvars3.sln
U gvars3/build/vc2008/gvars3.vcproj
cvs checkout: Updating gvars3/gvars2_compat
cvs checkout: Updating gvars3/gvars3
U gvars3/gvars3/GStringUtil.h
U gvars3/gvars3/GUI.h
U gvars3/gvars3/GUI_Fltk.h
U gvars3/gvars3/GUI_Fltk2.h
U gvars3/gvars3/GUI_Motif.h
U gvars3/gvars3/GUI_Widgets.h
U gvars3/gvars3/GUI_non_readline.h
U gvars3/gvars3/GUI_readline.h
U gvars3/gvars3/config.h.in
U gvars3/gvars3/default.h
U gvars3/gvars3/gv3_implementation.hh
U gvars3/gvars3/gvars3.h
U gvars3/gvars3/instances.h
U gvars3/gvars3/serialize.h
U gvars3/gvars3/type_name.h
cvs checkout: Updating gvars3/src
U gvars3/src/GStringUtil.cc
U gvars3/src/GUI.cc
U gvars3/src/GUI_Fltk.cc
U gvars3/src/GUI_Fltk2.cc
U gvars3/src/GUI_Motif.cc
U gvars3/src/GUI_impl.h
U gvars3/src/GUI_impl_headless.cc
U gvars3/src/GUI_impl_noreadline.cc
U gvars3/src/GUI_impl_readline.cc
U gvars3/src/GUI_language.cc
U gvars3/src/GUI_non_readline.cc
U gvars3/src/GUI_none.cc
U gvars3/src/GUI_readline.cc
U gvars3/src/gvars2.cc
U gvars3/src/gvars3.cc
U gvars3/src/inst.cc
U gvars3/src/inst_headless.cc
U gvars3/src/serialize.cc
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for decltype... no
checking for typeof... yes
You're on the development branch of TooN 2.0. Everything will probably work, but
the interface is a bit different from TooN-1.x
If you want TooN-1, then get it using:
cvs -z3 -d:pserver:anoncvs@cvs.savannah.nongnu.org:/cvsroot/toon co -r Maintenance_Branch_1_x TooN
or update what you currently have using:
cvs up -r Maintenance_Branch_1_x
or head over to:
http://mi.eng.cam.ac.uk/~er258/cvd/
Otherwise, please report any bugs you come across.
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for a BSD-compatible install... /usr/bin/install -c
checking whether ln -s works... yes
checking for ranlib... ranlib
checking how to run the C++ preprocessor... g++ -E
checking if compiler flag -Wall works... yes
checking if compiler flag -Wextra works... yes
checking if compiler flag -pipe works... yes
checking if compiler flag -ggdb works... yes
checking if compiler flag -fPIC works... yes
checking build system type... i386-apple-darwin9.7.0
checking host system type... i386-apple-darwin9.7.0
checking for best optimize flags...
checking if compiler flag -O3 works... yes
checking CPU type... unknown
------------------------------------
Checking processor specific features
------------------------------------
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking whether byte ordering is bigendian... no
checking for MMX support... yes
checking for MMXEXT support... yes
checking for SSE support... yes
checking for SSE2 support... yes
checking for SSE3 support... yes
checking for void*... yes
checking size of void*... 4
checking for inline asm statement... yes
checking assembler supports .type pseudo-op... no
-----------------------------------------------
Checking for operating system specific features
-----------------------------------------------
checking dc1394/dc1394.h usability... no
checking dc1394/dc1394.h presence... no
checking for dc1394/dc1394.h... no
checking for main in -ldc1394... no
checking for /opt/local... yes
checking for /sw... no
configure: Adding /usr/X11R6/include to the build path.
checking Carbon and QuickTime framework... yes
-------------------------------
Checking for optional libraries
-------------------------------
checking for X... libraries /usr/X11/lib, headers /usr/X11/include
checking for glDrawPixels in -lGL... yes
checking GL/glu.h usability... yes
checking GL/glu.h presence... yes
checking for GL/glu.h... yes
checking for gluGetString in -lGLU... yes
checking for tr1::shared_ptr... yes
checking for TooN... yes
checking Old TooN... no
checking for dgesvd_ in -lacml... no
checking if Accelerate framework is needed for LAPACK...
checking for dgesvd_... yes
checking for working pthreads... yes
checking for pthread_yield... no
checking for pthread_yield_np... yes
checking png.h usability... yes
checking png.h presence... yes
checking for png.h... yes
checking for png_init_io in -lpng... yes
checking jpeglib.h usability... yes
checking jpeglib.h presence... yes
checking for jpeglib.h... yes
checking for jpeg_destroy_decompress in -ljpeg... yes
checking JPEG read buffer size... 1 (safe reading)
checking tiffio.h usability... yes
checking tiffio.h presence... yes
checking for tiffio.h... yes
checking for TIFFReadRGBAImage in -ltiff... yes
checking for TIFFReadRGBAImageOriented in -ltiff... yes
checking for doxygen... no
-----------------------------------
Checking for platform compatibility
-----------------------------------
checking glob.h usability... yes
checking glob.h presence... yes
checking for glob.h... yes
checking for glob... yes
checking for GLOB_BRACE and GLOB_TILDE in glob.h... yes
checking whether feenableexcept is declared... no
checking for posix_memalign... no
--------------------------------
Checking for extra build options
--------------------------------
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking for g++... g++
checking for C++ compiler default output file name... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking iostream usability... yes
checking iostream presence... yes
checking for iostream... yes
checking build system type... i386-apple-darwin9.7.0
checking host system type... i386-apple-darwin9.7.0
configure: Adding /sw to the build path.
configure: Adding /opt/local to the build path.
checking if compiler flag -Wall works... yes
checking if compiler flag -Wextra works... yes
--------------------------
Checking for options
--------------------------
checking for TooN... yes
checking for TooN-2... yes
checking if compiler flag -pthread works... yes
checking for rl_done in -lreadline... yes
------------------------------------------------
Checking for widget libraries (provides GUI_...)
------------------------------------------------
configure: WARNING: No GUI functionality enabled
Options:
toon readline
In my case (OS X), I moved both files in PTAM/Build/OS X (Makefile and VideoSource_OSX.cc) into the PTAM folder.
3-2. Set up the video source
The Makefile has to be edited so that the video input file matching your camera gets compiled.
On the Mac only one source file exists (apparently written against the Logitech QuickCam Pro 5000), so it can probably be left as-is.
3-3. Add a video source
Other video sources are said to be provided as classes in libCVD. For a camera not covered there, you have to write a file named along the lines of VideoSource_XYZ.cc and add it.
3-4. Compile
Go into the PTAM folder and run:
%% make
Output:
g++ -g -O3 main.cc -o main.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
g++ -g -O3 VideoSource_OSX.cc -o VideoSource_OSX.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
g++ -g -O3 GLWindow2.cc -o GLWindow2.o -c -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
In file included from OpenGL.h:20,
from GLWindow2.cc:1:
/usr/local/include/cvd/gl_helpers.h:38:19: error: GL/gl.h: No such file or directory
/usr/local/include/cvd/gl_helpers.h:39:20: error: GL/glu.h: No such file or directory
/usr/local/include/cvd/gl_helpers.h: In function 'void CVD::glPrintErrors()':
/usr/local/include/cvd/gl_helpers.h:569: error: 'gluGetString' was not declared in this scope
make: *** [GLWindow2.o] Error 1
PTAM uses OpenGL, and since OpenGL ships with the Mac I had not paid this part any attention. It is indeed present as a public framework in the system. But can a UNIX program not reach it? (Searching the web turns up no separate download link or installation method either.)
An accurate diagnosis of the error message ->
philphys: OpenGL is certainly there; the error seems to come from not telling the build where its headers and libraries live. Normally the Makefile specifies this, but judging from the output it specifies nothing. The -I /MY_CUSTOM_INCLUDE_PATH/ part in the middle is where header locations go, and the libraries are specified later, at link time, but the build never even got that far.
In other words, "the linker is not the problem; the compiler options need to point at the directory containing the OpenGL headers."
Looking into the Makefile in question:
# DO NOT DELETE THIS LINE -- make depend depends on it.
# Edit the lines below to point to any needed include and link paths
# Or to change the compiler's optimization flags
CC = g++ -g -O3
COMPILEFLAGS = -I /MY_CUSTOM_INCLUDE_PATH/ -D_OSX -D_REENTRANT
LINKFLAGS = -framework OpenGL -framework VecLib -L/MY_CUSTOM_LINK_PATH/ -lGVars3 -lcvd
(다음은 philphys 인용) 파란색 부분 - 각 소스코드를 컴파일한 다음 컴파일된 오브젝트 코드를 실행파일로 링크하는 부분. 여기서는 $(LINKFLAGS)에 링커 프로그램에 전달되는 옵션이 들어간다. 초록색 부분 - 컴파일할 오브젝트 코드의 리스트 분홍색 부분 - CC는 컴파일러 프로그램과 기본 옵션. COMPILEFLAGS는 컴파일러에 전달하는 옵션들, 여기에 헤더 파일의 위치를 정할 수 있다. LINKFLAGS는 컴파일된 오브젝트 코드를 실행파일로 링크하는 링커에 들어가는 옵션. 여기에 라이브러리의 위치와 사용할 라이브러리를 지정해 준다. 라이브러리의 위치는 -L 옵션으로, 구체적인 라이브러리 이름은 -l 옵션으로.
maetel: 사실 프레임웍 안에 gl.h 파일이 있는 위치는 다음과 같다.
/System/Library/Frameworks/OpenGL.framework/Versions/A/Headers
philphys: "But the code looks for that header as "GL/gl.h", which is the problem. That is the classic UNIX X Windows OpenGL style, not the framework style. So shouldn't /usr/X11R6/include, where gl.h, glu.h and the rest live, be added to COMPILEFLAGS with the -I option?"
philphys: Under /usr/X11R6/include there is a GL folder holding all the needed header files, which is why the code looks for the GL folder explicitly, as in "GL/gl.h".
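Given that diagnosis, a plausible fix (my assumption; adjust paths to your own system) is to add the X11 header directory to COMPILEFLAGS in the Makefile:

```make
COMPILEFLAGS = -I /MY_CUSTOM_INCLUDE_PATH/ -I /usr/X11R6/include -D_OSX -D_REENTRANT
```

Linking can presumably stay with -framework OpenGL, since the X11 headers only resolve the GL/gl.h include at compile time; if undefined GL symbols appear at link time, -L/usr/X11R6/lib -lGL -lGLU may be needed instead.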
Come to think of it, there was this note in the documentation:
Since the Linux code compiles directly against the nVidia driver's GL headers, use of a different GL driver may require some modifications to the code.
Running the CameraCalibrator executable to attempt camera calibration brings up a GUI window, but no input arrives from the connected webcam (Logitech QuickCam Pro 4000).
4-0. Symptoms
Opening the CameraCalibrator executable spawns a new terminal window like this:
Last login: Fri Aug 7 01:14:05 on ttys001
%% /Users/lym/PTAM/CameraCalibrator ; exit;
Welcome to CameraCalibrator
--------------------------------------
Parallel tracking and mapping for Small AR workspaces
Copyright (C) Isis Innovation Limited 2008
Parsing calibrator_settings.cfg ....
! GUI_impl::Loadfile: Failed to load script file "calibrator_settings.cfg".
VideoSource_OSX: Creating QTBuffer....
IMPORTANT
This will open a quicktime settings planel.
You should use this settings dialog to turn the camera's
sharpness to a minimum, or at least so small that no sharpening
artefacts appear! In-camera sharpening will seriously degrade the
performance of both the camera calibrator and the tracking system.
>
Then a GUI window named Video opens. If you press OK without changing any settings, the terminal above continues with the following messages and the program exits on its own.
.. created QTBuffer of size [640 480]
2009-08-07 01:20:57.231 CameraCalibrator[40836:10b] *** _NSAutoreleaseNoPool(): Object 0xf70e2c0 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x96827f0f 0x96734442 0x9673a1b4 0xbc2db7 0xbc7e9a 0xbc69d3 0xbcacbd 0xbca130 0x964879c9 0x90f8dfb8 0x90e69618 0x90e69984 0x964879c9 0x90f9037c 0x90e7249c 0x90e69984 0x964879c9 0x90f8ec80 0x90e55e05 0x90e5acd5 0x90e5530f 0x964879c9 0x94179eb9 0x282b48 0xd9f4 0xd6a6 0x2f16b 0x2fea4 0x26b6)
! Code for converting from format "Raw RGB data"
not implemented yet, check VideoSource_OSX.cc.
logout
[Process completed]
So we must go back to the problem of 3-3, setting up the video source.
That is, modify VideoSource_OSX.cc, recompile, and run again.
Other video source classes are available with libCVD. Finally, if a custom video source not supported by libCVD is required, the code for it will have to be put into some VideoSource_XYZ.cc file (the interface for this file is very simple.)
Trial and error...
The terminal said it failed to load calibrator_settings.cfg, so I went looking for the file.
%% find . -name "calibrator_settings.cfg" -print
./calibrator_settings.cfg
It turns out ./ means the current directory; the file sits right under the PTAM folder. Why didn't I see it...
Opening it shows the following:
// This file is parsed by the CameraCalibrator executable
// Put any custom gvars settings you want in here
// For example: to increase the camera calibrator's blur parameter,
// uncomment the following line
// CameraCalibrator.BlurSigma=2.0
VideoSource::VideoSource()
{
    cout << " VideoSource_OSX: Creating QTBuffer...." << endl;
    cout << " IMPORTANT " << endl;
    cout << " This will open a quicktime settings planel. " << endl
         << " You should use this settings dialog to turn the camera's " << endl
         << " sharpness to a minimum, or at least so small that no sharpening " << endl
         << " artefacts appear! In-camera sharpening will seriously degrade the " << endl
         << " performance of both the camera calibrator and the tracking system. " << endl;
    QTBuffer<yuv422>* pvb;
    try
    {
        pvb = new QTBuffer<yuv422>(ImageRef(640, 480), 0, true);
    }
    catch (CVD::Exceptions::All a)
    {
        cerr << " Error creating QTBuffer; expection: " << a.what << endl;
        exit(1);
    }
    mptr = pvb;
    mirSize = pvb->size();
    cout << " .. created QTBuffer of size " << mirSize << endl;
}
But as seen above, there are problems we cannot get past: the size does not match, frames overlap, and so on. (The image on the right is input captured with the Mac driver macam under the same conditions, for comparison.)
Result in the terminal:
%% /Users/lym/PTAM/CameraCalibrator ; exit;
Welcome to CameraCalibrator
--------------------------------------
Parallel tracking and mapping for Small AR workspaces
Copyright (C) Isis Innovation Limited 2008
Parsing calibrator_settings.cfg ....
! GUI_impl::Loadfile: Failed to load script file "calibrator_settings.cfg".
VideoSource_OSX: Creating QTBuffer....
IMPORTANT
This will open a quicktime settings planel.
You should use this settings dialog to turn the camera's
sharpness to a minimum, or at least so small that no sharpening
artefacts appear! In-camera sharpening will seriously degrade the
performance of both the camera calibrator and the tracking system.
> .. created QTBuffer of size [640 480]
2009-08-12 23:10:38.309 CameraCalibrator[5110:10b] *** _NSAutoreleaseNoPool(): Object 0xf8054e0 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x96670f4f 0x9657d432 0x965831a4 0xc17db7 0xc1ce9a 0xc1b9d3 0xc1fcbd 0xc1f130 0x924b09c9 0x958e8fb8 0x957c4618 0x957c4984 0x924b09c9 0x958eb37c 0x957cd49c 0x957c4984 0x924b09c9 0x958e9c80 0x957b0e05 0x957b5cd5 0x957b030f 0x924b09c9 0x90bd4eb9 0x282b48 0xd9e4 0xd5a6 0x2f15b 0x2fe94 0x25b6)
4-1-2. Checking the YUV format
Having read somewhere that the Logitech QuickCam Pro 4000 uses YUV420P, I changed yuv422 in the code to yuv420p, but the symptoms are unchanged.
skype forum:
There is another problem here with Xgl unfortunately, as it isn't supporting the YUV420P (I420) overlay colour/pixel format either. So even if the camera captures successfully (which looks okay on those settings), then you still won't see anything.
Left: image captured with macam / Right: image shown by PTAM's CameraCalibrator
Logitech QuickCam Pro 4000
Welcome to CameraCalibrator
--------------------------------------
Parallel tracking and mapping for Small AR workspaces
Copyright (C) Isis Innovation Limited 2008
Parsing calibrator_settings.cfg ....
VideoSource_OSX: Creating QTBuffer....
IMPORTANT
This will open a quicktime settings planel.
You should use this settings dialog to turn the camera's
sharpness to a minimum, or at least so small that no sharpening
artefacts appear! In-camera sharpening will seriously degrade the
performance of both the camera calibrator and the tracking system.
> .. created QTBuffer of size [640 480]
2009-08-13 04:02:50.464 CameraCalibrator[6251:10b] ***
_NSAutoreleaseNoPool(): Object 0x9df180 of class NSThread autoreleased
with no pool in place - just leaking
Stack: (0x96670f4f 0x9657d432 0x965831a4 0xbc2db7 0xbc7e9a 0xbc69d3
0xbcacbd 0xbca130 0x924b09c9 0x958e8fb8 0x957c4618 0x957c4984
0x924b09c9 0x958eb37c 0x957cd49c 0x957c4984 0x924b09c9 0x958e9c80
0x957b0e05 0x957b5cd5 0x957b030f 0x924b09c9 0x90bd4eb9 0x282b48 0xd414
0xcfd6 0x2f06b 0x2fda4)
4-2. Running the Camera Calibrator
Camera calib is [ 1.51994 2.03006 0.499577 0.536311 -0.0005 ]
Saving camera calib to camera.cfg...
.. saved.
5. Running PTAM
Welcome to PTAM
---------------
Parallel tracking and mapping for Small AR workspaces
Copyright (C) Isis Innovation Limited 2008
Parsing settings.cfg ....
VideoSource_OSX: Creating QTBuffer....
IMPORTANT
This will open a quicktime settings planel.
You should use this settings dialog to turn the camera's
sharpness to a minimum, or at least so small that no sharpening
artefacts appear! In-camera sharpening will seriously degrade the
performance of both the camera calibrator and the tracking system.
> .. created QTBuffer of size [640 480]
2009-08-13 20:17:54.162 ptam[1374:10b] *** _NSAutoreleaseNoPool(): Object 0x8f5850 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x96670f4f 0x9657d432 0x965831a4 0xbb9db7 0xbbee9a 0xbbd9d3 0xbc1cbd 0xbc1130 0x924b09c9 0x958e8fb8 0x957c4618 0x957c4984 0x924b09c9 0x958eb37c 0x957cd49c 0x957c4984 0x924b09c9 0x958e9c80 0x957b0e05 0x957b5cd5 0x957b030f 0x924b09c9 0x90bd4eb9 0x282b48 0x6504 0x60a6 0x11af2 0x28da 0x2766)
ARDriver: Creating FBO... .. created FBO.
MapMaker: made initial map with 135 points.
MapMaker: made initial map with 227 points.
In the GUI window that appears when CameraCalibrator runs, the following default settings are applied:
Source: iSight
Compression type: Component Video - CCIR-601 uyvy
When the computed camera parameters converge cleanly with an rms error below 0.3, in a form like this:
Camera calib is [ 1.22033 1.62577 0.489375 0.641251 0.544352 ]
running PTAM tracks features better than with the two Logitech QuickCam models above. (Still far from sufficient, though...)
Welcome to PTAM
---------------
Parallel tracking and mapping for Small AR workspaces
Copyright (C) Isis Innovation Limited 2008
Parsing settings.cfg ....
VideoSource_OSX: Creating QTBuffer....
IMPORTANT
This will open a quicktime settings planel.
You should use this settings dialog to turn the camera's
sharpness to a minimum, or at least so small that no sharpening
artefacts appear! In-camera sharpening will seriously degrade the
performance of both the camera calibrator and the tracking system.
> .. created QTBuffer of size [640 480]
ARDriver: Creating FBO... .. created FBO.
MapMaker: made initial map with 242 points.
MapMaker: made initial map with 259 points.
MapMaker: made initial map with 323 points.
MapMaker: made initial map with 626 points.
In Proceedings of the International Conference on Computer Vision, Rio de Janeiro, Brazil, 2007
• real-time, high-accuracy localisation and mapping during tracking
• real-time (re-)localisation when tracking fails
• on-line learning of image patch appearance so that no prior training or map structure is required and features are added and removed during operation.
Lepetit's image patch classifier (feature appearance learning)
=> integrating the classifier more closely into the process of map-building
(by using classification results to aid in the selection of new points to add to the map)
> recovery from tracking failure: local vs. global
local - particle filter -> rich feature descriptor
global - proximity using previous key frames
- based on SceneLib (Extended Kalman Filter)
- rotational (and a degree of perspective) invariance via local patch warping
- assuming the patch is fronto-parallel when first seen
http://freshmeat.net/projects/scenelib/
active search
innovation covariance
joint compatibility test
randomized lists key-point recognition algorithm
1. randomized: (2^D - 1) tests -> D tests
2. independent treatment of classes
3. binary leaf scores (2^D * C * N bits for all scores)
4. intensity offset
5. explicit noise handling
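Point 1 above can be made concrete with a toy sketch (my own, with hypothetical names; not Lepetit's actual code): instead of evaluating all 2^D - 1 node tests of a full binary tree, D fixed pairwise intensity comparisons on the keypoint patch directly form a D-bit index into the 2^D leaves.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Toy sketch of the randomized-list idea: D pairwise intensity tests on a
// keypoint patch yield a D-bit leaf index, so classification costs D
// comparisons rather than walking 2^D - 1 internal tree nodes.
int leafIndex(const std::vector<uint8_t>& patch,
              const std::vector<std::pair<int, int>>& tests)  // D (i, j) pairs
{
    int idx = 0;
    for (const auto& t : tests)
        idx = (idx << 1) | (patch[t.first] < patch[t.second] ? 1 : 0);
    return idx;  // in [0, 2^D)
}
```

Each class then just needs its per-leaf scores (the binary leaf scores of point 3), which is where the 2^D * C * N bits figure comes from.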
ref.
Davison, A. J., Reid, I. D., Molton, N. D., and Stasse, O. 2007. MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 6 (Jun. 2007), 1052-1067. DOI= http://dx.doi.org/10.1109/TPAMI.2007.1049
Lepetit, V., Lagger, P., and Fua, P. 2005. Randomized Trees for Real-Time Keypoint Recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2 (June 20 - 26, 2005). IEEE Computer Society, Washington, DC, 775-781. DOI= http://dx.doi.org/10.1109/CVPR.2005.288
Klein, G. and Murray, D. 2007. Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (November 13 - 16, 2007). IEEE Computer Society, Washington, DC, 1-10. DOI= http://dx.doi.org/10.1109/ISMAR.2007.4538852
1. parallel threads of tracking and mapping
2. mapping from a small number of keyframes: batch techniques (Bundle Adjustment)
3. Initializing the map from 5-point Algorithm
4. Initializing new points with epipolar search
5. mapping thousands of points
affine warp
warping matrix <- (1) back-projecting unit pixel displacements in the source keyframe pyramid level onto the patch's plane and then (2) projecting these into the current (target) frame
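The two-step recipe above can be sketched numerically. This is my own illustration under a simplified pinhole model with rotation omitted (only a translation t between the source keyframe and the current frame), not PTAM's implementation: back-project unit pixel steps onto the fronto-parallel patch plane, move the points into the target frame, re-project, and collect the resulting pixel displacements as the columns of the 2x2 warp matrix.

```cpp
#include <array>

struct Camera { double fx, fy, cx, cy; };  // simple pinhole intrinsics

// Project a point given in camera coordinates to pixel coordinates.
static std::array<double, 2> project(const Camera& c, double X, double Y, double Z)
{
    return { c.fx * X / Z + c.cx, c.fy * Y / Z + c.cy };
}

// Back-project pixel (u, v) onto the fronto-parallel patch plane Z = depth.
static std::array<double, 3> backProject(const Camera& c, double u, double v, double depth)
{
    return { (u - c.cx) / c.fx * depth, (v - c.cy) / c.fy * depth, depth };
}

// 2x2 warp matrix {a11, a12, a21, a22}: its columns are the target-frame
// pixel displacements produced by unit pixel steps (du, dv) at (u, v) in
// the source keyframe. Rotation is omitted; t is the camera translation.
std::array<double, 4> affineWarp(const Camera& cam, double u, double v,
                                 double depth, const std::array<double, 3>& t)
{
    auto warpPoint = [&](double su, double sv) {
        std::array<double, 3> P = backProject(cam, su, sv, depth);
        return project(cam, P[0] + t[0], P[1] + t[1], P[2] + t[2]);
    };
    std::array<double, 2> p0 = warpPoint(u, v);
    std::array<double, 2> pu = warpPoint(u + 1, v);  // unit step in u
    std::array<double, 2> pv = warpPoint(u, v + 1);  // unit step in v
    return { pu[0] - p0[0], pv[0] - p0[0],
             pu[1] - p0[1], pv[1] - p0[1] };
}
```

As a sanity check, moving the camera straight forward so the patch depth halves (depth 2, t = {0, 0, -1}) should give a pure 2x scale, i.e. a warp matrix of diag(2, 2).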