2010. 4. 22. 20:05 Computer Vision
Graphics and Media Lab
CMC department, Moscow State University
http://graphics.cs.msu.ru/en/science/research/calibration/cpp
2010. 4. 14. 16:40 Computer Vision
Upgrading OpenCV to version 2 on the Macintosh
http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port

Current lab Mac mini specs (Leopard):
 System Version:    Mac OS X 10.5.8 (9L30)
 Kernel Version:    Darwin 9.8.0


0. Upgrade MacPorts


1) Check the MacPorts port info

$ port info macports

MacPorts @1.8.2 (sysutils)
Variants:             darwin_10, darwin_7, darwin_8, darwin_8_i386,
                      darwin_8_powerpc, darwin_9, darwin_9_i386,
                      darwin_9_powerpc, universal

Description:          MacPorts provides the infrastructure that allows easy
                      installation and management of freely available software
                      on Mac OS X 10.4 or newer systems.
Homepage:             http://www.macports.org/

Platforms:            darwin, freebsd
License:              unknown
Maintainers:          macports-mgr@lists.macosforge.org


2) Check the installed MacPorts version and download the new one

$ sudo port selfupdate

MacPorts base version 1.710 installed
Downloaded MacPorts base version 1.800

Installing new MacPorts release in /opt/local as root:admin - TCL-PACKAGE in /Library/Tcl; Permissions: 0755


3) Install the new MacPorts version

$ sudo port -v selfupdate
Password:



4) Check the OpenCV port

$ port search opencv

opencv @2.0.0 (graphics, science)
    Intel(R) Open Source Computer Vision Library

Installing this port is said to give a 64-bit OpenCV 2.0, which I take to apply to Snow Leopard; this Mac mini, running plain Leopard, appears (though I'm not certain) to be 32-bit. More importantly, the notes say that Snow Leopard (Mac OS X 10.6) users must give up quicktime (iSight) and carbon (GUI) support. I'm on plain Leopard (Mac OS X 10.5), but it still feels risky, so I'll set aside the MacPorts route for upgrading OpenCV. Instead, I'll install the new OpenCV version with CMake, following the official wiki instructions.


1. Installing OpenCV with CMake

1) Install subversion

Download and install subversion with MacPorts.

$ sudo port install subversion

I have used subversion ( http://en.wikipedia.org/wiki/Subversion_%28software%29 ) before, on my old PowerBook. Did it take this long back then too...



2) Install CMake

$ sudo port install cmake



cf. CMake?
http://en.wikipedia.org/wiki/CMake
http://www.cmake.org/


3) Download the OpenCV source code

$ svn co https://code.ros.org/svn/opencv/trunk/opencv

An error message came up, so I renamed the existing "opencv" folder and ran the command again; a new "opencv" folder was created along with the following files.


마지막에 "Checked out revision 3024."라는 메시지가 나왔다.
https://code.ros.org/trac/opencv/changeset/3024


4) Generate the Makefiles

Enter the generated opencv folder and have CMake generate Unix makefiles. (Options can be added; see the official wiki.)

$ cd opencv
$ sudo cmake -G "Unix Makefiles"




5) Build


$ sudo make -j8





$ sudo make install




6) Verify

-1) Directories such as "/usr/local/" that are not visible in a Finder window can be opened by typing the path directly in the Finder menu Go > Go to Folder.
-2) Since the new OpenCV version was not installed through MacPorts, searching the installed ports with the MacPorts command "port installed" shows only the 1.0.0 version installed earlier. (That 1.0.0 version lives in "/opt/local/var/macports/software/opencv/1.0.0_0/opt/local/lib".)



2. Using the OpenCV libraries in Xcode

The official wiki instructions:

Using the OpenCV libraries in an Xcode OS X project

These instructions were written for Xcode 3.1.x

  • Create a new XCode project using the Command Line Utility/Standard Tool template
  • Select Project -> Edit Project Settings

  • Set Configuration to All Configurations
  • In the Architectures section, double-click Valid Architectures and remove all the PPC architectures
  • In the Search Paths section set Header Search Paths to /usr/local/include/opencv
  • Close the Project Info window
  • Select Project -> New Group and create a group called OpenCV Frameworks

  • With the new group selected, select Project -> Add to Project…

  • Press the "/" key to get the Go to the folder prompt
  • Enter /usr/local/lib
  • Select libcxcore.dylib, libcvaux.dylib, libcv.dylib, libhighgui.dylib, and libml.dylib.

  • Click Add
  • Uncheck Copy Items… and click Add

Now you should be able to include the OpenCV libraries, compile, and run your project


1) Configure the build settings
 
In Xcode, selecting Project -> Edit Project Settings opens the Project Info window. In the Build tab:
-1) Change the Configuration setting from "Active (Debug)" to "All Configurations".
-2) In Architectures, double-click "Valid Architectures" and remove all the PPC architectures from the list.
-3) In Search Paths, set Header Search Paths to "/usr/local/include/opencv".


2) Add the OpenCV frameworks to the project

-1) Close the Project Info window, add a "New Group" to the project, and name it "OpenCV Frameworks".
-2) With this group selected, add the five library files located in /usr/local/lib, as described in the quoted instructions.




3. Testing the Xcode project ...ing




/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
            cvGrabFrame( capture ); // capture a frame           
            image = cvRetrieveFrame(capture); // retrieve the captured frame
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}




[Session started at 2010-04-15 01:11:16 +0900.]
2010-04-15 01:11:22.273 opencv2test01[1192:7f23] *** _NSAutoreleaseNoPool(): Object 0xc5f0d0 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc320   240
ccccccccccccccccccccc320   240

[Session started at 2010-04-15 01:11:24 +0900.]
Loading program into debugger…
GNU gdb 6.3.50-20050815 (Apple version gdb-962) (Sat Jul 26 08:14:40 UTC 2008)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-apple-darwin".Program loaded.
sharedlibrary apply-load-rules all
Attaching to program: `/Users/lym/Documents/VIP/2010/opencv2test01/build/Debug/opencv2test01', process 1192.
unable to read unknown load command 0x22
unable to read unknown load command 0x22
StartNextIsochRead-ReadIsochPipeAsync: Error: kIOReturnIsoTooOld - isochronous I/O request for distant past!

The Debugger Debugger is attaching to process(gdb)

When execution is stopped, the following messages are appended in place of "(gdb)" and the program terminates.

StartNextIsochRead-ReadIsochPipeAsync: Error: kIOReturnIsoTooOld - isochronous I/O request for distant past!
kill

The Debugger Debugger is attaching to process(gdb)




Instead of taking video input from the camera, reading an image file from the folder:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
            cvGrabFrame( capture ); // capture a frame           
//            image = cvRetrieveFrame(capture); // retrieve the captured frame
            image = cvLoadImage("werol.jpg"); // load an image file instead
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}



 




[Session started at 2010-04-15 01:26:38 +0900.]
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
2010-04-15 01:26:43.235 opencv2test01[1333:7f23] *** _NSAutoreleaseNoPool(): Object 0xc56040 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825





Removing the camera-capture call as well:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
//            cvGrabFrame( capture ); // capture a frame           
//            image = cvRetrieveFrame(capture); // retrieve the captured frame
            image = cvLoadImage("werol.jpg"); // load an image file instead
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}


[Session started at 2010-04-15 01:32:37 +0900.]
2010-04-15 01:32:43.091 opencv2test01[1377:7f23] *** _NSAutoreleaseNoPool(): Object 0xc50970 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825




With neither camera input nor image-file loading, just creating an image and then displaying/saving it:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
   
    IplImage *iplImg = cvCreateImage(cvSize(500, 500), 8, 3);
    cvZero(iplImg);
    cvLine(iplImg, cvPoint(10, 10), cvPoint(300, 300), CV_RGB(255, 0, 0), 20);
    cvCircle(iplImg, cvPoint(400, 400), 40, CV_RGB(0, 255, 255), 5);
    cvNamedWindow("temp"); cvShowImage("temp", iplImg); cvSaveImage("temp.bmp", iplImg);  cvWaitKey();

    return 0;
}



실행 결과 "temp 창"에 뜨는 (비정상적인) 이미지와 파일로 (정상적으로) 저장되는 "temp.bmp"는 아래와 같다.

temp.bmp saved in the executable's folder

"temp" 창 부분을 화면 캡처한 이미지





http://tech.groups.yahoo.com/group/OpenCV/message/70200


/opt/local/.........../opencv/1.0.0_0/opt/local/include/opencv/

..................../opt/local/lib






Testing the OpenCV 1.0.0 that was installed with MacPorts

Project header search path: /opt/local/include/opencv
Location of the library files to add to the project: /opt/local/lib






$ sudo port install opencv


2010. 4. 12. 16:09 Computer Vision
ref.

Szeliski, Computer Vision: Algorithms and Applications (March 24, 2010 draft): 4.3.2 Hough transforms

http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm

Sonka & Hlavac & Boyle, Image Processing, Analysis, and Machine Vision, third edition: 6.2.6 Hough transforms

http://en.wikipedia.org/wiki/Hough_transform




Source: Szeliski, Computer Vision: Algorithms and Applications (March 24, 2010 draft), p. 252


2010. 4. 7. 00:16 Computer Vision
OpenCV's line detection function based on the Hough transform:


CvSeq* cvHoughLines2(CvArr* image, void* storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0)

Finds lines in a binary image using a Hough transform.

Parameters:
  • image – The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function
  • storage – The storage for the lines that are detected. It can be a memory storage (in this case a sequence of lines is created in the storage and returned by the function) or single row/single column matrix (CvMat*) of a particular type (see below) to which the lines’ parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of lines detected. If storage is a matrix and the actual number of lines exceeds the matrix size, the maximum possible number of lines is returned (in the case of standard hough transform the lines are sorted by the accumulator value)
  • method

    The Hough transform variant, one of the following:

    • CV_HOUGH_STANDARD - classical or standard Hough transform. Every line is represented by two floating-point numbers $(\rho, \theta)$, where $\rho$ is the distance between the point (0,0) and the line, and $\theta$ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type
    • CV_HOUGH_PROBABILISTIC - probabilistic Hough transform (more efficient when the picture contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its starting and ending points, and the matrix must be (the created sequence will be) of CV_32SC4 type
    • CV_HOUGH_MULTI_SCALE - multi-scale variant of the classical Hough transform. The lines are encoded the same way as in CV_HOUGH_STANDARD
  • rho – Distance resolution in pixel-related units
  • theta – Angle resolution measured in radians
  • threshold – Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than threshold
  • param1

    The first method-dependent parameter:

    • For the classical Hough transform it is not used (0).
    • For the probabilistic Hough transform it is the minimum line length.
    • For the multi-scale Hough transform it is the divisor for the distance resolution $\rho$. (The coarse distance resolution will be $\rho$ and the accurate resolution will be $\rho/\texttt{param1}$).
  • param2

    The second method-dependent parameter:

    • For the classical Hough transform it is not used (0).
    • For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them).
    • For the multi-scale Hough transform it is the divisor for the angle resolution $\theta$. (The coarse angle resolution will be $\theta$ and the accurate resolution will be $\theta/\texttt{param2}$).

Memory storage is a low-level structure used to store dynamically growing data structures such as sequences, contours, graphs, subdivisions, etc.


Since the input image must be 8-bit and single-channel, the input image (iplDoGx), which was created as "IPL_DEPTH_32F", is converted as follows and stored into a new image (iplEdgeY) with 8-bit depth.

            cvConvert(iplDoGx, iplEdgeY);
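
For context, a minimal sketch of the conversion step (variable names from the post; the allocation line is my assumption):

IplImage* iplEdgeY = cvCreateImage( cvGetSize(iplDoGx), 8, 1 ); // 8-bit, single-channel
cvConvert( iplDoGx, iplEdgeY ); // = cvConvertScale(src, dst, 1, 0)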


두번째 인자 " void* storage" 는 탐지된 직선을 저장할 메모리. 이 함수의 아웃풋에 해당한다.

CvMemStorage

Growing memory storage.

typedef struct CvMemStorage
{
    struct CvMemBlock* bottom; /* first allocated block */
    struct CvMemBlock* top; /* the current memory block - top of the stack */
    struct CvMemStorage* parent; /* borrows new blocks from */
    int block_size; /* block size */
    int free_space; /* free space in the top block (in bytes) */
} CvMemStorage;



CvMemStorage* cvCreateMemStorage(int blockSize=0)

Creates memory storage.

Parameter: blockSize – Size of the storage blocks in bytes. If it is 0, the block size is set to a default value (currently about 64K).


That output is stored in the following CvSeq data structure.

CvSeq

Growable sequence of elements.

#define CV_SEQUENCE_FIELDS() \
    int flags; /* miscellaneous flags */ \
    int header_size; /* size of sequence header */ \
    struct CvSeq* h_prev; /* previous sequence */ \
    struct CvSeq* h_next; /* next sequence */ \
    struct CvSeq* v_prev; /* 2nd previous sequence */ \
    struct CvSeq* v_next; /* 2nd next sequence */ \
    int total; /* total number of elements */ \
    int elem_size; /* size of sequence element in bytes */ \
    char* block_max; /* maximal bound of the last block */ \
    char* ptr; /* current write pointer */ \
    int delta_elems; /* how many elements allocated when the sequence grows (sequence granularity) */ \
    CvMemStorage* storage; /* where the seq is stored */ \
    CvSeqBlock* free_blocks; /* free blocks list */ \
    CvSeqBlock* first; /* pointer to the first sequence block */

typedef struct CvSeq
{
CV_SEQUENCE_FIELDS()
} CvSeq;

The structure CvSeq is a base for all of OpenCV dynamic data structures.


The function that reads the stored values (a usage sketch follows the parameter list):

char* cvGetSeqElem(const CvSeq* seq, int index)

Returns a pointer to a sequence element according to its index.

#define CV_GET_SEQ_ELEM( TYPE, seq, index )  (TYPE*)cvGetSeqElem( (CvSeq*)(seq), (index) )
Parameters:
  • seq – Sequence
  • index – Index of element
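
Putting the pieces above together (a memory storage, cvHoughLines2(), and cvGetSeqElem()), a minimal usage sketch, not the post's code; the names and parameter values are assumptions:

#include <cv.h>
#include <math.h>

void drawHoughLines( IplImage* iplEdge,  // 8-bit, single-channel edge image
                     IplImage* iplDraw ) // image to draw the lines on
{
    CvMemStorage* storage = cvCreateMemStorage(0); // default block size (~64K)
    CvSeq* lines = cvHoughLines2( iplEdge, storage, CV_HOUGH_STANDARD,
                                  1.0, CV_PI/180, 40, 0, 0 );
    for( int i = 0; i < lines->total; i++ )
    {
        // with CV_HOUGH_STANDARD each element is two floats: (rho, theta)
        float* line = (float*)cvGetSeqElem( lines, i );
        float rho = line[0], theta = line[1];
        double a = cos(theta), b = sin(theta);  // unit normal of the line
        double x0 = a*rho, y0 = b*rho;          // foot of the normal from (0,0)
        // two far-apart points along the line direction (-b, a)
        CvPoint pt1 = cvPoint( cvRound(x0 - 1000*b), cvRound(y0 + 1000*a) );
        CvPoint pt2 = cvPoint( cvRound(x0 + 1000*b), cvRound(y0 - 1000*a) );
        cvLine( iplDraw, pt1, pt2, CV_RGB(0,255,0), 1 );
    }
    cvReleaseMemStorage( &storage );
}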




What is the accumulator value?







"detected edges" 이미지에 대해 Hough transform에 의한 line fitting 한 결과를 "input" 이미지에 그리고 있음




Opening opencv/opencv/src/cv/cvhough.cpp, the code is divided into the following four parts:
Classical Hough Transform
Multi-Scale variant of Classical Hough Transform
Probabilistic Hough Transform
Circle Detection

The "Classical Hough Transform" part is as follows.
typedef struct CvLinePolar
{
    float rho;
    float angle;
}
CvLinePolar;
/*=====================================================================================*/

#define hough_cmp_gt(l1,l2) (aux[l1] > aux[l2])

static CV_IMPLEMENT_QSORT_EX( icvHoughSortDescent32s, int, hough_cmp_gt, const int* )

/*
Here image is an input raster;
step is its step; size characterizes its ROI;
rho and theta are discretization steps (in pixels and radians correspondingly).
threshold is the minimum number of pixels in the feature for it
to be a candidate for line. lines is the output
array of (rho, theta) pairs. linesMax is the buffer size (number of pairs).
Functions return the actual number of found lines.
*/
static void
icvHoughLinesStandard( const CvMat* img, float rho, float theta,
                       int threshold, CvSeq *lines, int linesMax )
{
    int *accum = 0;
    int *sort_buf=0;
    float *tabSin = 0;
    float *tabCos = 0;

    CV_FUNCNAME( "icvHoughLinesStandard" );

    __BEGIN__;

    const uchar* image;
    int step, width, height;
    int numangle, numrho;
    int total = 0;
    float ang;
    int r, n;
    int i, j;
    float irho = 1 / rho;
    double scale;

    CV_ASSERT( CV_IS_MAT(img) && CV_MAT_TYPE(img->type) == CV_8UC1 );

    image = img->data.ptr;
    step = img->step;
    width = img->cols;
    height = img->rows;

    numangle = cvRound(CV_PI / theta);
    numrho = cvRound(((width + height) * 2 + 1) / rho);

    CV_CALL( accum = (int*)cvAlloc( sizeof(accum[0]) * (numangle+2) * (numrho+2) ));
    CV_CALL( sort_buf = (int*)cvAlloc( sizeof(accum[0]) * numangle * numrho ));
    CV_CALL( tabSin = (float*)cvAlloc( sizeof(tabSin[0]) * numangle ));
    CV_CALL( tabCos = (float*)cvAlloc( sizeof(tabCos[0]) * numangle ));
    memset( accum, 0, sizeof(accum[0]) * (numangle+2) * (numrho+2) );

    for( ang = 0, n = 0; n < numangle; ang += theta, n++ )
    {
        tabSin[n] = (float)(sin(ang) * irho);
        tabCos[n] = (float)(cos(ang) * irho);
    }

    // stage 1. fill accumulator
    for( i = 0; i < height; i++ )
        for( j = 0; j < width; j++ )
        {
            if( image[i * step + j] != 0 )
                for( n = 0; n < numangle; n++ )
                {
                    r = cvRound( j * tabCos[n] + i * tabSin[n] );
                    r += (numrho - 1) / 2;
                    accum[(n+1) * (numrho+2) + r+1]++;
                }
        }

    // stage 2. find local maximums
    for( r = 0; r < numrho; r++ )
        for( n = 0; n < numangle; n++ )
        {
            int base = (n+1) * (numrho+2) + r+1;
            if( accum[base] > threshold &&
                accum[base] > accum[base - 1] && accum[base] >= accum[base + 1] &&
                accum[base] > accum[base - numrho - 2] && accum[base] >= accum[base + numrho + 2] )
                sort_buf[total++] = base;
        }

    // stage 3. sort the detected lines by accumulator value
    icvHoughSortDescent32s( sort_buf, total, accum );

    // stage 4. store the first min(total,linesMax) lines to the output buffer
    linesMax = MIN(linesMax, total);
    scale = 1./(numrho+2);
    for( i = 0; i < linesMax; i++ )
    {
        CvLinePolar line;
        int idx = sort_buf[i];
        int n = cvFloor(idx*scale) - 1;
        int r = idx - (n+1)*(numrho+2) - 1;
        line.rho = (r - (numrho - 1)*0.5f) * rho;
        line.angle = n * theta;
        cvSeqPush( lines, &line );
    }

    __END__;

    cvFree( &sort_buf );
    cvFree( &tabSin );
    cvFree( &tabCos );
    cvFree( &accum );
}






2010. 4. 6. 23:27 Computer Vision
OpenCV's line-fitting function:

void cvFitLine(const CvArr* points, int dist_type, double param, double reps, double aeps, float* line)

Fits a line to a 2D or 3D point set.

Parameters:
  • points – Sequence or array of 2D or 3D points with 32-bit integer or floating-point coordinates
  • dist_type – The distance used for fitting (see the discussion)
  • param – Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen
  • reps – Sufficient accuracy for the radius (distance between the coordinate origin and the line). 0.01 is a good default value.
  • aeps – Sufficient accuracy for the angle. 0.01 is a good default value.
  • line – The output line parameters. In the case of a 2D fitting, it is an array of 4 floats (vx, vy, x0, y0), where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is some point on the line. In the case of a 3D fitting, it is an array of 6 floats (vx, vy, vz, x0, y0, z0), where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is some point on the line
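
A minimal calling sketch, not from the post: pack 2D points into a CV_32FC2 matrix, fit with the L2 distance, and read back the four floats (the sample points are made up).

#include <cv.h>
#include <stdio.h>

int main()
{
    // four roughly collinear 2D points (made-up sample data)
    CvPoint2D32f points[4] = { {10, 12}, {20, 21}, {30, 29}, {40, 41} };
    CvMat pointMat = cvMat( 1, 4, CV_32FC2, points );

    float line[4]; // (vx, vy, x0, y0)
    cvFitLine( &pointMat, CV_DIST_L2, 0 /* param */, 0.01 /* reps */,
               0.01 /* aeps */, line );

    printf( "direction = (%f, %f), point = (%f, %f)\n",
            line[0], line[1], line[2], line[3] );
    return 0;
}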

ref.
Structural Analysis and Shape Descriptors — OpenCV 2.0 C Reference





2010. 4. 6. 23:26 Computer Vision
overview:
Because of lens distortion, the edge points detected in an image trace a quadratic curve (a straight line, distorted). Only after first obtaining this curve's equation, and then setting the lens-distortion parameter to "0" to get the corresponding straight lines, is the cross-ratio value preserved.

ref.  2010/02/10 - [Visual Information Processing Lab] - Seong-Woo Park & Yongduek Seo & Ki-Sang Hong

swPark_2000rti, p. 440: "The cross-ratio is not preserved for the (image) frame coordinate, positions of the feature points in an image, or for the distorted image coordinate. Cross-ratio is invariant only for the undistorted coordinate."

박승우_1999전자공학회지, p. 96: "When the horizontal and vertical lines that appear as curves are fitted as straight lines, the cross-ratio is not preserved for these lines because of the distortion. Accurate fitting therefore requires fitting to a quadratic curve that takes the lens-distortion parameter (k1) into account, as follows.

Y = a*X + b/(1+k1*R^2) = a*X + b/(1+k1*(X^2+Y^2)) <--- this combines and modifies Eq. (19) of the English paper and Eq. (15) of the Korean paper; needs verification

Fitting this equation gives the coefficients a, b, and k1; setting k1 = 0 then yields the equation of the line through the distortion-corrected points. By comparing the cross-ratios of the lines obtained this way with those of the pattern's horizontal and vertical lines, the lines found in the image can be identified. The feature points in the image can also be obtained accurately as the intersections of the fitted horizontal and vertical lines."
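
Restated in LaTeX (the same model as quoted above, still subject to the caveat that the exact form needs verification), with $R^2 = X^2 + Y^2$:

$$ Y = aX + \frac{b}{1 + k_1 R^2} = aX + \frac{b}{1 + k_1 (X^2 + Y^2)}, \qquad Y\big|_{k_1 = 0} = aX + b, $$

so setting $k_1 = 0$ after the fit recovers the undistorted line.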


That said, the points detected so far on images captured with the current test pattern and code lie almost on straight lines. So let's first try straight-line detection with the OpenCV library's cvHoughLines2() function.

2010/04/07 - [Visual Information Processing Lab] - OpenCV: cvHoughLines2() 연습 코드


1) Intersection test
Test obtaining the pattern grid's corner points from the lines found through line fitting.




It was confirmed that these can be computed in real time without difficulty. A minimal sketch of the underlying intersection computation follows.
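
This sketch (not the post's code) solves for the intersection: each Hough line satisfies rho = x*cos(theta) + y*sin(theta), so two lines give a 2x2 linear system in (x, y).

#include <math.h>

// Returns 0 on success, -1 if the two lines are (nearly) parallel.
int intersectLines( float rho1, float theta1, float rho2, float theta2,
                    float* x, float* y )
{
    // solve: cos(t1)*x + sin(t1)*y = rho1
    //        cos(t2)*x + sin(t2)*y = rho2
    double c1 = cos(theta1), s1 = sin(theta1);
    double c2 = cos(theta2), s2 = sin(theta2);
    double det = c1*s2 - s1*c2;
    if( fabs(det) < 1e-6 ) return -1; // nearly parallel: no stable intersection
    *x = (float)((rho1*s2 - rho2*s1) / det);
    *y = (float)((rho2*c1 - rho1*c2) / det);
    return 0;
}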

2)
However, the problem that the line-fitting results are not clean must be solved first. (This includes tuning the function parameters such as rho, theta, and threshold, and processing the data to suit the webcam in use.)

The results produced by the current code are summarized below.

Taking the result images ("iplEdgeX" and "iplEdgeY"), in which points on horizontal or vertical lines were extracted separately during NMS, as the input to cvHoughLines2(), and giving the parameter values
double rho = 1.0; // distance resolution in pixel-related units
double theta = 1.0; // angle resolution measured in radians
int threshold = 20; // (A line is returned by the function if the corresponding accumulator value is greater than threshold)
the detected lines and the intersections computed from them appear as follows.

Image with only the edges on vertical lines detected

Image with only the edges on horizontal lines detected

Result of line fitting via the Hough transform



Taking the images ("iplDoGx" and "iplDoGy") produced by applying the first-order DoG filter along the x and y directions of the image frame (before Non-Maximal Suppression (NMS)) as the input to cvHoughLines2(), and giving the parameter values
double rho = 1.0; // distance resolution in pixel-related units
double theta = 1.0; // angle resolution measured in radians
int threshold = 20; // (A line is returned by the function if the corresponding accumulator value is greater than threshold)
the detected lines and the intersections computed from them appear as follows.

Image with the DoG filter applied in the x direction

Image with the DoG filter applied in the y direction

Result of line fitting via the Hough transform




So... the reason several lines are found from points that actually lie on a single line seems to be that the intensity of the pixels detected as edge points (shown in white) is weak or uneven. Binarizing the input image and increasing threshold, the cvHoughLines2() parameter that sets the cutoff on the accumulator value, should help.



Try #1. Binarize the input image

Binarizing the result images ("iplEdgeX" and "iplEdgeY"), in which points on horizontal or vertical lines were extracted separately during NMS, and giving the parameter values
double rho = 1.0; // distance resolution in pixel-related units
double theta = 1.0; // angle resolution measured in radians
int threshold = 40; // ("A line is returned by the function if the corresponding accumulator value is greater than threshold.")
the detected lines and the intersections computed from them appear as follows.

Binarized image with only the edges on vertical lines detected

Binarized image with only the edges on horizontal lines detected

Result of line fitting via the Hough transform


The frequency of several lines being detected for a single actual line drops sharply, but in exchange some actual lines are no longer detected.


Try #2. Preprocess the line-fitting input image & tune the parameters



Try #3. Merge lines that are actually one but come out as several overlapping detections, by averaging them (a sketch follows the output below)

For the following input image, printing the parameter values (rho and theta) that define the detected lines gives:

# of found lines = 8 vertical   22 horizontal
vertical
rho = 172.6    theta = 0
rho = 133    theta = 0.139626
rho = -240.2    theta = 2.84489
rho = -209    theta = 2.98451
rho = 91.8    theta = 0.279253
rho = 173.8    theta = 0
rho = 52.6    theta = 0.401426
rho = 53.8    theta = 0.418879
horizontal
rho = 81    theta = 1.55334
rho = 53.4    theta = 1.55334
rho = 155    theta = 1.55334
rho = 114.6    theta = 1.55334
rho = 50.6    theta = 1.5708
rho = 29.8    theta = 1.55334
rho = 76.6    theta = 1.5708
rho = 112.6    theta = 1.5708
rho = 9.8    theta = 1.55334
rho = 152.6    theta = 1.5708
rho = 153.8    theta = 1.5708
rho = 150.6    theta = 1.5708
rho = 6.6    theta = 1.5708
rho = 78.6    theta = 1.5708
rho = 205.4    theta = 1.55334
rho = 27.8    theta = 1.5708
rho = 8.6    theta = 1.5708
rho = 201.8    theta = 1.5708
rho = 110.6    theta = 1.5708
rho = 49.8    theta = 1.5708
rho = 48.6    theta = 1.5708
rho = 111.8    theta = 1.5708
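
A minimal sketch of the merging idea in Try #3, not the post's code: group detected lines whose (rho, theta) lie within small tolerances and replace each group by its average. Note from the vertical list above that the same physical line can appear as (rho > 0, theta near 0) and (rho < 0, theta near pi), so a complete solution would normalize (rho, theta) before comparing; the tolerances here are assumptions to tune.

#include <math.h>

#define MAX_LINES 64

// lines: n (rho, theta) pairs, n <= MAX_LINES; merged lines are written back
// in place. Returns the number of lines after merging.
int mergeLines( float lines[][2], int n, float rhoTol, float thetaTol )
{
    int used[MAX_LINES] = {0};
    int m = 0;
    for( int i = 0; i < n; i++ )
    {
        if( used[i] ) continue;
        float sumRho = lines[i][0], sumTheta = lines[i][1];
        int count = 1;
        for( int j = i + 1; j < n; j++ )
        {
            if( !used[j]
                && fabs(lines[j][0] - lines[i][0]) < rhoTol
                && fabs(lines[j][1] - lines[i][1]) < thetaTol )
            {
                sumRho += lines[j][0]; sumTheta += lines[j][1];
                used[j] = 1; count++;
            }
        }
        lines[m][0] = sumRho / count;   // averaged rho
        lines[m][1] = sumTheta / count; // averaged theta
        m++;
    }
    return m;
}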





A quick snapshot of the current state: testing the grid pattern based on cross-ratios together with line detection, plus testing intersection finding and index assignment for feature-point matching.


to do next:
1) correct the errors in line detection
2) index the intersections (each defined by rhoX, thetaX, rhoY, thetaY)



2010. 4. 4. 17:33 Computer Vision
Finding the corner points of the pattern grid

ref. swPark_2000rti.pdf

2010/04/04 - [Visual Information Processing Lab] - OpenCV: cvFilter2D() 연습 코드
2010/04/03 - [Visual Information Processing Lab] - Image Filtering



1) Build the first-order DoG filter: find the local maxima along the x and y directions.

swPark_2000rti.pdf, p. 440:
"To find the edge of the grid, a first-order Derivative of Gaussian (DoG) filter with a kernel h = [-1, -7, -15, 0, 15, 7, 1] is used."




First-order DoG filter test:

Input image

Grayscale image

Image after first-order DoG filtering in the horizontal direction

Image after first-order DoG filtering in the vertical direction





2) Two problems, which show up more clearly in the following images, must be solved:

Input image

Grayscale image

Image after DoG filtering in the x direction

Image after DoG filtering in the y direction


(1) Edges are detected only on black-to-white transitions of the pattern grid (white-to-black transitions are missed).
(2) The detected edge regions are too thick (Non-Maxima Suppression is said to be needed).


Wondering whether I could see how the filtering function actually writes (color/intensity) values into the image, I looked up the definition of cvFilter2D() and found the following part.

opencv/opencv/src/cv/cvfilter.cpp
CV_IMPL void
cvFilter2D( const CvArr* srcarr, CvArr* dstarr, const CvMat* _kernel, CvPoint anchor )
{
    cv::Mat src = cv::cvarrToMat(srcarr), dst = cv::cvarrToMat(dstarr);
    cv::Mat kernel = cv::cvarrToMat(_kernel);

    CV_Assert( src.size() == dst.size() && src.channels() == dst.channels() );

    cv::filter2D( src, dst, dst.depth(), kernel, anchor, 0, cv::BORDER_REPLICATE );
}

본론인 "cv::filter2D"에 대하여 같은 파일 안에 다음의 정의가 있음.

template<typename ST, class CastOp, class VecOp> struct Filter2D : public BaseFilter
{
    typedef typename CastOp::type1 KT;
    typedef typename CastOp::rtype DT;
   
    Filter2D( const Mat& _kernel, Point _anchor,
        double _delta, const CastOp& _castOp=CastOp(),
        const VecOp& _vecOp=VecOp() )
    {
        anchor = _anchor;
        ksize = _kernel.size();
        delta = saturate_cast<KT>(_delta);
        castOp0 = _castOp;
        vecOp = _vecOp;
        CV_Assert( _kernel.type() == DataType<KT>::type );
        preprocess2DKernel( _kernel, coords, coeffs );
        ptrs.resize( coords.size() );
    }

    void operator()(const uchar** src, uchar* dst, int dststep, int count, int width, int cn)
    {
        KT _delta = delta;
        const Point* pt = &coords[0];
        const KT* kf = (const KT*)&coeffs[0];
        const ST** kp = (const ST**)&ptrs[0];
        int i, k, nz = (int)coords.size();
        CastOp castOp = castOp0;

        width *= cn;
        for( ; count > 0; count--, dst += dststep, src++ )
        {
            DT* D = (DT*)dst;

            for( k = 0; k < nz; k++ )
                kp[k] = (const ST*)src[pt[k].y] + pt[k].x*cn;

            i = vecOp((const uchar**)kp, dst, width);

            for( ; i <= width - 4; i += 4 )
            {
                KT s0 = _delta, s1 = _delta, s2 = _delta, s3 = _delta;

                for( k = 0; k < nz; k++ )
                {
                    const ST* sptr = kp[k] + i;
                    KT f = kf[k];
                    s0 += f*sptr[0];
                    s1 += f*sptr[1];
                    s2 += f*sptr[2];
                    s3 += f*sptr[3];
                }

                D[i] = castOp(s0); D[i+1] = castOp(s1);
                D[i+2] = castOp(s2); D[i+3] = castOp(s3);
            }

            for( ; i < width; i++ )
            {
                KT s0 = _delta;
                for( k = 0; k < nz; k++ )
                    s0 += kf[k]*kp[k][i];
                D[i] = castOp(s0);
            }
        }
    }

    Vector<Point> coords;
    Vector<uchar> coeffs;
    Vector<uchar*> ptrs;
    KT delta;
    CastOp castOp0;
    VecOp vecOp;
};



Try #1.
-1) Change the bit depth of the filter's result image from "8" to "IPL_DEPTH_32F", then turn the negative gradient values into positive ones (a sketch follows the definitions below).
However, the memory holding the input image is not allocated separately; it is produced at video-frame capture as follows, so its depth cannot be set by hand:
iplInput = cvRetrieveFrame(capture);

So cvConvert() is used. Its definition is as follows.

OpenCV.framework/Versions/A/Headers/cxcore.h
#define cvConvert( src, dst )  cvConvertScale( (src), (dst), 1, 0 )

void cvConvertScale(const CvArr* src, CvArr* dst, double scale=1, double shift=0)

Converts one array to another with optional linear transformation.

#define cvCvtScale cvConvertScale
#define cvScale cvConvertScale
#define cvConvert(src, dst ) cvConvertScale((src), (dst), 1, 0 )
Parameters:
  • src – Source array
  • dst – Destination array
  • scale – Scale factor
  • shift – Value added to the scaled source array elements
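
A minimal sketch of step -1), with assumed names; cvAbs() is one way to make the negative values positive (it is cvAbsDiffS() against zero), and cvConvert() then brings the result back to 8 bits:

cvAbs( iplDoGx, iplDoGx );  // negative gradient values -> positive
IplImage* iplShow = cvCreateImage( cvGetSize(iplDoGx), 8, 1 );
cvConvert( iplDoGx, iplShow ); // 32F -> 8-bit for display / later stages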


-2) Non Maximum Suppression (NMS)
Compare the intensity values of neighboring pixels and zero out a pixel that is not the maximum (a minimal sketch follows).
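
A minimal sketch of the NMS step, not the post's code: keep only the pixels of a 32-bit float gradient image that are local maxima of their left/right neighbors (the x-direction case; the y-direction case compares the neighbors above and below).

#include <cv.h>

void nmsAlongX( const IplImage* grad, IplImage* out /* both IPL_DEPTH_32F, 1 channel */ )
{
    cvZero( out );
    for( int y = 0; y < grad->height; y++ )
        for( int x = 1; x < grad->width - 1; x++ )
        {
            float c = CV_IMAGE_ELEM( grad, float, y, x );
            if( c >= CV_IMAGE_ELEM( grad, float, y, x - 1 ) &&
                c >= CV_IMAGE_ELEM( grad, float, y, x + 1 ) )
                CV_IMAGE_ELEM( out, float, y, x ) = c; // keep local maxima only
        }
}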



Input image

Image after x-direction DoG filtering and NMS

Image after y-direction DoG filtering and NMS





Szeliski, Computer Vision: Algorithms and Applications, p. 214



Szeliski, Computer Vision: Algorithms and Applications, p. 215

ref.
http://research.microsoft.com/en-us/um/people/szeliski/Book/
2010/04/06 - [Visual Information Processing Lab] - Non Maximum Suppression (NMS)


 
Try #2.
Use a Sobel mask.



3) Determining the gradient direction

Compare the absolute values of Gx and Gy of each detected edge to decide whether its direction is vertical or horizontal. From this, classify whether the point lies on a horizontal line or a vertical line, and apply that in the next stage, line fitting (a minimal sketch follows the reference below).

ref.  2010/02/23 - [Visual Information Processing Lab] - virtual studio 구현: workflow
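
A minimal sketch of the direction test, with assumed names (which of "iplEdgeX"/"iplEdgeY" holds which direction is my guess): a point with |Gx| > |Gy| lies on a (near-)vertical line, and vice versa.

#include <cv.h>
#include <math.h>

void splitByDirection( const IplImage* iplDoGx, const IplImage* iplDoGy,
                       IplImage* iplEdgeY /* edges on vertical lines */,
                       IplImage* iplEdgeX /* edges on horizontal lines */ )
{
    cvZero( iplEdgeX ); cvZero( iplEdgeY ); // all images IPL_DEPTH_32F, 1 channel
    for( int y = 0; y < iplDoGx->height; y++ )
        for( int x = 0; x < iplDoGx->width; x++ )
        {
            float gx = CV_IMAGE_ELEM( iplDoGx, float, y, x );
            float gy = CV_IMAGE_ELEM( iplDoGy, float, y, x );
            if( fabs(gx) > fabs(gy) )  // strong x-gradient -> vertical line
                CV_IMAGE_ELEM( iplEdgeY, float, y, x ) = (float)fabs(gx);
            else                       // otherwise -> horizontal line
                CV_IMAGE_ELEM( iplEdgeX, float, y, x ) = (float)fabs(gy);
        }
}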







Input image

Image with only the edges on vertical lines detected

Image with only the edges on horizontal lines detected




Input image

Image with only the edges on vertical lines detected

Image with only the edges on horizontal lines detected


Works well ^^

2010. 4. 4. 00:00 Computer Vision
http://en.wikipedia.org/wiki/Convolution

void cvFilter2D(const CvArr* src, CvArr* dst, const CvMat* kernel, CvPoint anchor=cvPoint(-1, -1))

Convolves an image with the kernel.

Parameters:
  • src – The source image
  • dst – The destination image
  • kernel – Convolution kernel, a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using Split and process them individually
  • anchor – The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center



elt3470@naver: By entering any matrix you want as the kernel, you can design and use your own LPF, HPF, and so on.

=> Therefore, a DoG (Derivative of Gaussian) filter can also be built and plugged in.



For example, building a 5x5 Gaussian kernel and filtering with it smooths the image as shown below; a minimal sketch follows.
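
A minimal sketch with an assumed kernel: the common binomial approximation of a 5x5 Gaussian (the outer product of [1 4 6 4 1] with itself, normalized by 256), applied via cvFilter2D():

#include <cv.h>

void gaussianSmooth( const IplImage* src, IplImage* dst /* same size/channels */ )
{
    const float b[5] = { 1, 4, 6, 4, 1 };
    float g[25];
    for( int i = 0; i < 5; i++ )
        for( int j = 0; j < 5; j++ )
            g[i*5 + j] = b[i] * b[j] / 256.0f; // all 25 weights sum to 1
    CvMat kernel = cvMat( 5, 5, CV_32FC1, g );
    cvFilter2D( src, dst, &kernel, cvPoint(-1,-1) );
}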

ref. 
2010/04/03 - [Visual Information Processing Lab] - Image Filtering





Input image

Grayscale image

Image after 2-D 5x5 Gaussian convolution (smoothing)



2010. 4. 3. 23:13 Computer Vision
Richard Szeliski, Computer Vision: Algorithms and Applications: 3.2 Linear Filtering


Richard Szeliski, Computer Vision: Algorithms and Applications, p. 115 (Fig. 3.13)





Image Filtering — OpenCV 2.0 C Reference


2010. 4. 3. 22:27 Computer Vision
OpenCV 2.0 C++ reference - Image Filtering


void cvSobel(const CvArr* src, CvArr* dst, int xorder, int yorder, int apertureSize=3)

Calculates the first, second, third or mixed image derivatives using an extended Sobel operator.

Parameters:
  • src – Source image of type CvArr*
  • dst – Destination image
  • xorder – Order of the derivative x
  • yorder – Order of the derivative y
  • apertureSize – Size of the extended Sobel kernel, must be 1, 3, 5 or 7


As a simple example, apply the following (first-derivative, size-3) masks:

x-direction Sobel mask:

$\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}$


y-direction Sobel mask:

$\begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$
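
A minimal sketch of the two cvSobel() calls behind the filtered images below (names assumed). For an 8-bit input the destinations should be IPL_DEPTH_16S or IPL_DEPTH_32F so the negative responses are not clipped.

#include <cv.h>

void sobelXY( const IplImage* gray, IplImage* gradX, IplImage* gradY )
{
    cvSobel( gray, gradX, 1, 0, 3 ); // first derivative in x, 3x3 kernel
    cvSobel( gray, gradY, 0, 1, 3 ); // first derivative in y, 3x3 kernel
}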






Input image

Grayscale image

Image filtered with the x-direction Sobel mask

Image filtered with the y-direction Sobel mask




Input image

Grayscale image

Image filtered with the x-direction Sobel mask

Image filtered with the y-direction Sobel mask




2010. 4. 2. 19:39 Computer Vision
Reading color values from an IplImage image matrix

/* get reference to pixel at (col,row),
   for multi-channel images (col) should be multiplied by number of channels */
#define CV_IMAGE_ELEM( image, elemtype, row, col )       \
    (((elemtype*)((image)->imageData + (image)->widthStep*(row)))[(col)])
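
A minimal usage sketch: reading the B, G, R values of one pixel of an 8-bit, 3-channel IplImage, multiplying the column index by the number of channels as the comment above says.

#include <cv.h>
#include <stdio.h>

void printPixel( const IplImage* img /* 8-bit, 3 channels */, int row, int col )
{
    uchar b = CV_IMAGE_ELEM( img, uchar, row, col*3 + 0 );
    uchar g = CV_IMAGE_ELEM( img, uchar, row, col*3 + 1 );
    uchar r = CV_IMAGE_ELEM( img, uchar, row, col*3 + 2 );
    printf( "BGR at (%d,%d) = (%d, %d, %d)\n", col, row, b, g, r );
}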

2010. 3. 31. 20:44 Computer Vision
C. Harris and M.J. Stephens. A combined corner and edge detector. In Alvey Vision Conference, pages 147–152, 1988.



OpenCV: cvCornerHarris()

2010. 3. 31. 16:58 Computer Vision
OpenCV's Canny edge detection function:

void cvCanny(const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3)

Implements the Canny algorithm for edge detection.

Parameters:
  • image – Single-channel input image
  • edges – Single-channel image to store the edges found by the function
  • threshold1 – The first threshold
  • threshold2 – The second threshold
  • aperture_size – Aperture parameter for the Sobel operator (see Sobel)


Since the input and output images of cvCanny() must be single-channel, color video input must first be converted to a grayscale image (a sketch of the two calls follows the cvCvtColor reference below).

void cvCvtColor(const CvArr* src, CvArr* dst, int code)

Converts an image from one color space to another.

Parameters:
  • src – The source 8-bit (8u), 16-bit (16u) or single-precision floating-point (32f) image
  • dst – The destination image of the same data type as the source. The number of channels may be different
  • code – Color conversion operation that can be specified using CV_ *src_color_space* 2 *dst_color_space* constants (see below)
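
A minimal sketch of the two calls (the threshold values are assumptions to tune):

#include <cv.h>

void cannyEdges( const IplImage* bgr, IplImage* gray, IplImage* edges /* 8-bit, 1 channel */ )
{
    cvCvtColor( bgr, gray, CV_BGR2GRAY ); // cvCanny() needs a single channel
    cvCanny( gray, edges, 50, 150, 3 );   // threshold1, threshold2, aperture_size
}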





Input image

Grayscale image

Edge-detection result (Canny algorithm)




cf.
2010/03/30 - [Visual Information Processing Lab] - Canny edge detection
cv. Image Processing and Computer Vision

2010. 3. 30. 21:05 Computer Vision
Canny algorithm for edge detection


Canny, J. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 8(6).



The Hypermedia Image Processing Reference - Feature Detectors - Canny Edge Detector

OpenCV: cvCanny()

2010. 3. 17. 00:52 Computer Vision



console:
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
*** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 159.250000 131.750000 104.800000 1.012757
350.47574 0.00000 158.25000 0.00000
0.00000 363.04709 120.75000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
Opening Data File Data/object_data2
About to load 2 Models
Read in No.1
Read in No.2
Objectfile num = 2
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(21.470911)
xdiv2(sqrt(lx1)) =  16(10.816654), ydiv2(sqrt(lx2)) =  16(10.770330)
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(10.770330)
xdiv2(sqrt(lx1)) =  64(83.815273), ydiv2(sqrt(lx2)) =  32(61.400326)
xdiv2(sqrt(lx1)) =  16(23.706539), ydiv2(sqrt(lx2)) =  16(4.472136)
camera transformation: 47.526724  72.503504  361.557196
camera transformation: 48.037951  72.822603  363.026011
camera transformation: 48.046785  72.797217  362.906210
xdiv2(sqrt(lx1)) =  16(21.954498), ydiv2(sqrt(lx2)) =  16(6.082763)
xdiv2(sqrt(lx1)) =  64(89.358827), ydiv2(sqrt(lx2)) =  64(87.367042)
camera transformation: 29.765561  60.651283  316.943385
xdiv2(sqrt(lx1)) =  64(93.193347), ydiv2(sqrt(lx2)) =  64(92.135769)
camera transformation: 29.336377  31.258768  308.552913
camera transformation: 29.326996  31.265709  308.560060
camera transformation: 29.317593  31.272594  308.567678
camera transformation: 29.308167  31.279422  308.575767
camera transformation: 29.300434  31.279400  308.471567
camera transformation: 29.294517  31.279010  308.386048
camera transformation: 29.290911  31.281587  308.389341
camera transformation: 29.289106  31.282873  308.391013
camera transformation: 29.287903  31.283729  308.392137
camera transformation: 29.286700  31.284584  308.393268
xdiv2(sqrt(lx1)) =  64(95.084173), ydiv2(sqrt(lx2)) =  64(93.048375)
camera transformation: 26.966042  18.136324  304.545556
camera transformation: 26.975528  18.123378  304.876362
camera transformation: 26.974050  18.123228  304.940890
camera transformation: 26.972437  18.124391  304.943633
camera transformation: 26.971361  18.125165  304.945467
camera transformation: 26.808263  18.230519  305.010947
xdiv2(sqrt(lx1)) =  64(95.084173), ydiv2(sqrt(lx2)) =  64(94.047860)
camera transformation: 26.002414  9.524376  302.800215
xdiv2(sqrt(lx1)) =  64(94.085068), ydiv2(sqrt(lx2)) =  64(95.047357)
camera transformation: 26.413729  0.689117  303.645529
xdiv2(sqrt(lx1)) =  64(94.085068), ydiv2(sqrt(lx2)) =  64(93.048375)
camera transformation: 25.907495  -6.101957  305.616547
xdiv2(sqrt(lx1)) =  64(92.135769), ydiv2(sqrt(lx2)) =  64(94.132885)
camera transformation: 25.412818  -5.909871  306.780215
xdiv2(sqrt(lx1)) =  64(91.350972), ydiv2(sqrt(lx2)) =  64(93.343452)
camera transformation: 25.781305  2.094997  307.981181
xdiv2(sqrt(lx1)) =  64(92.913939), ydiv2(sqrt(lx2)) =  64(93.648278)
camera transformation: 27.438552  14.266152  310.693037
camera transformation: 27.291663  13.634920  311.326695
xdiv2(sqrt(lx1)) =  64(93.557469), ydiv2(sqrt(lx2)) =  64(93.059121)
camera transformation: 29.431110  30.722648  311.787341
camera transformation: 29.182398  30.236816  310.084471
xdiv2(sqrt(lx1)) =  64(92.417531), ydiv2(sqrt(lx2)) =  64(89.627005)
camera transformation: 33.543470  46.722727  318.630344
camera transformation: 33.279319  46.231533  316.701553
camera transformation: 33.218193  46.126294  316.268524
xdiv2(sqrt(lx1)) =  64(93.770998), ydiv2(sqrt(lx2)) =  64(94.762862)
camera transformation: 22.185723  48.702070  301.635290
xdiv2(sqrt(lx1)) =  64(91.350972), ydiv2(sqrt(lx2)) =  64(91.350972)
camera transformation: 31.843662  22.584935  312.932857
xdiv2(sqrt(lx1)) =  64(87.281155), ydiv2(sqrt(lx2)) =  64(86.284413)
camera transformation: 45.529216  19.369259  325.624182
xdiv2(sqrt(lx1)) =  64(81.614950), ydiv2(sqrt(lx2)) =  64(80.752709)
camera transformation: 73.852204  34.314748  342.625602
camera transformation: 73.974162  34.334751  343.237999
camera transformation: 73.993076  34.340643  343.349751
xdiv2(sqrt(lx1)) =  64(78.917679), ydiv2(sqrt(lx2)) =  64(77.103826)
camera transformation: 88.645309  47.349086  356.684110
camera transformation: 88.758164  47.400553  357.231157
camera transformation: 88.695447  47.401908  357.055015
camera transformation: 88.700800  47.407119  357.092444
camera transformation: 88.706174  47.412326  357.129963
xdiv2(sqrt(lx1)) =  64(76.485293), ydiv2(sqrt(lx2)) =  64(75.504967)
camera transformation: 95.125085  56.840372  365.471078
xdiv2(sqrt(lx1)) =  64(74.732858), ydiv2(sqrt(lx2)) =  64(73.756356)
camera transformation: 101.360782  64.847700  374.473468
xdiv2(sqrt(lx1)) =  64(74.953319), ydiv2(sqrt(lx2)) =  64(72.780492)
camera transformation: 97.170289  68.278512  376.906534
xdiv2(sqrt(lx1)) =  64(75.927597), ydiv2(sqrt(lx2)) =  64(73.756356)
camera transformation: 86.011813  69.023996  372.861548
camera transformation: 86.057924  69.050586  373.066477
camera transformation: 86.063062  69.057045  373.103339
camera transformation: 86.076876  69.074292  373.202098
xdiv2(sqrt(lx1)) =  64(76.687678), ydiv2(sqrt(lx2)) =  64(73.545904)
camera transformation: 69.732429  66.291612  368.969634
xdiv2(sqrt(lx1)) =  64(79.624117), ydiv2(sqrt(lx2)) =  64(74.148500)
camera transformation: 46.840577  63.628929  363.768160
camera transformation: 46.840007  63.632425  363.790938
camera transformation: 46.839450  63.635925  363.813810
camera transformation: 46.838907  63.639426  363.836775
camera transformation: 46.837526  63.648777  363.898466
camera transformation: 46.974486  63.853253  365.578761
xdiv2(sqrt(lx1)) =  64(79.429214), ydiv2(sqrt(lx2)) =  64(74.813100)
camera transformation: 23.081293  61.745840  363.657219
xdiv2(sqrt(lx1)) =  64(78.089692), ydiv2(sqrt(lx2)) =  64(75.538070)
camera transformation: 8.900281  62.544139  365.768910
xdiv2(sqrt(lx1)) =  64(77.103826), ydiv2(sqrt(lx2)) =  64(74.813100)
camera transformation: 0.378742  64.940559  369.306745
xdiv2(sqrt(lx1)) =  64(75.953933), ydiv2(sqrt(lx2)) =  64(73.681748)
camera transformation: -6.822503  69.683364  373.194852
xdiv2(sqrt(lx1)) =  64(74.330344), ydiv2(sqrt(lx2)) =  64(72.560320)
camera transformation: -9.914774  75.492749  381.592839
camera transformation: -9.924722  75.520509  381.798228
camera transformation: -9.924512  75.513877  381.751701
xdiv2(sqrt(lx1)) =  64(102.420701), ydiv2(sqrt(lx2)) =  64(105.546198)
camera transformation: 25.643794  -12.219666  274.388407
xdiv2(sqrt(lx1)) =  64(101.271911), ydiv2(sqrt(lx2)) =  64(104.235311)
camera transformation: 28.719062  -28.140558  278.831061
xdiv2(sqrt(lx1)) =  64(101.271911), ydiv2(sqrt(lx2)) =  64(102.420701)
camera transformation: 29.939512  -32.147970  280.821053
xdiv2(sqrt(lx1)) =  64(101.434708), ydiv2(sqrt(lx2)) =  64(102.420701)
camera transformation: 30.361984  -29.717296  279.929286
xdiv2(sqrt(lx1)) =  64(101.434708), ydiv2(sqrt(lx2)) =  64(103.406963)
camera transformation: 30.174193  -29.017554  279.290796
xdiv2(sqrt(lx1)) =  64(101.788997), ydiv2(sqrt(lx2)) =  64(102.591423)
camera transformation: 32.117728  -25.070535  281.685938
xdiv2(sqrt(lx1)) =  64(100.662803), ydiv2(sqrt(lx2)) =  64(97.862148)
camera transformation: 38.831844  -18.496300  291.887044
camera transformation: 38.841845  -18.503971  291.843948
camera transformation: 38.843803  -18.505502  291.835079
camera transformation: 38.843753  -18.505592  291.835153
camera transformation: 38.843704  -18.505682  291.835227
camera transformation: 37.980938  -19.410609  288.045910
xdiv2(sqrt(lx1)) =  64(97.267672), ydiv2(sqrt(lx2)) =  64(89.050547)
camera transformation: 46.227197  -18.491398  306.727155
camera transformation: 45.798640  -18.550941  304.551447
camera transformation: 45.717704  -18.562751  304.060951
camera transformation: 45.708703  -18.567481  303.930300
camera transformation: 45.713116  -18.570736  303.834144
camera transformation: 45.886030  -18.573805  303.828460
xdiv2(sqrt(lx1)) =  64(94.371606), ydiv2(sqrt(lx2)) =  64(84.929382)
camera transformation: 51.491972  -21.777249  314.936856
camera transformation: 51.397513  -21.778709  314.409264
camera transformation: 51.400957  -21.778955  314.338457
xdiv2(sqrt(lx1)) =  64(90.801982), ydiv2(sqrt(lx2)) =  64(82.000000)
camera transformation: 55.475986  -19.223639  325.819428
xdiv2(sqrt(lx1)) =  64(86.977008), ydiv2(sqrt(lx2)) =  64(78.102497)
camera transformation: 58.131599  -13.545250  340.719883
camera transformation: 58.098739  -13.550505  340.466506
camera transformation: 58.097072  -13.551186  340.341751
xdiv2(sqrt(lx1)) =  64(84.118963), ydiv2(sqrt(lx2)) =  64(75.186435)
camera transformation: 61.767460  -6.498478  353.644297
xdiv2(sqrt(lx1)) =  64(82.219219), ydiv2(sqrt(lx2)) =  64(73.246160)
camera transformation: 63.582900  -1.431624  362.009189
xdiv2(sqrt(lx1)) =  64(81.271151), ydiv2(sqrt(lx2)) =  64(71.309186)
camera transformation: 64.270934  0.570528  368.182904
xdiv2(sqrt(lx1)) =  64(79.056942), ydiv2(sqrt(lx2)) =  64(70.342022)
camera transformation: 64.828170  2.674645  373.263186
xdiv2(sqrt(lx1)) =  64(79.056942), ydiv2(sqrt(lx2)) =  64(70.092796)
camera transformation: 67.302390  5.675445  377.309481
xdiv2(sqrt(lx1)) =  64(78.108898), ydiv2(sqrt(lx2)) =  64(69.123079)
camera transformation: 68.347540  8.393469  381.873230
xdiv2(sqrt(lx1)) =  64(77.162167), ydiv2(sqrt(lx2)) =  64(68.154237)
camera transformation: 69.042038  10.265904  387.758075
xdiv2(sqrt(lx1)) =  64(74.946648), ydiv2(sqrt(lx2)) =  64(66.219333)
camera transformation: 70.165458  11.984999  395.261733
xdiv2(sqrt(lx1)) =  64(73.681748), ydiv2(sqrt(lx2)) =  64(64.031242)
camera transformation: 68.526903  12.455397  403.733036
xdiv2(sqrt(lx1)) =  64(72.422372), ydiv2(sqrt(lx2)) =  32(62.817195)
camera transformation: 67.963463  14.354779  413.581813
xdiv2(sqrt(lx1)) =  64(70.519501), ydiv2(sqrt(lx2)) =  32(60.876925)
camera transformation: 66.137421  17.054604  422.936929
xdiv2(sqrt(lx1)) =  64(69.570109), ydiv2(sqrt(lx2)) =  32(59.908263)
camera transformation: 63.968950  18.263953  431.720565
xdiv2(sqrt(lx1)) =  64(67.052218), ydiv2(sqrt(lx2)) =  32(57.974132)
camera transformation: 62.462242  20.120503  439.513696
xdiv2(sqrt(lx1)) =  64(66.098411), ydiv2(sqrt(lx2)) =  32(57.723479)
camera transformation: 61.941764  22.692295  449.407070
xdiv2(sqrt(lx1)) =  64(65.145990), ydiv2(sqrt(lx2)) =  32(55.785303)
camera transformation: 59.806257  24.724058  458.476846
xdiv2(sqrt(lx1)) =  32(62.936476), ydiv2(sqrt(lx2)) =  32(54.817880)
camera transformation: 59.359972  28.938997  468.908640
xdiv2(sqrt(lx1)) =  32(61.983869), ydiv2(sqrt(lx2)) =  32(53.600373)
camera transformation: 59.681297  34.221026  480.057287
xdiv2(sqrt(lx1)) =  32(61.032778), ydiv2(sqrt(lx2)) =  32(51.662365)
camera transformation: 59.187204  39.293310  491.540486
xdiv2(sqrt(lx1)) =  32(59.135438), ydiv2(sqrt(lx2)) =  32(51.662365)
camera transformation: 61.198962  45.440910  500.587979
xdiv2(sqrt(lx1)) =  32(57.567352), ydiv2(sqrt(lx2)) =  32(49.729267)
camera transformation: 65.380338  52.172304  509.014661
xdiv2(sqrt(lx1)) =  32(56.920998), ydiv2(sqrt(lx2)) =  32(49.729267)
camera transformation: 75.377339  53.673070  514.052120
xdiv2(sqrt(lx1)) =  32(58.821765), ydiv2(sqrt(lx2)) =  32(51.419841)
camera transformation: 93.700312  52.879497  501.357587
xdiv2(sqrt(lx1)) =  32(60.728906), ydiv2(sqrt(lx2)) =  32(53.366656)
camera transformation: 110.786936  47.137332  484.639194
xdiv2(sqrt(lx1)) =  32(62.641839), ydiv2(sqrt(lx2)) =  32(54.129474)
camera transformation: 124.246028  41.597219  472.274088
xdiv2(sqrt(lx1)) =  64(64.560050), ydiv2(sqrt(lx2)) =  32(55.317267)
camera transformation: 135.008844  36.708725  456.711952
xdiv2(sqrt(lx1)) =  64(67.446275), ydiv2(sqrt(lx2)) =  32(57.070132)
camera transformation: 137.579461  31.874597  442.777551
xdiv2(sqrt(lx1)) =  64(67.446275), ydiv2(sqrt(lx2)) =  32(59.228372)
camera transformation: 127.910406  27.598463  434.634811
xdiv2(sqrt(lx1)) =  64(68.410526), ydiv2(sqrt(lx2)) =  32(60.207973)
camera transformation: 112.249511  24.013273  434.361426
xdiv2(sqrt(lx1)) =  64(66.940272), ydiv2(sqrt(lx2)) =  32(59.439044)
camera transformation: 96.317479  19.783530  438.720285
xdiv2(sqrt(lx1)) =  64(64.761099), ydiv2(sqrt(lx2)) =  32(58.463664)
camera transformation: 79.445565  20.994126  448.287909
xdiv2(sqrt(lx1)) =  32(62.817195), ydiv2(sqrt(lx2)) =  32(57.271284)
camera transformation: 66.537571  25.965808  468.052644
xdiv2(sqrt(lx1)) =  32(58.463664), ydiv2(sqrt(lx2)) =  32(53.366656)
camera transformation: 57.936429  29.645999  492.716628
xdiv2(sqrt(lx1)) =  32(58.249464), ydiv2(sqrt(lx2)) =  32(51.971146)
camera transformation: 54.637770  34.427739  505.420015
xdiv2(sqrt(lx1)) =  32(56.515485), ydiv2(sqrt(lx2)) =  32(50.990195)
camera transformation: 55.090245  39.330479  515.034782
xdiv2(sqrt(lx1)) =  32(55.317267), ydiv2(sqrt(lx2)) =  32(50.009999)
camera transformation: 52.836001  41.933900  528.054997
xdiv2(sqrt(lx1)) =  32(54.129474), ydiv2(sqrt(lx2)) =  32(48.052055)
camera transformation: 46.875504  45.714529  537.427256
xdiv2(sqrt(lx1)) =  32(53.366656), ydiv2(sqrt(lx2)) =  32(48.052055)
camera transformation: 42.737270  51.235894  545.906058
xdiv2(sqrt(lx1)) =  32(52.392748), ydiv2(sqrt(lx2)) =  32(47.074409)
camera transformation: 42.607019  57.505001  557.305403
xdiv2(sqrt(lx1)) =  32(52.392748), ydiv2(sqrt(lx2)) =  32(45.891176)
camera transformation: 42.706395  61.832306  564.125210
xdiv2(sqrt(lx1)) =  32(51.419841), ydiv2(sqrt(lx2)) =  32(45.122057)
camera transformation: 44.569870  65.995536  569.286337
xdiv2(sqrt(lx1)) =  32(50.447993), ydiv2(sqrt(lx2)) =  32(45.122057)
camera transformation: 45.407322  69.355797  574.205444
xdiv2(sqrt(lx1)) =  32(50.447993), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 46.935351  72.947170  579.393615
xdiv2(sqrt(lx1)) =  32(49.729267), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 46.394931  76.412312  587.196024
xdiv2(sqrt(lx1)) =  32(49.729267), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 45.200327  81.248592  595.161397
xdiv2(sqrt(lx1)) =  32(49.040799), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 42.080958  89.240594  602.831063
xdiv2(sqrt(lx1)) =  32(48.083261), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 38.516671  97.426379  610.670686
xdiv2(sqrt(lx1)) =  32(46.840154), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 32.168943  103.347263  617.259159
xdiv2(sqrt(lx1)) =  32(45.880279), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 26.487904  108.409702  625.759269
xdiv2(sqrt(lx1)) =  32(45.880279), ydiv2(sqrt(lx2)) =  32(40.521599)
camera transformation: 21.568188  115.906286  633.657622
xdiv2(sqrt(lx1)) =  32(45.221676), ydiv2(sqrt(lx2)) =  32(41.109610)
camera transformation: 22.289380  125.328691  638.688564
xdiv2(sqrt(lx1)) =  32(45.221676), ydiv2(sqrt(lx2)) =  32(40.804412)
camera transformation: 31.223567  126.138528  644.100426
xdiv2(sqrt(lx1)) =  32(43.965896), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 51.375830  123.050894  653.504746
xdiv2(sqrt(lx1)) =  32(43.680659), ydiv2(sqrt(lx2)) =  32(38.832976)
camera transformation: 54.986398  106.280175  660.526525
xdiv2(sqrt(lx1)) =  32(45.607017), ydiv2(sqrt(lx2)) =  32(42.201896)
camera transformation: 36.010280  109.096895  630.149252
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 23.913726  100.715033  624.185981
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(42.953463)
camera transformation: 22.448901  95.809415  630.102810
xdiv2(sqrt(lx1)) =  32(45.354162), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 24.863623  93.022652  628.420166
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(42.953463)
camera transformation: 29.075494  84.506661  626.944774
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.976184)
camera transformation: 32.818225  67.561990  629.946294
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.000000)
camera transformation: 33.948638  50.855880  628.150472
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.773197)
camera transformation: 37.205573  39.658919  626.642224
xdiv2(sqrt(lx1)) =  32(47.074409), ydiv2(sqrt(lx2)) =  32(42.755117)
camera transformation: 48.272787  27.464585  621.807431
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 48.473773  27.204197  620.426198
xdiv2(sqrt(lx1)) =  32(45.607017), ydiv2(sqrt(lx2)) =  32(44.384682)
camera transformation: 52.061117  29.826330  624.046628
xdiv2(sqrt(lx1)) =  32(44.643029), ydiv2(sqrt(lx2)) =  32(44.643029)
camera transformation: 61.332250  33.508261  630.315033
xdiv2(sqrt(lx1)) =  32(44.384682), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 75.931906  36.455966  643.181794
xdiv2(sqrt(lx1)) =  32(41.976184), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 86.565757  40.688748  656.556474
xdiv2(sqrt(lx1)) =  32(42.755117), ydiv2(sqrt(lx2)) =  32(43.931765)
camera transformation: 90.448446  42.593207  649.464611
xdiv2(sqrt(lx1)) =  32(42.755117), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 93.440056  44.079600  649.385535
xdiv2(sqrt(lx1)) =  32(41.773197), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 99.813271  46.222595  649.396449
xdiv2(sqrt(lx1)) =  32(41.773197), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 102.614138  50.375842  655.237988
xdiv2(sqrt(lx1)) =  32(42.201896), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 107.807744  58.508295  656.829797
xdiv2(sqrt(lx1)) =  32(42.201896), ydiv2(sqrt(lx2)) =  32(42.201896)
camera transformation: 112.601771  64.714215  660.423527
xdiv2(sqrt(lx1)) =  32(42.449971), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 119.089054  71.487308  666.367434
xdiv2(sqrt(lx1)) =  32(42.449971), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 125.523569  79.032026  674.223484
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 130.053455  84.563865  680.039495
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(40.804412)
camera transformation: 129.275668  86.691488  683.081734
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 126.550484  88.506627  682.311115
xdiv2(sqrt(lx1)) =  32(42.059482), ydiv2(sqrt(lx2)) =  32(41.761226)
camera transformation: 121.644265  91.022978  680.049182
xdiv2(sqrt(lx1)) =  32(41.436699), ydiv2(sqrt(lx2)) =  32(42.059482)
camera transformation: 117.740522  96.624935  684.211427
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(41.436699)
camera transformation: 110.276211  100.999170  687.457576
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 97.652541  103.798959  687.316253
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 83.897556  105.688990  687.504940
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 71.000147  108.309008  688.296546
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 60.984082  110.858782  687.249040
xdiv2(sqrt(lx1)) =  32(39.560081), ydiv2(sqrt(lx2)) =  32(41.231056)
camera transformation: 51.369035  112.901238  686.984174
xdiv2(sqrt(lx1)) =  32(39.217343), ydiv2(sqrt(lx2)) =  32(39.924930)
camera transformation: 46.166159  112.934981  690.882951
xdiv2(sqrt(lx1)) =  32(39.849718), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 45.556980  106.446232  701.391185
xdiv2(sqrt(lx1)) =  32(38.600518), ydiv2(sqrt(lx2)) =  32(39.217343)
camera transformation: 51.465861  95.475483  707.877746
xdiv2(sqrt(lx1)) =  32(38.327536), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 64.485203  82.420285  719.345051
xdiv2(sqrt(lx1)) =  32(38.078866), ydiv2(sqrt(lx2)) =  32(37.643060)
camera transformation: 72.839799  68.328484  723.620075
xdiv2(sqrt(lx1)) =  32(38.327536), ydiv2(sqrt(lx2)) =  32(38.327536)
camera transformation: 83.043436  61.689418  729.140520
xdiv2(sqrt(lx1)) =  32(39.293765), ydiv2(sqrt(lx2)) =  32(39.051248)
camera transformation: 98.932860  61.449290  721.230400
xdiv2(sqrt(lx1)) =  32(38.897301), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 111.747340  64.018396  725.454519
xdiv2(sqrt(lx1)) =  32(38.897301), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 119.345649  68.197163  731.840936
xdiv2(sqrt(lx1)) =  32(37.947332), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 123.213043  71.990158  742.869709
xdiv2(sqrt(lx1)) =  32(38.275318), ydiv2(sqrt(lx2)) =  32(38.600518)
camera transformation: 124.928583  75.842952  746.732102
xdiv2(sqrt(lx1)) =  32(38.275318), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 127.574049  79.661169  755.149807
xdiv2(sqrt(lx1)) =  32(37.336309), ydiv2(sqrt(lx2)) =  32(36.687873)
camera transformation: 128.133593  85.268440  774.011885
xdiv2(sqrt(lx1)) =  32(36.055513), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 125.285369  92.369672  789.529777
xdiv2(sqrt(lx1)) =  32(35.468296), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 125.355465  101.492943  806.804835
xdiv2(sqrt(lx1)) =  32(35.468296), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 126.976068  110.303044  816.147079
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 129.030006  118.465726  823.672709
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 130.345172  124.734486  829.930479
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 132.729278  132.789712  836.521237
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 133.268128  135.881209  839.899669
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 134.236303  135.151276  840.973138
xdiv2(sqrt(lx1)) =  32(34.438351), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 134.174915  135.014464  838.089936
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 132.647385  132.463415  836.767455
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 131.536530  130.889873  836.819482
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 129.640956  129.549781  837.835786
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 122.416326  127.237410  838.730750
xdiv2(sqrt(lx1)) =  32(33.615473), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 114.681898  130.237589  843.522592
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 105.646409  136.376346  846.852249
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 96.744903  141.580589  849.498352
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.615473)
camera transformation: 87.025411  145.160624  850.906986
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 79.279170  148.220483  853.990747
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.615473)
camera transformation: 74.042168  149.434630  858.862223
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 73.097454  147.509559  860.838221
xdiv2(sqrt(lx1)) =  32(33.615473), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 76.927423  137.613163  859.651242
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(33.526109)
camera transformation: 90.769451  121.958947  855.920196
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 112.844990  101.514565  857.973255
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 134.242323  85.505279  866.278006
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 158.734774  76.690148  871.135455
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(32.572995)
camera transformation: 184.575685  77.063560  872.870689
xdiv2(sqrt(lx1)) =  32(32.202484), ydiv2(sqrt(lx2)) =  32(32.893768)
camera transformation: 196.739036  81.023969  875.755616
xdiv2(sqrt(lx1)) =  16(31.780497), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 199.430929  81.755961  880.344083
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 200.441920  83.115797  881.501856
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 199.992201  84.056351  882.274928
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 194.660196  81.144185  879.996992
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 182.478932  76.688130  862.581188
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 165.029910  69.956759  846.914938
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.481879)
camera transformation: 148.216787  63.696632  836.139722
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 137.708326  62.434520  837.211154
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 130.405199  63.050395  838.392674
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 121.309247  64.232032  835.057791
xdiv2(sqrt(lx1)) =  32(33.837849), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 108.562469  62.405185  828.777825
xdiv2(sqrt(lx1)) =  32(33.837849), ydiv2(sqrt(lx2)) =  32(36.400549)
camera transformation: 94.802902  58.440889  818.382655
xdiv2(sqrt(lx1)) =  32(34.481879), ydiv2(sqrt(lx2)) =  32(36.400549)
camera transformation: 78.282615  55.985707  816.155844
xdiv2(sqrt(lx1)) =  32(34.481879), ydiv2(sqrt(lx2)) =  32(36.138622)
camera transformation: 66.155921  56.631358  817.613419
xdiv2(sqrt(lx1)) =  32(33.526109), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 56.963787  62.544571  835.411848
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 51.797430  71.337717  857.152748
xdiv2(sqrt(lx1)) =  16(31.953091), ydiv2(sqrt(lx2)) =  32(34.481879)
camera transformation: 49.113705  81.742516  877.848618
xdiv2(sqrt(lx1)) =  16(30.675723), ydiv2(sqrt(lx2)) =  32(34.205263)
camera transformation: 45.462997  91.780838  899.226448
xdiv2(sqrt(lx1)) =  16(30.083218), ydiv2(sqrt(lx2)) =  32(34.205263)
camera transformation: 44.191492  101.376386  913.882100
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 48.080357  104.490597  927.737990
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  32(32.984845)
camera transformation: 47.183950  99.357191  943.043611
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.280025)
camera transformation: 49.021185  95.144362  936.398343
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.280025)
camera transformation: 51.004678  92.495266  930.018137
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 52.578685  91.887273  928.649743
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.984845)
camera transformation: 54.575349  90.917785  934.008053
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 61.171677  83.994290  946.765346
xdiv2(sqrt(lx1)) =  16(27.513633), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 68.022089  79.041471  974.920725
xdiv2(sqrt(lx1)) =  16(27.856777), ydiv2(sqrt(lx2)) =  16(31.780497)
camera transformation: 70.566339  79.631419  1000.697974
camera transformation: 70.566793  79.629331  1000.700421
camera transformation: 70.566983  79.628141  1000.699678
camera transformation: 70.566895  79.627903  1000.696519
camera transformation: 70.566807  79.627665  1000.693356
camera transformation: 68.945868  77.838917  973.900413
camera transformation: 68.957132  77.842488  974.011316
camera transformation: 68.909571  77.778916  973.205540
camera transformation: 68.836672  77.680396  971.969110
camera transformation: 68.681145  77.480247  969.402115
xdiv2(sqrt(lx1)) =  16(26.000000), ydiv2(sqrt(lx2)) =  16(31.048349)
camera transformation: 74.648401  78.625620  981.499671
camera transformation: 74.293364  78.221936  976.312472
camera transformation: 74.112997  77.989945  973.535124
xdiv2(sqrt(lx1)) =  16(26.000000), ydiv2(sqrt(lx2)) =  16(31.048349)
camera transformation: 77.107583  77.283336  979.854111
xdiv2(sqrt(lx1)) =  16(23.769729), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 75.261915  78.521429  988.152351
xdiv2(sqrt(lx1)) =  16(24.166092), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.319139  82.949946  995.178316
xdiv2(sqrt(lx1)) =  16(24.166092), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.257570  87.591186  999.281600
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.639653  90.643699  999.059592
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(28.861739)
camera transformation: 73.839146  96.689996  1007.730422
xdiv2(sqrt(lx1)) =  16(21.931712), ydiv2(sqrt(lx2)) =  16(28.635642)
camera transformation: 75.007170  104.793795  1020.038034
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(27.166155)
camera transformation: 75.424381  114.712060  1035.581940
xdiv2(sqrt(lx1)) =  16(22.847319), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 75.189230  121.234236  1044.450289
xdiv2(sqrt(lx1)) =  16(22.360680), ydiv2(sqrt(lx2)) =  16(26.019224)
camera transformation: 75.101660  127.500667  1042.741247
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 74.133666  129.633153  1033.337032
xdiv2(sqrt(lx1)) =  16(23.706539), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 73.893372  135.762779  1039.388559
xdiv2(sqrt(lx1)) =  16(22.360680), ydiv2(sqrt(lx2)) =  16(23.194827)
camera transformation: 74.280040  147.109093  1061.245210
xdiv2(sqrt(lx1)) =  16(22.825424), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 72.386508  154.983481  1068.109780
xdiv2(sqrt(lx1)) =  16(21.470911), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 69.840226  161.209263  1080.880767
xdiv2(sqrt(lx1)) =  16(22.472205), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 68.988847  173.214650  1089.041513
xdiv2(sqrt(lx1)) =  16(20.591260), ydiv2(sqrt(lx2)) =  16(20.615528)
camera transformation: 69.312210  181.480066  1096.433613
xdiv2(sqrt(lx1)) =  16(21.095023), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 69.347715  192.403342  1108.818112
xdiv2(sqrt(lx1)) =  16(19.723083), ydiv2(sqrt(lx2)) =  16(20.615528)
camera transformation: 69.429048  202.875585  1110.353465
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 71.168069  214.979703  1130.873908
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 73.064987  219.576309  1128.449643
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 75.354988  218.536466  1127.484516
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(23.086793)
camera transformation: 79.496791  208.075638  1101.430550
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 83.502951  197.178803  1101.454158
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(26.019224)
camera transformation: 80.751021  187.231605  1098.142619
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 75.615859  182.953333  1098.242465
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 69.238328  182.836342  1104.420105
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 58.921256  189.803963  1119.336683
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(25.079872)
camera transformation: 48.130291  203.254595  1150.303345
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(24.186773)
camera transformation: 37.875032  215.397826  1156.775348
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(24.083189)
camera transformation: 26.924012  222.776023  1162.781642
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 20.082917  226.840938  1162.685944
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 22.806747  229.658115  1152.720331
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 29.445635  232.982815  1174.609994
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 33.481394  226.473703  1157.924273
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(23.194827)
camera transformation: 30.495340  226.090046  1177.193920
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(23.537205)
camera transformation: 24.342926  224.760426  1171.132278
xdiv2(sqrt(lx1)) =  16(15.652476), ydiv2(sqrt(lx2)) =  16(20.124612)
camera transformation: 15.579943  239.866513  1222.534369
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(21.095023)
camera transformation: 5.363581  247.346006  1225.658392
xdiv2(sqrt(lx1)) =  16(15.652476), ydiv2(sqrt(lx2)) =  16(21.540659)
camera transformation: 8.237635  251.988778  1215.926768
xdiv2(sqrt(lx1)) =  16(13.892444), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 25.406377  226.983784  1133.630367
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(25.317978)
camera transformation: 32.922597  231.597615  1169.130556
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(25.495098)
camera transformation: 35.600787  235.118105  1167.194188
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(25.495098)
camera transformation: 32.219501  234.365046  1167.515927
xdiv2(sqrt(lx1)) =  16(11.661904), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 23.162797  235.915385  1178.228608
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(25.709920)
camera transformation: 10.773644  234.532144  1179.040319
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.455318  223.415549  1185.644138
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.467461  223.544679  1186.380390
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.467041  223.539702  1186.354002
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(25.495098)
xdiv2(sqrt(lx1)) =  16(12.727922), ydiv2(sqrt(lx2)) =  16(25.317978)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(24.515301)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(25.317978)
xdiv2(sqrt(lx1)) =  16(11.313708), ydiv2(sqrt(lx2)) =  16(25.019992)
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(13.453624), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.727922), ydiv2(sqrt(lx2)) =  16(22.203603)
xdiv2(sqrt(lx1)) =  16(10.630146), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(23.021729)
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(23.021729)
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 132.779468  198.248867  1135.067772
xdiv2(sqrt(lx1)) =  16(19.646883), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 135.904589  194.712040  1138.438276
xdiv2(sqrt(lx1)) =  16(20.615528), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 133.299554  193.016949  1131.889626
xdiv2(sqrt(lx1)) =  16(20.880613), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 137.694795  195.957438  1150.675221
xdiv2(sqrt(lx1)) =  16(21.189620), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 134.499795  193.523258  1133.184371
xdiv2(sqrt(lx1)) =  16(20.880613), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 129.383477  204.042089  1152.272490
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 125.631275  224.387641  1186.884785
xdiv2(sqrt(lx1)) =  16(20.615528), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 123.228559  242.341359  1194.824739
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 130.017178  241.639230  1170.193548
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 136.216116  245.059353  1194.806020
xdiv2(sqrt(lx1)) =  16(19.313208), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 141.220983  227.546628  1187.876159
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(23.000000)
camera transformation: 144.459379  196.596440  1233.376027
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(22.022716)
camera transformation: 146.715979  165.075248  1210.828436
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 164.734280  150.806482  1203.168745
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 183.778311  136.248205  1191.281805
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 204.774009  130.848903  1164.187821
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 225.157534  124.076992  1147.211236
xdiv2(sqrt(lx1)) =  16(18.681542), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 233.234676  109.661773  1134.055904
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 252.341338  102.014798  1156.960839
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(24.083189)
camera transformation: 272.554703  90.582945  1167.346273
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(24.331050)
camera transformation: 283.819182  59.254174  1172.166378
xdiv2(sqrt(lx1)) =  16(16.643317), ydiv2(sqrt(lx2)) =  16(23.537205)
camera transformation: 284.489218  59.398413  1174.954485
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(23.345235)
camera transformation: 284.555884  59.412216  1175.232079
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(22.803509)
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(22.803509)
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(21.377558)
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(22.561028)
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(21.213203)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(21.095023)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.104973)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.000000)
camera transformation: 143.485301  119.275553  1271.117296
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(18.027756)
camera transformation: 129.794106  129.856828  1264.717625
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.026298)
camera transformation: 126.112556  137.745229  1287.226677
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(19.026298)
camera transformation: 123.598688  139.754011  1295.083712
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(19.000000)
camera transformation: 120.084903  141.958021  1286.258525
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(18.110770)
camera transformation: 123.386213  152.249674  1326.457714
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 126.672879  158.697867  1338.021301
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 130.545663  165.011224  1346.520550
xdiv2(sqrt(lx1)) =  16(18.357560), ydiv2(sqrt(lx2)) =  16(18.000000)
camera transformation: 134.183209  167.887846  1362.855432
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 136.724235  152.284118  1331.325867
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 141.528110  129.844518  1337.910152
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 136.807720  98.993744  1344.203463
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 137.260155  89.288621  1369.681085
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(21.213203)
camera transformation: 148.832467  95.382332  1371.026708
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 172.351600  125.172297  1381.037411
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 181.816715  145.946362  1410.063424
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 176.931998  166.654626  1446.978089
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 170.620455  178.757537  1466.531003
xdiv2(sqrt(lx1)) =  16(16.763055), ydiv2(sqrt(lx2)) =  16(20.223748)
camera transformation: 169.875220  188.466436  1483.797895
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 174.987850  189.284149  1495.645347
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 181.267454  174.550566  1508.472203
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 181.191349  179.079500  1523.635679
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 183.205024  186.399784  1531.998631
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 185.455632  180.597208  1539.985387
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 197.189642  164.638217  1552.491839
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 206.173040  145.484625  1549.559130
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 215.987118  135.361241  1571.892509
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 226.958563  128.138355  1585.541132
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(18.110770)
camera transformation: 233.309462  120.679644  1584.757498
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 238.299973  116.870754  1602.890555
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(18.027756)
camera transformation: 241.279009  118.853804  1645.679400
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 250.395471  132.817954  1684.433255
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 246.302134  143.499304  1672.585730
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 243.962590  153.779282  1653.906387
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 247.855327  166.700262  1672.224472
xdiv2(sqrt(lx1)) =  16(13.416408), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 263.020140  186.311260  1733.226991
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 265.540248  193.649368  1698.521928
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 261.202446  194.335060  1638.473022
xdiv2(sqrt(lx1)) =  16(13.601471), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 277.677335  204.125783  1758.083014
xdiv2(sqrt(lx1)) =  16(13.038405), ydiv2(sqrt(lx2)) =  16(15.000000)
camera transformation: 252.234114  187.684918  1704.556790
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(15.000000)
camera transformation: 230.021565  183.427555  1705.381776
xdiv2(sqrt(lx1)) =  16(15.231546), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 199.096174  175.711371  1632.755823
xdiv2(sqrt(lx1)) =  16(15.231546), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 165.369508  170.797145  1566.022733
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 147.951303  182.264753  1594.530747
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 129.921272  190.183003  1585.781741
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 109.351318  187.884061  1551.996125
xdiv2(sqrt(lx1)) =  16(18.681542), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 99.229434  184.290652  1531.791488
xdiv2(sqrt(lx1)) =  16(18.973666), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 104.979937  177.781940  1536.484260
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 124.332845  190.556711  1582.673109
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 141.986048  202.377179  1631.466171
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 157.015429  225.142302  1670.826105
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 152.941581  265.432881  1684.669707
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 141.025194  287.346532  1687.439792
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.763055)
camera transformation: 131.603548  304.812650  1740.148738
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 119.006914  319.644670  1790.621284
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(15.297059)
camera transformation: 110.288130  298.452425  1812.736760
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 109.521847  296.328554  1799.999192
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 97.055577  194.365925  1724.268905
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 95.897451  173.718488  1749.545482
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 110.441981  147.880406  1775.516558
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 124.130648  129.475194  1826.289845
xdiv2(sqrt(lx1)) =  16(15.132746), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 127.679031  125.075851  1870.393666
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 131.617102  125.677431  1906.683206
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 137.063968  125.568816  1931.118619
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 144.807538  124.538217  1961.758234
xdiv2(sqrt(lx1)) =  16(13.152946), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 154.592253  123.364501  1970.643244
xdiv2(sqrt(lx1)) =  16(13.341664), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 165.553154  131.929389  2001.078743
xdiv2(sqrt(lx1)) =  16(13.341664), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 172.089708  139.656325  2039.037894
xdiv2(sqrt(lx1)) =  16(13.152946), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 168.448797  131.016686  2074.893963
xdiv2(sqrt(lx1)) =  16(12.165525), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 162.881587  118.086281  2121.776574
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(14.142136)
camera transformation: 165.465607  121.274846  2151.049887
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(14.035669)
camera transformation: 187.234046  137.079502  2142.325863
xdiv2(sqrt(lx1)) =  16(10.198039), ydiv2(sqrt(lx2)) =  16(14.035669)
camera transformation: 232.688689  157.696996  2211.706249
xdiv2(sqrt(lx1)) =  16(10.198039), ydiv2(sqrt(lx2)) =  16(13.038405)
camera transformation: 265.112194  168.887463  2142.947566
xdiv2(sqrt(lx1)) =  16(9.486833), ydiv2(sqrt(lx2)) =  16(14.000000)
camera transformation: 292.027486  177.698791  2095.847918
xdiv2(sqrt(lx1)) =  16(8.544004), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 290.009642  175.182421  1958.558153
xdiv2(sqrt(lx1)) =  16(9.486833), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 285.854613  144.080295  1827.777227
camera transformation: 284.816669  143.551644  1821.122812
camera transformation: 284.751126  143.518367  1820.702402



posted by maetel
2010. 3. 15. 15:56 Computer Vision
Three-dimensional computer vision: a geometric viewpoint 
By Olivier Faugeras

googleBooks
mitpress

posted by maetel
2010. 3. 13. 01:16 Computer Vision


// Test: video capturing from a camera

#include <OpenCV/OpenCV.h> // OpenCV umbrella header (Mac framework build)
#include <stdio.h>         // printf()
#include <stdlib.h>        // exit()

int main()
{
    IplImage* image = 0;
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
        if ( !cvGrabFrame(capture) ) { // capture a frame
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            // note: the original grabbed a second frame here, which skipped
            // every other frame; one grab per loop iteration is enough
            image = cvRetrieveFrame(capture); // retrieve the captured frame
            cvShowImage( "camera", image );
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );

    return 0;
}


posted by maetel
2010. 3. 4. 16:56 Computer Vision
Unsupervised learning for graph matching - 2009
Marius Leordeanu, Martial Hebert
Conference: Computer Vision and Pattern Recognition - CVPR

posted by maetel
2010. 3. 3. 19:54 Computer Vision
http://www.hitl.washington.edu/artoolkit/

ARToolKit Patternmaker
Automatically create large numbers of target patterns for the ARToolKit, by the University of Utah.


Download ARToolKit-2.72.tgz

http://www.openvrml.org/

DSVideoLib
A DirectShow wrapper supporting concurrent access to framebuffers from multiple threads. Useful for developing applications that require live video input from a variety of capture devices (frame grabbers, IEEE-1394 DV camcorders, USB webcams).


openvrml on macports
http://trac.macports.org/browser/trunk/dports/graphics/openvrml/Portfile


galaxy:~ lym$ port search openvrml
openvrml @0.17.12 (graphics, x11)
    a cross-platform VRML and X3D browser and C++ runtime library
galaxy:~ lym$ port info openvrml
openvrml @0.17.12 (graphics, x11)
Variants:    js_mozilla, mozilla_plugin, no_opengl, no_x11, player, universal,
             xembed

OpenVRML is a free cross-platform runtime for VRML and X3D available under the
GNU Lesser General Public License. The OpenVRML distribution includes libraries
you can use to add VRML/X3D support to an application. On platforms where GTK+
is available, OpenVRML also provides a plug-in to render VRML/X3D worlds in Web
browsers.
Homepage:    http://www.openvrml.org/

Build Dependencies:   pkgconfig
Library Dependencies: boost, libpng, jpeg, fontconfig, mesa, libsdl
Platforms:            darwin
Maintainers:          raphael@ira.uka.de openmaintainer@macports.org
galaxy:~ lym$ port deps openvrml
openvrml has build dependencies on:
    pkgconfig
openvrml has library dependencies on:
    boost
    libpng
    jpeg
    fontconfig
    mesa
    libsdl
galaxy:~ lym$ port variants openvrml
openvrml has the variants:
    js_mozilla: Enable support for JavaScript in the Script node with Mozilla
    no_opengl: Do not build the GL renderer
    xembed: Build the XEmbed control
    player: Build the GNOME openvrml-player
    mozilla_plugin: Build the Mozilla plug-in
    no_x11: Disable support for X11
    universal: Build for multiple architectures


Install openvrml



Tests after installing ARToolKit-2.72.1

graphicsTest in the bin directory
-> This test confirms that your camera supports the ARToolKit graphics module with OpenGL.

videoTest in the bin directory
-> This test confirms that your camera supports the ARToolKit video module and the ARToolKit graphics module.

simpleTest in the bin directory
-> Note that the closer the capture format is to the ARToolKit tracking format, the faster the acquisition (RGB is the most efficient).


"hiro" 패턴을 쓰지 않으면, 아래와 같은 에러가 난다.

/Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
galaxy:~ lym$ /Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
Camera parameter load error !!
logout
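
That message comes from the camera-parameter loading step in the sample's init(). A trimmed sketch of that step, following the simpleTest sources (the data file path is the one shipped with the samples; error handling shortened):

#include <AR/ar.h>
#include <AR/param.h>
#include <AR/video.h>
#include <stdio.h>
#include <stdlib.h>

static void init(void)
{
    ARParam wparam, cparam;  /* camera intrinsics before/after resizing */
    int     xsize, ysize;
    char    *vconf = "";     /* default video config */

    if (arVideoOpen(vconf) < 0) exit(0);
    if (arVideoInqSize(&xsize, &ysize) < 0) exit(0);
    printf("Image size (x,y) = (%d,%d)\n", xsize, ysize);

    /* fails with "Camera parameter load error !!" when
       Data/camera_para.dat is not found in the working directory */
    if (arParamLoad("Data/camera_para.dat", 1, &wparam) < 0) {
        printf("Camera parameter load error !!\n");
        exit(0);
    }
    arParamChangeSize(&wparam, xsize, ysize, &cparam);
    arInitCparam(&cparam);
    arParamDisp(&cparam);    /* prints the "*** Camera Parameter ***" block below */
}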


Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
*** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 159.250000 131.750000 104.800000 1.012757
350.47574 0.00000 158.25000 0.00000
0.00000 363.04709 120.75000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
Opening Data File Data/object_data2
About to load 2 Models
Read in No.1
Read in No.2
Objectfile num = 2


Printing the pattern's transformation values inside arGetTransMat() as follows,
    // http://www.hitl.washington.edu/artoolkit/documentation/tutorialcamera.htm
    printf("camera transformation: %f  %f  %f\n",conv[0][3],conv[1][3],conv[2][3]);

Result:


Feature List
* A simple framework for creating real-time augmented reality applications
* A multiplatform library (Windows, Linux, Mac OS X, SGI)
* Overlays 3D virtual objects on real markers (based on a computer vision algorithm)
* A multiplatform video library with:
  o multiple input sources (USB, Firewire, capture card) supported
  o multiple formats (RGB/YUV420P, YUV) supported
  o multiple camera tracking supported
  o GUI initializing interface
* A fast and cheap 6D marker tracking (real-time planar detection)
* An extensible marker patterns approach (number of markers as a function of efficiency)
* An easy calibration routine
* A simple graphic library (based on GLUT)
* A fast rendering based on OpenGL
* A 3D VRML support
* A simple and modular API (in C)
* Other languages supported (JAVA, Matlab)
* A complete set of samples and utilities
* A good solution for tangible interaction metaphor
* OpenSource with GPL license for non-commercial usage


framework



"ARToolKit is able to perform this camera tracking in real time, ensuring that the virtual objects always appear overlaid on the tracking markers."

how to
1. find the square shapes in every video frame
2. compute the position of the camera relative to the black square
3. from that position, compute how the computer-graphics model should be drawn
4. draw the model on top of the marker in the real image

limitations
1. the virtual objects can be composited only while the tracked markers are visible in the image
2. this limits the size and motion of the virtual objects
3. if part of the marker pattern is occluded, the virtual object cannot be composited
4. limited range (distance): the larger the marker, the farther away its pattern can be detected, so the larger the trackable volume
(the range also depends on pattern complexity: the simpler the pattern, the longer the maximum distance)
5. tracking performance depends on the orientation of the marker relative to the camera
: as the marker tilts toward horizontal, less of the pattern is visible, so recognition becomes unreliable
6. tracking performance depends on the lighting conditions
: reflection and glare spots caused by lighting on a paper marker make it hard to find the marker square
: a less reflective material can be used instead of paper


ARToolKit Vision Algorithm



Development
Initialization    
1. Initialize the video capture and read in the marker pattern files and camera parameters. -> init()
Main Loop    
2. Grab a video input frame. -> arVideoGetImage()
3. Detect the markers and recognized patterns in the video input frame. -> arDetectMarker()
4. Calculate the camera transformation relative to the detected patterns. -> arGetTransMat()
5. Draw the virtual objects on the detected patterns. -> draw()
Shutdown    
6. Close the video capture down. -> cleanup()
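
Below is a minimal sketch of steps 2-4 of this loop, pieced together from the simpleTest sample and the function documentation quoted further down. The threshold, marker width, and center values are placeholders, and the real samples also check the pattern id and confidence of each detected marker before calling arGetTransMat().

#include <AR/ar.h>
#include <AR/video.h>
#include <stdio.h>
#include <stdlib.h>

static void mainLoop(void)
{
    ARUint8      *dataPtr;
    ARMarkerInfo *marker_info;
    int           marker_num;
    int           thresh         = 100;          /* binarization threshold (0-255) */
    double        patt_width     = 80.0;         /* marker size in mm (placeholder) */
    double        patt_center[2] = { 0.0, 0.0 }; /* physical center of the marker */
    double        conv[3][4];                    /* marker -> camera transformation */

    /* 2. grab a video input frame; NULL means no new frame is available yet */
    if ((dataPtr = arVideoGetImage()) == NULL) return;

    /* 3. detect the square markers in the frame */
    if (arDetectMarker(dataPtr, thresh, &marker_info, &marker_num) < 0) exit(0);
    arVideoCapNext();  /* let the driver reuse the frame buffer */

    /* 4. camera transformation relative to the first detected pattern;
       conv[i][3] is the translation printed in the log above */
    if (marker_num > 0) {
        arGetTransMat(&marker_info[0], patt_center, patt_width, conv);
        printf("camera transformation: %f  %f  %f\n",
               conv[0][3], conv[1][3], conv[2][3]);
    }
}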

ref.
http://king8028.tistory.com/entry/ARToolkit-simpletestc-%EC%84%A4%EB%AA%8512
http://kougaku-navi.net/ARToolKit.html



ARToolKit video configuration



camera calibration

Default camera properties are contained in the camera parameter file camera_para.dat, that is read in each time an application is started.

The program calib_dist is used to measure the image center point and lens distortion, while calib_param produces the other camera properties. (Both of these programs can be found in the bin directory and their source is in the utils/calib_dist and utils/calib_cparam directories.)



ARToolKit gives the position of the marker in the camera coordinate system, and uses OpenGL matrix system for the position of the virtual object.


ARToolKit API Documentation
http://artoolkit.sourceforge.net/apidoc/


ARMarkerInfo - main structure for a detected marker
ARMarkerInfo2 - internal structure used for marker detection
ARMat - matrix structure
ARMultiEachMarkerInfoT - multi-marker structure
ARMultiMarkerInfoT - global multi-marker structure
ARParam - camera intrinsic parameters
arPrevInfo - structure for temporal continuity of tracking
ARVec - vector structure


arVideoGetImage()

video.h
/**
 * \brief get the video image.
 *
 * This function returns a buffer with a captured video image.
 * The returned data consists of a tightly-packed array of
 * pixels, beginning with the first component of the leftmost
 * pixel of the topmost row, and continuing with the remaining
 * components of that pixel, followed by the remaining pixels
 * in the topmost row, followed by the leftmost pixel of the
 * second row, and so on.
 * The arrangement of components of the pixels in the buffer is
 * determined by the configuration string passed in to the driver
 * at the time the video stream was opened. If no pixel format
 * was specified in the configuration string, then an operating-
 * system dependent default, defined in <AR/config.h> is used.
 * The memory occupied by the pixel data is owned by the video
 * driver and should not be freed by your program.
 * The pixels in the buffer remain valid until the next call to
 * arVideoCapNext, or the next call to arVideoGetImage which
 * returns a non-NULL pointer, or any call to arVideoCapStop or
 * arVideoClose.
 * \return A pointer to the pixel data of the captured video frame,
 * or NULL if no new pixel data was available at the time of calling.
 */
AR_DLL_API  ARUint8*        arVideoGetImage(void);


ARParam

param.h
/** \struct ARParam
* \brief camera intrinsic parameters.
*
* This structure contains the main parameters for
* the intrinsic parameters of the camera
* representation. The camera used is a pinhole
* camera with standard parameters. User should
* consult a computer vision reference for more
* information. (e.g. Three-Dimensional Computer Vision
* (Artificial Intelligence) by Olivier Faugeras).
* \param xsize length of the image (in pixels).
* \param ysize height of the image (in pixels).
* \param mat perspective matrix (K).
* \param dist_factor radial distortions factor
*          dist_factor[0]=x center of distortion
*          dist_factor[1]=y center of distortion
*          dist_factor[2]=distortion factor
*          dist_factor[3]=scale factor
*/
typedef struct {
    int      xsize, ysize;
    double   mat[3][4];
    double   dist_factor[4];
} ARParam;

typedef struct {
    int      xsize, ysize;
    double   matL[3][4];
    double   matR[3][4];
    double   matL2R[3][4];
    double   dist_factorL[4];
    double   dist_factorR[4];
} ARSParam;




arDetectMarker()

Description in the ar.h header file:
/**
* \brief main function to detect the square markers in the video input frame.
*
* This function proceeds to thresholding, labeling, contour extraction and line corner estimation
* (and maintains an history).
* It's one of the main function of the detection routine with arGetTransMat.
* \param dataPtr a pointer to the color image which is to be searched for square markers.
*                The pixel format depend of your architecture. Generally ABGR, but the images
*                are treated as a gray scale, so the order of BGR components does not matter.
*                However the ordering of the alpha comp, A, is important.
* \param thresh  specifies the threshold value (between 0-255) to be used to convert
*                the input image into a binary image.
* \param marker_info a pointer to an array of ARMarkerInfo structures returned
*                    which contain all the information about the detected squares in the image
* \param marker_num the number of detected markers in the image.
* \return 0 when the function completes normally, -1 otherwise
*/
int arDetectMarker( ARUint8 *dataPtr, int thresh,
                    ARMarkerInfo **marker_info, int *marker_num );


You need to notice that arGetTransMat give the position of the marker in the camera coordinate system (not the reverse). If you want the position of the camera in the marker coordinate system you need to inverse this transformation (arMatrixInverse()).



XXXBK: not be sure of this function: this function must just convert 3x4 matrix to classical perspective openGL matrix. But in the code, you used arParamDecompMat that seem decomposed K and R,t, aren't it ? why do this decomposition since we want just intrinsic parameters ? and if not what is arDecomp ?




double arGetTransMat()

Description in the ar.h header file:
/**
* \brief compute camera position in function of detected markers.
*
* calculate the transformation between a detected marker and the real camera,
* i.e. the position and orientation of the camera relative to the tracking mark.
* \param marker_info the structure containing the parameters for the marker for
*                    which the camera position and orientation is to be found relative to.
*                    This structure is found using arDetectMarker.
* \param center the physical center of the marker. arGetTransMat assumes that the marker
*              is in x-y plane, and z axis is pointing downwards from marker plane.
*              So vertex positions can be represented in 2D coordinates by ignoring the
*              z axis information. The marker vertices are specified in order of clockwise.
* \param width the size of the marker (in mm).
* \param conv the transformation matrix from the marker coordinates to camera coordinate frame,
*             that is the relative position of real camera to the real marker
* \return always 0.
*/
double arGetTransMat( ARMarkerInfo *marker_info,
                      double center[2], double width, double conv[3][4] )



arUtilMatInv()

Description in the ar.h header file:
/**
* \brief Inverse a non-square matrix.
*
* Inverse a matrix in a non homogeneous format. The matrix
* need to be euclidian.
* \param s matrix input   
* \param d resulted inverse matrix.
* \return 0 if the inversion success, -1 otherwise
* \remark input matrix can be also output matrix
*/
int    arUtilMatInv( double s[3][4], double d[3][4] );






posted by maetel
2010. 3. 2. 20:31 Computer Vision
Tricodes: A Barcode-Like Fiducial Design for Augmented Reality Media - 2006
Jonathan Mooser, Suya You, Ulrich Neumann
International Conference on Multimedia Computing and Systems/International Conference on Multimedia and Expo - ICME(ICMCS)

posted by maetel
2010. 3. 2. 20:26 Computer Vision
Design Patterns for Augmented Reality Systems - 2004
Asa Macwilliams, Thomas Reicher, Gudrun Klinker, Bernd Brügge
Conference: Workshop on Exploring the Design and Engineering of Mixed Reality Systems - MIXER


Figure 2: Relationships between the individual patterns for augmented reality systems. Several approaches are used in combination within an augmented reality system. One approach might require the use of another approach or prevent its usage.


posted by maetel
2010. 2. 26. 01:11 Computer Vision
cross ratio test


Try #1. using the digits of pi

pi = 3.14159265358979323846264338327950288...
Computing cross ratios from the value of pi gives the following.



cross ratio = 1.088889
cross ratio = 2.153846
cross ratio = 1.185185
cross ratio = 1.094737
cross ratio = 2.166667
cross ratio = 1.160714
cross ratio = 1.274510
cross ratio = 1.562500
cross ratio = 1.315789
cross ratio = 1.266667
cross ratio = 1.266667
cross ratio = 1.446429
cross ratio = 1.145455
cross ratio = 1.441176
cross ratio = 1.484848
cross ratio = 1.421875
cross ratio = 1.123457
cross ratio = 1.600000
cross ratio = 1.142857
cross ratio = 1.960784
cross ratio = 1.142857
cross ratio = 1.350000
cross ratio = 1.384615
cross ratio = 1.529412
cross ratio = 1.104575
cross ratio = 1.421875
cross ratio = 1.711111
cross ratio = 1.178571
cross ratio = 1.200000
cross ratio = 1.098039
cross ratio = 2.800000
cross ratio = 1.230769
cross ratio = 1.142857


Applying a different formula

cross ratio = 0.040000
cross ratio = 0.666667
cross ratio = 0.107143
cross ratio = 0.064935
cross ratio = 0.613636
cross ratio = 0.113636
cross ratio = 0.204545
cross ratio = 0.390625
cross ratio = 0.230769
cross ratio = 0.203620
cross ratio = 0.205882
cross ratio = 0.316406
cross ratio = 0.109375
cross ratio = 0.300000
cross ratio = 0.360000
cross ratio = 0.290909
cross ratio = 0.090909
cross ratio = 0.400000
cross ratio = 0.100000
cross ratio = 0.562500
cross ratio = 0.100000
cross ratio = 0.257143
cross ratio = 0.285714
cross ratio = 0.363636
cross ratio = 0.074380
cross ratio = 0.290909
cross ratio = 0.466667
cross ratio = 0.125000
cross ratio = 0.156250
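
For the record, the second list is reproduced exactly by the convention r = ((x1-x0)(x3-x2)) / ((x2-x0)(x3-x1)) applied to consecutive quadruples of points taken as partial sums of the digits of pi (3, 4, 8, 9, 14, 23, ...); the sketch below matches the first dozen values (0.040000, 0.666667, 0.107143, 0.064935, 0.613636, ...). The first list evidently uses a different ordering of the points, which I have not reverse-engineered.

#include <stdio.h>

/* cross ratio of four collinear points, in the convention above */
static double cross_ratio(double x0, double x1, double x2, double x3)
{
    return ((x1 - x0) * (x3 - x2)) / ((x2 - x0) * (x3 - x1));
}

int main(void)
{
    int digits[] = { 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9 };
    int n = sizeof(digits) / sizeof(digits[0]);
    double x[16];

    x[0] = digits[0];  /* points = prefix sums of the digits */
    for (int i = 1; i < n; i++) x[i] = x[i - 1] + digits[i];

    for (int i = 0; i + 3 < n; i++)
        printf("cross ratio = %f\n",
               cross_ratio(x[i], x[i + 1], x[i + 2], x[i + 3]));
    return 0;
}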




Try #2. swPark_2000rti, p. 43: back-calculating the pattern grid (lattice positions) from the cross ratio values in figure 7

37 cross ratios for the 40 vertical lines:
0.47, 0.11, 0.32, 0.17, 0.44, 0.08, 0.42, 0.25, 0.24, 0.13, 0.46, 0.18, 0.19, 0.29, 0.21, 0.37, 0.16, 0.38, 0.23, 0.09, 0.37, 0.26, 0.31, 0.18, 0.30, 0.15, 0.39, 0.16, 0.32, 0.27, 0.20, 0.28, 0.39, 0.12, 0.23, 0.28, 0.35
17 cross ratios for the 20 horizontal lines:
0.42, 0.13, 0.32, 0.16, 0.49, 0.08, 0.40, 0.20, 0.29, 0.19, 0.37, 0.13, 0.26, 0.38, 0.21, 0.16, 0.42




What is going on here?????

# of cross-ratios in vertical lines = 37
# of cross-ratios in horizontal lines = 17

x[0]=1  x[1]=2  x[2]=4
x[3]=-2.87805  x[4]=-1.42308  x[5]=-0.932099  x[6]=-0.787617  x[7]=-0.596499  x[8]=-0.55288  x[9]=-0.506403  x[10]=-0.456778  x[11]=-0.407892  x[12]=-0.390887  x[13]=-0.363143  x[14]=-0.338174  x[15]=-0.324067  x[16]=-0.312345  x[17]=-0.305022  x[18]=-0.293986  x[19]=-0.286594  x[20]=-0.273759  x[21]=-0.251966  x[22]=-0.244977  x[23]=-0.238299  x[24]=-0.231391  x[25]=-0.219595  x[26]=-0.20838  x[27]=-0.192558  x[28]=-0.183594  x[29]=-0.16952  x[30]=-0.159689  x[31]=-0.147983  x[32]=-0.131036  x[33]=-0.114782  x[34]=-0.0950305  x[35]=0.0303307  x[36]=0.964201  x[37]=-0.959599  x[38]=-0.519287  x[39]=-0.356521 
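
The x[] sequence above can be regenerated by inverting the same cross-ratio convention: given three consecutive points and the next cross ratio, the fourth point is the solution of a linear equation. A sketch, seeded with x[0]=1, x[1]=2, x[2]=4 as above (it reproduces x[3]=-2.87805, x[4]=-1.42308, x[5]=-0.932099, ...):

#include <stdio.h>

int main(void)
{
    /* the first few of the 37 vertical-line cross ratios */
    double r[] = { 0.47, 0.11, 0.32, 0.17, 0.44, 0.08, 0.42 };
    int    n   = sizeof(r) / sizeof(r[0]);
    double x[64] = { 1.0, 2.0, 4.0 };

    for (int i = 0; i < n; i++) {
        /* r*(x2-x0)*(x3-x1) = (x1-x0)*(x3-x2) is linear in the unknown x3 */
        double p = r[i] * (x[i + 2] - x[i]);
        double q = x[i + 1] - x[i];
        x[i + 3] = (p * x[i + 1] - q * x[i + 2]) / (p - q);
        printf("x[%d]=%g\n", i + 3, x[i + 3]);
    }
    return 0;
}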


posted by maetel
2010. 2. 26. 00:07 Computer Vision
Optimal Grid Pattern for Automated Camera Calibration Using Cross Ratio

Chikara MATSUNAGA  Yasushi KANAZAWA  Kenichi KANATANI 

Publication: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 10, pp. 1921-1928
Publication Date: 2000/10/20
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Information Theory and Its Applications)
Category: Image Processing
Keywords: cross ratio, Markov process, error analysis, reliability evaluation, virtual studio
Source: http://www.suri.it.okayama-u.ac.jp/~kanatani/data/ejournal.html

MVA2000 IAPR Workshop on Machine Vision Applications, Nov. 28-30,2000, The University of Tokyo, Japan
13-28
Optimal Grid Pattern for Automated Matching Using Cross Ratio
Chikara Matsunaga (Broadcast Division, FOR-A Co. Ltd.)
Kenichi Kanatanit (Department of Computer Science, Gunma University)


Kenichi Kanatani  金谷健一   http://www.suri.it.okayama-u.ac.jp/%7Ekanatani/
Yasushi Kanazawa 金澤靖     http://www.img.tutkie.tut.ac.jp/~kanazawa/

IEICE (The Institute of Electronics, Information and Communication Engineers)   http://www.ieice.org
IAPR (International Association for Pattern Recognition)   http://www.iapr.org
IAPR - Machine Vision & Applications



Summary: 
With a view to virtual studio applications, we design an optimal grid pattern such that the observed image of a small portion of it can be matched to its corresponding position in the pattern easily. The grid shape is so determined that the cross ratio of adjacent intervals is different everywhere. The cross ratios are generated by an optimal Markov process that maximizes the accuracy of matching. We test our camera calibration system using the resulting grid pattern in a realistic setting and show that the performance is greatly improved by applying techniques derived from the designed properties of the pattern.


Camera calibration is a first step in all vision and media applications.
> pre-calibration (Tsai) vs. self-calibration (Pollefeys)
=> "simultaneous calibration" by placing an easily distinguishable planar pattern in the scene

Introducing a statistical model of image noise, we generate the grid intervals by an optimal Markov process that maximizes the accuracy of matching.
: The pattern is theoretically designed by statistical analysis

If the cross ratios are given, the sequence is determined as follows.


The goal is to find a sequence of cross ratios such that the resulting sequence of numbers is homogeneously increasing, with the average interval being 1 and the minimum width as specified.
=> To generate the sequence of cross ratios stochastically, according to a probability distribution defined in such a way that the resulting sequence of numbers has the desired properties
=> able to optimize the probability distribution so that the matching performance is maximized by analyzing the statistical properties of image noise

 



 

Source: C. Matsunaga, Y. Kanazawa, and K. Kanatani, Optimal grid pattern for automated camera calibration using cross ratio, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 10, pp. 1921--1928, 2000; Fig. 8 on p. 1926, captured at 4x magnification





posted by maetel
2010. 2. 23. 00:47 Computer Vision
1> pattern identification

rough preview
1) edge detection: find the boundary points between the dark and light colors of the pattern
2) connect the detected points with straight lines
3) compare the cross ratios of the detected horizontal and vertical lines with those of the actual pattern to identify which line of the pattern each one is

detailed preview
1. initial identification process (recognizing the feature points)

1) chroma keying: RGB -> YUV conversion

2) gradient filtering: first-order derivative Gaussian filter (length = 7)
 -1) downscale the image (to 1/4) along the vertical axis and filter
 -2) compare the absolute values of Gx and Gy to decide between the vertical and horizontal directions
 -3) likewise along the horizontal axis

3) line fitting: fit second-order curves, taking the lens distortion coefficient into account

4) identification
 -1) determine which line of the actual pattern each line found in the image corresponds to
 -2) the feature points can then be computed precisely as the intersections of the fitted lines

2. feature point tracking (run-time operation)
: feature point correspondence, matching the detected feature points to the grid intersections of the pattern

  1) detect the intersections having a local maximum & minimum using the intersection filter H

  2) classify the detected intersections into two groups according to their sign

  3) for each intersection detected in the current frame, find the nearest point from the previous frame (a quick sketch follows below)

  * for feature points that newly appear in a frame, the corresponding grid intersections of the actual pattern can be projected into the image using the previous frame's camera parameters and used as reference points
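
A brute-force sketch of the nearest-previous-point matching in step 3); the struct and function names are mine, since the post gives no code:

#include <float.h>
#include <stdio.h>

typedef struct { double x, y; } Point2d;

/* return the index of the previous-frame intersection closest to cur */
int match_nearest(const Point2d *prev, int n_prev, Point2d cur)
{
    int    best    = -1;
    double best_d2 = DBL_MAX;
    for (int i = 0; i < n_prev; i++) {
        double dx = prev[i].x - cur.x, dy = prev[i].y - cur.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}

int main(void)
{
    Point2d prev[] = { { 10, 10 }, { 50, 50 } };
    Point2d cur    = { 12, 11 };
    printf("matched index = %d\n", match_nearest(prev, 2, cur)); /* 0 */
    return 0;
}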




2> real-time camera parameter extraction: Tsai's algorithm

1. determining the image center: zooming
: using the center of expansion as a constant image center

1) (during the initialization step for estimating the lens distortion) store the feature points detected and identified with the camera stationary at maximum zoom-out and at maximum zoom-in

2) compute the common intersection of the line segments connecting the feature points that appear as the same point in the two frames (a sketch follows below)

* in practice, zooming is performed by a combination of several lenses, so the image center actually shifts as the camera zooms; but since its standard deviation is small, the variation is ignored here
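
A least-squares sketch of step 2): each feature seen at maximum zoom-out (a) and maximum zoom-in (b) defines a line, and the center of expansion is the point minimizing the summed squared point-to-line distances. The 2x2 normal equations are solved with Cramer's rule; the function name and interface are mine:

#include <math.h>
#include <stdio.h>

int center_of_expansion(const double ax[], const double ay[],
                        const double bx[], const double by[],
                        int n, double *cx, double *cy)
{
    double m00 = 0, m01 = 0, m11 = 0, r0 = 0, r1 = 0;
    for (int i = 0; i < n; i++) {
        /* unit normal of the line through (ax,ay) and (bx,by) */
        double dx = bx[i] - ax[i], dy = by[i] - ay[i];
        double len = sqrt(dx * dx + dy * dy);
        if (len == 0.0) continue;
        double nx = -dy / len, ny = dx / len;
        double d = nx * ax[i] + ny * ay[i];  /* line equation: nx*x + ny*y = d */
        m00 += nx * nx;  m01 += nx * ny;  m11 += ny * ny;
        r0  += nx * d;   r1  += ny * d;
    }
    double det = m00 * m11 - m01 * m01;
    if (fabs(det) < 1e-12) return -1;        /* lines (nearly) parallel */
    *cx = (r0 * m11 - m01 * r1) / det;
    *cy = (m00 * r1 - m01 * r0) / det;
    return 0;
}

int main(void)
{
    /* two synthetic segments whose lines cross at (3, 2) */
    double ax[] = { 0, 0 }, ay[] = { 0, 4 }, bx[] = { 6, 6 }, by[] = { 4, 0 };
    double cx, cy;
    if (center_of_expansion(ax, ay, bx, by, 2, &cx, &cy) == 0)
        printf("center = (%f, %f)\n", cx, cy);
    return 0;
}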

2. computing the lens distortion coefficient
Without zooming this would be a fixed value, and there would be no need to recompute it every time as below.

(1) consulting an f-k1 look-up table
: since the focal length f and the lens distortion coefficient k1 keep changing while zooming, build a look-up table in advance and consult it at run time
* when the feature points all lie on a single plane, the focal length f and the camera translation Tz along the z axis are coupled, so the camera parameters are hard to compute correctly; if, as a workaround, Tz/f is used as the table index for planar feature points, the camera must then be fixed rather than moving along the z axis (Tz = 0)

(2) using collinearity
: searching for the k1 that maximally preserves collinearity, i.e. the distortion coefficient for which identified intersections that originally belong to one straight line become most collinear after distortion compensation

  1) pick three of the intersections (Xf, Yf) that belong to the same horizontal line in the image

  2) obtain the distorted image-plane coordinates (Xd, Yd) from Eq. 7

  3) obtain the distortion-compensated image-plane coordinates (Xu, Yu) from Eq. 5

  4) define an error function E(k1) as in Eq. 21

  5) find the k1 that minimizes E(k1) over the N horizontal lines in the image (Eq. 23) -> a nonlinear optimization, but a single iteration suffices (see the sketch below)
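
A rough sketch of this search, under two assumptions the post leaves implicit: Tsai's radial model Xu = Xd*(1 + k1*r^2) with r^2 = Xd^2 + Yd^2 for the compensation in step 3), and, in place of Eqs. 21 and 23 (not reproduced here), a collinearity error measured as the squared distance of the undistorted middle point from the line through the two outer points, minimized by brute-force search rather than the paper's one-iteration scheme:

#include <stdio.h>

/* squared distance of point 1 from the line through points 0 and 2,
   after compensating radial distortion with coefficient k1 */
static double triplet_error(const double xd[3], const double yd[3], double k1)
{
    double xu[3], yu[3];
    for (int i = 0; i < 3; i++) {
        double r2 = xd[i] * xd[i] + yd[i] * yd[i];
        xu[i] = xd[i] * (1.0 + k1 * r2);
        yu[i] = yd[i] * (1.0 + k1 * r2);
    }
    double dx  = xu[2] - xu[0], dy = yu[2] - yu[0];
    double num = dx * (yu[0] - yu[1]) - dy * (xu[0] - xu[1]);
    return (num * num) / (dx * dx + dy * dy);
}

/* grid-search the k1 minimizing E(k1) summed over n point triplets */
static double search_k1(const double xd[][3], const double yd[][3], int n)
{
    double best_k1 = 0.0, best_e = 1e30;
    for (double k1 = -1e-6; k1 <= 1e-6; k1 += 1e-9) {
        double e = 0.0;
        for (int i = 0; i < n; i++) e += triplet_error(xd[i], yd[i], k1);
        if (e < best_e) { best_e = e; best_k1 = k1; }
    }
    return best_k1;
}

int main(void)
{
    /* synthetic check: three collinear points distorted with k1 = 5e-7
       (the inversion Xd ~= Xu*(1 - k1*r^2) is only approximate) */
    double true_k1 = 5e-7;
    double xu[3] = { -100, 0, 120 }, yu[3] = { 50, 50, 50 };
    double xd[1][3], yd[1][3];
    for (int i = 0; i < 3; i++) {
        double r2 = xu[i] * xu[i] + yu[i] * yu[i];
        xd[0][i] = xu[i] * (1.0 - true_k1 * r2);
        yd[0][i] = yu[i] * (1.0 - true_k1 * r2);
    }
    printf("estimated k1 = %g\n", search_k1(xd, yd, 1));
    return 0;
}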
 
3. Tsai's algorithm
Once the lens distortion coefficient is known, the camera calibration can be solved by linear methods.




3> filtering
Noise introduces errors into the detected intersections, so the camera parameters come out wrong
(-> even when the camera is stationary, the camera parameters fluctuate, and as a result the graphics-generated virtual set appears to jitter)

averaging filter (Journal of the Institute of Electronics Engineers of Korea, Vol. 36-S, No. 7, Eq. 19); a sketch follows below
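
The exact form of Eq. 19 is not reproduced here; as an illustration, a plain N-frame moving average applied to one camera parameter per filter would look like this (window length chosen arbitrarily):

#include <stdio.h>

#define WIN 8  /* window length: an assumption, not the paper's value */

typedef struct { double buf[WIN]; int count, head; } AvgFilter;

/* push a new estimate and return the average of the last WIN values */
double avg_filter_update(AvgFilter *f, double x)
{
    f->buf[f->head] = x;
    f->head = (f->head + 1) % WIN;
    if (f->count < WIN) f->count++;
    double s = 0.0;
    for (int i = 0; i < f->count; i++) s += f->buf[i];
    return s / f->count;
}

int main(void)
{
    AvgFilter f = { { 0 }, 0, 0 };
    printf("%f\n", avg_filter_update(&f, 10.0)); /* 10.000000 */
    printf("%f\n", avg_filter_update(&f, 12.0)); /* 11.000000 */
    return 0;
}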









posted by maetel