Leeway is... the freedom that someone has to take the action they want to or to change their plans.
maetel

2010. 9. 26. 19:30 Computer Vision
The OpenCV function cvThreshold (cv::threshold)

source code link: https://code.ros.org/trac/opencv/browser/tags/2.1/opencv/src/cv/cvthresh.cpp
file: /opencv/src/cv/cvthresh.cpp


Learning OpenCV: Chapter 5 Image Processing: Threshold (p. 135)


cf. Otsu method




Original image

Input image



Figure captions (settings used for each result image):

CV_THRESH_BINARY:
  • threshold=100, max_val=100
  • threshold=100, max_val=200
  • threshold=200, max_val=200

CV_THRESH_TRUNC:
  • threshold=50
  • threshold=50
  • threshold=150

CV_THRESH_TOZERO:
  • threshold=50
  • threshold=100
  • threshold=200

Otsu comparison:
  • threshold=100, max_val=200, CV_THRESH_BINARY
  • threshold=100, max_val=200, CV_THRESH_BINARY & CV_THRESH_OTSU
  • threshold=100, max_val=200, CV_THRESH_OTSU


In the code, CV_THRESH_OTSU has the same effect as CV_THRESH_BINARY | CV_THRESH_OTSU.
With CV_THRESH_OTSU, the function computes a threshold value internally from the input image, regardless of the initial value of the "threshold" argument, and assigns max_value to the pixels selected accordingly.
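As a quick check of this behavior, a minimal sketch (assumption: "input.jpg" is a stand-in file name; the test images from this post are not part of the code):

// Otsu: the threshold argument (100) is ignored; a value is computed internally,
// and pixels selected by the resulting binary thresholding get max_val (200).
IplImage* src = cvLoadImage( "input.jpg", 0 ); // 0: load as 8-bit grayscale
IplImage* dst = cvCreateImage( cvGetSize(src), 8, 1 );
double t = cvThreshold( src, dst, 100, 200, CV_THRESH_BINARY | CV_THRESH_OTSU );
// t holds the threshold value Otsu's method actually chose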


posted by maetel

2010. 9. 7. 17:29 Computer Vision
Using the OpenCV functions cvFindChessboardCorners() and cvDrawChessboardCorners()


bool findChessboardCorners(const Mat& image, Size patternSize, vector<Point2f>& corners, int flags=CV_CALIB_CB_ADAPTIVE_THRESH+ CV_CALIB_CB_NORMALIZE_IMAGE)

Finds the positions of the internal corners of the chessboard.

Parameters:
  • image – Source chessboard view; it must be an 8-bit grayscale or color image
  • patternSize – The number of inner corners per chessboard row and column ( patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows) )
  • corners – The output array of corners detected
  • flags

    Various operation flags, can be 0 or a combination of the following values:

    • CV_CALIB_CB_ADAPTIVE_THRESH use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
    • CV_CALIB_CB_NORMALIZE_IMAGE normalize the image gamma with EqualizeHist before applying fixed or adaptive thresholding.
    • CV_CALIB_CB_FILTER_QUADS use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads that are extracted at the contour retrieval stage.



file:///Users/lym/opencv/src/cv/cvcalibinit.cpp
https://code.ros.org/trac/opencv/browser/tags/2.1/opencv/src/cv/cvcalibinit.cpp



void drawChessboardCorners(Mat& image, Size patternSize, const Mat& corners, bool patternWasFound)

Renders the detected chessboard corners.

Parameters:
  • image – The destination image; it must be an 8-bit color image
  • patternSize – The number of inner corners per chessboard row and column. ( patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows) )
  • corners – The array of corners detected
  • patternWasFound – Indicates whether the complete board was found or not. One may just pass the return value of FindChessboardCorners here



Learning OpenCV: Chapter 11. Camera Models and Calibration: Chessboards, p. 381


chessboard.bmp (640x480)





console:
finding chessboard corners...
what = 1
chessboard corners: 215.5, 179
#0=(215.5, 179)    #1=(237.5, 178.5)    #2=(260.5, 178)    #3=(283.5, 177.5)    #4=(307, 177)    #5=(331.5, 175.5)    #6=(355.5, 174.5)    #7=(380.5, 174)    #8=(405.5, 173.5)    #9=(430.5, 172.5)    #10=(212.5, 201.5)    #11=(235.5, 201.5)    #12=(258, 200.5)    #13=(280.5, 200.5)    #14=(305.5, 199.5)    #15=(330, 198.5)    #16=(354.5, 198)    #17=(379.5, 197.5)    #18=(405.5, 196.5)    #19=(430.5, 196)    #20=(210, 224.5)    #21=(232.5, 224.5)    #22=(256, 223.5)    #23=(280, 224)    #24=(304, 223)    #25=(328.5, 222.5)    #26=(353.5, 222)    #27=(378.5, 221.5)    #28=(404.5, 221.5)    #29=(430.5, 220.5)    #30=(207, 247.5)    #31=(230.5, 247.5)    #32=(253.5, 247.5)    #33=(277.5, 247)    #34=(303, 247)    #35=(327, 246.5)    #36=(352, 246.5)    #37=(377.5, 246)    #38=(403.5, 245.5)    #39=(430, 245.5)    #40=(204.5, 271.5)    #41=(227.5, 271.5)    #42=(251.5, 271.5)    #43=(275.5, 271.5)    #44=(300, 272)    #45=(325.5, 271.5)    #46=(351, 271)    #47=(376.5, 271.5)    #48=(403, 271.5)    #49=(429.5, 271)    #50=(201.5, 295.5)    #51=(225.5, 295.5)    #52=(249.5, 296)    #53=(273.5, 296.5)    #54=(299, 297)    #55=(324, 296)    #56=(349.5, 296.5)    #57=(375.5, 296.5)    #58=(402.5, 296.5)    #59=(429, 297)   

finished









finding chessboard corners...
what = 0
chessboard corners: 0, 0
#0=(0, 0)    #1=(0, 0)    #2=(0, 0)    #3=(0, 0)    #4=(0, 0)    #5=(0, 0)    #6=(0, 0)    #7=(0, 0)    #8=(0, 0)    #9=(0, 0)    #10=(0, 0)    #11=(0, 0)    #12=(0, 0)    #13=(0, 0)    #14=(0, 0)    #15=(0, 0)    #16=(0, 0)    #17=(0, 0)    #18=(0, 0)    #19=(0, 0)    #20=(0, 0)    #21=(0, 0)    #22=(0, 0)    #23=(0, 0)    #24=(0, -2.22837e-29)    #25=(-2.22809e-29, -1.99967)    #26=(4.2039e-45, -2.22837e-29)    #27=(-2.22809e-29, -1.99968)    #28=(4.2039e-45, 1.17709e-43)    #29=(6.72623e-44, 1.80347e-42)    #30=(0, 0)    #31=(4.2039e-45, 1.45034e-42)    #32=(-2.2373e-29, -1.99967)    #33=(4.2039e-45, 2.52094e-42)    #34=(-2.2373e-29, -1.99969)    #35=(-2.22634e-29, -1.99968)    #36=(4.2039e-45, 1.17709e-43)    #37=(6.72623e-44, 1.80347e-42)    #38=(0, 0)    #39=(0, 1.80347e-42)    #40=(3.36312e-44, 5.46787e-42)    #41=(6.45718e-42, 5.04467e-44)    #42=(0, 1.80347e-42)    #43=(6.48101e-42, 5.48188e-42)    #44=(0, 1.4013e-45)    #45=(4.2039e-45, 0)    #46=(1.12104e-44, -2.22837e-29)    #47=(-2.22809e-29, -1.99969)    #48=(4.2039e-45, 6.72623e-44)    #49=(6.16571e-44, 1.80347e-42)    #50=(0, 0)    #51=(1.4013e-45, -2.27113e-29)    #52=(4.56823e-42, -1.99969)    #53=(4.2039e-45, -2.20899e-29)    #54=(-2.2373e-29, -1.9997)    #55=(-2.22619e-29, -1.99969)    #56=(4.2039e-45, 6.72623e-44)    #57=(-1.9997, 1.80347e-42)    #58=(0, -2.22957e-29)    #59=(-2.23655e-29, -2.20881e-29)   

finished

















source code:
// Test: chessboard detection

#include <OpenCV/OpenCV.h> // frameworks on mac
//#include <cv.h>
//#include <highgui.h>

#include <iostream>
using namespace std;


int main()
{
    IplImage* image = cvLoadImage( "DSCN3310.jpg", 1 );

/*  // alternative: capture from a camera instead of loading a file
    IplImage* image = 0;
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");

    while(1) {
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            image = cvRetrieveFrame(capture); // retrieve the captured frame
*/
    cvNamedWindow( "camera" );  cvShowImage( "camera", image );

    cout << endl << "finding chessboard corners..." << endl;
    CvPoint2D32f corners[60];
    int numCorners = 0; // number of corners actually found
    // cvFindChessboardCorners( const void* image, CvSize pattern_size,
    //                          CvPoint2D32f* corners, int* corner_count, int flags )
    int what = cvFindChessboardCorners( image, cvSize(10,6), corners, &numCorners,
                                        CV_CALIB_CB_ADAPTIVE_THRESH );
    cout << "what = " << what << endl;
    cout << "chessboard corners: " << corners[0].x << ", " << corners[0].y << endl;

    // note: when detection fails, entries beyond numCorners hold garbage,
    // which is exactly what the failed console runs above show
    for( int n = 0; n < 60; n++ )
    {
        cout << "#" << n << "=(" << corners[n].x << ", " << corners[n].y << ")\t";
    }
    cout << endl;

    // cvDrawChessboardCorners( CvArr* image, CvSize pattern_size,
    //                          CvPoint2D32f* corners, int count, int pattern_was_found )
    cvDrawChessboardCorners( image, cvSize(10,6), corners, 60, what );

    cvNamedWindow( "chessboard" ); cvMoveWindow( "chessboard", 200, 200 ); cvShowImage( "chessboard", image );
    cvSaveImage( "chessboard.bmp", image );
    cvWaitKey(0);
//      } // end of else (camera-capture variant)
//  } // end of while (camera-capture variant)

    cout << endl << "finished" << endl;
//  cvReleaseCapture( &capture ); // release the capture source
    cvDestroyAllWindows();

    return 0;
}





posted by maetel

OpenCV: cvFindContours( )

cvFindContours()
int cvFindContours(CvArr* image, CvMemStorage* storage, CvSeq** first_contour, int header_size=sizeof(CvContour), int mode=CV_RETR_LIST, int method=CV_CHAIN_APPROX_SIMPLE, CvPoint offset=cvPoint(0, 0))

Finds the contours in a binary image.

Parameters:
  • image – The source, an 8-bit single channel image. Non-zero pixels are treated as 1's, zero pixels remain 0's, i.e. the image is treated as binary. To get such a binary image from grayscale, one may use Threshold, AdaptiveThreshold or Canny. The function modifies the source image's content
  • storage – Container of the retrieved contours
  • first_contour – Output parameter, will contain the pointer to the first outer contour
  • header_size – Size of the sequence header: >= sizeof(CvChain) if method = CV_CHAIN_CODE, and >= sizeof(CvContour) otherwise
  • mode

    Retrieval mode

    • CV_RETR_EXTERNAL retrieves only the extreme outer contours
    • CV_RETR_LIST retrieves all of the contours and puts them in the list
    • CV_RETR_CCOMP retrieves all of the contours and organizes them into a two-level hierarchy: on the top level are the external boundaries of the components, on the second level are the boundaries of the holes
    • CV_RETR_TREE retrieves all of the contours and reconstructs the full hierarchy of nested contours
  • method

    Approximation method (for all the modes, except CV_LINK_RUNS, which uses a built-in approximation)

    • CV_CHAIN_CODE outputs contours in the Freeman chain code. All other methods output polygons (sequences of vertices)
    • CV_CHAIN_APPROX_NONE translates all of the points from the chain code into points
    • CV_CHAIN_APPROX_SIMPLE compresses horizontal, vertical, and diagonal segments and leaves only their end points
    • CV_CHAIN_APPROX_TC89_L1,CV_CHAIN_APPROX_TC89_KCOS applies one of the flavors of the Teh-Chin chain approximation algorithm.
    • CV_LINK_RUNS uses a completely different contour retrieval algorithm by linking horizontal segments of 1’s. Only the CV_RETR_LIST retrieval mode can be used with this method.
  • offset – Offset, by which every contour point is shifted. This is useful if the contours are extracted from the image ROI and then they should be analyzed in the whole image context
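A minimal usage sketch (assumptions: iplBinary is an 8-bit single-channel binary image, e.g. from cvThreshold or cvCanny, and iplCanvas is a color image to draw on; note that cvFindContours modifies iplBinary):

CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* firstContour = 0;
int nContours = cvFindContours( iplBinary, storage, &firstContour,
                                sizeof(CvContour), CV_RETR_LIST,
                                CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0) );
// walk the list of contours and draw each one
for( CvSeq* c = firstContour; c != 0; c = c->h_next )
    cvDrawContours( iplCanvas, c, CV_RGB(255,0,0), CV_RGB(0,0,255), 0, 1, 8 );
cvReleaseMemStorage( &storage );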



Learning OpenCV: Chapter 8. Contours: Contour Finding, p. 234
"the concept of a contour tree"


Suzuki, S. and Abe, K. (1985). Topological Structural Analysis of Digitized Binary Images by Border Following. Computer Vision, Graphics, and Image Processing, 30(1): 32-46.


posted by maetel
2010. 6. 2. 23:53 Computer Vision
The function cvCalibrateCamera2() is defined in the file /opencv/src/cv/cvcalibration.cpp.

Alternatively, on the OpenCV Trac site: /branches/OPENCV_1_0/opencv/src/cv/cvcalibration.cpp





ref. Learning OpenCV: Chapter 11 Camera Models and Calibration

378p: "Calibration"
http://www.vision.caltech.edu/bouguetj/calib_doc/

388p: Intrinsic parameters are directly tied to the 3D geometry (and hence the extrinsic parameters) of where the chessboard is in space; distortion parameters are tied to the 3D geometry of how the pattern of points gets distorted, so we deal with the constraints on these two classes of parameters separately.

389p: The algorithm OpenCV uses to solve for the focal lengths and offsets is based on Zhang's method [Zhang00], but OpenCV uses a different method based on Brown [Brown71] to solve for the distortion parameters.


1) number of views

Among the arguments: pointCounts – Integer 1xM or Mx1 vector (where M is the number of calibration pattern views).
In our case each input frame is a single view, so M = 1.

int numView = 1; // number of calibration pattern views
// integer "1*numView" vector
CvMat* countsP = cvCreateMat( numView, 1, CV_32SC1 );
// the sum of vector elements must match the size of objectPoints and imagePoints
// cvmSet( countsP, 0, 0, numPair );    // <-- this caused an error; changed to:
cvSet( countsP, cvScalar(numPair) );


2) rotation vector and translation vector

If the matrices receiving the extrinsic parameters among the outputs of cvCalibrateCamera2() are created like this:

                // extrinsic parameters
                CvMat* vectorR  = cvCreateMat( 3, 1, CV_32FC1 ); // rotation  vector
                CvMat* vectorT  = cvCreateMat( 3, 1, CV_32FC1 ); // translation vector

the following error messages appear:

OpenCV ERROR: Bad argument (the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/cvcalibration.cpp(1488)

OpenCV ERROR: Bad argument (the output array of translation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/

They must be changed as follows:

                // extrinsic parameters
                CvMat* vectorR  = cvCreateMat( 1, 3, CV_32FC1 ); // rotation  vector
                CvMat* vectorT  = cvCreateMat( 1, 3, CV_32FC1 ); // translation vector
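Putting the pieces together, a minimal call sketch (assumptions: objectPoints and imagePoints are numPair x 3 and numPair x 2 CV_32FC1 matrices filled elsewhere in the program, and 320x240 is an assumed frame size):

CvMat* cameraMatrix = cvCreateMat( 3, 3, CV_32FC1 ); // intrinsic parameters
CvMat* distCoeffs   = cvCreateMat( 4, 1, CV_32FC1 ); // k1, k2, p1, p2
cvCalibrateCamera2( objectPoints, imagePoints, countsP, cvSize(320,240),
                    cameraMatrix, distCoeffs, vectorR, vectorT, 0 );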


3) rotation matrix

To get the camera rotation in the desired form, the rotation vector must be converted to a rotation matrix. (See the explanation below, and the sketch that follows.)

Learning OpenCV: p. 394
"(The rotation vector) represents an axis in three-dimensional space in the camera coordinate system around which (the pattern) was rotated and where the length or magnitude of the vector encodes the counterclockwise angle of the rotation. Each of these rotation vectors can be converted to a 3-by-3 rotation matrix by calling cvRodrigues2()."

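A minimal sketch of that conversion (assuming vectorR from above):

CvMat* matrixR = cvCreateMat( 3, 3, CV_32FC1 ); // rotation matrix
cvRodrigues2( vectorR, matrixR, NULL ); // rotation vector -> 3x3 rotation matrix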







Check #1: the number of corresponding point pairs

cvCalibrateCamera2() is said to perform camera calibration given four valid pairs of corresponding points; however, while the function runs without error and outputs computed values, the result is not accurate. (OpenCV's algorithm computes the lens distortion parameters and the camera intrinsic parameters separately. For the lens distortion it solves for five unknowns, three radial distortion coefficients and two tangential distortion coefficients, so in theory it can be computed once six values are obtained from three 2-D (x,y) points in the image.)

The result of calibrating with four pairs of corresponding points in our program is as follows.

Case where pattern recognition succeeded with 4 corresponding pairs

Using the calibration result from the 4 corresponding pairs visible in the left image, the 4 pattern points are reprojected onto the input image for verification


 
frame # 103  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=1958.64 0 cx=160.37
0 fy=792.763 cy=121.702
0 0 1

lens distortion
k1 = -8.17823
k2 = -0.108369
p1 = -0.388965
p2 = -0.169033

rotation vector
4.77319  63.4612  0.300428

translation vector
-130.812  -137.452  714.396


Re-checking...

Case where pattern recognition succeeded with 4 corresponding pairs

Using the calibration result from the 4 corresponding pairs visible in the left image, the 4 pattern points are reprojected onto the input image for verification



frame # 87  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=372.747 0 cx=159.5
0 fy=299.305 cy=119.5
0 0 1

lens distortion
k1 = -7.36674e-14
k2 = 8.34645e-14
p1 = -9.57187e-15
p2 = -4.6854e-15

rotation vector
-0.276568  -0.125119  -0.038675

translation vector
-196.841  -138.012  168.806


In other words, a camera matrix is indeed computed arithmetically from 4 pairs of corresponding points. With the current pattern, however, the cases that yield 4 corresponding pairs are mostly those where the 4 corner points of a single grid cell in the pattern were detected. The position coordinates of 4 adjacent points cannot be considered to reflect the effect of lens distortion sufficiently, so the result amounts to assuming there is no lens distortion, k1 = 0, as in the output above.

One more thing: the first thing cvCalibrateCamera2() does internally is call cvConvertPointsHomogeneous() to convert the corresponding world and image coordinates in the input matrices into one-dimensional homogeneous coordinates. The documentation of that function notes: "It is always safe to use the function with number of points N >= 5, or to use multi-channel Nx1 or 1xN arrays."

It is assumed that the camera lens has no skew.
Fixed initialization of the intrinsic parameters is possible only when the pattern providing the correspondences is planar (z = 0) ("where z-coordinates of the object points must be all 0's.")
 
 
posted by maetel

2010. 5. 27. 20:53 Computer Vision
virtual object rendering test
testing the composition of virtual graphics

Compositing OpenGL graphics with OpenCV images

Compositing the camera input image, received through OpenCV functions and stored as an IplImage, with graphics drawn by OpenGL functions.


Way #1.

Turn the image frame received from OpenCV's camera input into a texture, put it into the OpenGL display window as the background (texture-mapped onto a plane), draw the graphics over it, and display the result.
ref. http://cafe.naver.com/opencv/12266

To sum up after much investigation:
it is done once IplImage, OpenCV's image data structure, can be fed as the input texture to glTexImage2D(), the OpenGL function that performs texture mapping.
ref.
http://www.gamedev.net/community/forums/topic.asp?topic_id=205527
http://www.rauwendaal.net/blog/opencvandopengl-1
ARTag source code: cfar_code/OpenGL/opengl_only_test/opengl_only_test.cpp
ARTag source code: cfar_code/IntroARProg/basic_artag_opengl/basic_artag_opengl.cpp

A problem found while testing:
calling cvRetrieveFrame() from glutDisplayFunc() and looping it with glutMainLoop(), instead of looping it under while and showing it with cvShowImage(), takes a long time. (* cvGrabFrame() is fine.)


ref. OpenGL Programming Guide - Chapter 9 - Texture Mapping
Textures are simply rectangular arrays of data - for example, color data, luminance data, or color and alpha data. The individual values in a texture array are often called texels.

The data describing a texture may consist of one, two, three, or four elements per texel, representing anything from a modulation constant to an (R, G, B, A) quadruple.

A texture object stores texture data and makes it readily available. You can now control many textures and go back to textures that have been previously loaded into your texture resources.
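A minimal sketch of the core step (assumptions: an OpenGL context already exists, and img is a 3-channel BGR IplImage from the capture loop):

GLuint tex;
glGenTextures( 1, &tex );
glBindTexture( GL_TEXTURE_2D, tex );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glPixelStorei( GL_UNPACK_ALIGNMENT, 4 ); // IplImage rows are 4-byte aligned by default
// OpenCV stores pixels as BGR, so GL_BGR avoids a manual channel swap
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB, img->width, img->height, 0,
              GL_BGR, GL_UNSIGNED_BYTE, img->imageData );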


After six days of all kinds of digging, the source code comes to barely a few lines. ㅜㅜ

The glLoadMatrix() function:
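A sketch of the idea (assumptions: R is the 3x3 rotation matrix and t the translation vector from calibration, stored row-major in plain double arrays). OpenGL expects a column-major 4x4 modelview matrix:

double m[16] = {
    R[0], R[3], R[6], 0.0,  // column 0
    R[1], R[4], R[7], 0.0,  // column 1
    R[2], R[5], R[8], 0.0,  // column 2
    t[0], t[1], t[2], 1.0   // column 3 (translation)
};
glMatrixMode( GL_MODELVIEW );
glLoadMatrixd( m );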




Way #2.

Pass the image frame obtained from the camera in OpenCV, plus the OpenGL graphics information computed from it, into OpenCV's IplImage.

ref.
http://cafe.naver.com/opencv/12622


http://webcache.googleusercontent.com/search?q=cache:xUG17-FlHQMJ:www.soe.ucsc.edu/classes/cmps260/Winter99/Winter99/handouts/proj1/proj1_99.html+tsai+opengl&cd=5&hl=ko&ct=clnk&gl=kr


http://www.google.com/codesearch/p?hl=ko#zI2h2OEMZ0U/~mgattass/ra/software/tsai.zip%7CsGIrNzsqK4o/tsai/src/main.c&q=tsai%20glut&l=9

http://www.google.com/codesearch/p?hl=ko#XWPk_ZdtAX4/classes/cmps260/Winter99/handouts/proj1/cs260proj1.tar.gz|DRo4_7nUzpo/CS260/camera.c&q=tsai%20glut&d=7


posted by maetel
2010. 5. 26. 22:59 Computer Vision
2010/02/10 - [Visual Information Processing Lab] - Seong-Woo Park & Yongduek Seo & Ki-Sang Hong
2010/05/18 - [Visual Information Processing Lab] - virtual studio 구현: camera calibration test



1. Computing the intrinsic parameters

Use the cvCalibrateCamera2() function to obtain the camera intrinsic/extrinsic parameters and the lens distortion coefficients.


frame # 191  ---------------------------
# of found lines = 8 vertical, 6 horizontal
vertical lines:
horizontal lines:
p.size = 48
CRimage.size = 48
# of corresponding pairs = 15 = 15

camera matrix
fx=286.148 0 cx=207.625
0 fy=228.985 cy=98.8437
0 0 1

lens distortion
k1 = 0.0728017
k2 = -0.0447815
p1 = -0.0104295
p2 = 0.00914935

rotation vector
-0.117104  -0.109022  -0.0709096

translation vector
-208.234  -160.983  163.298



Using this result with cvProjectPoints2() to find the image points corresponding to the pattern points gives the result below.
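A minimal sketch of that reprojection (assumptions: objectPoints is an N x 3 CV_32FC1 matrix of pattern points, and the other matrices come from the cvCalibrateCamera2 call above):

CvMat* imagePoints = cvCreateMat( objectPoints->rows, 2, CV_32FC1 );
cvProjectPoints2( objectPoints, vectorR, vectorT,
                  cameraMatrix, distCoeffs, imagePoints );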




1-1.

Instead of cvCalibrateCamera2(), which computes both the intrinsic and extrinsic camera parameters, try cvInitIntrinsicParams2D(), which computes only the intrinsic parameters.



2. Rectification using the lens distortion (kappa1, kappa2)

When pattern recognition succeeds, the camera calibration result naturally becomes accurate, and from it the object or graphic coordinates needed to composite a virtual object can be computed in real time. In our current program the cause of pattern-recognition failure is error in line detection. This error has several causes, but the biggest is lens distortion (currently not taken into account). As a result, several (2-3) lines are actually detected for one real line (the NMS algorithm alone shows its limits in reducing this error), and the positional error of the intersection points computed from them becomes the decisive error in the cross-ratio computation. Since the current style of pattern generation and pattern recognition depends absolutely on the cross-ratio values, this problem must be solved. Therefore, take the lens distortion into account, unwarp the input image (rectification), and then apply the existing pattern-recognition algorithm.

ref.
Learning OpenCV: Chapter 6: Image Transforms
opencv v2.1 documentation — Geometric Image Transformations


1) Undistortion

Learning OpenCV: p. 396
"OpenCV provides us with a ready-to-use undistortion algorithm that takes a raw image and the distortion coefficients from cvCalibrateCamera2() and produces a corrected image (see Figure 11-12). We can access this algorithm either through the function cvUndistort2(), which does everything we need in one shot, or through the pair of routines cvInitUndistortMap() and cvRemap(), which allow us to handle things a little more efficiently for video or other situations where we have many images from the same camera. (* We should take a moment to clearly make a distinction here between undistortion, which mathematically removes lens distortion, and rectification, which mathematically aligns the images with respect to each other.)"

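A minimal sketch of the cvInitUndistortMap() + cvRemap() route (assumptions: cameraMatrix and distCoeffs come from cvCalibrateCamera2 above; iplInput and iplUndistorted are same-sized frames):

IplImage* mapX = cvCreateImage( cvGetSize(iplInput), IPL_DEPTH_32F, 1 );
IplImage* mapY = cvCreateImage( cvGetSize(iplInput), IPL_DEPTH_32F, 1 );
cvInitUndistortMap( cameraMatrix, distCoeffs, mapX, mapY ); // once per camera
cvRemap( iplInput, iplUndistorted, mapX, mapY,
         CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0) ); // per frame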
Input image (with lens distortion)

Output image (distortion removed)






 

# of corresponding pairs = 30 = 30

camera matrix
fx=94.6664 0 cx=206.772
0 fy=78.3349 cy=158.782
0 0 1

lens distortion
k1 = 0.0130734
k2 = -0.000955421
p1 = 0.00287948
p2 = 0.00158042









Filtering calibration results by plausible ranges of k1 and the principal point (cx, cy):

            if ( ( k1 > 0.3 && k1 < 0.6 ) && ( cx > 150.0 && cx < 170.0 ) && ( cy > 110 && cy < 130 ) )


# of corresponding pairs = 42 = 42

camera matrix
fx=475.98 0 cx=162.47
0 fy=384.935 cy=121.552
0 0 1

lens distortion
k1 = 0.400136
k2 = -0.956089
p1 = 0.00367761
p2 = 0.00547217







2) Rectification




cvInitUndistortRectifyMap



3. line detection




4. pattern recognition (finding corresponding points)




5. computing the extrinsic parameters (input: the results of step 4 & lens distortion = 0)
cvFindExtrinsicCameraParams2()
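A minimal call sketch (assumptions: objectPoints / imagePoints hold the correspondences from step 4; the distortion input is zeroed because the image was already rectified in step 2):

CvMat* distZero = cvCreateMat( 4, 1, CV_32FC1 );
cvZero( distZero ); // lens distortion = 0
cvFindExtrinsicCameraParams2( objectPoints, imagePoints,
                              cameraMatrix, distZero, vectorR, vectorT );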



6. reprojection
to be done on the rectified image obtained in step 2




posted by maetel
2010. 4. 22. 20:05 Computer Vision
Graphics and Media Lab
CMC department, Moscow State University
http://graphics.cs.msu.ru/en/science/research/calibration/cpp
posted by maetel
2010. 4. 14. 16:40 Computer Vision
Upgrading OpenCV to version 2 on the Macintosh
http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port

Current lab Mac mini specs (Leopard):
 System Version:    Mac OS X 10.5.8 (9L30)
 Kernel Version:    Darwin 9.8.0


0. Upgrading MacPorts


1) Check the MacPorts port info

$ port info macports

MacPorts @1.8.2 (sysutils)
Variants:             darwin_10, darwin_7, darwin_8, darwin_8_i386,
                      darwin_8_powerpc, darwin_9, darwin_9_i386,
                      darwin_9_powerpc, universal

Description:          MacPorts provides the infrastructure that allows easy
                      installation and management of freely available software
                      on Mac OS X 10.4 or newer systems.
Homepage:             http://www.macports.org/

Platforms:            darwin, freebsd
License:              unknown
Maintainers:          macports-mgr@lists.macosforge.org


2) Check the existing MacPorts version and download the new one

$ sudo port selfupdate

MacPorts base version 1.710 installed
Downloaded MacPorts base version 1.800

Installing new MacPorts release in /opt/local as root:admin - TCL-PACKAGE in /Library/Tcl; Permissions: 0755


3) Install the new MacPorts version

$ sudo port -v selfupdate
Password:



4) Check the OpenCV port

$ port search opencv

opencv @2.0.0 (graphics, science)
    Intel(R) Open Source Computer Vision Library

Installing this port is said to give a 64-bit OpenCV 2.0, which I take to apply to Snow Leopard. This Mac mini, on plain Leopard, was confirmed (though not with certainty) to be 32-bit. In any case, the notes say that Snow Leopard (Mac OS X 10.6) users must give up QuickTime (iSight) and Carbon (GUI) support. I am on plain Leopard (Mac OS X 10.5), but it feels risky, so I rule out upgrading OpenCV through MacPorts for now. Instead, as the official wiki instructs, I will install the new OpenCV version using CMake.


1. Installing OpenCV with CMake

1) Install subversion

Download and install subversion with MacPorts:

$ sudo port install subversion

I have used subversion ( http://en.wikipedia.org/wiki/Subversion_%28software%29 ) before, on my old PowerBook. Did it take this long back then too...



2) Install CMake

$ sudo port install cmake



cf. CMake?
http://en.wikipedia.org/wiki/CMake
http://www.cmake.org/


3) Download the OpenCV source code

svn co https://code.ros.org/svn/opencv/trunk/opencv

An error message appeared, so I renamed the existing "opencv" folder and ran the command again; a new "opencv" folder and the following files were created.


At the end, the message "Checked out revision 3024." appeared.
https://code.ros.org/trac/opencv/changeset/3024


4) Generate the Makefiles

Go into the generated opencv folder and generate Unix makefiles with CMake. (Options can be added; see the official wiki.)

$ cd opencv
$ sudo cmake -G "Unix Makefiles"




5) Build


$ sudo make -j8





$ sudo make install




6) Verify

-1) Directories not visible in a Finder window, such as "/usr/local/", can be entered directly from the Finder menu Go > Go to Folder.
-2) Since the new OpenCV version was not installed via MacPorts, searching installed ports with the MacPorts command "port installed" shows only the 1.0.0 version installed earlier with MacPorts. (That 1.0.0 version lives in "/opt/local/var/macports/software/opencv/1.0.0_0/opt/local/lib".)



2. Using the OpenCV libraries in Xcode

The official wiki's instructions:

Using the OpenCV libraries in an Xcode OS X project

These instructions were written for Xcode 3.1.x

  • Create a new XCode project using the Command Line Utility/Standard Tool template
  • Select Project -> Edit Project Settings

  • Set Configuration to All Configurations
  • In the Architectures section, double-click Valid Architectures and remove all the PPC architectures
  • In the Search Paths section set Header Search Paths to /usr/local/include/opencv
  • Close the Project Info window
  • Select Project -> New Group and create a group called OpenCV Frameworks

  • With the new group selected, select Project -> Add to Project…

  • Press the "/" key to get the Go to the folder prompt
  • Enter /usr/local/lib
  • Select libcxcore.dylib, libcvaux.dylib, libcv.dylib, libhighgui.dylib, and libml.dylib.

  • Click Add
  • Uncheck Copy Items… and click Add

Now you should be able to include the OpenCV libraries, compile, and run your project


1) Build settings

In the Xcode menu, clicking Project -> Edit Project Settings opens the Project Info window. In the Build tab:
-1) Change the Configuration setting from "Active (Debug)" to "All Configurations".
-2) In Architectures, double-click "Valid Architectures" and delete everything in the list corresponding to PPC architectures.
-3) In Search Paths, set Header Search Paths to "/usr/local/include/opencv".


2) Adding the OpenCV frameworks to the project

-1) Close the Project Info window, add a "New Group" to the project, and name it "OpenCV Frameworks".
-2) With this group selected, add the five library files located in /usr/local/lib, as described in the quoted instructions.




3. Testing the Xcode project ...ing




/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
            cvGrabFrame( capture ); // capture a frame           
            image = cvRetrieveFrame(capture); // retrieve the captured frame
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}




[Session started at 2010-04-15 01:11:16 +0900.]
2010-04-15 01:11:22.273 opencv2test01[1192:7f23] *** _NSAutoreleaseNoPool(): Object 0xc5f0d0 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc320   240
ccccccccccccccccccccc320   240

[Session started at 2010-04-15 01:11:24 +0900.]
Loading program into debugger…
GNU gdb 6.3.50-20050815 (Apple version gdb-962) (Sat Jul 26 08:14:40 UTC 2008)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-apple-darwin".Program loaded.
sharedlibrary apply-load-rules all
Attaching to program: `/Users/lym/Documents/VIP/2010/opencv2test01/build/Debug/opencv2test01', process 1192.
unable to read unknown load command 0x22
unable to read unknown load command 0x22
StartNextIsochRead-ReadIsochPipeAsync: Error: kIOReturnIsoTooOld - isochronous I/O request for distant past!

The Debugger Debugger is attaching to process(gdb)

If execution is stopped, the following message is appended in place of (gdb) and the program terminates.

StartNextIsochRead-ReadIsochPipeAsync: Error: kIOReturnIsoTooOld - isochronous I/O request for distant past!
kill

The Debugger Debugger is attaching to process(gdb)




If, instead of taking video input from the camera, an image file is read from a folder:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
            cvGrabFrame( capture ); // capture a frame           
//            image = cvRetrieveFrame(capture); // retrieve the captured frame
            image = cvLoadImage("werol.jpg"); // load an image file instead
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}



 




[Session started at 2010-04-15 01:26:38 +0900.]
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
usbConnectToCam-SetConfiguration: Error: kIOReturnNotResponding - device not responding
2010-04-15 01:26:43.235 opencv2test01[1333:7f23] *** _NSAutoreleaseNoPool(): Object 0xc56040 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825





If the camera-capture part is removed:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
    IplImage* image = 0; // image
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
//        printf("bbbbbbbbbbbbbb");
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            printf("ccccccccccccccccccccc");
//            cvGrabFrame( capture ); // capture a frame
//            image = cvRetrieveFrame(capture); // retrieve the captured frame
            image = cvLoadImage("werol.jpg"); // load an image file instead
           
            cout << image->width << "   " << image->height << endl;
           
            cvShowImage( "camera", image );
           
            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );
   
    return 0;
   
}


[Session started at 2010-04-15 01:32:37 +0900.]
2010-04-15 01:32:43.091 opencv2test01[1377:7f23] *** _NSAutoreleaseNoPool(): Object 0xc50970 of class NSThread autoreleased with no pool in place - just leaking
Stack: (0x9143bf4f 0x91348432 0x9134e1a4 0xa260db7 0xa265e9a 0xa2649d3 0xa268cbd 0xa268130 0x90088935 0x93fcedb9 0x93e8f340 0x93e8f6ac 0x90088935 0x93fd117d 0x93e981c4 0x93e8f6ac 0x90088935 0x93fcfa81 0x93e7bc5d 0x93e80b2d 0x93e7b167 0x90088935 0x97ab89f8 0xdbf116 0xe6a016 0xe6a116 0x96917155 0x96917012)
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825
ccccccccccccccccccccc558   825




With no camera input and no image-file loading, just creating an image and then displaying/saving it:

/* Test: video capturing from a camera
 camera: Logitech QuickCam Pro 4000
 */

//#include <OpenCV/OpenCV.h>
#include <cv.h>
#include <highgui.h>
#include <iostream>
using namespace std;

int main()
{
   
    IplImage *iplImg = cvCreateImage(cvSize(500, 500), 8, 3);
    cvZero(iplImg);
    cvLine(iplImg, cvPoint(10, 10), cvPoint(300, 300), CV_RGB(255, 0, 0), 20);
    cvCircle(iplImg, cvPoint(400, 400), 40, CV_RGB(0, 255, 255), 5);
    cvNamedWindow("temp"); cvShowImage("temp", iplImg); cvSaveImage("temp.bmp", iplImg);  cvWaitKey();

    return 0;
}



The (abnormal) image that appears in the "temp" window and the (normally) saved file "temp.bmp" are shown below.

temp.bmp saved in the working folder

A screen capture of the "temp" window region





http://tech.groups.yahoo.com/group/OpenCV/message/70200


/opt/local/.........../opencv/1.0.0_0/opt/local/include/opencv/

..................../opt/local/lib






Testing OpenCV 1.0.0, installed earlier with MacPorts

Project header search path: /opt/local/include/opencv
Location of the library files to add to the project: /opt/local/lib






$ sudo port install opencv


posted by maetel
2010. 4. 7. 00:16 Computer Vision
The OpenCV function that finds straight lines via the Hough transform


CvSeq* cvHoughLines2(CvArr* image, void* storage, int method, double rho, double theta, int threshold, double param1=0, double param2=0)

Finds lines in a binary image using a Hough transform.

Parameters:
  • image – The 8-bit, single-channel, binary source image. In the case of a probabilistic method, the image is modified by the function
  • storage – The storage for the lines that are detected. It can be a memory storage (in this case a sequence of lines is created in the storage and returned by the function) or single row/single column matrix (CvMat*) of a particular type (see below) to which the lines’ parameters are written. The matrix header is modified by the function so its cols or rows will contain the number of lines detected. If storage is a matrix and the actual number of lines exceeds the matrix size, the maximum possible number of lines is returned (in the case of standard hough transform the lines are sorted by the accumulator value)
  • method

    The Hough transform variant, one of the following:

    • CV_HOUGH_STANDARD - classical or standard Hough transform. Every line is represented by two floating-point numbers $(\rho, \theta)$, where $\rho$ is the distance between the (0,0) point and the line, and $\theta$ is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type
    • CV_HOUGH_PROBABILISTIC - probabilistic Hough transform (more efficient in case if picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of CV_32SC4 type
    • CV_HOUGH_MULTI_SCALE - multi-scale variant of the classical Hough transform. The lines are encoded the same way as CV_HOUGH_STANDARD
  • rho – Distance resolution in pixel-related units
  • theta – Angle resolution measured in radians
  • threshold – Threshold parameter. A line is returned by the function if the corresponding accumulator value is greater than threshold
  • param1

    The first method-dependent parameter:

    • For the classical Hough transform it is not used (0).
    • For the probabilistic Hough transform it is the minimum line length.
    • For the multi-scale Hough transform it is the divisor for the distance resolution $\rho$. (The coarse distance resolution will be $\rho$ and the accurate resolution will be $\rho / \text{param1}$.)
  • param2

    The second method-dependent parameter:

    • For the classical Hough transform it is not used (0).
    • For the probabilistic Hough transform it is the maximum gap between line segments lying on the same line to treat them as a single line segment (i.e. to join them).
    • For the multi-scale Hough transform it is the divisor for the angle resolution $\theta$. (The coarse angle resolution will be $\theta$ and the accurate resolution will be $\theta / \text{param2}$.)

Memory storage is a low-level structure used to store dynamically growing data structures such as sequences, contours, graphs, subdivisions, etc.


Since the input image must be 8-bit single-channel, the input image (iplDoGx), created with "IPL_DEPTH_32F", is converted and stored into a new image (iplEdgeY) of "8"-bit depth as follows:

            cvConvert(iplDoGx, iplEdgeY);


The second argument, "void* storage", is the memory that will hold the detected lines; it corresponds to this function's output.

CvMemStorage

Growing memory storage.

typedef struct CvMemStorage
{
    struct CvMemBlock* bottom; /* first allocated block */
    struct CvMemBlock* top; /* the current memory block - top of the stack */
    struct CvMemStorage* parent; /* borrows new blocks from */
    int block_size; /* block size */
    int free_space; /* free space in the top block (in bytes) */
} CvMemStorage;



CvMemStorage* cvCreateMemStorage(int blockSize=0)

Creates memory storage.

Parameter:blockSize – Size of the storage blocks in bytes. If it is 0, the block size is set to a default value - currently it is about 64K.


That output is stored in the following CvSeq data structure.

CvSeq

Growable sequence of elements.

#define CV_SEQUENCE_FIELDS() \
    int flags; /* miscellaneous flags */ \
    int header_size; /* size of sequence header */ \
    struct CvSeq* h_prev; /* previous sequence */ \
    struct CvSeq* h_next; /* next sequence */ \
    struct CvSeq* v_prev; /* 2nd previous sequence */ \
    struct CvSeq* v_next; /* 2nd next sequence */ \
    int total; /* total number of elements */ \
    int elem_size; /* size of sequence element in bytes */ \
    char* block_max; /* maximal bound of the last block */ \
    char* ptr; /* current write pointer */ \
    int delta_elems; /* how many elements allocated when the sequence grows (sequence granularity) */ \
    CvMemStorage* storage; /* where the seq is stored */ \
    CvSeqBlock* free_blocks; /* free blocks list */ \
    CvSeqBlock* first; /* pointer to the first sequence block */

typedef struct CvSeq
{
CV_SEQUENCE_FIELDS()
} CvSeq;

The structure CvSeq is a base for all of OpenCV dynamic data structures.


The function that reads the stored values:

char* cvGetSeqElem(const CvSeq* seq, int index)

Returns a pointer to a sequence element according to its index.

#define CV_GET_SEQ_ELEM( TYPE, seq, index )  (TYPE*)cvGetSeqElem( (CvSeq*)(seq), (index) )
Parameters:
  • seq – Sequence
  • index – Index of element
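A minimal usage sketch tying these together (assumptions: iplEdgeY is the 8-bit single-channel edge image mentioned above, iplInput a color image to draw on; the numeric parameters are example values):

CvMemStorage* storage = cvCreateMemStorage(0); // 0: default ~64K block size
CvSeq* lines = cvHoughLines2( iplEdgeY, storage, CV_HOUGH_PROBABILISTIC,
                              1, CV_PI/180, 50, 30, 10 );
for( int i = 0; i < lines->total; i++ )
{
    // for CV_HOUGH_PROBABILISTIC each element is a pair of segment endpoints
    CvPoint* pts = (CvPoint*)cvGetSeqElem( lines, i );
    cvLine( iplInput, pts[0], pts[1], CV_RGB(255,0,0), 2 );
}
cvReleaseMemStorage( &storage );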




What is the accumulator value? Each edge pixel votes for the (rho, theta) cells of every line that could pass through it; a cell's vote count is its accumulator value, and cells whose count exceeds threshold and which are local maxima become the detected lines (see stages 1 and 2 in the source below).

The result of Hough-transform line fitting on the "detected edges" image is drawn onto the "input" image.




Opening opencv/opencv/src/cv/cvhough.cpp, it is divided into the following four parts:
Classical Hough Transform
Multi-Scale variant of Classical Hough Transform
Probabilistic Hough Transform
Circle Detection

Of these, the "Classical Hough Transform" part is as follows.
typedef struct CvLinePolar
{
    float rho;
    float angle;
}
CvLinePolar;
/*=====================================================================================*/

#define hough_cmp_gt(l1,l2) (aux[l1] > aux[l2])

static CV_IMPLEMENT_QSORT_EX( icvHoughSortDescent32s, int, hough_cmp_gt, const int* )

/*
Here image is an input raster;
step is its step; size characterizes its ROI;
rho and theta are discretization steps (in pixels and radians correspondingly).
threshold is the minimum number of pixels in the feature for it
to be a candidate for line. lines is the output
array of (rho, theta) pairs. linesMax is the buffer size (number of pairs).
Functions return the actual number of found lines.
*/
static void
icvHoughLinesStandard( const CvMat* img, float rho, float theta,
                       int threshold, CvSeq *lines, int linesMax )
{
    int *accum = 0;
    int *sort_buf=0;
    float *tabSin = 0;
    float *tabCos = 0;

    CV_FUNCNAME( "icvHoughLinesStandard" );

    __BEGIN__;

    const uchar* image;
    int step, width, height;
    int numangle, numrho;
    int total = 0;
    float ang;
    int r, n;
    int i, j;
    float irho = 1 / rho;
    double scale;

    CV_ASSERT( CV_IS_MAT(img) && CV_MAT_TYPE(img->type) == CV_8UC1 );

    image = img->data.ptr;
    step = img->step;
    width = img->cols;
    height = img->rows;

    numangle = cvRound(CV_PI / theta);
    numrho = cvRound(((width + height) * 2 + 1) / rho);

    CV_CALL( accum = (int*)cvAlloc( sizeof(accum[0]) * (numangle+2) * (numrho+2) ));
    CV_CALL( sort_buf = (int*)cvAlloc( sizeof(accum[0]) * numangle * numrho ));
    CV_CALL( tabSin = (float*)cvAlloc( sizeof(tabSin[0]) * numangle ));
    CV_CALL( tabCos = (float*)cvAlloc( sizeof(tabCos[0]) * numangle ));
    memset( accum, 0, sizeof(accum[0]) * (numangle+2) * (numrho+2) );

    for( ang = 0, n = 0; n < numangle; ang += theta, n++ )
    {
        tabSin[n] = (float)(sin(ang) * irho);
        tabCos[n] = (float)(cos(ang) * irho);
    }

    // stage 1. fill accumulator
    for( i = 0; i < height; i++ )
        for( j = 0; j < width; j++ )
        {
            if( image[i * step + j] != 0 )
                for( n = 0; n < numangle; n++ )
                {
                    r = cvRound( j * tabCos[n] + i * tabSin[n] );
                    r += (numrho - 1) / 2;
                    accum[(n+1) * (numrho+2) + r+1]++;
                }
        }

    // stage 2. find local maximums
    for( r = 0; r < numrho; r++ )
        for( n = 0; n < numangle; n++ )
        {
            int base = (n+1) * (numrho+2) + r+1;
            if( accum[base] > threshold &&
                accum[base] > accum[base - 1] && accum[base] >= accum[base + 1] &&
                accum[base] > accum[base - numrho - 2] && accum[base] >= accum[base + numrho + 2] )
                sort_buf[total++] = base;
        }

    // stage 3. sort the detected lines by accumulator value
    icvHoughSortDescent32s( sort_buf, total, accum );

    // stage 4. store the first min(total,linesMax) lines to the output buffer
    linesMax = MIN(linesMax, total);
    scale = 1./(numrho+2);
    for( i = 0; i < linesMax; i++ )
    {
        CvLinePolar line;
        int idx = sort_buf[i];
        int n = cvFloor(idx*scale) - 1;
        int r = idx - (n+1)*(numrho+2) - 1;
        line.rho = (r - (numrho - 1)*0.5f) * rho;
        line.angle = n * theta;
        cvSeqPush( lines, &line );
    }

    __END__;

    cvFree( &sort_buf );
    cvFree( &tabSin );
    cvFree( &tabCos );
    cvFree( &accum );
}






posted by maetel
2010. 4. 6. 23:27 Computer Vision
The OpenCV line-fitting function

void cvFitLine(const CvArr* points, int dist_type, double param, double reps, double aeps, float* line)

Fits a line to a 2D or 3D point set.

Parameters:
  • points – Sequence or array of 2D or 3D points with 32-bit integer or floating-point coordinates
  • dist_type – The distance used for fitting (see the discussion)
  • param – Numerical parameter (C) for some types of distances, if 0 then some optimal value is chosen
  • reps – Sufficient accuracy for the radius (distance between the coordinate origin and the line). 0.01 is a good default value.
  • aeps – Sufficient accuracy for the angle. 0.01 is a good default value.
  • line – The output line parameters. In the case of a 2D fitting, it is an array of 4 floats (vx, vy, x0, y0) where (vx, vy) is a normalized vector collinear to the line and (x0, y0) is some point on the line. In the case of a 3D fitting it is an array of 6 floats (vx, vy, vz, x0, y0, z0) where (vx, vy, vz) is a normalized vector collinear to the line and (x0, y0, z0) is some point on the line

ref.
Structural Analysis and Shape Descriptors — OpenCV 2.0 C Reference
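A minimal sketch (assumption: fitting a line to a few hand-made 2D points):

CvPoint2D32f pts[4] = { {10,12}, {20,21}, {30,32}, {40,41} };
CvMat pointMat = cvMat( 1, 4, CV_32FC2, pts ); // wrap the points as a 1x4 2-channel matrix
float line[4]; // (vx, vy, x0, y0)
cvFitLine( &pointMat, CV_DIST_L2, 0, 0.01, 0.01, line );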





posted by maetel
2010. 4. 4. 00:00 Computer Vision
http://en.wikipedia.org/wiki/Convolution

void cvFilter2D(const CvArr* src, CvArr* dst, const CvMat* kernel, CvPoint anchor=cvPoint(-1, -1))

Convolves an image with the kernel.

Parameters:
  • src – The source image
  • dst – The destination image
  • kernel – Convolution kernel, a single-channel floating point matrix. If you want to apply different kernels to different channels, split the image into separate color planes using Split and process them individually
  • anchor – The anchor of the kernel that indicates the relative position of a filtered point within the kernel. The anchor should lie within the kernel. The special default value (-1,-1) means that it is at the kernel center



elt3470@naver: By entering the desired matrix as the kernel, the user can directly design and use an LPF, HPF, etc.

=> Therefore,
a DoG (Derivative of Gaussian) filter can also be built and plugged in.



For example, building a 5x5 Gaussian kernel and filtering with it smooths the image as follows.

ref. 
2010/04/03 - [Visual Information Processing Lab] - Image Filtering
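A minimal sketch of that (assumptions: iplGrey and iplSmooth are same-sized single-channel images; sigma = 1.0 is an example value):

float g[25];
double sigma = 1.0, sum = 0.0;
for( int r = 0; r < 5; r++ )
    for( int c = 0; c < 5; c++ )
    {
        double x = c - 2, y = r - 2;
        g[r*5 + c] = (float)exp( -(x*x + y*y) / (2*sigma*sigma) );
        sum += g[r*5 + c];
    }
for( int i = 0; i < 25; i++ ) g[i] /= (float)sum; // normalize so the kernel sums to 1
CvMat kernel = cvMat( 5, 5, CV_32FC1, g );
cvFilter2D( iplGrey, iplSmooth, &kernel, cvPoint(-1,-1) );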





Input image

Grayscale image

2-D 5x5 Gaussian convolution image (smoothing)



posted by maetel
2010. 4. 3. 22:27 Computer Vision
OpenCV 2.0 C++ reference - Image Filtering


void cvSobel(const CvArr* src, CvArr* dst, int xorder, int yorder, int apertureSize=3)

Calculates the first, second, third or mixed image derivatives using an extended Sobel operator.

Parameters:
  • src – Source image of type CvArr*
  • dst – Destination image
  • xorder – Order of the derivative x
  • yorder – Order of the derivative y
  • apertureSize – Size of the extended Sobel kernel, must be 1, 3, 5 or 7


As a simple example, applying the following (first-derivative, size-3) masks:

x-direction Sobel mask:

$$\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}$$


y-direction Sobel mask:

$$\begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$$
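A minimal sketch of applying them (assumption: iplGrey is an 8-bit grayscale input; the derivatives need a deeper type, so IPL_DEPTH_16S is used and scaled back for display):

IplImage* iplDx = cvCreateImage( cvGetSize(iplGrey), IPL_DEPTH_16S, 1 );
IplImage* iplDy = cvCreateImage( cvGetSize(iplGrey), IPL_DEPTH_16S, 1 );
cvSobel( iplGrey, iplDx, 1, 0, 3 ); // first derivative in x
cvSobel( iplGrey, iplDy, 0, 1, 3 ); // first derivative in y
IplImage* iplShow = cvCreateImage( cvGetSize(iplGrey), 8, 1 );
cvConvertScaleAbs( iplDx, iplShow ); // |d/dx| scaled into 8 bits for viewing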






(Figures, two example sets: input image / grayscale image / image filtered with the x-direction Sobel mask / image filtered with the y-direction Sobel mask)




posted by maetel
2010. 4. 2. 19:39 Computer Vision
Reading pixel values from an IplImage image matrix

/* get reference to pixel at (col,row),
   for multi-channel images (col) should be multiplied by number of channels */
#define CV_IMAGE_ELEM( image, elemtype, row, col )       \
    (((elemtype*)((image)->imageData + (image)->widthStep*(row)))[(col)])
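A minimal usage sketch (assumption: img is a 3-channel 8-bit IplImage; for multi-channel images the column index is multiplied by the number of channels):

uchar blue  = CV_IMAGE_ELEM( img, uchar, row, col*3     );
uchar green = CV_IMAGE_ELEM( img, uchar, row, col*3 + 1 );
uchar red   = CV_IMAGE_ELEM( img, uchar, row, col*3 + 2 );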

posted by maetel
2010. 3. 13. 01:16 Computer Vision


// Test: video capturing from a camera

#include <OpenCV/OpenCV.h> // OpenCV framework umbrella header on the Mac
#include <cstdio>  // printf
#include <cstdlib> // exit

int main()
{
    IplImage* image = 0;
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
   
    while(1) {
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            // note: grabbing again after the if() means every other frame is shown
            cvGrabFrame( capture ); // capture a frame
            image = cvRetrieveFrame(capture); // retrieve the captured frame

            cvShowImage( "camera", image );

            if( cvWaitKey(10) >= 0 )
                break;
        }
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );

    return 0;
}


posted by maetel
2009. 11. 23. 11:48 Computer Vision
to apply Particle filter to object tracking
Practice: an object (ball) tracking (contour-tracking) algorithm using a 3-D particle filter


IplImage* cvRetrieveFrame(CvCapture* capture)

Gets the image grabbed with cvGrabFrame.

Parameter: capture – video capturing structure.

The function cvRetrieveFrame() returns the pointer to the image grabbed with the GrabFrame function. The returned image should not be released or modified by the user. In the event of an error, the return value may be NULL.



Canny edge detection

Canny edge detection in OpenCV image processing functions
void cvCanny(const CvArr* image, CvArr* edges, double threshold1, double threshold2, int aperture_size=3)

Implements the Canny algorithm for edge detection.

Parameters:
  • image – Single-channel input image
  • edges – Single-channel image to store the edges found by the function
  • threshold1 – The first threshold
  • threshold2 – The second threshold
  • aperture_size – Aperture parameter for the Sobel operator (see cvSobel())

The function cvCanny() finds the edges on the input image image and marks them in the output image edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking, the largest value is used to find the initial segments of strong edges.
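A minimal usage sketch (assumptions: iplOriginalGrey and iplEdge as declared in the listing below; 50/150 are example thresholds):

// the smaller threshold links edge segments; the larger seeds strong edges
cvCanny( iplOriginalGrey, iplEdge, 50.0, 150.0, 3 );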



source cod...ing
// 3-D Particle filter algorithm + Computer Vision exercise
// : object tracking - contour tracking
// lym, VIP Lab, Sogang Univ.
// 2009-11-23
// ref. Probabilistic Robotics: 98p

#include <OpenCV/OpenCV.h> // matrix operations & Canny edge detection
#include <iostream>
#include <cstdlib> // RAND_MAX
#include <ctime> // time as a random seed
#include <cmath>
#include <algorithm>
using namespace std;

#define PI 3.14159265
#define N 100 //number of particles
#define D 3 // dimension of the state

// uniform random number generator
double uniform_random(void) {
   
    return (double) rand() / (double) RAND_MAX;
   
}

// Gaussian random number generator
double gaussian_random(void) {
   
    static int next_gaussian = 0;
    static double saved_gaussian_value;
   
    double fac, rsq, v1, v2;
   
    if(next_gaussian == 0) {
       
        do {
            v1 = 2.0 * uniform_random() - 1.0;
            v2 = 2.0 * uniform_random() - 1.0;
            rsq = v1 * v1 + v2 * v2;
        }
        while(rsq >= 1.0 || rsq == 0.0);
        fac = sqrt(-2.0 * log(rsq) / rsq);
        saved_gaussian_value = v1 * fac;
        next_gaussian = 1;
        return v2 * fac;
    }
    else {
        next_gaussian = 0;
        return saved_gaussian_value;
    }
}

double normal_distribution(double mean, double standardDeviation, double state) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * (state - mean) * (state - mean) / variance ) / sqrt(2 * PI * variance);
}
////////////////////////////////////////////////////////////////////////////

// distance between measurement and prediction
double distance(CvMat* measurement, CvMat* prediction)
{
    double distance2 = 0;
    double difference = 0;
    for (int u = 0; u < 3; u++)
    {
        difference = cvmGet(measurement,u,0) - cvmGet(prediction,u,0);
        distance2 += difference * difference;
    }
    return sqrt(distance2);
}

double distanceEuclidean(CvPoint2D64f a, CvPoint2D64f b)
{
    double d2 = (a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y);
    return sqrt(d2);
}

// likelihood based on the multivariate normal distribution (ref. p. 15, eqn (2.4))
double likelihood(CvMat *mean, CvMat *covariance, CvMat *sample) {
   
    CvMat* diff = cvCreateMat(3, 1, CV_64FC1);
    cvSub(sample, mean, diff); // sample - mean -> diff
    CvMat* diff_t = cvCreateMat(1, 3, CV_64FC1);
    cvTranspose(diff, diff_t); // transpose(diff) -> diff_t
    CvMat* cov_inv = cvCreateMat(3, 3, CV_64FC1);
    cvInvert(covariance, cov_inv); // inverse(covariance) -> cov_inv
    CvMat* tmp = cvCreateMat(3, 1, CV_64FC1);
    CvMat* dist = cvCreateMat(1, 1, CV_64FC1);
    cvMatMul(cov_inv, diff, tmp); // cov_inv * diff -> tmp   
    cvMatMul(diff_t, tmp, dist); // diff_t * tmp -> dist
   
    double likeliness = exp( -0.5 * cvmGet(dist, 0, 0) );
    double bound = 0.0000001;
    if ( likeliness < bound )
    {
        likeliness = bound;
    }
    return likeliness;
//    return exp( -0.5 * cvmGet(dist, 0, 0) );
//    return max(0.0000001, exp(-0.5 * cvmGet(dist, 0, 0)));   
}

// likelihood based on normal distribution (ref. 14p eqn(2.3))
double likelihood(double distance, double standardDeviation) {
   
    double variance = standardDeviation * standardDeviation;
   
    return exp(-0.5 * distance*distance / variance ) / sqrt(2 * PI * variance);
}

int main (int argc, char * const argv[]) {
   
    srand(time(NULL));
   
    IplImage *iplOriginalColor; // image to be captured
    IplImage *iplOriginalGrey; // grey-scale image of "iplOriginalColor"
    IplImage *iplEdge; // image detected by Canny edge algorithm
    IplImage *iplImg; // resulting image to show tracking process   
    IplImage *iplEdgeClone;

    int hours, minutes, seconds;
    double frame_rate, Codec, frame_count, duration;
    char fnVideo[200], titleOriginal[200], titleEdge[200], titleResult[200];
   
    sprintf(titleOriginal, "original");
    sprintf(titleEdge, "Edges by Canny detector");
//    sprintf(fnVideo, "E:/AVCHD/BDMV/STREAM/00092.avi");   
    sprintf(fnVideo, "/Users/lym/Documents/VIP/2009/Nov/volleyBall.mov");
    sprintf(titleResult, "3D Particle filter for contour tracking");
   
    CvCapture *capture = cvCaptureFromAVI(fnVideo);
   
    // stop the process if capture is failed
    if(!capture) { printf("Can NOT read the movie file\n"); return -1; }
   
    frame_rate = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
//    Codec = cvGetCaptureProperty( capture, CV_CAP_PROP_FOURCC );
    frame_count = cvGetCaptureProperty( capture, CV_CAP_PROP_FRAME_COUNT);
   
    duration = frame_count/frame_rate;
    hours = duration/3600;
    minutes = (duration-hours*3600)/60;
    seconds = duration-hours*3600-minutes*60;
   
    //  stop the process if grabbing is failed
    //    if(cvGrabFrame(capture) == 0) { printf("Can NOT grab a frame\n"); return -1; }
   
    cvSetCaptureProperty(capture, CV_CAP_PROP_POS_FRAMES, 0); // go to frame #0
    iplOriginalColor = cvRetrieveFrame(capture);
    iplOriginalGrey = cvCreateImage(cvGetSize(iplOriginalColor), 8, 1);
    iplEdge = cvCloneImage(iplOriginalGrey);
    iplEdgeClone = cvCreateImage(cvSize(iplOriginalColor->width, iplOriginalColor->height), 8, 3);
    iplImg = cvCreateImage(cvSize(iplOriginalColor->width, iplOriginalColor->height), 8, 3);   
   
    int width = iplOriginalColor->width;
    int height = iplOriginalColor->height;
   
    cvNamedWindow(titleOriginal);
    cvNamedWindow(titleEdge);
   
    cout << "image width : height = " << width << "  " << height << endl;
    cout << "# of frames = " << frame_count << endl;   
    cout << "capture finished" << endl;   
   
   
    // set the system   
   
    // set the process noise
    // covariance of the Gaussian transition (process) noise
    CvMat* transition_noise = cvCreateMat(D, D, CV_64FC1);
    // assume a diagonal transition noise covariance
    for (int row = 0; row < D; row++)
    {   
        for (int col = 0; col < D; col++)
        {
            cvmSet(transition_noise, row, col, 0.0);
        }
    }
    cvmSet(transition_noise, 0, 0, 3.0);
    cvmSet(transition_noise, 1, 1, 3.0);
    cvmSet(transition_noise, 2, 2, 0.3);
   
    // set the measurement noise
/*
    // covariance of Gaussian noise to measurement
     CvMat* measurement_noise = cvCreateMat(D, D, CV_64FC1);
     // initialize the measurement noise
     for (int row = 0; row < D; row++)
     {   
        for (int col = 0; col < D; col++)
        {
            cvmSet(measurement_noise, row, col, 0.0);
        }
     }
     cvmSet(measurement_noise, 0, 0, 5.0);
     cvmSet(measurement_noise, 1, 1, 5.0);
     cvmSet(measurement_noise, 2, 2, 5.0); 
 */
    double measurement_noise = 2.0; // standard deviation of Gaussian noise in the measurement
   
    CvMat* state = cvCreateMat(D, 1, CV_64FC1);    // state of the system to be estimated   
//    CvMat* measurement = cvCreateMat(2, 1, CV_64FC1); // measurement of states
   
    // declare particles
    CvMat* pb [N]; // estimated particles
    CvMat* pp [N]; // predicted particles
    CvMat* pu [N]; // temporary variables to update a particle
    CvMat* v[N]; // estimated velocity of each particle
    CvMat* vu[N]; // temporary variables to update the velocities   
    double w[N]; // weight of each particle
    for (int n = 0; n < N; n++)
    {
        pb[n] = cvCreateMat(D, 1, CV_64FC1);
        pp[n] = cvCreateMat(D, 1, CV_64FC1);
        pu[n] = cvCreateMat(D, 1, CV_64FC1);   
        v[n] = cvCreateMat(D, 1, CV_64FC1);   
        vu[n] = cvCreateMat(D, 1, CV_64FC1);           
    }   
   
    // initialize the state and particles    
    for (int n = 0; n < N; n++)
    {
        cvmSet(state, 0, 0, 258.0); // center-x
        cvmSet(state, 1, 0, 406.0); // center-y       
        cvmSet(state, 2, 0, 38.0); // radius   
       
//        cvmSet(state, 0, 0, 300.0); // center-x
//        cvmSet(state, 1, 0, 300.0); // center-y       
//        cvmSet(state, 2, 0, 38.0); // radius       
       
        cvmSet(pb[n], 0, 0, cvmGet(state,0,0)); // center-x
        cvmSet(pb[n], 1, 0, cvmGet(state,1,0)); // center-y
        cvmSet(pb[n], 2, 0, cvmGet(state,2,0)); // radius
       
        cvmSet(v[n], 0, 0, 2 * uniform_random()); // center-x
        cvmSet(v[n], 1, 0, 2 * uniform_random()); // center-y
        cvmSet(v[n], 2, 0, 0.1 * uniform_random()); // radius       
       
        w[n] = (double) 1 / (double) N; // equally weighted particle
    }
   
    // initialize the image window
    cvZero(iplImg);   
    cvNamedWindow(titleResult);
   
    cout << "start filtering... " << endl << endl;
   
    float aperture = 3,     thresLow = 50,     thresHigh = 110;   
//    float aperture = 3,     thresLow = 80,     thresHigh = 110;   
    // for each frame
    int frameNo = 0;   
    while(frameNo < frame_count && cvGrabFrame(capture)) {
        // retrieve color frame from the movie "capture"
        iplOriginalColor = cvRetrieveFrame(capture);        
        // convert color pixel values of "iplOriginalColor" to grey scales of "iplOriginalGrey"
        cvCvtColor(iplOriginalColor, iplOriginalGrey, CV_RGB2GRAY);               
        // extract edges with Canny detector from "iplOriginalGrey" to save the results in the image "iplEdge" 
        cvCanny(iplOriginalGrey, iplEdge, thresLow, thresHigh, aperture);

        cvCvtColor(iplEdge, iplEdgeClone, CV_GRAY2BGR);
       
        cvShowImage(titleOriginal, iplOriginalColor);
        cvShowImage(titleEdge, iplEdge);

//        cvZero(iplImg);
       
        cout << "frame # " << frameNo << endl;
       
        double like[N]; // likelihood between measurement and prediction
        double like_sum = 0; // sum of likelihoods
       
        for (int n = 0; n < N; n++) // for "N" particles
        {
            // predict
            double prediction;
            for (int row = 0; row < D; row++)
            {
                prediction = cvmGet(pb[n],row,0) + cvmGet(v[n],row,0)
                            + cvmGet(transition_noise,row,row) * gaussian_random();
                cvmSet(pp[n], row, 0, prediction);
            }
            if ( cvmGet(pp[n],2,0) < 2) { cvmSet(pp[n],2,0,0.0); }
//            cvLine(iplImg, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))),
//             cvPoint(cvRound(cvmGet(pb[n],0,0)), cvRound(cvmGet(pb[n],1,0))), CV_RGB(100,100,0), 1);           
            cvCircle(iplEdgeClone, cvPoint(cvRound(cvmGet(pp[n],0,0)), cvRound(cvmGet(pp[n],1,0))), cvRound(cvmGet(pp[n],2,0)), CV_RGB(255, 255, 0));
//            cvCircle(iplImg, cvPoint(iplImg->width *0.5, iplImg->height * 0.5), 100, CV_RGB(255, 255, 0), -1);
//            cvSaveImage("a.bmp", iplImg);

            double cX = cvmGet(pp[n], 0, 0); // predicted center-x of the object
            double cY = cvmGet(pp[n], 1, 0); // predicted center-y of the object
            double cR = cvmGet(pp[n], 2, 0); // predicted radius of the object           

            if ( cR < 0 ) { cR = 0; }
           
            // measure
            // search measurements
            CvPoint2D64f direction [8]; // 8 searching directions
            // define the starting point for each of the 8 directions
            direction[0].x = cX + cR;    direction[0].y = cY;      // East
            direction[2].x = cX;        direction[2].y = cY - cR; // North
            direction[4].x = cX - cR;    direction[4].y = cY;      // West
            direction[6].x = cX;        direction[6].y = cY + cR; // South
            int cD = cvRound( cR/sqrt(2.0) );
            direction[1].x = cX + cD;    direction[1].y = cY - cD; // NE
            direction[3].x = cX - cD;    direction[3].y = cY - cD; // NW
            direction[5].x = cX - cD;    direction[5].y = cY + cD; // SW
            direction[7].x = cX + cD;    direction[7].y = cY + cD; // SE       
           
            CvPoint2D64f search [8];    // searched point in each direction         
            double scale = 0.4;
            double scope [8]; // scope of searching
   
            for ( int i = 0; i < 8; i++ )
            {
//                scope[2*i] = cR * scale;
//                scope[2*i+1] = cD * scale;
                scope[i] = 6.0;
            }
           
            // unit steps for the 8 search directions; some signs differ from the
            // compass comments, but this is harmless: the "turn" loop below scans
            // both +range and -range, so only the axis pattern of d[i] matters
            CvPoint d[8];
            d[0].x = 1;        d[0].y = 0;  // E
            d[1].x = 1;        d[1].y = -1; // NE
            d[2].x = 0;        d[2].y = 1;  // N
            d[3].x = 1;        d[3].y = 1;  // NW
            d[4].x = 1;        d[4].y = 0;  // W
            d[5].x = 1;        d[5].y = -1; // SW
            d[6].x = 0;        d[6].y = 1;  // S
            d[7].x = 1;        d[7].y = 1;  // SE           
           
            int count = 0; // number of measurements
            double dist_sum = 0;
           
            for (int i = 0; i < 8; i++) // for 8 directions
            {
                double dist = scope[i] * 1.5; // penalty distance used when no edge is found within the scope
                for ( int range = 0; range < scope[i]; range++ )
                {
                    int flag = 0;
                    for (int turn = -1; turn <= 1; turn += 2) // reverse the searching direction
                    {
                        search[i].x = direction[i].x + turn * range * d[i].x;
                        search[i].y = direction[i].y + turn * range * d[i].y;
                       
//                        cvCircle(iplImg, cvPoint(cvRound(search[i].x), cvRound(search[i].y)), 2, CV_RGB(0, 255, 0), -1);
//                        cvShowImage(titleResult, iplImg);
//                        cvWaitKey(100);

                        // detect measurements   
//                        CvScalar s = cvGet2D(iplEdge, cvRound(search[i].y), cvRound(search[i].x));
                        unsigned char s = CV_IMAGE_ELEM(iplEdge, unsigned char, cvRound(search[i].y), cvRound(search[i].x));
//                        if ( s.val[0] > 200 && s.val[1] > 200 && s.val[2] > 200 ) // bgr color               
                        if (s > 250) // near-white pixel in the grey-scale edge image                           
                        { // a white pixel means an edge measurement is detected
                            flag = 1;
                            count++;
//                            cvCircle(iplEdgeClone, cvPoint(cvRound(search[i].x), cvRound(search[i].y)), 3, CV_RGB(200, 0, 255));
//                            cvShowImage("3D Particle filter", iplEdgeClone);
//                            cvWaitKey(1);
/*                            // get measurement
                            cvmSet(measurement, 0, 0, search[i].x);
                            cvmSet(measurement, 1, 0, search[i].y);   
                            double dist = distance(measurement, pp[n]);
*/                            // evaluate the difference between predictions of the particle and measurements
                            dist = distanceEuclidean(search[i], direction[i]);
                            break; // break for "turn"
                        } // end if
                    } // for turn
                    if ( flag == 1 )
                    { break; } // break for "range"
                } // for range
               
                dist_sum += dist; // for all searching directions of one particle 

            } // for i direction
           
            double dist_avg; // average distance between the measurements and particle "n"'s predictions
//            cout << "count = " << count << endl;
            dist_avg = dist_sum / 8;
//            cout << "dist_avg = " << dist_avg << endl;
           
//            estimate likelihood with "dist_avg"
            like[n] = likelihood(dist_avg, measurement_noise);
//            cout << "likelihood of particle-#" << n << " = " << like[n] << endl;
            like_sum += like[n];   
        } // for n particle
//        cout << "sum of likelihoods of N particles = " << like_sum << endl;
       
        // estimate states       
        double state_x = 0.0;
        double state_y = 0.0;
        double state_r = 0.0;
        // estimate the state with predicted particles
        for (int n = 0; n < N; n++) // for "N" particles
        {
            w[n] = like[n] / like_sum; // update normalized weights of particles           
//            cout << "w" << n << "= " << w[n] << "  ";               
            state_x += w[n] * cvmGet(pp[n], 0, 0); // center-x of the object
            state_y += w[n] * cvmGet(pp[n], 1, 0); // center-y of the object
            state_r += w[n] * cvmGet(pp[n], 2, 0); // radius of the object           
        }
        if (state_r < 0) { state_r = 0; }
        cvmSet(state, 0, 0, state_x);
        cvmSet(state, 1, 0, state_y);       
        cvmSet(state, 2, 0, state_r);
       
        cout << endl << "* * * * * *" << endl;       
        cout << "estimation: (x,y,r) = " << cvmGet(state,0,0) << ",  " << cvmGet(state,1,0)
        << ",  " << cvmGet(state,2,0) << endl;
        cvCircle(iplEdgeClone, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0)) ),
                 cvRound(cvmGet(state,2,0)), CV_RGB(255, 0, 0), 1);

        cvShowImage(titleResult, iplEdgeClone);
        cvWaitKey(1);

   
        // update particles       
        cout << endl << "updating particles" << endl;
        double a[N]; // cumulative particle weights for resampling
       
        // build the cumulative distribution: 0 < a[0] < a[1] < ... < a[N-1] = 1
        a[0] = w[0];
        for (int n = 1; n < N; n++)
        {
            a[n] = a[n - 1] + w[n];
//            cout << "a" << n << "= " << a[n] << "  ";           
        }
//        cout << "a" << N << "= " << a[N] << "  " << endl;           
       
        for (int n = 0; n < N; n++)
        {   
            // select a particle from the distribution
            int pselected = N - 1; // fall back to the last particle in case of round-off
            for (int k = 0; k < N; k++)
            {
                if ( uniform_random() < a[k] )               
                {
                    pselected = k;
                    break;
                }
            }
//            cout << "p " << n << " => " << pselected << "  ";       
           
            // retain the selection 
            for (int row = 0; row < D; row++)
            {
                cvmSet(pu[n], row, 0, cvmGet(pp[pselected],row,0));
            }
            // pp - pb -> vu (a whole-vector operation, so it belongs outside the row loop)
            cvSub(pp[pselected], pb[pselected], vu[n]);
        }
       
        // update each particle and its velocity
        for (int n = 0; n < N; n++)
        {
            for (int row = 0; row < D; row++)
            {
                cvmSet(pb[n], row, 0, cvmGet(pu[n],row,0));
                cvmSet(v[n], row , 0, cvmGet(vu[n],row,0));
            }
        }
        cout << endl << endl ;
       
//      cvShowImage(titleResult, iplImg);  
//        cvWaitKey(1000);       
        cvWaitKey(1);       
        frameNo++;
    }
   
    cvWaitKey();   
   
    return 0;
}
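A note on the resampling step above: drawing one uniform number per particle and scanning the cumulative weights from the start is O(N^2) overall and adds sampling variance. A common alternative is low-variance (systematic) resampling, which needs only a single random draw. Below is a minimal sketch under the same conventions as the code above (normalized weights w[], predicted particles pp[], uniform_random() returning a value in [0,1)); it is not the code actually used in this post.

// low-variance (systematic) resampling: N evenly spaced pointers on the
// cumulative weight distribution, positioned by a single random offset
void resampleSystematic(CvMat* pp[], CvMat* pu[], double w[], int N, int D)
{
    double r = uniform_random() / N; // single random offset in [0, 1/N)
    double c = w[0];                 // running cumulative weight
    int k = 0;
    for (int n = 0; n < N; n++)
    {
        double u = r + (double) n / N; // pointer for particle n
        while (u > c && k < N - 1)     // advance to the particle whose weight bin contains u
        {
            k++;
            c += w[k];
        }
        for (int row = 0; row < D; row++) // copy the selected particle pp[k] into pu[n]
        {
            cvmSet(pu[n], row, 0, cvmGet(pp[k], row, 0));
        }
    }
}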








posted by maetel
2009. 11. 5. 16:12 Computer Vision
Kalman filter practice coding

A simple 1-D example
// 1-D Kalman filter algorithm exercise
// VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <iostream>
using namespace std;

int main (int argc, char * const argv[]) {
   
    double groundtruth[] = {1.0, 2.0, 3.5, 5.0, 7.0, 8.0, 10.0};
    double measurement[] = {1.0, 2.1, 3.2, 5.3, 7.4, 8.1, 9.6};
    double transition_noise = 0.1; // variance of the Gaussian process (transition) noise
    double measurement_noise = 0.3; // variance of the Gaussian measurement noise
   
    double x = 0.0, v = 1.0; // state estimate and its velocity
    double cov = 0.5; // covariance of the state estimate
   
    double x_p, c_p; // prediction of x and cov
    double gain; // Kalman gain
    double x_pre, m; // previous state estimate and current measurement
   
    for (int t=0; t<7; t++)
    {
        // prediction
        x_pre = x;
        x_p = x + v;
        c_p = cov + transition_noise;
        m = measurement[t];
        // update
        gain = c_p / (c_p + measurement_noise);
        x = x_p + gain * (m - x_p);
        cov = ( 1 - gain ) * c_p;
        v = x - x_pre;

        cout << t << endl;
        cout << "estimation  = " << x << endl;
        cout << "measurement = " << measurement[t] << endl;   
        cout << "groundtruth = " << groundtruth[t] << endl;
    }
    return 0;
}

Output:
0
estimation  = 1
measurement = 1
groundtruth = 1
1
estimation  = 2.05
measurement = 2.1
groundtruth = 2
2
estimation  = 3.14545
measurement = 3.2
groundtruth = 3.5
3
estimation  = 4.70763
measurement = 5.3
groundtruth = 5
4
estimation  = 6.76291
measurement = 7.4
groundtruth = 7
5
estimation  = 8.50584
measurement = 8.1
groundtruth = 8
6
estimation  = 9.9669
measurement = 9.6
groundtruth = 10
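
For reference, the loop above is the scalar Kalman recursion, with Q = transition_noise and R = measurement_noise (my transcription of the code):

\bar{x}_t = x_{t-1} + v_{t-1}, \qquad \bar{P}_t = P_{t-1} + Q
K_t = \bar{P}_t / (\bar{P}_t + R)
x_t = \bar{x}_t + K_t (z_t - \bar{x}_t), \qquad P_t = (1 - K_t)\,\bar{P}_t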


A 2-D exercise
// 2-D Kalman filter algorithm exercise
// lym, VIP lab, Sogang University
// 2009-11-05
// ref. Probabilistic Robotics: 42p

#include <OpenCV/OpenCV.h> // matrix operations

#include <iostream>
#include <iomanip>
using namespace std;

int main (int argc, char * const argv[]) {
    int step = 7;
   
    IplImage *iplImg = cvCreateImage(cvSize(150, 150), 8, 3);
    cvZero(iplImg);
   
    cvNamedWindow("Kalman-2d", 0);
   
    //ground truth of real states
    double groundtruth[] = {10.0, 20.0, 35, 50.0, 70.0, 80.0, 100.0, //x-value
                            10.0, 20.0, 40.0, 55, 65, 80.0, 90.0}; //y-value
    //measurement of observed states
    double measurement_set[] = {10.0, 21, 32, 53, 74, 81, 96,  //x-value
                            10.0, 19, 42, 56, 66, 78, 88};    //y-value
    //covariance of Gaussian noise to control
//    double transition_noise[] = { 0.1, 0.0, 
//                                  0.0, 0.1 }; 
    CvMat* transition_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(transition_noise, 0, 0, 0.1); //set transition_noise(0,0) to 0.1
    cvmSet(transition_noise, 0, 1, 0.0);
    cvmSet(transition_noise, 1, 0, 0.0);
    cvmSet(transition_noise, 1, 1, 0.1);    
    //covariance of Gaussian noise to measurement
//    double measurement_noise[] = { 0.3, 0.0, 
//                                   0.0, 0.2 };
    CvMat* measurement_noise = cvCreateMat(2, 2, CV_64FC1); 
    cvmSet(measurement_noise, 0, 0, 0.3); //set measurement_noise(0,0) to 0.3
    cvmSet(measurement_noise, 0, 1, 0.0);
    cvmSet(measurement_noise, 1, 0, 0.0);
    cvmSet(measurement_noise, 1, 1, 0.2);        
   
    CvMat* state = cvCreateMat(2, 1, CV_64FC1);    //states to be estimated   
    CvMat* state_p = cvCreateMat(2, 1, CV_64FC1);  //states to be predicted
    CvMat* velocity = cvCreateMat(2, 1, CV_64FC1); //motion controls to change states
    CvMat* measurement = cvCreateMat(2, 1, CV_64FC1); //measurement of states
   
    CvMat* cov = cvCreateMat(2, 2, CV_64FC1);     //covariance to be updated
    CvMat* cov_p = cvCreateMat(2, 2, CV_64FC1); //covariance to be predicted
    CvMat* gain = cvCreateMat(2, 2, CV_64FC1);     //Kalman gain to be updated
   
    // temporary matrices to be used for estimation
    CvMat* Kalman = cvCreateMat(2, 2, CV_64FC1); //
    CvMat* invKalman = cvCreateMat(2, 2, CV_64FC1); //

    CvMat* I = cvCreateMat(2,2,CV_64FC1);
    cvSetIdentity(I); // does not seem to be working properly   
//  cvSetIdentity (I, cvRealScalar (1));   
    // check matrix
    for(int i=0; i<2; i++)
    {
        for(int j=0; j<2; j++)
        {
            cout << cvmGet(I, i, j) << "\t";           
        }
        cout << endl;
    }
 
    // set the initial state
    cvmSet(state, 0, 0, 0.0); //x-value //set state(0,0) to 0.0
    cvmSet(state, 1, 0, 0.0); //y-value //set state(1,0) to 0.0
    // set the initial covariance of the state
    cvmSet(cov, 0, 0, 0.5); //set cov(0,0) to 0.5
    cvmSet(cov, 0, 1, 0.0); //set cov(0,1) to 0.0
    cvmSet(cov, 1, 0, 0.0); //set cov(1,0) to 0.0
    cvmSet(cov, 1, 1, 0.4); //set cov(1,1) to 0.4   
    // set the initial control
    cvmSet(velocity, 0, 0, 10.0); //x-direction //set velocity(0,0) to 10.0
    cvmSet(velocity, 1, 0, 10.0); //y-direction //set velocity(1,0) to 10.0
   
    for (int t=0; t<step; t++)
    {
        // retain the current state
        CvMat* state_out = cvCreateMat(2, 1, CV_64FC1); // temporary vector   
        cvmSet(state_out, 0, 0, cvmGet(state,0,0)); 
        cvmSet(state_out, 1, 0, cvmGet(state,1,0));        
        // predict
        cvAdd(state, velocity, state_p); // state + velocity -> state_p
        cvAdd(cov, transition_noise, cov_p); // cov + transition_noise -> cov_p
        // measure
        cvmSet(measurement, 0, 0, measurement_set[t]); //x-value
        cvmSet(measurement, 1, 0, measurement_set[step+t]); //y-value
        // estimate Kalman gain
        cvAdd(cov_p, measurement_noise, Kalman); // cov_p + measure_noise -> Kalman
        cvInvert(Kalman, invKalman); // inv(Kalman) -> invKalman
        cvMatMul(cov_p, invKalman, gain); // cov_p * invKalman -> gain       
        // update the state
        CvMat* err = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvSub(measurement, state_p, err); // measurement - state_p -> err
        CvMat* adjust = cvCreateMat(2, 1, CV_64FC1); // temporary vector
        cvMatMul(gain, err, adjust); // gain*err -> adjust
        cvAdd(state_p, adjust, state); // state_p + adjust -> state
        // update the covariance of states
        CvMat* cov_up = cvCreateMat(2, 2, CV_64FC1); // temporary matrix   
        cvSub(I, gain, cov_up); // I - gain -> cov_up       
        cvMatMul(cov_up, cov_p, cov); // cov_up *cov_p -> cov
        // update the control
        cvSub(state, state_out, velocity); // state - state_out -> velocity
       
        // result in colsole
        cout << "step " << t << endl;
        cout << "estimation  = " << cvmGet(state,0,0) << setw(10) << cvmGet(state,1,0) << endl;
        cout << "measurement = " << cvmGet(measurement,0,0) << setw(10) << cvmGet(measurement,1,0) << endl;   
        cout << "groundtruth = " << groundtruth[t] << setw(10) << groundtruth[t+step] << endl;
        // result in image
        cvCircle(iplImg, cvPoint(cvRound(groundtruth[t]), cvRound(groundtruth[t + step])), 3, cvScalarAll(255));
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(measurement,0,0)), cvRound(cvmGet(measurement,1,0))), 2, cvScalar(255, 0, 0));
        cvCircle(iplImg, cvPoint(cvRound(cvmGet(state,0,0)), cvRound(cvmGet(state,1,0))), 2, cvScalar(0, 0, 255));
       
        cvShowImage("Kalman-2d", iplImg);
        cvWaitKey(500);   
   
    }
    cvWaitKey();   
   
    return 0;
}


Output:


step 0
estimation  = 10        10
measurement = 10        10
groundtruth = 10        10
step 1
estimation  = 20.5   19.6263
measurement = 21        19
groundtruth = 20        20
step 2
estimation  = 31.4545   35.5006
measurement = 32        42
groundtruth = 35        40
step 3
estimation  = 47.0763   53.7411
measurement = 53        56
groundtruth = 50        55
step 4
estimation  = 67.6291   69.0154
measurement = 74        66
groundtruth = 70        65
step 5
estimation  = 85.0584   81.1424
measurement = 81        78
groundtruth = 80        80
step 6
estimation  = 99.669    90.634
measurement = 96        88
groundtruth = 100        90
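
The 2-D version is the same recursion in matrix form; since both the motion model and the measurement model are the identity here, it reduces to (my transcription of the code):

\bar{\mathbf{x}}_t = \mathbf{x}_{t-1} + \mathbf{u}_t, \qquad \bar{P}_t = P_{t-1} + Q
K_t = \bar{P}_t (\bar{P}_t + R)^{-1}
\mathbf{x}_t = \bar{\mathbf{x}}_t + K_t (\mathbf{z}_t - \bar{\mathbf{x}}_t), \qquad P_t = (I - K_t)\,\bar{P}_t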



posted by maetel
2009. 7. 27. 20:50 Computer Vision
http://opencv.willowgarage.com/wiki/InstallGuide

http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port

Installing OpenCV on OS/X with Python - Princess Polymath


0. Install MacPorts -> see the reference

1. Install Subversion

%% sudo port install subversion
Password:




2. Download OpenCV

%% svn co http://opencvlibrary.svn.sourceforge.net/svnroot/opencvlibrary/trunk opencv



3. Check the downloaded OpenCV

%% port search opencv

opencv @1.0.0 (graphics, science)
    Intel(R) Open Source Computer Vision Library


3-1. View more detailed information

%% port info opencv

opencv @1.0.0 (graphics, science)
Variants:    universal

opencv is a library that is mainly aimed at real time computer
vision. Some example areas would be Human-Computer Interaction
(HCI), Object Identification, Segmentation and Recognition, Face
Recognition, Gesture Recognition, Motion Tracking, Ego Motion,
Motion Understanding, Structure From Motion (SFM), and Mobile
Robotics.
Homepage:    http://www.intel.com/technology/computing/opencv/

Library Dependencies: gtk2, zlib, jpeg, libpng, tiff
Platforms:            darwin
Maintainers:          stante@gmail.com



4. Install OpenCV

Following the official OpenCV wiki instructions did not work,
( cf. http://en.wikipedia.org/wiki/CMake
http://www.cmake.org/ )

so let's use MacPorts instead, following the same procedure I used earlier to install FreeImage.
ref. http://opencv.darwinports.com/


%% sudo port install opencv



4-1. Making Python 2.5 the default (though I am not sure which part of OpenCV this applies to...)

(I know that Python comes pre-installed on the Mac. This seems to mean selecting/setting a version here... but I am not certain...)
To fully complete your installation and make python 2.5 the default, please run

    sudo port install python_select 
    sudo python_select python25

Since the message above appeared, I followed it as-is:
%% sudo port install python_select 

Output:
--->  Fetching python_select
--->  Attempting to fetch select-0.2.1.tar.gz from http://svn.macports.org/repository/macports/contrib/select/
--->  Verifying checksum(s) for python_select
--->  Extracting python_select
--->  Configuring python_select
--->  Building python_select
--->  Staging python_select into destroot
--->  Installing python_select @0.2.1_0+darwin_9
--->  Activating python_select @0.2.1_0+darwin_9
--->  Cleaning python_select

Then...
%% sudo python_select python25

Selecting version "python25" for python


4-2. About Python

Just taking a look (since I do not even know whether a version upgrade is needed...)
%% port info python25

python25 @2.5.4, Revision 6 (lang)
Variants:    darwin_10, darwin_7, darwin_8, darwin_9, macosx,
             puredarwin, universal

Python is an interpreted, interactive, object-oriented programming
language.
Homepage:    http://www.python.org/

Library Dependencies: gettext, zlib, openssl, tk, sqlite3, db46,
                      bzip2, gdbm, readline, ncurses
Platforms:            darwin
Maintainers:          mww@macports.org


cf.
http://wiki.python.org/moin/MacPython/Leopard
(A newer version has been released, but for OpenCV, staying on 2.5 seems fine for now.)

ref.
Installing OpenCV on OS/X with Python - Princess Polymath


4-3. On porting Unix applications to the Mac

http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/



4-4. Private framework

Following the official wiki's guide, in order to build the Mac OS X frameworks that let Xcode call OpenCV, I confirmed that make_frameworks.sh exists under the generated opencv folder and ran it.

%% ./make_frameworks.sh

It failed. Apparently it only got as far as preparing the build.



5. Adding the OpenCV frameworks in Xcode

Frameworks come in two kinds, public and private, and OpenCV is a private framework.
(The Mac's public frameworks live in /System/Library/Frameworks.)


5-1. Adding the OpenCV framework

Since the attempt in 4-4 failed (rather than fixing it, let's take the easy route for now), download the pre-built OpenCV frameworks for Mac from the link below. (Or grab version 1.2 directly from the second link.)
Institut für Nachrichtentechnik – OpenCV - Universal Binary Frameworks for MacOS X

Open the .dmg file and, from the mounted image, copy the OpenCV.framework folder
into /Library/Frameworks.

5-2. Inserting the framework into an Xcode project (as of Xcode 3.1)

Open Xcode and create a new Command Line Utility / C++ Tool project
(a target with the same name as the project is created automatically).
In the left-hand Groups & Files bar, right-click the project and choose Add > Existing Frameworks.
When the file browser appears, locate /Library/Frameworks/OpenCV.framework and add it.

5-3. Testing

Try running the example files from the links below.

ref.
http://stefanix.net/opencv-in-xcode
Wonwoo's Life DB :: Using OpenCV in XCODE on MAC OS X
Phillip Whisenhunt - Using openCV for Mac OS in XCode
Installing OpenCV on Mac OS X - NUI Group Community Wiki
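
For a quick sanity check that the framework links at all, a minimal program like the following should do (a sketch of my own, not taken from the linked tutorials; the image path is a placeholder to replace):

// minimal link test for OpenCV.framework (old C API, as used elsewhere in this post)
#include <OpenCV/OpenCV.h>
#include <cstdio>

int main()
{
    // placeholder path: point this at any image file on your machine
    IplImage* img = cvLoadImage("/path/to/test.jpg");
    if (!img) { printf("failed to load the image\n"); return -1; }
    cvNamedWindow("framework test");
    cvShowImage("framework test", img);
    cvWaitKey(0);
    cvReleaseImage(&img);
    cvDestroyWindow("framework test");
    return 0;
}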


5-4. About frameworks on the Mac
 
The reason I was still lost even after reading http://opencv.willowgarage.com/wiki/Mac_OS_X_OpenCV_Port
became clear at http://opencv.willowgarage.com/wiki/PrivateFramework .

ADC: Framework Programming Guide: Creating a Framework
A framework is a hierarchical directory that encapsulates shared resources, such as a dynamic shared library, nib files, image files, localized strings, header files, and reference documentation in a single package.

A framework is also a bundle and its contents can be accessed using Core Foundation Bundle Services or the Cocoa NSBundle class.

Frameworks can include a wider variety of resource types than libraries.

http://en.wikipedia.org/wiki/Framework

http://en.wikipedia.org/wiki/Software_framework


OpenCV is a Private Framework
(2007-06-10, Mark Asbach)

posted by maetel