2010. 9. 24. 16:24 Computer Vision
Learning OpenCV ebook
: Chapter 11 Camera Models and Calibration


ref.
opencv v2.1 documentation » cv. Image Processing and Computer Vision » Camera Calibration and 3D Reconstruction (c reference / cpp reference)
Noah Kuntz's OpenCV Tutorial 10 - Chapter 11


370p
detection of light from the world:
light source -> reflected light from an object -> our eye or camera (lens -> retina or imager)


cf. http://en.wikipedia.org/wiki/Electronic_imager


geometry of the ray's travel
 
pinhole camera model

ref. O'Connor 2002
al-Hytham (1021)
Descartes
Kepler
Galileo
Newton
Hooke
Euler
Fermat
Snell

ref.
Trucco 1998
Jaehne 1995;1997
Hartley and Zisserman 2006
Forsyth and Ponce 2003
Shapiro and Stockman 2002
Xu and Zhang 1996


projective geometry

lens distortion

camera calibration
1) to correct mathematically for the main deviations from the simple pinhole model with lenses
2) to relate camera measurements with measurements in the real, 3-dimensional world


3-D scene reconstruction


: Camera Model (371p)

camera calibration => model of the camera's geometry & distortion model of the lens : intrinsic parameters


homography transform

OpenCV function cvCalibrateCamera2() or cv::calibrateCamera

pinhole camera:
single ray -> image plane (projective plane)

(...) the size of the image relative to the distant object is given by a single parameter of the camera: its focal length. For our idealized pinhole camera, the distance from the pinhole aperture to the screen is precisely the focal length.

The point in the pinhole is reinterpreted as the center of projection.

The point at the intersection of the image plane and the optical axis is referred to as the principal point.

(...) the individual pixels on a typical low-cost imager are rectangular rather than square. The focal length (fx) is actually the product of the physical focal length (F) of the lens and the size (sx) of the individual imager elements. (*sx converts physical units to pixel units.)
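As an illustrative sketch of this relation (all numbers below are made up, not from the book): a camera-frame point (X, Y, Z) projects to pixel coordinates as x = fx·X/Z + cx, y = fy·Y/Z + cy, with fx = F·sx.

```python
# Pinhole projection: a camera-frame point (X, Y, Z) maps to pixel (x, y).
# fx = F * sx converts the physical focal length F into pixel units.
def project(point, fx, fy, cx, cy):
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Illustrative numbers only: a 4 mm lens on an imager with 500 px/mm.
F, sx, sy = 4.0, 500.0, 500.0
fx, fy = F * sx, F * sy          # 2000 px
cx, cy = 320.0, 240.0            # principal point

x, y = project((0.1, 0.05, 1.0), fx, fy, cx, cy)
print(x, y)  # -> 520.0 340.0
```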


: Basic Projective Geometry (373p)

projective transform

homogeneous coordinates

The homogeneous coordinates associated with a point in a projective space of dimension n are typically expressed as an (n+1)-dimensional vector with the additional restriction that any two points whose values are proportional are equivalent.
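The proportionality restriction can be sketched as follows (helper names are illustrative, not the OpenCV API):

```python
# Homogeneous coordinates: (x, y) <-> (x, y, w); scaling a homogeneous
# vector by any nonzero factor leaves the represented point unchanged.
def to_homogeneous(p):
    return (*p, 1.0)

def from_homogeneous(h):
    *xs, w = h
    return tuple(x / w for x in xs)

p = (3.0, 4.0)
print(to_homogeneous(p))  # -> (3.0, 4.0, 1.0)
# Proportional vectors name the same projective point:
assert from_homogeneous((6.0, 8.0, 2.0)) == p
```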

camera intrinsics matrix (of parameters defining our camera, i.e., fx,fy,cx,and cy)

ref. Heikkila and Silven 1997


OpenCV function cvConvertPointsHomogeneous() or cv::convertPointsHomogeneous

cf. cvReshape or cv::reshape

For a camera to form images at a faster rate, we must gather a lot of light over a wider area and bend (i.e., focus) that light to converge at the point of projection. To accomplish this, we use a lens. A lens can focus a large amount of light on a point to give us fast imaging, but it comes at the cost of introducing distortions.


: Lens Distortions (375p)

Radial distortions arise as a result of the shape of the lens, whereas tangential distortions arise from the assembly process of the camera as a whole.


radial distortion:
External points on a front-facing rectangular grid are increasingly displaced inward as the radial distance from the optical center increases.

"barrel" or "fish-eye" effect -> barrel distortion


tangential distortion:
due to manufacturing defects resulting from the lens not being exactly parallel to the imaging plane.


plumb bob model
ref. D. C. Brown, "Decentering distortion of lenses", Photogrammetric Engineering 32(3): 444–462, 1966.
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html

distortion vector (k1, k2, p1, p2, k3); 5-by-1 matrix
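The five coefficients enter Brown's distortion equations, which can be sketched in pure Python on normalized image coordinates (the helper below is illustrative, not OpenCV code):

```python
# Plumb-bob distortion on normalized image coordinates (x, y):
# radial terms k1, k2, k3 and tangential terms p1, p2.
def distort(x, y, k1, k2, p1, p2, k3):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all five coefficients zero the point is unchanged:
assert distort(0.3, -0.2, 0, 0, 0, 0, 0) == (0.3, -0.2)
# A negative k1 pulls points toward the center (barrel distortion):
xd, yd = distort(0.3, -0.2, -0.2, 0, 0, 0, 0)
print(xd, yd)
```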


: Calibration (378p)

ref. http://www.vision.caltech.edu/bouguetj/calib_doc/

cvCalibrateCamera2() or cv::calibrateCamera
The method of calibration is to target the camera on a known structure that has many individual and identifiable points. By viewing this structure from a variety of angles, it is possible to then compute the (relative) location and orientation of the camera at the time of each image as well as the intrinsic parameters of the camera.


These seem to exist in some OpenCV versions but not in others...

cv::initCameraMatrix2D

cvFindExtrinsicCameraParams2



: Rotation Matrix and Translation Vector (379p)

Ultimately, a rotation is equivalent to introducing a new description of a point's location in a different coordinate system.


Using a planar object, we'll see that each view fixes eight parameters. Because the six extrinsic parameters change between views, each view leaves two additional constraints that we use to resolve the camera intrinsic matrix. We'll then need at least two views to solve for all the geometric parameters.
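That parameter count can be sketched as a toy check (the function and its defaults are illustrative, not from the book):

```python
# Each planar view contributes 2 constraints on the 4 intrinsics
# fx, fy, cx, cy (a view's homography fixes 8 numbers, 6 of which are
# spent on that view's rotation and translation).
def views_needed(intrinsics=4, constraints_per_view=2):
    views = 1
    while views * constraints_per_view < intrinsics:
        views += 1
    return views

print(views_needed())  # -> 2, i.e. at least two views
```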


: Chessboards (381p)

OpenCV opts for using multiple views of a planar object (a chessboard) rather than one view of a specially constructed 3D object. We use a pattern of alternating black and white squares, which ensures that there is no bias toward one side or the other in measurement. Also, the resulting grid corners lend themselves naturally to the subpixel localization function.

Use a chessboard grid that is asymmetric and of even and odd dimensions - for example, (5, 6). Such even-odd asymmetry yields a chessboard with only one symmetry axis, so the board orientation can always be defined uniquely.


The chessboard interior corners are simply a special case of the more general Harris corners; the chessboard corners just happen to be particularly easy to find and track.







cf. Chapter 10: Tracking and Motion: Subpixel Corners
319p: If you are processing images for the purpose of extracting geometric measurements, as opposed to extracting features for recognition, then you will normally need more resolution than the simple pixel values supplied by cvGoodFeaturesToTrack().

fitting a curve (a parabola)
ref. newer techniques
Lucchese02
Chen05


cvFindCornerSubpix() or cv::cornerSubPix (cf. cv::getRectSubPix )



ref.
Zhang99; Zhang00
Sturm99


: Homography (384p)




posted by maetel
2010. 6. 2. 23:53 Computer Vision
The cvCalibrateCamera2() function is defined in the file /opencv/src/cv/cvcalibration.cpp.

Alternatively, see the OpenCV Trac page: /branches/OPENCV_1_0/opencv/src/cv/cvcalibration.cpp





ref. Learning OpenCV: Chapter 11 Camera Models and Calibration

378p: "Calibration"
http://www.vision.caltech.edu/bouguetj/calib_doc/

388p: Intrinsic parameters are directly tied to the 3D geometry (and hence the extrinsic parameters) of where the chessboard is in space; distortion parameters are tied to the 3D geometry of how the pattern of points gets distorted, so we deal with the constraints on these two classes of parameters separately.

389p: The algorithm OpenCV uses to solve for the focal lengths and offsets is based on Zhang's method [Zhang00], but OpenCV uses a different method based on Brown [Brown71] to solve for the distortion parameters.


1) number of views

Among the arguments, pointCounts is an integer 1xM or Mx1 vector (where M is the number of calibration pattern views).
In our case, however, each input frame is a single view, so M = 1.
int numView = 1; // number of calibration pattern views
// integer "1 x numView" vector
CvMat* countsP = cvCreateMat( numView, 1, CV_32SC1 );
// the sum of vector elements must match the size of objectPoints and imagePoints
// cvmSet( countsP, 0, 0, numPair );    // <-- this causes an error; use the line below instead
cvSet( countsP, cvScalar(numPair) );


2) rotation vector와 translation vector

If the matrices that will receive the extrinsic parameters among the outputs of cvCalibrateCamera2() are created as follows,
// extrinsic parameters
CvMat* vectorR = cvCreateMat( 3, 1, CV_32FC1 ); // rotation vector
CvMat* vectorT = cvCreateMat( 3, 1, CV_32FC1 ); // translation vector

the following error messages occur:

OpenCV ERROR: Bad argument (the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/cvcalibration.cpp(1488)

OpenCV ERROR: Bad argument (the output array of translation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/

It must be changed as follows:
// extrinsic parameters
CvMat* vectorR = cvCreateMat( 1, 3, CV_32FC1 ); // rotation vector
CvMat* vectorT = cvCreateMat( 1, 3, CV_32FC1 ); // translation vector


3) rotation matrix

To obtain the camera rotation in the desired form, the rotation vector must be converted to a rotation matrix. (See the explanation below.)

Learning OpenCV: 394p
"(The rotation vector) represents an axis in three-dimensional space in the camera coordinate system around which (the pattern) was rotated and where the length or magnitude of the vector encodes the counterclockwise angle of the rotation. Each of these rotation vectors can be converted to a 3-by-3 rotation matrix by calling cvRodrigues2()."
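Rodrigues' formula itself is easy to sketch in pure Python (an illustrative stand-in for cvRodrigues2(), not the OpenCV implementation):

```python
import math

# Rodrigues' formula: rotation vector r (unit axis times angle) ->
# 3x3 rotation matrix R = I + sin(t)*K + (1 - cos(t))*K^2,
# where K is the cross-product matrix of the unit axis.
def rodrigues(r):
    t = math.sqrt(sum(v * v for v in r))
    if t == 0.0:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = (v / t for v in r)
    K = [[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]]
    s, c = math.sin(t), math.cos(t)
    R = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            K2 = sum(K[i][m] * K[m][j] for m in range(3))
            R[i][j] = (i == j) + s * K[i][j] + (1 - c) * K2
    return R

# 90 degrees about the z-axis maps the x-axis onto the y-axis,
# so the first column of R is (0, 1, 0):
R = rodrigues((0.0, 0.0, math.pi / 2))
print([round(v, 6) for v in (R[0][0], R[1][0], R[2][0])])  # -> [0.0, 1.0, 0.0]
```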








check #1. Number of corresponding point pairs

The cvCalibrateCamera2() function is supposed to perform camera calibration given 4 valid corresponding point pairs; the function runs without error and outputs computed values, but the result is not accurate. (OpenCV's algorithm computes the lens-distortion parameters and the camera intrinsic parameters separately. For the lens distortion it solves for 5 unknowns in total, 3 radial distortion coefficients and 2 tangential distortion coefficients, so in theory the 6 values obtained from 3 two-dimensional (x, y) image points suffice for the computation.)

However, calibrating the camera with 4 corresponding point pairs in our program gives the following results.

A case where pattern recognition succeeded with 4 corresponding pairs

For verification, the 4 pattern points were reprojected onto the input image using the calibration result from the 4 corresponding pairs shown in the left image


 
frame # 103  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=1958.64 0 cx=160.37
0 fy=792.763 cy=121.702
0 0 1

lens distortion
k1 = -8.17823
k2 = -0.108369
p1 = -0.388965
p2 = -0.169033

rotation vector
4.77319  63.4612  0.300428

translation vector
-130.812  -137.452  714.396


Re-checking...

A case where pattern recognition succeeded with 4 corresponding pairs

For verification, the 4 pattern points were reprojected onto the input image using the calibration result from the 4 corresponding pairs shown in the left image



frame # 87  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=372.747 0 cx=159.5
0 fy=299.305 cy=119.5
0 0 1

lens distortion
k1 = -7.36674e-14
k2 = 8.34645e-14
p1 = -9.57187e-15
p2 = -4.6854e-15

rotation vector
-0.276568  -0.125119  -0.038675

translation vector
-196.841  -138.012  168.806


That is, a camera matrix can be computed arithmetically from 4 corresponding pairs. With the current pattern, however, the cases where exactly 4 corresponding pairs are found are usually those where the 4 corner points of a single grid square were detected. The lens-distortion effect cannot be sufficiently reflected in the positions of 4 adjacent points, so the result amounts to assuming a distortion-free lens, as in the output above where k1 ≈ 0.

One more thing: the first thing cvCalibrateCamera2() does internally is call cvConvertPointsHomogeneous() to convert the corresponding world coordinates and image coordinates given as input matrices into homogeneous coordinates. The function documentation notes: "It is always safe to use the function with number of points N >= 5, or to use multi-channel Nx1 or 1xN arrays."

This assumes the camera lens has no skew.
Fixed-value initialization of the intrinsic parameters is possible only when the pattern from which the correspondences are obtained is planar (z = 0): "where z-coordinates of the object points must be all 0's."
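A planar object-point list of the required form can be sketched as follows (the helper name and square size are illustrative):

```python
# Planar calibration object: chessboard interior corners listed in
# board coordinates with z = 0, as the initialization requires.
def chessboard_object_points(cols, rows, square_size=1.0):
    return [(c * square_size, r * square_size, 0.0)
            for r in range(rows) for c in range(cols)]

pts = chessboard_object_points(5, 6)   # the (5, 6) board from above
print(len(pts))  # -> 30
assert all(z == 0.0 for _, _, z in pts)
```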
 
 
posted by maetel