2 posts tagged 'plane pattern'

  1. 2010.09.24 Learning OpenCV: Chapter 11 Camera Models and Calibration
  2. 2010.05.26 virtual studio implementation: camera calibration
2010. 9. 24. 16:24 Computer Vision
Learning OpenCV ebook
: Chapter 11 Camera Models and Calibration


ref.
opencv v2.1 documentation » cv. Image Processing and Computer Vision » Camera Calibration and 3D Reconstruction (c reference / cpp reference)
Noah Kuntz's OpenCV Tutorial 10 - Chapter 11


370p
detection of light from the world:
light source -> reflected light from an object -> our eye or camera (lens -> retina or imager)


cf. http://en.wikipedia.org/wiki/Electronic_imager


geometry of the ray's travel
 
pinhole camera model

ref. O'Connor 2002
al-Hytham (1021)
Descartes
Kepler
Galileo
Newton
Hooke
Euler
Fermat
Snell

ref.
Trucco 1998
Jaehne 1995;1997
Hartley and Zisserman 2006
Forsyth and Ponce 2003
Shapiro and Stockman 2002
Xu and Zhang 1996


projective geometry

lens distortion

camera calibration
1) to correct mathematically for the main deviations from the simple pinhole model with lenses
2) to relate camera measurements with measurements in the real, 3-dimensional world


3-D scene reconstruction


: Camera Model (371p)

camera calibration => model of the camera's geometry & distortion model of the lens : intrinsic parameters


homography transform

OpenCV function cvCalibrateCamera2() or cv::calibrateCamera

pinhole camera:
single ray -> image plane (projective plane)

(...) the size of the image relative to the distant object is given by a single parameter of the camera: its focal length. For our idealized pinhole camera, the distance from the pinhole aperture to the screen is precisely the focal length.

The point in the pinhole is reinterpreted as the center of projection.

The point at the intersection of the image plane and the optical axis is referred to as the principal point.

(...) the individual pixels on a typical low-cost imager are rectangular rather than square. The focal length (fx) is actually the product of the physical focal length (F) of the lens and the size (sx) of the individual imager elements. (*sx converts physical units to pixel units.)
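
Putting the pinhole model together (a summary in the book's notation, not a verbatim quote): a point (X, Y, Z) in camera coordinates projects to the pixel

x_screen = f_x (X / Z) + c_x,    y_screen = f_y (Y / Z) + c_y

where f_x = F * s_x and f_y = F * s_y fold the physical focal length and the pixel size into pixel-unit focal lengths, and (c_x, c_y) is the principal point.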


: Basic Projective Geometry (373p)

projective transform

homogeneous coordinates

The homogeneous coordinates associated with a point in a projective space of dimension n are typically expressed as an (n+1)-dimensional vector with the additional restriction that any two points whose values are proportional are equivalent.

camera intrinsics matrix (of parameters defining our camera, i.e., fx,fy,cx,and cy)
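
In matrix form (again a summary, not a quote), with homogeneous coordinates q = [x, y, w]^T and Q = [X, Y, Z]^T, the projection is

q = M Q,   where   M = [ f_x   0    c_x ]
                       [  0   f_y   c_y ]
                       [  0    0     1  ]

and dividing q by w (= Z) recovers the pixel coordinates above.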

ref. Heikkila and Silven 1997


OpenCV function cvConvertPointsHomogeneous() or cv::convertPointsHomogeneous

cf. cvReshape or cv::reshape
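
A minimal sketch of the homogeneous-to-Euclidean conversion (assuming the OpenCV 2.x C++ API; the exact overload set of cv::convertPointsHomogeneous has shifted between versions, so treat this as illustrative):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

void toEuclidean()
{
    std::vector<cv::Point3f> homog;                   // homogeneous image points (x, y, w)
    homog.push_back(cv::Point3f(414.f, 262.f, 2.f));
    std::vector<cv::Point2f> eucl;                    // Euclidean output (x/w, y/w)
    cv::convertPointsHomogeneous(homog, eucl);        // eucl[0] becomes (207, 131)
}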

For a camera to form images at a faster rate, we must gather a lot of light over a wider area and bend (i.e., focus) that light to converge at the point of projection. To accomplish this, we use a lens. A lens can focus a large amount of light on a point to give us fast imaging, but it comes at the cost of introducing distortions.


: Lens Distortions (375p)

Radial distortions arise as a result of the shape of the lens, whereas tangential distortions arise from the assembly process of the camera as a whole.


radial distortion:
External points on a front-facing rectangular grid are increasingly displaced inward as the radial distance from the optical center increases.
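
The radial term is usually characterized by the first few terms of a Taylor series expansion around r = 0 (a summary of the book's model, not a quote):

x_corrected = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_corrected = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

where (x, y) is the original location of the distorted point and r is its distance from the optical center.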

"barrel" or "fish-eye" effect -> barrel distortion


tangential distortion:
due to manufacturing defects resulting from the lens not being exactly parallel to the imaging plane.
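
Tangential distortion is characterized by two additional parameters, p_1 and p_2 (again the book's model, summarized):

x_corrected = x + [ 2 p_1 x y + p_2 (r^2 + 2 x^2) ]
y_corrected = y + [ p_1 (r^2 + 2 y^2) + 2 p_2 x y ]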


plumb bob (surveyor's vertical weight) model
ref. D. C. Brown, "Decentering distortion of lenses", Photogrammetric Engineering 32(3): 444–462, 1966.
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html

distortion vector (k1, k2, p1, p2, k3); 5-by-1 matrix


: Calibration (378p)

ref. http://www.vision.caltech.edu/bouguetj/calib_doc/

cvCalibrateCamera2() or cv::calibrateCamera
The method of calibration is to target the camera on a known structure that has many individual and identifiable points. By viewing this structure from a variety of angles, it is possible to then compute the (relative) location and orientation of the camera at the time of each image as well as the intrinsic parameters of the camera.
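
A minimal sketch of the C++ call (assuming the OpenCV 2.x calib3d API; the function and argument names are real, but how objectPoints/imagePoints get filled per view is left out):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// objectPoints: for each view, the 3D board corners in the board's own frame (Z = 0).
// imagePoints:  for each view, the corresponding detected 2D corners.
cv::Mat calibrate(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                  const std::vector<std::vector<cv::Point2f> >& imagePoints,
                  cv::Size imageSize)
{
    cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);    // intrinsics: fx, fy, cx, cy
    cv::Mat distCoeffs   = cv::Mat::zeros(5, 1, CV_64F);  // k1, k2, p1, p2, k3
    std::vector<cv::Mat> rvecs, tvecs;                     // one extrinsic pose per view
    cv::calibrateCamera(objectPoints, imagePoints, imageSize,
                        cameraMatrix, distCoeffs, rvecs, tvecs);
    return cameraMatrix;
}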


(These functions seem to exist in some versions and not in others...)

cv::initCameraMatrix2D

cvFindExtrinsicCameraParams2



: Rotation Matrix and Translation Vector (379p)

Ultimately, a rotation is equivalent to introducing a new description of a point's location in a different coordinate system.
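
In OpenCV's convention (stated here as a reminder, not as a quote from the book), a point P_object in the object/world frame is mapped into the camera frame by the rotation matrix R and the translation vector t:

P_camera = R P_object + t

so each view contributes six extrinsic parameters: three rotation angles and three translation components.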


Using a planar object, we'll see that each view fixes eight parameters. Because the six parameters of rotation and translation change between views, for each view we have constraints on two additional parameters that we use to resolve the camera intrinsic matrix. We'll then need at least two views to solve for all the geometric parameters.


: Chessboards (381p)

OpenCV opts for using multiple views of a planar object (a chessboard) rather than one view of a specially constructed 3D object. We use a pattern of alternating black and white squares, which ensures that there is no bias toward one side or the other in measurement. Also, the resulting grid corners lend themselves naturally to the subpixel localization function.

Use a chessboard grid that is asymmetric and of even and odd dimensions - for example, (5, 6). Using such even-odd asymmetry yields a chessboard that has only one symmetry axis, so the board orientation can always be defined uniquely.


The chessboard interior corners are simply a special case of the more general Harris corners; the chessboard corners just happen to be particularly easy to find and track.
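
A minimal sketch of the chessboard detection step (OpenCV 2.x C++ API; the (5, 6) corner count just follows the example above, and the drawing call is only for visual checking):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

bool detectBoard(cv::Mat& image, std::vector<cv::Point2f>& corners)
{
    cv::Size patternSize(5, 6);   // interior corners per row/column: one odd, one even
    bool found = cv::findChessboardCorners(image, patternSize, corners);
    if (found)
        cv::drawChessboardCorners(image, patternSize, cv::Mat(corners), found);  // overlay
    return found;
}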







cf. Chapter 10: Tracking and Motion: Subpixel Corners
319p: If you are processing images for the purpose of extracting geometric measurements, as opposed to extracting features for recognition, then you will normally need more resolution than the simple pixel values supplied by cvGoodFeaturesToTrack().

fitting a curve (a parabola)
ref. newer techniques
Lucchese02
Chen05


cvFindCornerSubPix() or cv::cornerSubPix (cf. cv::getRectSubPix)
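
A minimal sketch of the subpixel refinement (OpenCV 2.x C++ API; the 11x11 window and the termination criteria are placeholder values, not values from the book):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

void refineCorners(const cv::Mat& gray, std::vector<cv::Point2f>& corners)
{
    // Iteratively refine each corner location inside an 11x11 search window.
    cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER,
                                      30, 0.01));
}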



ref.
Zhang99; Zhang00
Sturm99


: Homography (384p)
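
This section derives (summarized here, not quoted) that a point Q' = [X, Y, 1]^T on the calibration plane Z = 0 maps to the image point q by a 3x3 homography

q = s H Q',    H = M [ r_1  r_2  t ]

where M is the camera intrinsic matrix, r_1 and r_2 are the first two columns of the rotation matrix, t is the translation vector, and s is an arbitrary scale factor.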




posted by maetel
2010. 5. 26. 22:59 Computer Vision
2010/02/10 - [Visual Information Processing Lab] - Seong-Woo Park & Yongduek Seo & Ki-Sang Hong
2010/05/18 - [Visual Information Processing Lab] - virtual studio implementation: camera calibration test



1. Computing the intrinsic parameters

Use cvCalibrateCamera2() to obtain the camera's intrinsic/extrinsic parameters and the lens distortion coefficients.


frame # 191  ---------------------------
# of found lines = 8 vertical, 6 horizontal
vertical lines:
horizontal lines:
p.size = 48
CRimage.size = 48
# of corresponding pairs = 15 = 15

camera matrix
fx=286.148 0 cx=207.625
0 fy=228.985 cy=98.8437
0 0 1

lens distortion
k1 = 0.0728017
k2 = -0.0447815
p1 = -0.0104295
p2 = 0.00914935

rotation vector
-0.117104  -0.109022  -0.0709096

translation vector
-208.234  -160.983  163.298



Using these results, cvProjectPoints2() was used to find the image points corresponding to the pattern points; the result is shown below.
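
A minimal sketch of that reprojection check (shown with the C++ counterpart cv::projectPoints instead of the C call cvProjectPoints2(); variable names are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

// Reproject the known pattern points using the estimated pose and intrinsics,
// then compare the result against the detected image points.
void reproject(const std::vector<cv::Point3f>& patternPoints,
               const cv::Mat& rvec, const cv::Mat& tvec,
               const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
               std::vector<cv::Point2f>& reprojected)
{
    cv::projectPoints(patternPoints, rvec, tvec, cameraMatrix, distCoeffs, reprojected);
}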




1-1.

Instead of cvCalibrateCamera2(), which computes both the intrinsic and the extrinsic parameters, try
cvInitIntrinsicParams2D(), which computes only the intrinsic parameters.
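
The C++ counterpart is cv::initCameraMatrix2D(); a minimal sketch (OpenCV 2.x, assuming the same per-view objectPoints/imagePoints vectors used for calibration, with planar object points at Z = 0):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

cv::Mat initIntrinsics(const std::vector<std::vector<cv::Point3f> >& objectPoints,
                       const std::vector<std::vector<cv::Point2f> >& imagePoints,
                       cv::Size imageSize)
{
    // Only fx, fy, cx, cy are estimated; no distortion coefficients and no extrinsics.
    return cv::initCameraMatrix2D(objectPoints, imagePoints, imageSize);
}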



2. Rectification using the lens distortion (kappa1, kappa2)

When pattern recognition succeeds, the camera calibration result naturally becomes accurate, and from it we can compute in real time the object (graphic) coordinates needed to composite virtual objects. The current cause of pattern-recognition failure in our program is error in the line detection. This error has several causes, but the largest is lens distortion (which we do not currently take into account). In practice, several (2-3) lines are detected for what is really a single line (the NMS algorithm alone has shown its limits in reducing this error), and the resulting error in the computed intersection coordinates becomes the decisive source of error in the cross-ratio computation. Because the current pattern generation and pattern recognition depend entirely on the cross-ratio values, this problem must be solved. Therefore, we take lens distortion into account, unwarp (rectify) the input image, and then apply the existing pattern-recognition algorithm.

ref.
Learning OpenCV: Chapter 6: Image Transforms
opencv v2.1 documentation — Geometric Image Transformations


1) Undistortion

Learning OpenCV: 396p
"OpenCV provides us with a ready-to-use undistortion algorithm that takes a raw image and the distortion coefficients from cvCalibrateCamera2() and produces a corrected image (see Figure 11-12). We can access this algorithm either through the function cvUndistort2(), which does everything we need in one shot, or through the pair of routines cvInitUndistortMap() and cvRemap(), which allow us to handle things a little more efficiently for video or other situations where we have many images from the same camera. ( * We should take a moment to clearly make a distinction here between undistortion, which mathematically removes lens distortion, and rectification, which mathematically aligns the images with respect to each other. )"

Input image (with lens distortion)

Output image (distortion removed)
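
A minimal sketch of the one-shot correction (C++ equivalent of cvUndistort2(); cameraMatrix and distCoeffs are the values estimated in step 1):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat undistortFrame(const cv::Mat& frame,
                       const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs)
{
    cv::Mat corrected;
    cv::undistort(frame, corrected, cameraMatrix, distCoeffs);  // remove lens distortion
    return corrected;
}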






 

# of corresponding pairs = 30 = 30

camera matrix
fx=94.6664 0 cx=206.772
0 fy=78.3349 cy=158.782
0 0 1

lens distortion
k1 = 0.0130734
k2 = -0.000955421
p1 = 0.00287948
p2 = 0.00158042









            // keep only calibration results whose k1 and principal point (cx, cy) fall in these ranges
            if ( ( k1 > 0.3 && k1 < 0.6 ) && ( cx > 150.0 && cx < 170.0 ) && ( cy > 110 && cy < 130 ) )


# of corresponding pairs = 42 = 42

camera matrix
fx=475.98 0 cx=162.47
0 fy=384.935 cy=121.552
0 0 1

lens distortion
k1 = 0.400136
k2 = -0.956089
p1 = 0.00367761
p2 = 0.00547217







2) Rectification




cvInitUndistortRectifyMap
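
For video it is cheaper to precompute the maps once and then remap every frame; a minimal sketch (C++ equivalents of cvInitUndistortRectifyMap() and cvRemap(), assuming no stereo rectification so R is the identity):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

void buildMaps(const cv::Mat& cameraMatrix, const cv::Mat& distCoeffs,
               cv::Size imageSize, cv::Mat& map1, cv::Mat& map2)
{
    // Precompute the undistortion/rectification maps once per camera.
    cv::initUndistortRectifyMap(cameraMatrix, distCoeffs,
                                cv::Mat(),     // R: empty = identity, no rectifying rotation
                                cameraMatrix,  // keep the same camera matrix for the output
                                imageSize, CV_32FC1, map1, map2);
}

void rectifyFrame(const cv::Mat& frame, const cv::Mat& map1, const cv::Mat& map2,
                  cv::Mat& rectified)
{
    cv::remap(frame, rectified, map1, map2, cv::INTER_LINEAR);  // apply the maps per frame
}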



3. line detection




4. Pattern recognition (finding corresponding points)




5. Computing the extrinsic parameters (input: the result of step 4, with lens distortion = 0)
cvFindExtrinsicCameraParams2()
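
A minimal sketch (the C++ counterpart of cvFindExtrinsicCameraParams2() is cv::solvePnP(); as stated above, the distortion coefficients are passed as zeros because the image has already been undistorted):

#include <opencv2/core/core.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>

void estimatePose(const std::vector<cv::Point3f>& patternPoints,  // pattern points from step 4
                  const std::vector<cv::Point2f>& imagePoints,    // matched image points
                  const cv::Mat& cameraMatrix,
                  cv::Mat& rvec, cv::Mat& tvec)
{
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);  // lens distortion = 0 (already removed)
    cv::solvePnP(patternPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
}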



6. Reprojection
Apply this to the rectified image obtained in step 2.




posted by maetel