Leeway is... the freedom that someone has to take the action they want to or to change their plans.
maetel


750 posts in the '글타래' category

  1. 2010.07.16 DLR Camera Calibration Toolbox
  2. 2010.07.07 Test: reprojection error (Quickcam Pro4000)
  3. 2010.07.07 Tomas Svoboda <Multi-Camera Self-Calibration>
  4. 2010.07.06 fprintf() test
  5. 2010.07.04 Ant Colony Optimization (ACO)
  6. 2010.06.22 G. E. Karras et al. "Modeling Distortion Of Super-Wide-Angle Lenses For Architectural And Archaeological Applications"
  7. 2010.06.22 Hans-Paul Schwefel [EVOLUTION AND OPTIMUM SEEKING]
  8. 2010.06.22 Rafael Grompone von Gioi et al. "LSD: A Fast Line Segment Detector with a False Detection Control"
  9. 2010.06.22 Luis Alvarez et al. "An Algebraic Approach to Lens Distortion by Line Rectification"
  10. 2010.06.22 Lens Distortion
  11. 2010.06.21 Particle Swarm Optimization (PSO)
  12. 2010.06.14 E. Trucco and A. Verri <Introductory Techniques for 3-D Computer Vision>
  13. 2010.06.14 pinhole camera model
  14. 2010.06.12 Unscented Transform
  15. 2010.06.11 Jun-Sik Kim & In-So Kweon, "Camera intrinsic-parameter calibration system and camera calibration method using concentric circle patterns"
  16. 2010.06.08 4D View Solutions
  17. 2010.06.08 G. Jiang and L. Quan "Detection of concentric circles for camera calibration"
  18. 2010.06.08 Xiaochun Cao & Hassan Foroosh "CAMERA CALIBRATION WITHOUT METRIC INFORMATION USING 1D OBJECTS"
  19. 2010.06.04 OpenCV: cvFindCornerSubPix()
  20. 2010.06.02 OpenCV: cvCalibrateCamera2( )
  21. 2010.06.02 Duane C. Brown "Close-Range Camera Calibration"
  22. 2010.05.30 test: composing OpenCV IplImage and OpenGL graphics in one window screen
  23. 2010.05.27 virtual studio implementation: virtual object rendering test
  24. 2010.05.26 virtual studio implementation: camera calibration
  25. 2010.05.26 equation editor
The DLR Camera Calibration Toolbox
http://www.dlr.de/rm/desktopdefault.aspx/tabid-3925/


Institute of Robotics and Mechatronics, German Aerospace Center (DLR)



> Features
Stereo (two or more cameras)
Automatic calibration (camera lens distortion, intrinsic/extrinsic parameters)
Hand-eye calibration
Manual operation also possible


[1] K. H. Strobl and G. Hirzinger. "Optimal Hand-Eye Calibration." In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), Beijing, China, pp. 4647-4653, October 2006.

[2] J. Weng, P. Cohen, and M. Herniou. "Camera calibration with distortion models and accuracy evaluation." In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 14(10): 965-980, 1992.

[3] K. H. Strobl and G. Hirzinger. "More Accurate Camera and Hand-Eye Calibrations with Unknown Grid Pattern Dimensions." In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2008), Pasadena, California, USA, May 2008, in press.


posted by maetel



frame # 129  ---------------------------
# of found lines = 10 vertical, 8 horizontal
vertical lines:
horizontal lines:
p.size = 80
corner
35.6611, -6.40545  33.0973, 24.1258  32.2603, 34.0929  29.1089, 71.6216  26.9599, 97.2142  24.1377, 130.823  21.5627, 161.487  18.0768, 203  86.259, -2.68977  84.6836, 27.3713  84.1602, 37.3582  82.2186, 74.405  80.9047, 99.4752  79.1713, 132.552  77.5933, 162.661  75.4792, 203  108.14, -1.08291  106.889, 28.7684  106.47, 38.7618  104.926, 75.595  103.885, 100.438  102.508, 133.286  101.257, 163.156  99.5866, 203  155.983, 2.43046  155.675, 31.8377  155.57, 41.8509  155.189, 78.2292  154.934, 102.578  154.596, 134.923  154.288, 164.267  153.883, 203  168.044, 3.31615  168.351, 32.6352  168.456, 42.6616  168.836, 78.9444  169.09, 103.171  169.427, 135.389  169.733, 164.591  170.135, 203  202.166, 5.82193  202.773, 34.8009  202.983, 44.8339  203.736, 80.7734  204.236, 104.644  204.904, 136.504  205.508, 165.34  206.297, 203  216.478, 6.8729  217.688, 35.7392  218.109, 45.7855  219.61, 81.6053  220.604, 105.33  221.933, 137.039  223.135, 165.709  224.698, 203  246.815, 9.1007  248.312, 37.6659  248.838, 47.7188  250.7, 83.2347  251.927, 106.643  253.572, 138.033  255.057, 166.378  256.976, 203  274.553, 11.1377  276.632, 39.4477  277.371, 49.514  279.96, 84.7681  281.658, 107.889  283.942, 138.987  286.001, 167.026  288.643, 203  312.163, 13.8996  315.103, 41.8681  316.163, 51.9545  319.831, 86.8577  322.22, 109.589  325.447, 140.292  328.35, 167.913  332.038, 203 

please
CRimage.size = 80
index matching:    image# 0, 0     to world# 3, 5
index matching:    image# 0, 1     to world# 3, 6
index matching:    image# 0, 2     to world# 3, 7
index matching:    image# 0, 3     to world# 3, 8
index matching:    image# 0, 4     to world# 3, 9
index matching:    image# 0, 5     to world# 3, 10
index matching:    image# 1, 0     to world# 4, 5
index matching:    image# 1, 1     to world# 4, 6
index matching:    image# 1, 2     to world# 4, 7
index matching:    image# 1, 3     to world# 4, 8
index matching:    image# 1, 4     to world# 4, 9
index matching:    image# 1, 5     to world# 4, 10
index matching:    image# 2, 0     to world# 5, 5
index matching:    image# 2, 1     to world# 5, 6
index matching:    image# 2, 2     to world# 5, 7
index matching:    image# 2, 3     to world# 5, 8
index matching:    image# 2, 4     to world# 5, 9
index matching:    image# 2, 5     to world# 5, 10
index matching:    image# 3, 0     to world# 6, 5
index matching:    image# 3, 1     to world# 6, 6
index matching:    image# 3, 2     to world# 6, 7
index matching:    image# 3, 3     to world# 6, 8
index matching:    image# 3, 4     to world# 6, 9
index matching:    image# 3, 5     to world# 6, 10
index matching:    image# 4, 0     to world# 7, 5
index matching:    image# 4, 1     to world# 7, 6
index matching:    image# 4, 2     to world# 7, 7
index matching:    image# 4, 3     to world# 7, 8
index matching:    image# 4, 4     to world# 7, 9
index matching:    image# 4, 5     to world# 7, 10
index matching:    image# 5, 0     to world# 8, 5
index matching:    image# 5, 1     to world# 8, 6
index matching:    image# 5, 2     to world# 8, 7
index matching:    image# 5, 3     to world# 8, 8
index matching:    image# 5, 4     to world# 8, 9
index matching:    image# 5, 5     to world# 8, 10
index matching:    image# 6, 0     to world# 9, 5
index matching:    image# 6, 1     to world# 9, 6
index matching:    image# 6, 2     to world# 9, 7
index matching:    image# 6, 3     to world# 9, 8
index matching:    image# 6, 4     to world# 9, 9
index matching:    image# 6, 5     to world# 9, 10
index matching:    image# 7, 0     to world# 10, 5
index matching:    image# 7, 1     to world# 10, 6
index matching:    image# 7, 2     to world# 10, 7
index matching:    image# 7, 3     to world# 10, 8
index matching:    image# 7, 4     to world# 10, 9
index matching:    image# 7, 5     to world# 10, 10
index matching:    image# 8, 0     to world# 11, 5
index matching:    image# 8, 1     to world# 11, 6
index matching:    image# 8, 2     to world# 11, 7
index matching:    image# 8, 3     to world# 11, 8
index matching:    image# 8, 4     to world# 11, 9
index matching:    image# 8, 5     to world# 11, 10
index matching:    image# 9, 0     to world# 12, 5
index matching:    image# 9, 1     to world# 12, 6
index matching:    image# 9, 2     to world# 12, 7
index matching:    image# 9, 3     to world# 12, 8
index matching:    image# 9, 4     to world# 12, 9
index matching:    image# 9, 5     to world# 12, 10
coordinate matching:    image 35.5294, -6.40545     to world 62.07, 147.129, 0.0
coordinate matching:    image 33.4637, 24.1207     to world 62.07, 182.066, 0.0
coordinate matching:    image 32.4883, 34.7344     to world 62.07, 193.937, 0.0
coordinate matching:    image 29.3473, 71.7697     to world 62.07, 233.937, 0.0
coordinate matching:    image 27.0674, 97.0156     to world 62.07, 259.855, 0.0
coordinate matching:    image 24.235, 130.495     to world 62.07, 293.265, 0.0
coordinate matching:    image 86.259, -2.68977     to world 102.07, 147.129, 0.0
coordinate matching:    image 84.5506, 27.4862     to world 102.07, 182.066, 0.0
coordinate matching:    image 84.0488, 37.9824     to world 102.07, 193.937, 0.0
coordinate matching:    image 82.2594, 74.6152     to world 102.07, 233.937, 0.0
coordinate matching:    image 80.7738, 99.5121     to world 102.07, 259.855, 0.0
coordinate matching:    image 79.3161, 132.494     to world 102.07, 293.265, 0.0
coordinate matching:    image 108.14, -1.08291     to world 119.45, 147.129, 0.0
coordinate matching:    image 106.447, 28.8103     to world 119.45, 182.066, 0.0
coordinate matching:    image 105.952, 39.3494     to world 119.45, 193.937, 0.0
coordinate matching:    image 104.521, 75.7666     to world 119.45, 233.937, 0.0
coordinate matching:    image 103.606, 100.493     to world 119.45, 259.855, 0.0
coordinate matching:    image 102.511, 133.31     to world 119.45, 293.265, 0.0
coordinate matching:    image 155.327, 2.50148     to world 159.45, 147.129, 0.0
coordinate matching:    image 155.143, 32.0653     to world 159.45, 182.066, 0.0
coordinate matching:    image 154.91, 42.5242     to world 159.45, 193.937, 0.0
coordinate matching:    image 154.682, 78.4145     to world 159.45, 233.937, 0.0
coordinate matching:    image 154.596, 102.636     to world 159.45, 259.855, 0.0
coordinate matching:    image 154.507, 134.865     to world 159.45, 293.265, 0.0
coordinate matching:    image 169.293, 3.464     to world 171.309, 147.129, 0.0
coordinate matching:    image 169.238, 33.0578     to world 171.309, 182.066, 0.0
coordinate matching:    image 169.302, 43.3455     to world 171.309, 193.937, 0.0
coordinate matching:    image 169.333, 79.1385     to world 171.309, 233.937, 0.0
coordinate matching:    image 169.423, 103.342     to world 171.309, 259.855, 0.0
coordinate matching:    image 169.546, 135.378     to world 171.309, 293.265, 0.0
coordinate matching:    image 202.289, 5.97719     to world 199.965, 147.129, 0.0
coordinate matching:    image 202.842, 35.3023     to world 199.965, 182.066, 0.0
coordinate matching:    image 203.138, 45.4776     to world 199.965, 193.937, 0.0
coordinate matching:    image 203.843, 80.9247     to world 199.965, 233.937, 0.0
coordinate matching:    image 204.481, 104.772     to world 199.965, 259.855, 0.0
coordinate matching:    image 205.419, 136.449     to world 199.965, 293.265, 0.0
coordinate matching:    image 217.636, 7.27727     to world 213.626, 147.129, 0.0
coordinate matching:    image 218.523, 36.3636     to world 213.626, 182.066, 0.0
coordinate matching:    image 218.858, 46.4402     to world 213.626, 193.937, 0.0
coordinate matching:    image 220.052, 81.6308     to world 213.626, 233.937, 0.0
coordinate matching:    image 220.875, 105.426     to world 213.626, 259.855, 0.0
coordinate matching:    image 222.084, 136.772     to world 213.626, 293.265, 0.0
coordinate matching:    image 246.503, 9.52037     to world 239.562, 147.129, 0.0
coordinate matching:    image 247.889, 38.3327     to world 239.562, 182.066, 0.0
coordinate matching:    image 248.441, 48.3851     to world 239.562, 193.937, 0.0
coordinate matching:    image 250.452, 82.8983     to world 239.562, 233.937, 0.0
coordinate matching:    image 251.677, 106.428     to world 239.562, 259.855, 0.0
coordinate matching:    image 253.478, 137.564     to world 239.562, 293.265, 0.0
coordinate matching:    image 274.335, 11.6118     to world 265.097, 147.129, 0.0
coordinate matching:    image 276.296, 40.1247     to world 265.097, 182.066, 0.0
coordinate matching:    image 276.934, 50.0298     to world 265.097, 193.937, 0.0
coordinate matching:    image 279.607, 84.2214     to world 265.097, 233.937, 0.0
coordinate matching:    image 281.52, 107.359     to world 265.097, 259.855, 0.0
coordinate matching:    image 283.694, 138.233     to world 265.097, 293.265, 0.0
coordinate matching:    image 312.421, 14.5857     to world 300.95, 147.129, 0.0
coordinate matching:    image 314.962, 42.5479     to world 300.95, 182.066, 0.0
coordinate matching:    image 316.064, 52.2651     to world 300.95, 193.937, 0.0
coordinate matching:    image 320.518, 85.6258     to world 300.95, 233.937, 0.0
coordinate matching:    image 322.22, 109.589     to world 300.95, 259.855, 0.0
coordinate matching:    image 325.447, 138.885     to world 300.95, 293.265, 0.0

camera matrix
fx=301.669 0 cx=149.066
0 fy=237.063 cy=167.683
0 0 1

lens distortion
k1 = 0.0280614
k2 = -0.0278541
p1 = -0.00314248
p2 = 0.00323144

rotation vector
-0.155056  -0.115457  0.0134497

translation vector
-152.854  -325.802  262.059

check reprojection?
reprojection errors
0.769114    0.25348    0.163043    0.217367    0.0561251    0.188387    0.106855    0.103549    0.00969562    0.162657    0.168114    0.0333034    0.609429    0.146523    0.0704871    0.181848    0.0884141    0.106862    0.0507463    0.14039    0.0975367    0.131104    0.0561415    0.086923    0.194587    0.0698258    0.116085    0.080983    0.104469    0.163957    0.221519    0.0241836    0.0841894    0.076401    0.154674    0.239551    0.0717952    0.0691027    0.0618768    0.0627396    0.193959    0.0950587    0.0905574    0.201376    0.209811    0.123518    0.055315    0.0485843    0.0836709    0.204236    0.272494    0.163189    0.1419    0.137937    0.124958    0.370733    0.284505    0.849666    0.627085    0.48934   
error mean = 0.176032     std = 0.107796
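For reference, a minimal sketch of how such a reprojection-error check can be computed with the OpenCV C API. The names objectPoints, imagePoints, cameraMatrix, distCoeffs, vectorR, vectorT, and numPair are assumptions standing in for the variables produced by the calibration above, not the project's exact code:

// Reproject the world points with the estimated parameters and measure the
// Euclidean distance to the detected image points (assumes <cv.h> and <cmath>).
CvMat* reprojected = cvCreateMat( numPair, 2, CV_32FC1 );
cvProjectPoints2( objectPoints, vectorR, vectorT,
                  cameraMatrix, distCoeffs, reprojected );

double sum = 0.0, sumSq = 0.0;
for ( int i = 0; i < numPair; i++ )
{
    double dx = cvmGet( reprojected, i, 0 ) - cvmGet( imagePoints, i, 0 );
    double dy = cvmGet( reprojected, i, 1 ) - cvmGet( imagePoints, i, 1 );
    double e  = sqrt( dx*dx + dy*dy );
    sum   += e;
    sumSq += e*e;
}
double mean  = sum / numPair;                        // "error mean" above
double stdev = sqrt( sumSq / numPair - mean*mean );  // "std" above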

posted by maetel
http://cmp.felk.cvut.cz/~svoboda/SelfCal/

Center for Machine Perception, Department of Cybernetics, Czech Technical University in Prague
posted by maetel
2010. 7. 6. 23:51 Computer Vision
Test: writing numeric output data to a file

http://cplusplus.com/reference/clibrary/cstdio/fprintf/


#include <cstdio>   // FILE, fopen, fprintf, fclose

int main (int argc, char * const argv[]) {

    // open one output file for image coordinates and one for world coordinates
    FILE *fileI, *fileW;
    fileI = fopen( "image2d.txt", "w" );
    fileW = fopen( "world2d.txt", "w" );

    double ix = 1.1;
    double iy = 1.2;

    double wx = 2.1;
    double wy = 2.2;

    // write tab-separated (x, y) pairs, one pair per line
    fprintf( fileI, "%lf\t%lf\n", ix, iy );
    fprintf( fileW, "%lf\t%lf\n", wx, wy );

    fclose( fileI );
    fclose( fileW );

    return 0;
}


posted by maetel
2010. 7. 4. 17:25 Computation/Algorithm
posted by maetel
2010. 6. 22. 18:36 Computer Vision
Modeling Distortion Of Super-Wide-Angle Lenses For Architectural And Archaeological Applications
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.9416
G. E. Karras ,  G. Mountrakis ,  P. Patias ,  E. Petsa

Georgios Karras
posted by maetel
2010. 6. 22. 18:24 Computer Vision
Material for the lecture "Technische Optimierung" (Technical Optimization):
EVOLUTION AND OPTIMUM SEEKING
by Hans-Paul Schwefel


posted by maetel
2010. 6. 22. 17:51 Computer Vision
Rafael Grompone von Gioi, Jérémie Jakubowicz, Jean-Michel Morel, Gregory Randall, LSD: A Fast Line Segment Detector with a False Detection Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 722-732, Apr. 2010. doi:10.1109/TPAMI.2008.300


http://www.ipol.im/pub/algo/gjmr_line_segment_detector/

Wow, excellent! Of all the papers I've come across in the past two years, this is the one I like best, and it actually speaks to me. (Granted, that's mostly because the others went right over my head...) I really do prefer this kind of common-sense, fundamentals-first approach.


http://en.wikipedia.org/wiki/Linear_time#Linear_time


posted by maetel
2010. 6. 22. 17:31 Computer Vision
Luis Alvarez, Luis Gómez and J. Rafael Sendra
An Algebraic Approach to Lens Distortion by Line Rectification
Journal of Mathematical Imaging and Vision, vol. 35, no. 1, pp. 36-50, September 2009.



posted by maetel
2010. 6. 22. 16:23 Computer Vision
posted by maetel
2010. 6. 14. 22:14 Computer Vision
E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Englewood Cliffs, NJ: Prentice-Hall, 1998.

(google books overview)
posted by maetel
2010. 6. 14. 22:13 Computer Vision
ref.
Learning OpenCV
Chapter 11: Camera Models and Calibration


Ibn al-Haytham (Alhazen), Book of Optics, 1038

Descartes
Kepler
Galileo
Newton
Hooke
Euler
Fermat
Snell

J. J. O'Connor and E. F. Roberson, "Light through the ages: Ancient Greece to Maxwell," http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Light_1.html

E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Englewood Cliffs, NJ: Prentice-Hall, 1998.

B. Jaehne, Digital Image Processing, 3rd ed., Berlin: Springer-Verlag, 1995.

B. Jaehne, Practical Handbook on Image Processing for Scientific Applications, Boca Raton, FL: CRC Press, 1997

R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge, UK: Cambridge University Press, 2006.

D. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Englewood Cliffs, NJ: Prentice-Hall, 2003.

L. G. Shapiro and G. C. Stockman, Computer Vision, Englewood Cliffs, NJ: Prentice-Hall, 2002

G. Xu and Z. Zhang, Epipolar Geometry in Stereo, Motion and Object Recognition, Dordrecht: Kluwer, 1996




posted by maetel
2010. 6. 12. 01:50 Computer Vision




Thrun, Burgard & Fox, Probabilistic Robotics (2006), pp. 55-66


posted by maetel
2010. 6. 11. 21:28 Computer Vision
- Patent title: "Camera intrinsic-parameter calibration system and camera calibration method using concentric circle patterns" (동심원 패턴을 이용한 카메라 내부변수 보정 시스템 및 카메라 보정 방법)
- Registrants: Jun-Sik Kim, In-So Kweon, KAIST
- Registration number: 386090

http://bsrc.kaist.ac.kr/board/read.cgi?board=s2_license

http://ppms.kaist.ac.kr:8087/sub01_1_view.html?mode=patent&ref_code=P-01246


posted by maetel
2010. 6. 8. 21:39 Computer Vision
Camera Calibration Software, 4D View Solutions: http://r24085.ovh.net/technology.html


Perception research group: Interpretation and Modeling of Images and Videos
&
MOAIS research group
in
GrImage (Grid and Image) lab, INRIA Grenoble

 
posted by maetel
2010. 6. 8. 14:18 Computer Vision
G. Jiang and L. Quan. Detection of concentric circles for camera calibration. Computer Vision, IEEE International Conference on, 1:333– 340, 2005.

Long Quan


matlab code

posted by maetel
2010. 6. 8. 12:27 Computer Vision
CAMERA CALIBRATION WITHOUT METRIC INFORMATION USING 1D OBJECTS

Xiaochun Cao and Hassan Foroosh
School of Computer Science, University of Central Florida



Classical techniques for camera calibration [1, 2, 3] require a so called calibration rig, with a set of correspondences between known points in the 3D space and their projections in the 2D image plane. Recent techniques propose more flexible plane-based calibration approaches [4, 5].

posted by maetel
2010. 6. 4. 22:16 Computer Vision
The OpenCV function cvFindCornerSubPix (or cv::cornerSubPix)

ref.
Learning OpenCV: Chapter 10. Tracking and Motion: "Subpixel Corners"
319p: If you are processing images for the purpose of extracting geometric measurements, as opposed to extracting features for recognition, then you will normally need more resolution than the simple pixel values supplied by cvGoodFeaturesToTrack(). That is, pixel locations come with integer coordinates whereas we sometimes require real-valued (subpixel) coordinates.

source code file: /opencv/src/cv/cvcornersubpix.cpp
link: https://code.ros.org/trac/opencv/browser/tags/2.1/opencv/src/cv/cvcornersubpix.cpp


fitting a curve (a parabola)
ref. newer techniques
Lucchese02
Chen05

CvTermCriteria
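A minimal usage sketch of cvFindCornerSubPix() together with a CvTermCriteria; the grayscale image, the corner array, and the window size here are illustrative assumptions, not this project's code:

// Refine corners found at integer precision (e.g. by cvGoodFeaturesToTrack)
// to subpixel accuracy on a single-channel grayscale image.
const int MAX_CORNERS = 100;
CvPoint2D32f corners[MAX_CORNERS];
int cornerCount = 0;   // assumed to be filled by the pixel-precision detector

cvFindCornerSubPix( grayImage, corners, cornerCount,
                    cvSize(5, 5),    // half-size of the search window (11x11 in total)
                    cvSize(-1, -1),  // no dead zone in the middle
                    cvTermCriteria( CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 30, 0.01 ) );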


icvGetRectSubPix_8u32f_C1R()
definition: source code file: /opencv/src/cv/cvsamplers.cpp


icvSepConvSmall3_32f()
definition: source code file: /opencv/src/cv/cvderiv.cpp


posted by maetel
2010. 6. 2. 23:53 Computer Vision
The cvCalibrateCamera2() function is defined in the file /opencv/src/cv/cvcalibration.cpp.

It can also be found on the OpenCV Trac web page: /branches/OPENCV_1_0/opencv/src/cv/cvcalibration.cpp





ref. Learning OpenCV: Chapter 11 Camera Models and Calibration

378p: "Calibration"
http://www.vision.caltech.edu/bouguetj/calib_doc/

388p: Intrinsic parameters are directly tied to the 3D geometry (and hence the extrinsic parameters) of where the chessboard is in space; distortion parameters are tied to the 3D geometry of how the pattern of points gets distorted, so we deal with the constraints on these two classes of parameters separately.

389p: The algorithm OpenCV uses to solve for the focal lengths and offsets is based on Zhang's method [Zhang00], but OpenCV uses a different method based on Brown [Brown71] to solve for the distortion parameters.


1) number of views

Among the arguments, pointCounts is an integer 1xM or Mx1 vector (where M is the number of calibration pattern views).
In our case, however, there is only one input frame at a time, so M = 1.
int numView = 1; // number of calibration pattern views
// integer "1*numView" vector
CvMat* countsP = cvCreateMat( numView, 1, CV_32SC1 );
// the sum of vector elements must match the size of objectPoints and imagePoints
// cvmSet( countsP, 0, 0, numPair );    // <-- this raises an error; changed to the line below
cvSet( countsP, cvScalar(numPair) );
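For reference, a minimal sketch of how these pieces can feed into the actual call. The matrices objectPoints and imagePoints, the value numPair, and the 320x240 image size (suggested by the principal-point values in the logs above) are assumptions, not the project's exact code:

// Assumed setup: numPair correspondences collected for a single view (M = 1).
CvMat* objectPoints = cvCreateMat( numPair, 3, CV_32FC1 ); // world points (X, Y, 0)
CvMat* imagePoints  = cvCreateMat( numPair, 2, CV_32FC1 ); // detected image points
CvMat* cameraMatrix = cvCreateMat( 3, 3, CV_32FC1 );
CvMat* distCoeffs   = cvCreateMat( 4, 1, CV_32FC1 );       // k1, k2, p1, p2
CvMat* vectorR      = cvCreateMat( 1, 3, CV_32FC1 );       // 1xn with n = 1 view (see 2) below)
CvMat* vectorT      = cvCreateMat( 1, 3, CV_32FC1 );

cvCalibrateCamera2( objectPoints, imagePoints, countsP,
                    cvSize(320, 240),           // input image size
                    cameraMatrix, distCoeffs,
                    vectorR, vectorT, 0 );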


2) Rotation vector and translation vector

If the matrices that will receive the extrinsic parameters among the outputs of cvCalibrateCamera2() are created as follows,
// extrinsic parameters
CvMat* vectorR  = cvCreateMat( 3, 1, CV_32FC1 ); // rotation vector
CvMat* vectorT  = cvCreateMat( 3, 1, CV_32FC1 ); // translation vector

the following error messages appear:

OpenCV ERROR: Bad argument (the output array of rotation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 or nx9 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/cvcalibration.cpp(1488)

OpenCV ERROR: Bad argument (the output array of translation vectors must be 3-channel 1xn or nx1 array or 1-channel nx3 array, where n is the number of views)
    in function cvCalibrateCamera2, ../../src/cv/

They must be changed as follows:
// extrinsic parameters
CvMat* vectorR  = cvCreateMat( 1, 3, CV_32FC1 ); // rotation vector
CvMat* vectorT  = cvCreateMat( 1, 3, CV_32FC1 ); // translation vector


3) Rotation matrix

To obtain the camera rotation in the desired form, the rotation vector must be converted into a rotation matrix (see the explanation below).

Learning OpenCV: 394p
"(The rotation vector) represents an axis in three-dimensional space in the camera coordinate system around which (the pattern) was rotated and where the length or magnitude of the vector encodes the counterclock-wise angle of the rotation. Each of these rotation vectors can be converted to a 3-by-3 rotation matrix by calling cvRodrigues2()."








Check #1. Number of corresponding point pairs

The cvCalibrateCamera2() function is said to perform camera calibration once there are four valid corresponding point pairs, but that only means the call returns values without a runtime error; the result is not necessarily accurate. (OpenCV's algorithm estimates the lens distortion coefficients and the camera intrinsic parameters separately. For the lens distortion it solves for five unknowns, three radial and two tangential distortion coefficients, so in theory three 2D (x, y) image points, i.e. six measured values, are enough for that computation.)

In our program, however, calibrating the camera from four corresponding pairs gives the following result.

A case where pattern recognition succeeded with four corresponding point pairs

Verification: using the calibration obtained from the four corresponding pairs shown in the left image, the four pattern points are reprojected onto the input image.


 
frame # 103  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=1958.64 0 cx=160.37
0 fy=792.763 cy=121.702
0 0 1

lens distortion
k1 = -8.17823
k2 = -0.108369
p1 = -0.388965
p2 = -0.169033

rotation vector
4.77319  63.4612  0.300428

translation vector
-130.812  -137.452  714.396


Re-checking...

A case where pattern recognition succeeded with four corresponding point pairs

Verification: using the calibration obtained from the four corresponding pairs shown in the left image, the four pattern points are reprojected onto the input image.



frame # 87  ---------------------------
# of found lines = 5 vertical, 5 horizontal
vertical lines:
horizontal lines:
p.size = 25
CRimage.size = 25
# of corresponding pairs = 4 = 4

camera matrix
fx=372.747 0 cx=159.5
0 fy=299.305 cy=119.5
0 0 1

lens distortion
k1 = -7.36674e-14
k2 = 8.34645e-14
p1 = -9.57187e-15
p2 = -4.6854e-15

rotation vector
-0.276568  -0.125119  -0.038675

translation vector
-196.841  -138.012  168.806


In other words, the function does arithmetically produce a camera matrix from four corresponding pairs. With the current pattern, however, a case with only four corresponding pairs usually means that the four corner points of a single grid cell in the pattern were detected. The coordinates of four adjacent points cannot be expected to reflect the effect of lens distortion sufficiently, so the result is effectively the same as assuming no lens distortion, k1 = 0, as in the output above.

One more thing: the first thing cvCalibrateCamera2() does internally is call cvConvertPointsHomogeneous() to convert the corresponding world and image coordinates given as input matrices into one-dimensional homogeneous coordinates. The documentation for that function notes: "It is always safe to use the function with number of points N >= 5, or to use multi-channel Nx1 or 1xN arrays."

The function assumes that the camera lens has no skew.
Fixed-value initialization of the intrinsic parameters is possible only when the pattern used to obtain the correspondences is planar (z = 0) ("where z-coordinates of the object points must be all 0's").
 
 
posted by maetel
2010. 6. 2. 20:57 Computer Vision
D.C. Brown, Close-Range Camera Calibration, Photogrammetric Engineering, pages 855-866, Vol. 37, No. 8, 1971.









posted by maetel
2010. 5. 30. 01:59

(This post is protected. Enter the password to view its contents.)

2010. 5. 27. 20:53 Computer Vision
virtual object rendering test
Test of compositing virtual graphics

Compositing OpenGL graphics with an OpenCV image

Combining the camera input image, captured with OpenCV functions and stored as an IplImage, with the graphics drawn by OpenGL functions


Way #1.

Turn the image frame captured from the camera with OpenCV into a texture, place it as the background of the OpenGL display window (texture-mapped onto a plane), and draw the graphics on top of it for display.
ref. http://cafe.naver.com/opencv/12266

To sum up after some digging:
if the IplImage (OpenCV's image data structure) can be handed to glTexImage2D(), the OpenGL function that performs texture mapping, as its input texture, the job is done. (A rough sketch follows the references below.)
ref.
http://www.gamedev.net/community/forums/topic.asp?topic_id=205527
http://www.rauwendaal.net/blog/opencvandopengl-1
ARTag source code: cfar_code/OpenGL/opengl_only_test/opengl_only_test.cpp
ARTag source code: cfar_code/IntroARProg/basic_artag_opengl/basic_artag_opengl.cpp
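A minimal sketch of that idea, assuming an 8-bit 3-channel BGR IplImage with no row padding (widthStep == width*3), a texture object already bound with glBindTexture(), and ignoring the power-of-two size restriction of older OpenGL; it is an illustration, not this project's code:

// Upload an OpenCV IplImage (8-bit, 3-channel BGR) as the currently bound GL texture.
void uploadFrameAsTexture( const IplImage* frame )
{
    glPixelStorei( GL_UNPACK_ALIGNMENT, 1 );     // rows assumed tightly packed
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB,
                  frame->width, frame->height, 0,
                  GL_BGR_EXT, GL_UNSIGNED_BYTE,  // OpenCV stores pixels as BGR
                  frame->imageData );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
}

The textured quad is then drawn as the background of the scene before the virtual objects are rendered on top of it.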

Problem found during testing:
Calling cvRetrieveFrame() from glutDisplayFunc() and driving it with glutMainLoop(), instead of running it in a while loop and showing the result with cvShowImage(), takes a long time. (cvGrabFrame() is fine in that setting.)


ref. OpenGL Programming Guide - Chapter 9 - Texture Mapping
Textures are simply rectangular arrays of data - for example, color data, luminance data, or color and alpha data. The individual values in a texture array are often called texels.

The data describing a texture may consist of one, two, three, or four elements per texel, representing anything from a modulation constant to an (R, G, B, A) quadruple.

A texture object stores texture data and makes it readily available. You can now control many textures and go back to textures that have been previously loaded into your texture resources.


After six days of all kinds of trial and error, only a few lines of source code to show for it.

The glLoadMatrixf() / glLoadMatrixd() function




Way #2.

Pass the image frame obtained from the camera in OpenCV, together with the OpenGL graphics computed from it, back into an OpenCV IplImage. (See the sketch after the references below.)

ref.
http://cafe.naver.com/opencv/12622


http://webcache.googleusercontent.com/search?q=cache:xUG17-FlHQMJ:www.soe.ucsc.edu/classes/cmps260/Winter99/Winter99/handouts/proj1/proj1_99.html+tsai+opengl&cd=5&hl=ko&ct=clnk&gl=kr


http://www.google.com/codesearch/p?hl=ko#zI2h2OEMZ0U/~mgattass/ra/software/tsai.zip%7CsGIrNzsqK4o/tsai/src/main.c&q=tsai%20glut&l=9

http://www.google.com/codesearch/p?hl=ko#XWPk_ZdtAX4/classes/cmps260/Winter99/handouts/proj1/cs260proj1.tar.gz|DRo4_7nUzpo/CS260/camera.c&q=tsai%20glut&d=7
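A minimal sketch of reading the rendered OpenGL frame buffer back into an IplImage, which can then be handled with the usual OpenCV calls (e.g. cvShowImage). The 8-bit 3-channel destination image of viewport size, with no row padding, is an assumption for illustration:

// Read the current OpenGL frame buffer back into an OpenCV IplImage (8-bit, 3-channel BGR).
void readFramebufferToIplImage( IplImage* dst )
{
    glPixelStorei( GL_PACK_ALIGNMENT, 1 );       // rows assumed tightly packed
    glReadPixels( 0, 0, dst->width, dst->height,
                  GL_BGR_EXT, GL_UNSIGNED_BYTE, dst->imageData );
    cvFlip( dst, NULL, 0 );  // OpenGL's origin is bottom-left, OpenCV's is top-left
}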


posted by maetel
2010. 5. 26. 22:59 Computer Vision
2010/02/10 - [Visual Information Processing Lab] - Seong-Woo Park & Yongduek Seo & Ki-Sang Hong
2010/05/18 - [Visual Information Processing Lab] - virtual studio 구현: camera calibration test



1. Computing the intrinsic parameters

The cvCalibrateCamera2() function is used to obtain the camera intrinsic/extrinsic parameters and the lens distortion coefficients.


frame # 191  ---------------------------
# of found lines = 8 vertical, 6 horizontal
vertical lines:
horizontal lines:
p.size = 48
CRimage.size = 48
# of corresponding pairs = 15 = 15

camera matrix
fx=286.148 0 cx=207.625
0 fy=228.985 cy=98.8437
0 0 1

lens distortion
k1 = 0.0728017
k2 = -0.0447815
p1 = -0.0104295
p2 = 0.00914935

rotation vector
-0.117104  -0.109022  -0.0709096

translation vector
-208.234  -160.983  163.298



Using this result, cvProjectPoints2() was used to find the image points corresponding to the pattern points; the outcome is shown below.




1-1.

Instead of cvCalibrateCamera2(), which computes both the intrinsic and the extrinsic parameters, try cvInitIntrinsicParams2D(), which computes only the intrinsic parameters.
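A minimal sketch of that alternative, under the assumption that the C function cvInitIntrinsicParams2D() is available in this OpenCV version and that objectPoints (planar, z = 0), imagePoints, countsP, and the 320x240 image size from the earlier snippets are reused:

// Initial intrinsic estimate only (no extrinsics, no distortion), from a planar pattern.
CvMat* cameraMatrix = cvCreateMat( 3, 3, CV_64FC1 );
cvInitIntrinsicParams2D( objectPoints, imagePoints, countsP,
                         cvSize(320, 240), cameraMatrix, 1.0 );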



2. Rectification using the lens distortion (kappa1, kappa2)

When pattern recognition succeeds, the camera calibration result is naturally accurate, and from it the object or graphic coordinates needed to composite a virtual object can be computed in real time. The current cause of pattern-recognition failures in our program is error in the line detection; this error has several sources, but the largest is lens distortion, which we do not yet take into account. As a result, two or three lines are actually detected for what should be a single line (the NMS algorithm alone has limits in reducing this error), and the positional error of the intersection points computed from them becomes the decisive error in the cross-ratio computation. Because the present pattern generation and pattern recognition depend entirely on the cross-ratio values, this problem must be solved. So let us take lens distortion into account, unwarp (rectify) the input image, and then apply the existing pattern-recognition algorithm.

ref.
Learning OpenCV: Chapter 6: Image Transforms
opencv v2.1 documentation — Geometric Image Transformations


1) Undistortion

Learning OpenCV: 396p
"OpenCV provides us with a ready-to-use undistortion algorithm that takes a raw image and the distortion coefficients from cvCalibrateCamera2() and produces a corrected image (see Figure 11-12). We can access this algorithm either through the function cvUndistort2(), which does everything we need in one shot, or through the pair of routines cvInitUndistortMap() and cvRemap(), which allow us to handle things a little more efficiently for video or other situations where we have many images from the same camera. ( * We should take a moment to clearly make a distinction here between undistortion, which mathematically removes lens distortion, and rectifi cation, which mathematically aligns the images with respect to each other. )

Input image (with lens distortion)

Output image (distortion removed)
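A minimal sketch of the cvInitUndistortMap() + cvRemap() route from the quote above, which is the more efficient option when many frames come from the same camera; cameraMatrix, distCoeffs, and frame are assumed to hold the calibration results and the current input image:

// Precompute the undistortion maps once, then remap every incoming frame.
IplImage* mapx = cvCreateImage( cvGetSize(frame), IPL_DEPTH_32F, 1 );
IplImage* mapy = cvCreateImage( cvGetSize(frame), IPL_DEPTH_32F, 1 );
cvInitUndistortMap( cameraMatrix, distCoeffs, mapx, mapy );

IplImage* undistorted = cvCloneImage( frame );
cvRemap( frame, undistorted, mapx, mapy,
         CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS, cvScalarAll(0) );
// For a single image, cvUndistort2( frame, undistorted, cameraMatrix, distCoeffs )
// does the same job in one call.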






 

# of corresponding pairs = 30 = 30

camera matrix
fx=94.6664 0 cx=206.772
0 fy=78.3349 cy=158.782
0 0 1

lens distortion
k1 = 0.0130734
k2 = -0.000955421
p1 = 0.00287948
p2 = 0.00158042









            if ( ( k1 > 0.3 && k1 < 0.6 ) && ( cx > 150.0 && cx < 170.0 ) && ( cy > 110 && cy < 130 ) )


# of corresponding pairs = 42 = 42

camera matrix
fx=475.98 0 cx=162.47
0 fy=384.935 cy=121.552
0 0 1

lens distortion
k1 = 0.400136
k2 = -0.956089
p1 = 0.00367761
p2 = 0.00547217







2) Rectification




cvInitUndistortRectifyMap



3. line detection




4. Pattern recognition (finding corresponding point pairs)




5. Computing the extrinsic parameters (using the result of step 4, with lens distortion = 0 as input)
cvFindExtrinsicCameraParams2() (see the sketch after this list)



6. Reprojection
to be done on the rectified image obtained in step 2
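For step 5, a minimal sketch of the call, reusing the intrinsic matrix from step 1 and passing zero distortion because the image has already been undistorted in step 2; objectPoints and imagePoints are assumed to hold the correspondences from step 4, so this is an illustration rather than the project's exact code:

// Extrinsic parameters only: pose of the planar pattern relative to the camera.
CvMat* zeroDist = cvCreateMat( 4, 1, CV_32FC1 );
cvSetZero( zeroDist );                            // distortion already removed by rectification
CvMat* vectorR  = cvCreateMat( 1, 3, CV_32FC1 );
CvMat* vectorT  = cvCreateMat( 1, 3, CV_32FC1 );
cvFindExtrinsicCameraParams2( objectPoints, imagePoints,
                              cameraMatrix, zeroDist,
                              vectorR, vectorT );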




posted by maetel
2010. 5. 26. 16:24 Method/Nature


posted by maetel