2010. 3. 17. 00:52 Computer Vision
ARToolKit - simpleTest



console:
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
*** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 159.250000 131.750000 104.800000 1.012757
350.47574 0.00000 158.25000 0.00000
0.00000 363.04709 120.75000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
Opening Data File Data/object_data2
About to load 2 Models
Read in No.1
Read in No.2
Objectfile num = 2
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(21.470911)
xdiv2(sqrt(lx1)) =  16(10.816654), ydiv2(sqrt(lx2)) =  16(10.770330)
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(10.770330)
xdiv2(sqrt(lx1)) =  64(83.815273), ydiv2(sqrt(lx2)) =  32(61.400326)
xdiv2(sqrt(lx1)) =  16(23.706539), ydiv2(sqrt(lx2)) =  16(4.472136)
camera transformation: 47.526724  72.503504  361.557196
camera transformation: 48.037951  72.822603  363.026011
camera transformation: 48.046785  72.797217  362.906210
xdiv2(sqrt(lx1)) =  16(21.954498), ydiv2(sqrt(lx2)) =  16(6.082763)
xdiv2(sqrt(lx1)) =  64(89.358827), ydiv2(sqrt(lx2)) =  64(87.367042)
camera transformation: 29.765561  60.651283  316.943385
xdiv2(sqrt(lx1)) =  64(93.193347), ydiv2(sqrt(lx2)) =  64(92.135769)
camera transformation: 29.336377  31.258768  308.552913
camera transformation: 29.326996  31.265709  308.560060
camera transformation: 29.317593  31.272594  308.567678
camera transformation: 29.308167  31.279422  308.575767
camera transformation: 29.300434  31.279400  308.471567
camera transformation: 29.294517  31.279010  308.386048
camera transformation: 29.290911  31.281587  308.389341
camera transformation: 29.289106  31.282873  308.391013
camera transformation: 29.287903  31.283729  308.392137
camera transformation: 29.286700  31.284584  308.393268
xdiv2(sqrt(lx1)) =  64(95.084173), ydiv2(sqrt(lx2)) =  64(93.048375)
camera transformation: 26.966042  18.136324  304.545556
camera transformation: 26.975528  18.123378  304.876362
camera transformation: 26.974050  18.123228  304.940890
camera transformation: 26.972437  18.124391  304.943633
camera transformation: 26.971361  18.125165  304.945467
camera transformation: 26.808263  18.230519  305.010947
xdiv2(sqrt(lx1)) =  64(95.084173), ydiv2(sqrt(lx2)) =  64(94.047860)
camera transformation: 26.002414  9.524376  302.800215
xdiv2(sqrt(lx1)) =  64(94.085068), ydiv2(sqrt(lx2)) =  64(95.047357)
camera transformation: 26.413729  0.689117  303.645529
xdiv2(sqrt(lx1)) =  64(94.085068), ydiv2(sqrt(lx2)) =  64(93.048375)
camera transformation: 25.907495  -6.101957  305.616547
xdiv2(sqrt(lx1)) =  64(92.135769), ydiv2(sqrt(lx2)) =  64(94.132885)
camera transformation: 25.412818  -5.909871  306.780215
xdiv2(sqrt(lx1)) =  64(91.350972), ydiv2(sqrt(lx2)) =  64(93.343452)
camera transformation: 25.781305  2.094997  307.981181
xdiv2(sqrt(lx1)) =  64(92.913939), ydiv2(sqrt(lx2)) =  64(93.648278)
camera transformation: 27.438552  14.266152  310.693037
camera transformation: 27.291663  13.634920  311.326695
xdiv2(sqrt(lx1)) =  64(93.557469), ydiv2(sqrt(lx2)) =  64(93.059121)
camera transformation: 29.431110  30.722648  311.787341
camera transformation: 29.182398  30.236816  310.084471
xdiv2(sqrt(lx1)) =  64(92.417531), ydiv2(sqrt(lx2)) =  64(89.627005)
camera transformation: 33.543470  46.722727  318.630344
camera transformation: 33.279319  46.231533  316.701553
camera transformation: 33.218193  46.126294  316.268524
xdiv2(sqrt(lx1)) =  64(93.770998), ydiv2(sqrt(lx2)) =  64(94.762862)
camera transformation: 22.185723  48.702070  301.635290
xdiv2(sqrt(lx1)) =  64(91.350972), ydiv2(sqrt(lx2)) =  64(91.350972)
camera transformation: 31.843662  22.584935  312.932857
xdiv2(sqrt(lx1)) =  64(87.281155), ydiv2(sqrt(lx2)) =  64(86.284413)
camera transformation: 45.529216  19.369259  325.624182
xdiv2(sqrt(lx1)) =  64(81.614950), ydiv2(sqrt(lx2)) =  64(80.752709)
camera transformation: 73.852204  34.314748  342.625602
camera transformation: 73.974162  34.334751  343.237999
camera transformation: 73.993076  34.340643  343.349751
xdiv2(sqrt(lx1)) =  64(78.917679), ydiv2(sqrt(lx2)) =  64(77.103826)
camera transformation: 88.645309  47.349086  356.684110
camera transformation: 88.758164  47.400553  357.231157
camera transformation: 88.695447  47.401908  357.055015
camera transformation: 88.700800  47.407119  357.092444
camera transformation: 88.706174  47.412326  357.129963
xdiv2(sqrt(lx1)) =  64(76.485293), ydiv2(sqrt(lx2)) =  64(75.504967)
camera transformation: 95.125085  56.840372  365.471078
xdiv2(sqrt(lx1)) =  64(74.732858), ydiv2(sqrt(lx2)) =  64(73.756356)
camera transformation: 101.360782  64.847700  374.473468
xdiv2(sqrt(lx1)) =  64(74.953319), ydiv2(sqrt(lx2)) =  64(72.780492)
camera transformation: 97.170289  68.278512  376.906534
xdiv2(sqrt(lx1)) =  64(75.927597), ydiv2(sqrt(lx2)) =  64(73.756356)
camera transformation: 86.011813  69.023996  372.861548
camera transformation: 86.057924  69.050586  373.066477
camera transformation: 86.063062  69.057045  373.103339
camera transformation: 86.076876  69.074292  373.202098
xdiv2(sqrt(lx1)) =  64(76.687678), ydiv2(sqrt(lx2)) =  64(73.545904)
camera transformation: 69.732429  66.291612  368.969634
xdiv2(sqrt(lx1)) =  64(79.624117), ydiv2(sqrt(lx2)) =  64(74.148500)
camera transformation: 46.840577  63.628929  363.768160
camera transformation: 46.840007  63.632425  363.790938
camera transformation: 46.839450  63.635925  363.813810
camera transformation: 46.838907  63.639426  363.836775
camera transformation: 46.837526  63.648777  363.898466
camera transformation: 46.974486  63.853253  365.578761
xdiv2(sqrt(lx1)) =  64(79.429214), ydiv2(sqrt(lx2)) =  64(74.813100)
camera transformation: 23.081293  61.745840  363.657219
xdiv2(sqrt(lx1)) =  64(78.089692), ydiv2(sqrt(lx2)) =  64(75.538070)
camera transformation: 8.900281  62.544139  365.768910
xdiv2(sqrt(lx1)) =  64(77.103826), ydiv2(sqrt(lx2)) =  64(74.813100)
camera transformation: 0.378742  64.940559  369.306745
xdiv2(sqrt(lx1)) =  64(75.953933), ydiv2(sqrt(lx2)) =  64(73.681748)
camera transformation: -6.822503  69.683364  373.194852
xdiv2(sqrt(lx1)) =  64(74.330344), ydiv2(sqrt(lx2)) =  64(72.560320)
camera transformation: -9.914774  75.492749  381.592839
camera transformation: -9.924722  75.520509  381.798228
camera transformation: -9.924512  75.513877  381.751701
xdiv2(sqrt(lx1)) =  64(102.420701), ydiv2(sqrt(lx2)) =  64(105.546198)
camera transformation: 25.643794  -12.219666  274.388407
xdiv2(sqrt(lx1)) =  64(101.271911), ydiv2(sqrt(lx2)) =  64(104.235311)
camera transformation: 28.719062  -28.140558  278.831061
xdiv2(sqrt(lx1)) =  64(101.271911), ydiv2(sqrt(lx2)) =  64(102.420701)
camera transformation: 29.939512  -32.147970  280.821053
xdiv2(sqrt(lx1)) =  64(101.434708), ydiv2(sqrt(lx2)) =  64(102.420701)
camera transformation: 30.361984  -29.717296  279.929286
xdiv2(sqrt(lx1)) =  64(101.434708), ydiv2(sqrt(lx2)) =  64(103.406963)
camera transformation: 30.174193  -29.017554  279.290796
xdiv2(sqrt(lx1)) =  64(101.788997), ydiv2(sqrt(lx2)) =  64(102.591423)
camera transformation: 32.117728  -25.070535  281.685938
xdiv2(sqrt(lx1)) =  64(100.662803), ydiv2(sqrt(lx2)) =  64(97.862148)
camera transformation: 38.831844  -18.496300  291.887044
camera transformation: 38.841845  -18.503971  291.843948
camera transformation: 38.843803  -18.505502  291.835079
camera transformation: 38.843753  -18.505592  291.835153
camera transformation: 38.843704  -18.505682  291.835227
camera transformation: 37.980938  -19.410609  288.045910
xdiv2(sqrt(lx1)) =  64(97.267672), ydiv2(sqrt(lx2)) =  64(89.050547)
camera transformation: 46.227197  -18.491398  306.727155
camera transformation: 45.798640  -18.550941  304.551447
camera transformation: 45.717704  -18.562751  304.060951
camera transformation: 45.708703  -18.567481  303.930300
camera transformation: 45.713116  -18.570736  303.834144
camera transformation: 45.886030  -18.573805  303.828460
xdiv2(sqrt(lx1)) =  64(94.371606), ydiv2(sqrt(lx2)) =  64(84.929382)
camera transformation: 51.491972  -21.777249  314.936856
camera transformation: 51.397513  -21.778709  314.409264
camera transformation: 51.400957  -21.778955  314.338457
xdiv2(sqrt(lx1)) =  64(90.801982), ydiv2(sqrt(lx2)) =  64(82.000000)
camera transformation: 55.475986  -19.223639  325.819428
xdiv2(sqrt(lx1)) =  64(86.977008), ydiv2(sqrt(lx2)) =  64(78.102497)
camera transformation: 58.131599  -13.545250  340.719883
camera transformation: 58.098739  -13.550505  340.466506
camera transformation: 58.097072  -13.551186  340.341751
xdiv2(sqrt(lx1)) =  64(84.118963), ydiv2(sqrt(lx2)) =  64(75.186435)
camera transformation: 61.767460  -6.498478  353.644297
xdiv2(sqrt(lx1)) =  64(82.219219), ydiv2(sqrt(lx2)) =  64(73.246160)
camera transformation: 63.582900  -1.431624  362.009189
xdiv2(sqrt(lx1)) =  64(81.271151), ydiv2(sqrt(lx2)) =  64(71.309186)
camera transformation: 64.270934  0.570528  368.182904
xdiv2(sqrt(lx1)) =  64(79.056942), ydiv2(sqrt(lx2)) =  64(70.342022)
camera transformation: 64.828170  2.674645  373.263186
xdiv2(sqrt(lx1)) =  64(79.056942), ydiv2(sqrt(lx2)) =  64(70.092796)
camera transformation: 67.302390  5.675445  377.309481
xdiv2(sqrt(lx1)) =  64(78.108898), ydiv2(sqrt(lx2)) =  64(69.123079)
camera transformation: 68.347540  8.393469  381.873230
xdiv2(sqrt(lx1)) =  64(77.162167), ydiv2(sqrt(lx2)) =  64(68.154237)
camera transformation: 69.042038  10.265904  387.758075
xdiv2(sqrt(lx1)) =  64(74.946648), ydiv2(sqrt(lx2)) =  64(66.219333)
camera transformation: 70.165458  11.984999  395.261733
xdiv2(sqrt(lx1)) =  64(73.681748), ydiv2(sqrt(lx2)) =  64(64.031242)
camera transformation: 68.526903  12.455397  403.733036
xdiv2(sqrt(lx1)) =  64(72.422372), ydiv2(sqrt(lx2)) =  32(62.817195)
camera transformation: 67.963463  14.354779  413.581813
xdiv2(sqrt(lx1)) =  64(70.519501), ydiv2(sqrt(lx2)) =  32(60.876925)
camera transformation: 66.137421  17.054604  422.936929
xdiv2(sqrt(lx1)) =  64(69.570109), ydiv2(sqrt(lx2)) =  32(59.908263)
camera transformation: 63.968950  18.263953  431.720565
xdiv2(sqrt(lx1)) =  64(67.052218), ydiv2(sqrt(lx2)) =  32(57.974132)
camera transformation: 62.462242  20.120503  439.513696
xdiv2(sqrt(lx1)) =  64(66.098411), ydiv2(sqrt(lx2)) =  32(57.723479)
camera transformation: 61.941764  22.692295  449.407070
xdiv2(sqrt(lx1)) =  64(65.145990), ydiv2(sqrt(lx2)) =  32(55.785303)
camera transformation: 59.806257  24.724058  458.476846
xdiv2(sqrt(lx1)) =  32(62.936476), ydiv2(sqrt(lx2)) =  32(54.817880)
camera transformation: 59.359972  28.938997  468.908640
xdiv2(sqrt(lx1)) =  32(61.983869), ydiv2(sqrt(lx2)) =  32(53.600373)
camera transformation: 59.681297  34.221026  480.057287
xdiv2(sqrt(lx1)) =  32(61.032778), ydiv2(sqrt(lx2)) =  32(51.662365)
camera transformation: 59.187204  39.293310  491.540486
xdiv2(sqrt(lx1)) =  32(59.135438), ydiv2(sqrt(lx2)) =  32(51.662365)
camera transformation: 61.198962  45.440910  500.587979
xdiv2(sqrt(lx1)) =  32(57.567352), ydiv2(sqrt(lx2)) =  32(49.729267)
camera transformation: 65.380338  52.172304  509.014661
xdiv2(sqrt(lx1)) =  32(56.920998), ydiv2(sqrt(lx2)) =  32(49.729267)
camera transformation: 75.377339  53.673070  514.052120
xdiv2(sqrt(lx1)) =  32(58.821765), ydiv2(sqrt(lx2)) =  32(51.419841)
camera transformation: 93.700312  52.879497  501.357587
xdiv2(sqrt(lx1)) =  32(60.728906), ydiv2(sqrt(lx2)) =  32(53.366656)
camera transformation: 110.786936  47.137332  484.639194
xdiv2(sqrt(lx1)) =  32(62.641839), ydiv2(sqrt(lx2)) =  32(54.129474)
camera transformation: 124.246028  41.597219  472.274088
xdiv2(sqrt(lx1)) =  64(64.560050), ydiv2(sqrt(lx2)) =  32(55.317267)
camera transformation: 135.008844  36.708725  456.711952
xdiv2(sqrt(lx1)) =  64(67.446275), ydiv2(sqrt(lx2)) =  32(57.070132)
camera transformation: 137.579461  31.874597  442.777551
xdiv2(sqrt(lx1)) =  64(67.446275), ydiv2(sqrt(lx2)) =  32(59.228372)
camera transformation: 127.910406  27.598463  434.634811
xdiv2(sqrt(lx1)) =  64(68.410526), ydiv2(sqrt(lx2)) =  32(60.207973)
camera transformation: 112.249511  24.013273  434.361426
xdiv2(sqrt(lx1)) =  64(66.940272), ydiv2(sqrt(lx2)) =  32(59.439044)
camera transformation: 96.317479  19.783530  438.720285
xdiv2(sqrt(lx1)) =  64(64.761099), ydiv2(sqrt(lx2)) =  32(58.463664)
camera transformation: 79.445565  20.994126  448.287909
xdiv2(sqrt(lx1)) =  32(62.817195), ydiv2(sqrt(lx2)) =  32(57.271284)
camera transformation: 66.537571  25.965808  468.052644
xdiv2(sqrt(lx1)) =  32(58.463664), ydiv2(sqrt(lx2)) =  32(53.366656)
camera transformation: 57.936429  29.645999  492.716628
xdiv2(sqrt(lx1)) =  32(58.249464), ydiv2(sqrt(lx2)) =  32(51.971146)
camera transformation: 54.637770  34.427739  505.420015
xdiv2(sqrt(lx1)) =  32(56.515485), ydiv2(sqrt(lx2)) =  32(50.990195)
camera transformation: 55.090245  39.330479  515.034782
xdiv2(sqrt(lx1)) =  32(55.317267), ydiv2(sqrt(lx2)) =  32(50.009999)
camera transformation: 52.836001  41.933900  528.054997
xdiv2(sqrt(lx1)) =  32(54.129474), ydiv2(sqrt(lx2)) =  32(48.052055)
camera transformation: 46.875504  45.714529  537.427256
xdiv2(sqrt(lx1)) =  32(53.366656), ydiv2(sqrt(lx2)) =  32(48.052055)
camera transformation: 42.737270  51.235894  545.906058
xdiv2(sqrt(lx1)) =  32(52.392748), ydiv2(sqrt(lx2)) =  32(47.074409)
camera transformation: 42.607019  57.505001  557.305403
xdiv2(sqrt(lx1)) =  32(52.392748), ydiv2(sqrt(lx2)) =  32(45.891176)
camera transformation: 42.706395  61.832306  564.125210
xdiv2(sqrt(lx1)) =  32(51.419841), ydiv2(sqrt(lx2)) =  32(45.122057)
camera transformation: 44.569870  65.995536  569.286337
xdiv2(sqrt(lx1)) =  32(50.447993), ydiv2(sqrt(lx2)) =  32(45.122057)
camera transformation: 45.407322  69.355797  574.205444
xdiv2(sqrt(lx1)) =  32(50.447993), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 46.935351  72.947170  579.393615
xdiv2(sqrt(lx1)) =  32(49.729267), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 46.394931  76.412312  587.196024
xdiv2(sqrt(lx1)) =  32(49.729267), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 45.200327  81.248592  595.161397
xdiv2(sqrt(lx1)) =  32(49.040799), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 42.080958  89.240594  602.831063
xdiv2(sqrt(lx1)) =  32(48.083261), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 38.516671  97.426379  610.670686
xdiv2(sqrt(lx1)) =  32(46.840154), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 32.168943  103.347263  617.259159
xdiv2(sqrt(lx1)) =  32(45.880279), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 26.487904  108.409702  625.759269
xdiv2(sqrt(lx1)) =  32(45.880279), ydiv2(sqrt(lx2)) =  32(40.521599)
camera transformation: 21.568188  115.906286  633.657622
xdiv2(sqrt(lx1)) =  32(45.221676), ydiv2(sqrt(lx2)) =  32(41.109610)
camera transformation: 22.289380  125.328691  638.688564
xdiv2(sqrt(lx1)) =  32(45.221676), ydiv2(sqrt(lx2)) =  32(40.804412)
camera transformation: 31.223567  126.138528  644.100426
xdiv2(sqrt(lx1)) =  32(43.965896), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 51.375830  123.050894  653.504746
xdiv2(sqrt(lx1)) =  32(43.680659), ydiv2(sqrt(lx2)) =  32(38.832976)
camera transformation: 54.986398  106.280175  660.526525
xdiv2(sqrt(lx1)) =  32(45.607017), ydiv2(sqrt(lx2)) =  32(42.201896)
camera transformation: 36.010280  109.096895  630.149252
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 23.913726  100.715033  624.185981
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(42.953463)
camera transformation: 22.448901  95.809415  630.102810
xdiv2(sqrt(lx1)) =  32(45.354162), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 24.863623  93.022652  628.420166
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(42.953463)
camera transformation: 29.075494  84.506661  626.944774
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.976184)
camera transformation: 32.818225  67.561990  629.946294
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.000000)
camera transformation: 33.948638  50.855880  628.150472
xdiv2(sqrt(lx1)) =  32(46.097722), ydiv2(sqrt(lx2)) =  32(41.773197)
camera transformation: 37.205573  39.658919  626.642224
xdiv2(sqrt(lx1)) =  32(47.074409), ydiv2(sqrt(lx2)) =  32(42.755117)
camera transformation: 48.272787  27.464585  621.807431
xdiv2(sqrt(lx1)) =  32(46.324939), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 48.473773  27.204197  620.426198
xdiv2(sqrt(lx1)) =  32(45.607017), ydiv2(sqrt(lx2)) =  32(44.384682)
camera transformation: 52.061117  29.826330  624.046628
xdiv2(sqrt(lx1)) =  32(44.643029), ydiv2(sqrt(lx2)) =  32(44.643029)
camera transformation: 61.332250  33.508261  630.315033
xdiv2(sqrt(lx1)) =  32(44.384682), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 75.931906  36.455966  643.181794
xdiv2(sqrt(lx1)) =  32(41.976184), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 86.565757  40.688748  656.556474
xdiv2(sqrt(lx1)) =  32(42.755117), ydiv2(sqrt(lx2)) =  32(43.931765)
camera transformation: 90.448446  42.593207  649.464611
xdiv2(sqrt(lx1)) =  32(42.755117), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 93.440056  44.079600  649.385535
xdiv2(sqrt(lx1)) =  32(41.773197), ydiv2(sqrt(lx2)) =  32(44.147480)
camera transformation: 99.813271  46.222595  649.396449
xdiv2(sqrt(lx1)) =  32(41.773197), ydiv2(sqrt(lx2)) =  32(43.174066)
camera transformation: 102.614138  50.375842  655.237988
xdiv2(sqrt(lx1)) =  32(42.201896), ydiv2(sqrt(lx2)) =  32(43.416587)
camera transformation: 107.807744  58.508295  656.829797
xdiv2(sqrt(lx1)) =  32(42.201896), ydiv2(sqrt(lx2)) =  32(42.201896)
camera transformation: 112.601771  64.714215  660.423527
xdiv2(sqrt(lx1)) =  32(42.449971), ydiv2(sqrt(lx2)) =  32(42.449971)
camera transformation: 119.089054  71.487308  666.367434
xdiv2(sqrt(lx1)) =  32(42.449971), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 125.523569  79.032026  674.223484
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 130.053455  84.563865  680.039495
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(40.804412)
camera transformation: 129.275668  86.691488  683.081734
xdiv2(sqrt(lx1)) =  32(41.761226), ydiv2(sqrt(lx2)) =  32(41.484937)
camera transformation: 126.550484  88.506627  682.311115
xdiv2(sqrt(lx1)) =  32(42.059482), ydiv2(sqrt(lx2)) =  32(41.761226)
camera transformation: 121.644265  91.022978  680.049182
xdiv2(sqrt(lx1)) =  32(41.436699), ydiv2(sqrt(lx2)) =  32(42.059482)
camera transformation: 117.740522  96.624935  684.211427
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(41.436699)
camera transformation: 110.276211  100.999170  687.457576
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 97.652541  103.798959  687.316253
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 83.897556  105.688990  687.504940
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 71.000147  108.309008  688.296546
xdiv2(sqrt(lx1)) =  32(40.496913), ydiv2(sqrt(lx2)) =  32(40.853396)
camera transformation: 60.984082  110.858782  687.249040
xdiv2(sqrt(lx1)) =  32(39.560081), ydiv2(sqrt(lx2)) =  32(41.231056)
camera transformation: 51.369035  112.901238  686.984174
xdiv2(sqrt(lx1)) =  32(39.217343), ydiv2(sqrt(lx2)) =  32(39.924930)
camera transformation: 46.166159  112.934981  690.882951
xdiv2(sqrt(lx1)) =  32(39.849718), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 45.556980  106.446232  701.391185
xdiv2(sqrt(lx1)) =  32(38.600518), ydiv2(sqrt(lx2)) =  32(39.217343)
camera transformation: 51.465861  95.475483  707.877746
xdiv2(sqrt(lx1)) =  32(38.327536), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 64.485203  82.420285  719.345051
xdiv2(sqrt(lx1)) =  32(38.078866), ydiv2(sqrt(lx2)) =  32(37.643060)
camera transformation: 72.839799  68.328484  723.620075
xdiv2(sqrt(lx1)) =  32(38.327536), ydiv2(sqrt(lx2)) =  32(38.327536)
camera transformation: 83.043436  61.689418  729.140520
xdiv2(sqrt(lx1)) =  32(39.293765), ydiv2(sqrt(lx2)) =  32(39.051248)
camera transformation: 98.932860  61.449290  721.230400
xdiv2(sqrt(lx1)) =  32(38.897301), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 111.747340  64.018396  725.454519
xdiv2(sqrt(lx1)) =  32(38.897301), ydiv2(sqrt(lx2)) =  32(39.560081)
camera transformation: 119.345649  68.197163  731.840936
xdiv2(sqrt(lx1)) =  32(37.947332), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 123.213043  71.990158  742.869709
xdiv2(sqrt(lx1)) =  32(38.275318), ydiv2(sqrt(lx2)) =  32(38.600518)
camera transformation: 124.928583  75.842952  746.732102
xdiv2(sqrt(lx1)) =  32(38.275318), ydiv2(sqrt(lx2)) =  32(37.947332)
camera transformation: 127.574049  79.661169  755.149807
xdiv2(sqrt(lx1)) =  32(37.336309), ydiv2(sqrt(lx2)) =  32(36.687873)
camera transformation: 128.133593  85.268440  774.011885
xdiv2(sqrt(lx1)) =  32(36.055513), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 125.285369  92.369672  789.529777
xdiv2(sqrt(lx1)) =  32(35.468296), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 125.355465  101.492943  806.804835
xdiv2(sqrt(lx1)) =  32(35.468296), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 126.976068  110.303044  816.147079
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 129.030006  118.465726  823.672709
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 130.345172  124.734486  829.930479
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 132.729278  132.789712  836.521237
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 133.268128  135.881209  839.899669
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 134.236303  135.151276  840.973138
xdiv2(sqrt(lx1)) =  32(34.438351), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 134.174915  135.014464  838.089936
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 132.647385  132.463415  836.767455
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 131.536530  130.889873  836.819482
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 129.640956  129.549781  837.835786
xdiv2(sqrt(lx1)) =  32(34.928498), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 122.416326  127.237410  838.730750
xdiv2(sqrt(lx1)) =  32(33.615473), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 114.681898  130.237589  843.522592
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 105.646409  136.376346  846.852249
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(34.176015)
camera transformation: 96.744903  141.580589  849.498352
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.615473)
camera transformation: 87.025411  145.160624  850.906986
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 79.279170  148.220483  853.990747
xdiv2(sqrt(lx1)) =  32(34.014703), ydiv2(sqrt(lx2)) =  32(33.615473)
camera transformation: 74.042168  149.434630  858.862223
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 73.097454  147.509559  860.838221
xdiv2(sqrt(lx1)) =  32(33.615473), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 76.927423  137.613163  859.651242
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(33.526109)
camera transformation: 90.769451  121.958947  855.920196
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 112.844990  101.514565  857.973255
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 134.242323  85.505279  866.278006
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 158.734774  76.690148  871.135455
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(32.572995)
camera transformation: 184.575685  77.063560  872.870689
xdiv2(sqrt(lx1)) =  32(32.202484), ydiv2(sqrt(lx2)) =  32(32.893768)
camera transformation: 196.739036  81.023969  875.755616
xdiv2(sqrt(lx1)) =  16(31.780497), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 199.430929  81.755961  880.344083
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 200.441920  83.115797  881.501856
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 199.992201  84.056351  882.274928
xdiv2(sqrt(lx1)) =  16(31.304952), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 194.660196  81.144185  879.996992
xdiv2(sqrt(lx1)) =  32(33.105891), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 182.478932  76.688130  862.581188
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(33.837849)
camera transformation: 165.029910  69.956759  846.914938
xdiv2(sqrt(lx1)) =  32(34.539832), ydiv2(sqrt(lx2)) =  32(34.481879)
camera transformation: 148.216787  63.696632  836.139722
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 137.708326  62.434520  837.211154
xdiv2(sqrt(lx1)) =  32(34.176015), ydiv2(sqrt(lx2)) =  32(34.785054)
camera transformation: 130.405199  63.050395  838.392674
xdiv2(sqrt(lx1)) =  32(33.241540), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 121.309247  64.232032  835.057791
xdiv2(sqrt(lx1)) =  32(33.837849), ydiv2(sqrt(lx2)) =  32(35.735137)
camera transformation: 108.562469  62.405185  828.777825
xdiv2(sqrt(lx1)) =  32(33.837849), ydiv2(sqrt(lx2)) =  32(36.400549)
camera transformation: 94.802902  58.440889  818.382655
xdiv2(sqrt(lx1)) =  32(34.481879), ydiv2(sqrt(lx2)) =  32(36.400549)
camera transformation: 78.282615  55.985707  816.155844
xdiv2(sqrt(lx1)) =  32(34.481879), ydiv2(sqrt(lx2)) =  32(36.138622)
camera transformation: 66.155921  56.631358  817.613419
xdiv2(sqrt(lx1)) =  32(33.526109), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 56.963787  62.544571  835.411848
xdiv2(sqrt(lx1)) =  32(32.893768), ydiv2(sqrt(lx2)) =  32(35.440090)
camera transformation: 51.797430  71.337717  857.152748
xdiv2(sqrt(lx1)) =  16(31.953091), ydiv2(sqrt(lx2)) =  32(34.481879)
camera transformation: 49.113705  81.742516  877.848618
xdiv2(sqrt(lx1)) =  16(30.675723), ydiv2(sqrt(lx2)) =  32(34.205263)
camera transformation: 45.462997  91.780838  899.226448
xdiv2(sqrt(lx1)) =  16(30.083218), ydiv2(sqrt(lx2)) =  32(34.205263)
camera transformation: 44.191492  101.376386  913.882100
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 48.080357  104.490597  927.737990
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  32(32.984845)
camera transformation: 47.183950  99.357191  943.043611
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.280025)
camera transformation: 49.021185  95.144362  936.398343
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.280025)
camera transformation: 51.004678  92.495266  930.018137
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(33.241540)
camera transformation: 52.578685  91.887273  928.649743
xdiv2(sqrt(lx1)) =  16(29.410882), ydiv2(sqrt(lx2)) =  32(32.984845)
camera transformation: 54.575349  90.917785  934.008053
xdiv2(sqrt(lx1)) =  16(29.732137), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 61.171677  83.994290  946.765346
xdiv2(sqrt(lx1)) =  16(27.513633), ydiv2(sqrt(lx2)) =  16(32.015621)
camera transformation: 68.022089  79.041471  974.920725
xdiv2(sqrt(lx1)) =  16(27.856777), ydiv2(sqrt(lx2)) =  16(31.780497)
camera transformation: 70.566339  79.631419  1000.697974
camera transformation: 70.566793  79.629331  1000.700421
camera transformation: 70.566983  79.628141  1000.699678
camera transformation: 70.566895  79.627903  1000.696519
camera transformation: 70.566807  79.627665  1000.693356
camera transformation: 68.945868  77.838917  973.900413
camera transformation: 68.957132  77.842488  974.011316
camera transformation: 68.909571  77.778916  973.205540
camera transformation: 68.836672  77.680396  971.969110
camera transformation: 68.681145  77.480247  969.402115
xdiv2(sqrt(lx1)) =  16(26.000000), ydiv2(sqrt(lx2)) =  16(31.048349)
camera transformation: 74.648401  78.625620  981.499671
camera transformation: 74.293364  78.221936  976.312472
camera transformation: 74.112997  77.989945  973.535124
xdiv2(sqrt(lx1)) =  16(26.000000), ydiv2(sqrt(lx2)) =  16(31.048349)
camera transformation: 77.107583  77.283336  979.854111
xdiv2(sqrt(lx1)) =  16(23.769729), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 75.261915  78.521429  988.152351
xdiv2(sqrt(lx1)) =  16(24.166092), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.319139  82.949946  995.178316
xdiv2(sqrt(lx1)) =  16(24.166092), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.257570  87.591186  999.281600
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(30.083218)
camera transformation: 72.639653  90.643699  999.059592
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(28.861739)
camera transformation: 73.839146  96.689996  1007.730422
xdiv2(sqrt(lx1)) =  16(21.931712), ydiv2(sqrt(lx2)) =  16(28.635642)
camera transformation: 75.007170  104.793795  1020.038034
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(27.166155)
camera transformation: 75.424381  114.712060  1035.581940
xdiv2(sqrt(lx1)) =  16(22.847319), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 75.189230  121.234236  1044.450289
xdiv2(sqrt(lx1)) =  16(22.360680), ydiv2(sqrt(lx2)) =  16(26.019224)
camera transformation: 75.101660  127.500667  1042.741247
xdiv2(sqrt(lx1)) =  16(23.259407), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 74.133666  129.633153  1033.337032
xdiv2(sqrt(lx1)) =  16(23.706539), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 73.893372  135.762779  1039.388559
xdiv2(sqrt(lx1)) =  16(22.360680), ydiv2(sqrt(lx2)) =  16(23.194827)
camera transformation: 74.280040  147.109093  1061.245210
xdiv2(sqrt(lx1)) =  16(22.825424), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 72.386508  154.983481  1068.109780
xdiv2(sqrt(lx1)) =  16(21.470911), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 69.840226  161.209263  1080.880767
xdiv2(sqrt(lx1)) =  16(22.472205), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 68.988847  173.214650  1089.041513
xdiv2(sqrt(lx1)) =  16(20.591260), ydiv2(sqrt(lx2)) =  16(20.615528)
camera transformation: 69.312210  181.480066  1096.433613
xdiv2(sqrt(lx1)) =  16(21.095023), ydiv2(sqrt(lx2)) =  16(21.587033)
camera transformation: 69.347715  192.403342  1108.818112
xdiv2(sqrt(lx1)) =  16(19.723083), ydiv2(sqrt(lx2)) =  16(20.615528)
camera transformation: 69.429048  202.875585  1110.353465
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 71.168069  214.979703  1130.873908
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 73.064987  219.576309  1128.449643
xdiv2(sqrt(lx1)) =  16(19.416488), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 75.354988  218.536466  1127.484516
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(23.086793)
camera transformation: 79.496791  208.075638  1101.430550
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 83.502951  197.178803  1101.454158
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(26.019224)
camera transformation: 80.751021  187.231605  1098.142619
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 75.615859  182.953333  1098.242465
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 69.238328  182.836342  1104.420105
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(26.076810)
camera transformation: 58.921256  189.803963  1119.336683
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(25.079872)
camera transformation: 48.130291  203.254595  1150.303345
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(24.186773)
camera transformation: 37.875032  215.397826  1156.775348
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(24.083189)
camera transformation: 26.924012  222.776023  1162.781642
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 20.082917  226.840938  1162.685944
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 22.806747  229.658115  1152.720331
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 29.445635  232.982815  1174.609994
xdiv2(sqrt(lx1)) =  16(15.264338), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 33.481394  226.473703  1157.924273
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(23.194827)
camera transformation: 30.495340  226.090046  1177.193920
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(23.537205)
camera transformation: 24.342926  224.760426  1171.132278
xdiv2(sqrt(lx1)) =  16(15.652476), ydiv2(sqrt(lx2)) =  16(20.124612)
camera transformation: 15.579943  239.866513  1222.534369
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(21.095023)
camera transformation: 5.363581  247.346006  1225.658392
xdiv2(sqrt(lx1)) =  16(15.652476), ydiv2(sqrt(lx2)) =  16(21.540659)
camera transformation: 8.237635  251.988778  1215.926768
xdiv2(sqrt(lx1)) =  16(13.892444), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 25.406377  226.983784  1133.630367
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(25.317978)
camera transformation: 32.922597  231.597615  1169.130556
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(25.495098)
camera transformation: 35.600787  235.118105  1167.194188
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(25.495098)
camera transformation: 32.219501  234.365046  1167.515927
xdiv2(sqrt(lx1)) =  16(11.661904), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 23.162797  235.915385  1178.228608
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(25.709920)
camera transformation: 10.773644  234.532144  1179.040319
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.455318  223.415549  1185.644138
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.467461  223.544679  1186.380390
xdiv2(sqrt(lx1)) =  16(12.806248), ydiv2(sqrt(lx2)) =  16(24.515301)
camera transformation: 19.467041  223.539702  1186.354002
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(25.495098)
xdiv2(sqrt(lx1)) =  16(12.727922), ydiv2(sqrt(lx2)) =  16(25.317978)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(24.515301)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(25.317978)
xdiv2(sqrt(lx1)) =  16(11.313708), ydiv2(sqrt(lx2)) =  16(25.019992)
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(13.453624), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.041595), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.727922), ydiv2(sqrt(lx2)) =  16(22.203603)
xdiv2(sqrt(lx1)) =  16(10.630146), ydiv2(sqrt(lx2)) =  16(23.194827)
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(23.021729)
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(23.021729)
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 132.779468  198.248867  1135.067772
xdiv2(sqrt(lx1)) =  16(19.646883), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 135.904589  194.712040  1138.438276
xdiv2(sqrt(lx1)) =  16(20.615528), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 133.299554  193.016949  1131.889626
xdiv2(sqrt(lx1)) =  16(20.880613), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 137.694795  195.957438  1150.675221
xdiv2(sqrt(lx1)) =  16(21.189620), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 134.499795  193.523258  1133.184371
xdiv2(sqrt(lx1)) =  16(20.880613), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 129.383477  204.042089  1152.272490
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 125.631275  224.387641  1186.884785
xdiv2(sqrt(lx1)) =  16(20.615528), ydiv2(sqrt(lx2)) =  16(24.000000)
camera transformation: 123.228559  242.341359  1194.824739
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 130.017178  241.639230  1170.193548
xdiv2(sqrt(lx1)) =  16(19.924859), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 136.216116  245.059353  1194.806020
xdiv2(sqrt(lx1)) =  16(19.313208), ydiv2(sqrt(lx2)) =  16(23.021729)
camera transformation: 141.220983  227.546628  1187.876159
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(23.000000)
camera transformation: 144.459379  196.596440  1233.376027
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(22.022716)
camera transformation: 146.715979  165.075248  1210.828436
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 164.734280  150.806482  1203.168745
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 183.778311  136.248205  1191.281805
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 204.774009  130.848903  1164.187821
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(25.019992)
camera transformation: 225.157534  124.076992  1147.211236
xdiv2(sqrt(lx1)) =  16(18.681542), ydiv2(sqrt(lx2)) =  16(25.000000)
camera transformation: 233.234676  109.661773  1134.055904
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(24.020824)
camera transformation: 252.341338  102.014798  1156.960839
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(24.083189)
camera transformation: 272.554703  90.582945  1167.346273
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(24.331050)
camera transformation: 283.819182  59.254174  1172.166378
xdiv2(sqrt(lx1)) =  16(16.643317), ydiv2(sqrt(lx2)) =  16(23.537205)
camera transformation: 284.489218  59.398413  1174.954485
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(23.345235)
camera transformation: 284.555884  59.412216  1175.232079
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(22.803509)
xdiv2(sqrt(lx1)) =  16(16.124515), ydiv2(sqrt(lx2)) =  16(22.803509)
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(21.377558)
xdiv2(sqrt(lx1)) =  16(17.000000), ydiv2(sqrt(lx2)) =  16(22.561028)
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(21.213203)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(21.095023)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.104973)
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.000000)
camera transformation: 143.485301  119.275553  1271.117296
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(18.027756)
camera transformation: 129.794106  129.856828  1264.717625
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.026298)
camera transformation: 126.112556  137.745229  1287.226677
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(19.026298)
camera transformation: 123.598688  139.754011  1295.083712
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(19.000000)
camera transformation: 120.084903  141.958021  1286.258525
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(18.110770)
camera transformation: 123.386213  152.249674  1326.457714
xdiv2(sqrt(lx1)) =  16(17.888544), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 126.672879  158.697867  1338.021301
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 130.545663  165.011224  1346.520550
xdiv2(sqrt(lx1)) =  16(18.357560), ydiv2(sqrt(lx2)) =  16(18.000000)
camera transformation: 134.183209  167.887846  1362.855432
xdiv2(sqrt(lx1)) =  16(19.235384), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 136.724235  152.284118  1331.325867
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 141.528110  129.844518  1337.910152
xdiv2(sqrt(lx1)) =  16(18.788294), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 136.807720  98.993744  1344.203463
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(21.377558)
camera transformation: 137.260155  89.288621  1369.681085
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(21.213203)
camera transformation: 148.832467  95.382332  1371.026708
xdiv2(sqrt(lx1)) =  16(18.027756), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 172.351600  125.172297  1381.037411
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 181.816715  145.946362  1410.063424
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 176.931998  166.654626  1446.978089
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(20.396078)
camera transformation: 170.620455  178.757537  1466.531003
xdiv2(sqrt(lx1)) =  16(16.763055), ydiv2(sqrt(lx2)) =  16(20.223748)
camera transformation: 169.875220  188.466436  1483.797895
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 174.987850  189.284149  1495.645347
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 181.267454  174.550566  1508.472203
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 181.191349  179.079500  1523.635679
xdiv2(sqrt(lx1)) =  16(17.720045), ydiv2(sqrt(lx2)) =  16(19.416488)
camera transformation: 183.205024  186.399784  1531.998631
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 185.455632  180.597208  1539.985387
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(19.235384)
camera transformation: 197.189642  164.638217  1552.491839
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 206.173040  145.484625  1549.559130
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 215.987118  135.361241  1571.892509
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 226.958563  128.138355  1585.541132
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(18.110770)
camera transformation: 233.309462  120.679644  1584.757498
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(19.104973)
camera transformation: 238.299973  116.870754  1602.890555
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(18.027756)
camera transformation: 241.279009  118.853804  1645.679400
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 250.395471  132.817954  1684.433255
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 246.302134  143.499304  1672.585730
xdiv2(sqrt(lx1)) =  16(14.866069), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 243.962590  153.779282  1653.906387
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 247.855327  166.700262  1672.224472
xdiv2(sqrt(lx1)) =  16(13.416408), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 263.020140  186.311260  1733.226991
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 265.540248  193.649368  1698.521928
xdiv2(sqrt(lx1)) =  16(12.206556), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 261.202446  194.335060  1638.473022
xdiv2(sqrt(lx1)) =  16(13.601471), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 277.677335  204.125783  1758.083014
xdiv2(sqrt(lx1)) =  16(13.038405), ydiv2(sqrt(lx2)) =  16(15.000000)
camera transformation: 252.234114  187.684918  1704.556790
xdiv2(sqrt(lx1)) =  16(14.764823), ydiv2(sqrt(lx2)) =  16(15.000000)
camera transformation: 230.021565  183.427555  1705.381776
xdiv2(sqrt(lx1)) =  16(15.231546), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 199.096174  175.711371  1632.755823
xdiv2(sqrt(lx1)) =  16(15.231546), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 165.369508  170.797145  1566.022733
xdiv2(sqrt(lx1)) =  16(17.464249), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 147.951303  182.264753  1594.530747
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 129.921272  190.183003  1585.781741
xdiv2(sqrt(lx1)) =  16(17.088007), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 109.351318  187.884061  1551.996125
xdiv2(sqrt(lx1)) =  16(18.681542), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 99.229434  184.290652  1531.791488
xdiv2(sqrt(lx1)) =  16(18.973666), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 104.979937  177.781940  1536.484260
xdiv2(sqrt(lx1)) =  16(18.384776), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 124.332845  190.556711  1582.673109
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 141.986048  202.377179  1631.466171
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 157.015429  225.142302  1670.826105
xdiv2(sqrt(lx1)) =  16(16.552945), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 152.941581  265.432881  1684.669707
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 141.025194  287.346532  1687.439792
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.763055)
camera transformation: 131.603548  304.812650  1740.148738
xdiv2(sqrt(lx1)) =  16(16.155494), ydiv2(sqrt(lx2)) =  16(16.492423)
camera transformation: 119.006914  319.644670  1790.621284
xdiv2(sqrt(lx1)) =  16(15.811388), ydiv2(sqrt(lx2)) =  16(15.297059)
camera transformation: 110.288130  298.452425  1812.736760
xdiv2(sqrt(lx1)) =  16(16.492423), ydiv2(sqrt(lx2)) =  16(16.278821)
camera transformation: 109.521847  296.328554  1799.999192
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(17.262677)
camera transformation: 97.055577  194.365925  1724.268905
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 95.897451  173.718488  1749.545482
xdiv2(sqrt(lx1)) =  16(16.278821), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 110.441981  147.880406  1775.516558
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(17.117243)
camera transformation: 124.130648  129.475194  1826.289845
xdiv2(sqrt(lx1)) =  16(15.132746), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 127.679031  125.075851  1870.393666
xdiv2(sqrt(lx1)) =  16(15.297059), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 131.617102  125.677431  1906.683206
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(16.124515)
camera transformation: 137.063968  125.568816  1931.118619
xdiv2(sqrt(lx1)) =  16(14.317821), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 144.807538  124.538217  1961.758234
xdiv2(sqrt(lx1)) =  16(13.152946), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 154.592253  123.364501  1970.643244
xdiv2(sqrt(lx1)) =  16(13.341664), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 165.553154  131.929389  2001.078743
xdiv2(sqrt(lx1)) =  16(13.341664), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 172.089708  139.656325  2039.037894
xdiv2(sqrt(lx1)) =  16(13.152946), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 168.448797  131.016686  2074.893963
xdiv2(sqrt(lx1)) =  16(12.165525), ydiv2(sqrt(lx2)) =  16(15.132746)
camera transformation: 162.881587  118.086281  2121.776574
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(14.142136)
camera transformation: 165.465607  121.274846  2151.049887
xdiv2(sqrt(lx1)) =  16(11.401754), ydiv2(sqrt(lx2)) =  16(14.035669)
camera transformation: 187.234046  137.079502  2142.325863
xdiv2(sqrt(lx1)) =  16(10.198039), ydiv2(sqrt(lx2)) =  16(14.035669)
camera transformation: 232.688689  157.696996  2211.706249
xdiv2(sqrt(lx1)) =  16(10.198039), ydiv2(sqrt(lx2)) =  16(13.038405)
camera transformation: 265.112194  168.887463  2142.947566
xdiv2(sqrt(lx1)) =  16(9.486833), ydiv2(sqrt(lx2)) =  16(14.000000)
camera transformation: 292.027486  177.698791  2095.847918
xdiv2(sqrt(lx1)) =  16(8.544004), ydiv2(sqrt(lx2)) =  16(15.033296)
camera transformation: 290.009642  175.182421  1958.558153
xdiv2(sqrt(lx1)) =  16(9.486833), ydiv2(sqrt(lx2)) =  16(16.031220)
camera transformation: 285.854613  144.080295  1827.777227
camera transformation: 284.816669  143.551644  1821.122812
camera transformation: 284.751126  143.518367  1820.702402



posted by maetel
2010. 3. 15. 15:56 Computer Vision
Three-dimensional computer vision: a geometric viewpoint 
By Olivier Faugeras

googleBooks
mitpress

posted by maetel
2010. 3. 13. 01:16 Computer Vision
opencv: video capturing from a camera


// Test: video capturing from a camera

#include <OpenCV/OpenCV.h> // OpenCV 1.x C API (Mac OS X framework header)
#include <stdio.h>  // printf()
#include <stdlib.h> // exit()

int main()
{
    IplImage* image = 0;
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera", CV_WINDOW_AUTOSIZE);

    while(1) {
        // grab a frame from the capture source
        if( !cvGrabFrame(capture) ) {
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        image = cvRetrieveFrame(capture); // retrieve the captured frame

        cvShowImage( "camera", image );

        // quit when any key is pressed
        if( cvWaitKey(10) >= 0 )
            break;
    }
   
    cvReleaseCapture( &capture ); // release the capture source
    cvDestroyWindow( "camera" );

    return 0;
}
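
For reference, the grab/retrieve pair can also be collapsed into a single call; a minimal variant of the loop above using cvQueryFrame():

    while(1) {
        // cvQueryFrame() wraps cvGrabFrame() + cvRetrieveFrame() in one call;
        // the returned image is owned by the capture and must not be released
        IplImage* frame = cvQueryFrame( capture );
        if( !frame ) break;

        cvShowImage( "camera", frame );
        if( cvWaitKey(10) >= 0 )
            break;
    }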


posted by maetel
2010. 3. 4. 16:56 Computer Vision
Unsupervised learning for graph matching - 2009
Marius Leordeanu, Martial Hebert
Conference: Computer Vision and Pattern Recognition - CVPR

posted by maetel
2010. 3. 3. 19:54 Computer Vision
ARToolKit test log
http://www.hitl.washington.edu/artoolkit/

ARToolKit Patternmaker
Automatically create large numbers of target patterns for the ARToolKit, by the University of Utah.


Download ARToolKit-2.72.tgz

http://www.openvrml.org/

DSVideoLib
A DirectShow wrapper supporting concurrent access to framebuffers from multiple threads. Useful for developing applications that require live video input from a variety of capture devices (frame grabbers, IEEE-1394 DV camcorders, USB webcams).


openvrml on macports
http://trac.macports.org/browser/trunk/dports/graphics/openvrml/Portfile


galaxy:~ lym$ port search openvrml
openvrml @0.17.12 (graphics, x11)
    a cross-platform VRML and X3D browser and C++ runtime library
galaxy:~ lym$ port info openvrml
openvrml @0.17.12 (graphics, x11)
Variants:    js_mozilla, mozilla_plugin, no_opengl, no_x11, player, universal,
             xembed

OpenVRML is a free cross-platform runtime for VRML and X3D available under the
GNU Lesser General Public License. The OpenVRML distribution includes libraries
you can use to add VRML/X3D support to an application. On platforms where GTK+
is available, OpenVRML also provides a plug-in to render VRML/X3D worlds in Web
browsers.
Homepage:    http://www.openvrml.org/

Build Dependencies:   pkgconfig
Library Dependencies: boost, libpng, jpeg, fontconfig, mesa, libsdl
Platforms:            darwin
Maintainers:          raphael@ira.uka.de openmaintainer@macports.org
galaxy:~ lym$ port deps openvrml
openvrml has build dependencies on:
    pkgconfig
openvrml has library dependencies on:
    boost
    libpng
    jpeg
    fontconfig
    mesa
    libsdl
galaxy:~ lym$ port variants openvrml
openvrml has the variants:
    js_mozilla: Enable support for JavaScript in the Script node with Mozilla
    no_opengl: Do not build the GL renderer
    xembed: Build the XEmbed control
    player: Build the GNOME openvrml-player
    mozilla_plugin: Build the Mozilla plug-in
    no_x11: Disable support for X11
    universal: Build for multiple architectures


Installing openvrml



Tests after installing ARToolKit-2.72.1

graphicsTest in the bin directory
-> This test confirms that your camera supports the ARToolKit graphics module with OpenGL.

videoTest in the bin directory
-> This test confirms that your camera supports the ARToolKit video module and the ARToolKit graphics module.

simpleTest in the bin directory
-> Note that the closer the capture format is to ARToolKit's tracking format, the faster the acquisition (RGB is the most efficient).


"hiro" 패턴을 쓰지 않으면, 아래와 같은 에러가 난다.

/Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
galaxy:~ lym$ /Users/lym/ARToolKit/build/ARToolKit.build/Development/simpleTest.build/Objects-normal/i386/simpleTest ; exit;
Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
Camera parameter load error !!
logout


Using default video config.
Opening sequence grabber 1 of 1.
vid->milliSecPerFrame: 200 forcing timer period to 100ms
Video cType is raw , size is 320x240.
Image size (x,y) = (320,240)
*** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 159.250000 131.750000 104.800000 1.012757
350.47574 0.00000 158.25000 0.00000
0.00000 363.04709 120.75000 0.00000
0.00000 0.00000 1.00000 0.00000
--------------------------------------
Opening Data File Data/object_data2
About to load 2 Models
Read in No.1
Read in No.2
Objectfile num = 2


If you print the pattern's transformation values inside arGetTransMat() as follows,
    // http://www.hitl.washington.edu/artoolkit/documentation/tutorialcamera.htm
    printf("camera transformation: %f  %f  %f\n",conv[0][3],conv[1][3],conv[2][3]);

Result:


Feature List
* A simple framework for creating real-time augmented reality applications
* A multiplatform library (Windows, Linux, Mac OS X, SGI)
* Overlays 3D virtual objects on real markers (based on a computer vision algorithm)
* A multiplatform video library with:
  o multiple input sources (USB, Firewire, capture card) supported
  o multiple formats (RGB/YUV420P, YUV) supported
  o multiple camera tracking supported
  o GUI initializing interface
* Fast and cheap 6D marker tracking (real-time planar detection)
* An extensible marker pattern approach (number of markers a function of efficiency)
* An easy calibration routine
* A simple graphics library (based on GLUT)
* Fast rendering based on OpenGL
* 3D VRML support
* A simple and modular API (in C)
* Other languages supported (Java, Matlab)
* A complete set of samples and utilities
* A good solution for the tangible interaction metaphor
* Open source with GPL license for non-commercial usage


framework



"ARToolKit is able to perform this camera tracking in real time, ensuring that the virtual objects always appear overlaid on the tracking markers."

how to
1. Search every video frame for square shapes.
2. Compute the camera's position relative to the black square.
3. From that position, compute how the computer graphics model should be drawn.
4. Draw the model over the marker in the real video.

limitations
1. Virtual objects can be composited only while the tracked marker is visible in the image.
2. This limits the size and movement of the virtual objects.
3. If part of the marker pattern is occluded, the virtual object cannot be composited.
4. Limited range: the larger the marker, the farther away it can be detected, so the trackable volume grows.
(The usable range also depends on pattern complexity: the simpler the pattern, the longer the maximum distance.)
5. Tracking performance depends on the marker's orientation relative to the camera:
as the marker tilts toward horizontal, less of the pattern is visible, so recognition becomes unreliable.
6. Tracking performance depends on lighting conditions:
reflections and glare spots caused by lighting on a paper marker make it hard to find the marker square;
a less reflective material can be used instead of paper.


ARToolKit Vision Algorithm



Development
Initialization    
1. Initialize the video capture and read in the marker pattern files and camera parameters. -> init()
Main Loop    
2. Grab a video input frame. -> arVideoGetImage()
3. Detect the markers and recognized patterns in the video input frame. -> arDetectMarker()
4. Calculate the camera transformation relative to the detected patterns. -> arGetTransMat()
5. Draw the virtual objects on the detected patterns. -> draw()
Shutdown    
6. Close the video capture down. -> cleanup()
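As a reference, here is a condensed sketch of how the six steps above fit together, modeled on the simpleTest sample (ARToolKit 2.72 C API). Error handling is trimmed, and the paths, pattern name, threshold, and marker size are assumptions.

#include <GL/glut.h>           /* <GLUT/glut.h> on Mac OS X */
#include <AR/gsub.h>
#include <AR/video.h>
#include <AR/param.h>
#include <AR/ar.h>
#include <stdlib.h>

static int    patt_id;
static int    thresh = 100;
static double patt_width     = 80.0;         /* marker size in mm */
static double patt_center[2] = {0.0, 0.0};
static double patt_trans[3][4];

static void mainLoop(void)                   /* steps 2-5 */
{
    ARUint8      *dataPtr;
    ARMarkerInfo *marker_info;
    int           marker_num, j, k;

    if ((dataPtr = arVideoGetImage()) == NULL) { arUtilSleep(2); return; }
    argDrawMode2D();
    argDispImage(dataPtr, 0, 0);
    if (arDetectMarker(dataPtr, thresh, &marker_info, &marker_num) < 0) exit(0);
    arVideoCapNext();

    k = -1;                                  /* best (highest-confidence) match */
    for (j = 0; j < marker_num; j++)
        if (marker_info[j].id == patt_id &&
            (k == -1 || marker_info[k].cf < marker_info[j].cf)) k = j;
    if (k != -1) {
        arGetTransMat(&marker_info[k], patt_center, patt_width, patt_trans);
        /* draw() would load patt_trans into OpenGL and render here */
    }
    argSwapBuffers();
}

int main(int argc, char **argv)              /* steps 1 and 6 */
{
    ARParam wparam, cparam;
    int     xsize, ysize;

    glutInit(&argc, argv);
    arVideoOpen("");                         /* default video config */
    arVideoInqSize(&xsize, &ysize);
    arParamLoad("Data/camera_para.dat", 1, &wparam);
    arParamChangeSize(&wparam, xsize, ysize, &cparam);
    arInitCparam(&cparam);
    argInit(&cparam, 1.0, 0, 0, 0, 0);
    patt_id = arLoadPatt("Data/patt.hiro");

    arVideoCapStart();
    argMainLoop(NULL, NULL, mainLoop);       /* cleanup() would call
                                                arVideoCapStop / arVideoClose / argCleanup */
    return 0;
}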

ref.
http://king8028.tistory.com/entry/ARToolkit-simpletestc-%EC%84%A4%EB%AA%8512
http://kougaku-navi.net/ARToolKit.html



ARToolKit video configuration



camera calibration

Default camera properties are contained in the camera parameter file camera_para.dat, which is read in each time an application is started.

The program calib_dist is used to measure the image center point and lens distortion, while calib_param produces the other camera properties. (Both of these programs can be found in the bin directory and their source is in the utils/calib_dist and utils/calib_cparam directories.)



ARToolKit gives the position of the marker in the camera coordinate system, and uses the OpenGL matrix convention for the position of the virtual object.
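A sketch of that hand-off, following the draw() routine of the samples: argConvGlpara() flattens the 3x4 marker transform into the 16-element column-major matrix OpenGL expects.

#include <AR/gsub.h>

/* patt_trans: marker-to-camera transform obtained from arGetTransMat() */
void draw(double patt_trans[3][4])
{
    double gl_para[16];

    argDrawMode3D();
    argDraw3dCamera(0, 0);
    argConvGlpara(patt_trans, gl_para);   /* 3x4 row-major -> 4x4 column-major */
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixd(gl_para);
    /* ... render the virtual object in marker coordinates ... */
}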


ARToolKit API Documentation
http://artoolkit.sourceforge.net/apidoc/


ARMarkerInfo Main structure for detected marker
ARMarkerInfo2 Internal structure used for marker detection
ARMat Matrix structure
ARMultiEachMarkerInfoT Multi-marker structure
ARMultiMarkerInfoT Global multi-marker structure
ARParam Camera intrinsic parameters
arPrevInfo Structure for temporal continuity of tracking
ARVec Vector structure


arVideoGetImage()

video.h
/**
 * \brief get the video image.
 *
 * This function returns a buffer with a captured video image.
 * The returned data consists of a tightly-packed array of
 * pixels, beginning with the first component of the leftmost
 * pixel of the topmost row, and continuing with the remaining
 * components of that pixel, followed by the remaining pixels
 * in the topmost row, followed by the leftmost pixel of the
 * second row, and so on.
 * The arrangement of components of the pixels in the buffer is
 * determined by the configuration string passed in to the driver
 * at the time the video stream was opened. If no pixel format
 * was specified in the configuration string, then an operating-
 * system dependent default, defined in <AR/config.h> is used.
 * The memory occupied by the pixel data is owned by the video
 * driver and should not be freed by your program.
 * The pixels in the buffer remain valid until the next call to
 * arVideoCapNext, or the next call to arVideoGetImage which
 * returns a non-NULL pointer, or any call to arVideoCapStop or
 * arVideoClose.
 * \return A pointer to the pixel data of the captured video frame,
 * or NULL if no new pixel data was available at the time of calling.
 */
AR_DLL_API  ARUint8*        arVideoGetImage(void);
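A minimal polling loop that respects the buffer contract described above; process() stands in for the application's own handling of the frame.

#include <AR/video.h>
#include <AR/ar.h>             /* arUtilSleep() */

extern void process(ARUint8 *frame);   /* hypothetical user function */

void capture_loop(void)
{
    ARUint8 *frame;
    for (;;) {
        if ((frame = arVideoGetImage()) == NULL) {  /* no new frame yet */
            arUtilSleep(2);
            continue;
        }
        process(frame);        /* use the pixels before handing them back */
        arVideoCapNext();      /* buffer returns to the video driver */
    }
}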


ARParam

param.h
/** \struct ARParam
* \brief camera intrinsic parameters.
*
* This structure contains the main parameters for
* the intrinsic parameters of the camera
* representation. The camera used is a pinhole
* camera with standard parameters. User should
* consult a computer vision reference for more
* information. (e.g. Three-Dimensional Computer Vision
* (Artificial Intelligence) by Olivier Faugeras).
* \param xsize length of the image (in pixels).
* \param ysize height of the image (in pixels).
* \param mat perspective matrix (K).
* \param dist_factor radial distortions factor
*          dist_factor[0]=x center of distortion
*          dist_factor[1]=y center of distortion
*          dist_factor[2]=distortion factor
*          dist_factor[3]=scale factor
*/
typedef struct {
    int      xsize, ysize;
    double   mat[3][4];
    double   dist_factor[4];
} ARParam;

typedef struct {
    int      xsize, ysize;
    double   matL[3][4];
    double   matR[3][4];
    double   matL2R[3][4];
    double   dist_factorL[4];
    double   dist_factorR[4];
} ARSParam;
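A sketch of the usual loading sequence for these parameters. arParamDisp() prints the "*** Camera Parameter ***" block seen in the logs above, and the "Camera parameter load error !!" message presumably comes from a failed arParamLoad() (this is what simpleTest does in its init()).

#include <AR/param.h>
#include <AR/ar.h>
#include <stdio.h>

int load_camera(const char *file, int xsize, int ysize, ARParam *cparam)
{
    ARParam wparam;                     /* parameters at calibration size */

    if (arParamLoad(file, 1, &wparam) < 0) {
        printf("Camera parameter load error !!\n");
        return -1;
    }
    arParamChangeSize(&wparam, xsize, ysize, cparam);  /* fit capture size */
    arInitCparam(cparam);               /* make it the active camera */
    arParamDisp(cparam);
    return 0;
}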




arDetectMarker()

Description from the ar.h header file:
/**
* \brief main function to detect the square markers in the video input frame.
*
* This function proceeds to thresholding, labeling, contour extraction and line corner estimation
* (and maintains an history).
* It's one of the main function of the detection routine with arGetTransMat.
* \param dataPtr a pointer to the color image which is to be searched for square markers.
*                The pixel format depend of your architecture. Generally ABGR, but the images
*                are treated as a gray scale, so the order of BGR components does not matter.
*                However the ordering of the alpha comp, A, is important.
* \param thresh  specifies the threshold value (between 0-255) to be used to convert
*                the input image into a binary image.
* \param marker_info a pointer to an array of ARMarkerInfo structures returned
*                    which contain all the information about the detected squares in the image
* \param marker_num the number of detected markers in the image.
* \return 0 when the function completes normally, -1 otherwise
*/
int arDetectMarker( ARUint8 *dataPtr, int thresh,
                    ARMarkerInfo **marker_info, int *marker_num );


Note that arGetTransMat gives the position of the marker in the camera coordinate system (not the reverse). If you want the position of the camera in the marker coordinate system, you need to invert this transformation (arMatrixInverse()).
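A sketch of that inversion. The note above mentions arMatrixInverse(); for the 3x4 pose matrix the helper declared in ar.h (quoted further below) is arUtilMatInv(), whose result's translation column is the camera position in marker coordinates.

#include <AR/ar.h>

/* patt_trans: marker pose in the camera frame, from arGetTransMat() */
void camera_in_marker_frame(double patt_trans[3][4], double cam_trans[3][4])
{
    arUtilMatInv(patt_trans, cam_trans);
    /* cam_trans[0][3], cam_trans[1][3], cam_trans[2][3] now hold the
       camera position in marker coordinates */
}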



XXXBK: not sure about this function: it should just convert the 3x4 matrix to a classical perspective OpenGL matrix. But the code uses arParamDecompMat, which seems to decompose K and R,t — why do this decomposition when we only want the intrinsic parameters? And if that is not the case, what is arDecomp?




double arGetTransMat()

Description from the ar.h header file:
/**
* \brief compute camera position in function of detected markers.
*
* calculate the transformation between a detected marker and the real camera,
* i.e. the position and orientation of the camera relative to the tracking mark.
* \param marker_info the structure containing the parameters for the marker for
*                    which the camera position and orientation is to be found relative to.
*                    This structure is found using arDetectMarker.
* \param center the physical center of the marker. arGetTransMat assumes that the marker
*              is in x-y plane, and z axis is pointing downwards from marker plane.
*              So vertex positions can be represented in 2D coordinates by ignoring the
*              z axis information. The marker vertices are specified in order of clockwise.
* \param width the size of the marker (in mm).
* \param conv the transformation matrix from the marker coordinates to camera coordinate frame,
*             that is the relative position of real camera to the real marker
* \return always 0.
*/
double arGetTransMat( ARMarkerInfo *marker_info,
                      double center[2], double width, double conv[3][4] )



arUtilMatInv()

Description from the ar.h header file:
/**
* \brief Inverse a non-square matrix.
*
* Inverse a matrix in a non homogeneous format. The matrix
* need to be euclidian.
* \param s matrix input   
* \param d resulted inverse matrix.
* \return 0 if the inversion success, -1 otherwise
* \remark input matrix can be also output matrix
*/
int    arUtilMatInv( double s[3][4], double d[3][4] );






posted by maetel
2010. 3. 2. 20:31 Computer Vision
Tricodes: A Barcode-Like Fiducial Design for Augmented Reality Media - 2006
Jonathan Mooser, Suya You, Ulrich Neumann
International Conference on Multimedia Computing and Systems/International Conference on Multimedia and Expo - ICME(ICMCS)

posted by maetel
2010. 3. 2. 20:26 Computer Vision
Design Patterns for Augmented Reality Systems - 2004
Asa Macwilliams, Thomas Reicher, Gudrun Klinker, Bernd Brügge
Conference: Workshop on Exploring the Design and Engineering of Mixed Reality Systems - MIXER


Figure 2: Relationships between the individual patterns for augmented reality systems. Several approaches are used in combination within an augmented reality system. One approach might require the use of another approach or prevent its usage.


posted by maetel
2010. 2. 26. 01:11 Computer Vision
cross ratio test


Try #1. Using the digits of pi

pi = 3.14159265358979323846264338327950288...
Computing cross ratios from the digits of pi gives the following.
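As a reference, a minimal sketch of this experiment. The grouping behind the first list below is not recorded, so cr_a (the classical four-point definition) will not reproduce it; cr_b, however, applied over points formed as cumulative sums of the digits, does appear to reproduce the "different formula" list further down (0.040000, 0.666667, 0.107143, ...).

#include <stdio.h>

/* classical four-point cross ratio */
static double cr_a(double x1, double x2, double x3, double x4)
{ return ((x3 - x1) * (x4 - x2)) / ((x3 - x2) * (x4 - x1)); }

/* variant consistent with the second list below */
static double cr_b(double x1, double x2, double x3, double x4)
{ return ((x2 - x1) * (x4 - x3)) / ((x3 - x1) * (x4 - x2)); }

int main(void)
{
    const int d[] = {3,1,4,1,5,9,2,6,5,3,5,8,9,7,9,3,
                     2,3,8,4,6,2,6,4,3,3,8,3,2,7,9,5};
    enum { N = sizeof(d) / sizeof(d[0]) };
    double x[N];
    int i;

    x[0] = d[0];                        /* points = cumulative digit sums */
    for (i = 1; i < N; i++) x[i] = x[i-1] + d[i];

    for (i = 0; i + 3 < N; i++)         /* sliding window of four points */
        printf("cross ratio = %f  (variant: %f)\n",
               cr_a(x[i], x[i+1], x[i+2], x[i+3]),
               cr_b(x[i], x[i+1], x[i+2], x[i+3]));
    return 0;
}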



cross ratio = 1.088889
cross ratio = 2.153846
cross ratio = 1.185185
cross ratio = 1.094737
cross ratio = 2.166667
cross ratio = 1.160714
cross ratio = 1.274510
cross ratio = 1.562500
cross ratio = 1.315789
cross ratio = 1.266667
cross ratio = 1.266667
cross ratio = 1.446429
cross ratio = 1.145455
cross ratio = 1.441176
cross ratio = 1.484848
cross ratio = 1.421875
cross ratio = 1.123457
cross ratio = 1.600000
cross ratio = 1.142857
cross ratio = 1.960784
cross ratio = 1.142857
cross ratio = 1.350000
cross ratio = 1.384615
cross ratio = 1.529412
cross ratio = 1.104575
cross ratio = 1.421875
cross ratio = 1.711111
cross ratio = 1.178571
cross ratio = 1.200000
cross ratio = 1.098039
cross ratio = 2.800000
cross ratio = 1.230769
cross ratio = 1.142857


Applying a different formula

cross ratio = 0.040000
cross ratio = 0.666667
cross ratio = 0.107143
cross ratio = 0.064935
cross ratio = 0.613636
cross ratio = 0.113636
cross ratio = 0.204545
cross ratio = 0.390625
cross ratio = 0.230769
cross ratio = 0.203620
cross ratio = 0.205882
cross ratio = 0.316406
cross ratio = 0.109375
cross ratio = 0.300000
cross ratio = 0.360000
cross ratio = 0.290909
cross ratio = 0.090909
cross ratio = 0.400000
cross ratio = 0.100000
cross ratio = 0.562500
cross ratio = 0.100000
cross ratio = 0.257143
cross ratio = 0.285714
cross ratio = 0.363636
cross ratio = 0.074380
cross ratio = 0.290909
cross ratio = 0.466667
cross ratio = 0.125000
cross ratio = 0.156250




Try #2. swPark_2000rti, p. 43: back-calculating the pattern grid (lattice positions) from the cross-ratio values of Figure 7

37 cross ratios for the 40 vertical lines:
0.47, 0.11, 0.32, 0.17, 0.44, 0.08, 0.42, 0.25, 0.24, 0.13, 0.46, 0.18, 0.19, 0.29, 0.21, 0.37, 0.16, 0.38, 0.23, 0.09, 0.37, 0.26, 0.31, 0.18, 0.30, 0.15, 0.39, 0.16, 0.32, 0.27, 0.20, 0.28, 0.39, 0.12, 0.23, 0.28, 0.35
17 cross ratios for the 20 horizontal lines:
0.42, 0.13, 0.32, 0.16, 0.49, 0.08, 0.40, 0.20, 0.29, 0.19, 0.37, 0.13, 0.26, 0.38, 0.21, 0.16, 0.42




Huh?????

# of cross-ratios in vertical lines = 37
# of cross-ratios in horizontal lines = 17

x[0]=1  x[1]=2  x[2]=4
x[3]=-2.87805  x[4]=-1.42308  x[5]=-0.932099  x[6]=-0.787617  x[7]=-0.596499  x[8]=-0.55288  x[9]=-0.506403  x[10]=-0.456778  x[11]=-0.407892  x[12]=-0.390887  x[13]=-0.363143  x[14]=-0.338174  x[15]=-0.324067  x[16]=-0.312345  x[17]=-0.305022  x[18]=-0.293986  x[19]=-0.286594  x[20]=-0.273759  x[21]=-0.251966  x[22]=-0.244977  x[23]=-0.238299  x[24]=-0.231391  x[25]=-0.219595  x[26]=-0.20838  x[27]=-0.192558  x[28]=-0.183594  x[29]=-0.16952  x[30]=-0.159689  x[31]=-0.147983  x[32]=-0.131036  x[33]=-0.114782  x[34]=-0.0950305  x[35]=0.0303307  x[36]=0.964201  x[37]=-0.959599  x[38]=-0.519287  x[39]=-0.356521 


posted by maetel
2010. 2. 26. 00:07 Computer Vision
Optimal Grid Pattern for Automated Camera Calibration Using Cross Ratio

Chikara MATSUNAGA  Yasushi KANAZAWA  Kenichi KANATANI 

Publication IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences  Vol.E83-A  No.10  pp.1921-1928
Publication Date: 2000/10/20
Online ISSN: 
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Information Theory and Its Applications)
Category: Image Processing
Keywords: cross ratio, Markov process, error analysis, reliability evaluation, virtual studio
Full Text:
Source: http://www.suri.it.okayama-u.ac.jp/~kanatani/data/ejournal.html

MVA2000 IAPR Workshop on Machine Vision Applications, Nov. 28-30,2000, The University of Tokyo, Japan
13-28
Optimal Grid Pattern for Automated Matching Using Cross Ratio
Chikara Matsunaga (Broadcast Division, FOR-A Co. Ltd.)
Kenichi Kanatani (Department of Computer Science, Gunma University)


Kenichi Kanatani  金谷健一   http://www.suri.it.okayama-u.ac.jp/%7Ekanatani/
Yasushi Kanazawa 金澤靖     http://www.img.tutkie.tut.ac.jp/~kanazawa/

IEICE (The Institute of Electronics, Information and Communication Engineers)   http://www.ieice.org
IAPR (International Association of Pattern Recognition)   http://www.iapr.org
IAPR - Machine Vision & Applications



Summary: 
With a view to virtual studio applications, we design an optimal grid pattern such that the observed image of a small portion of it can be matched to its corresponding position in the pattern easily. The grid shape is so determined that the cross ratio of adjacent intervals is different everywhere. The cross ratios are generated by an optimal Markov process that maximizes the accuracy of matching. We test our camera calibration system using the resulting grid pattern in a realistic setting and show that the performance is greatly improved by applying techniques derived from the designed properties of the pattern.


Camera calibration is a first step in all vision and media applications.
> pre-calibration (Tsai) vs. self-calibration (Pollefeys)
=> "simultaneous calibration" by placing an easily distinguishable planar pattern in the scene

Introducing a statistical model of image noise, we generate the grid intervals by an optimal Markov process that maximizes the accuracy of matching.
: The pattern is theoretically designed by statistical analysis

If the cross ratios are given, the sequence is determined as follows.


The goal is to find a sequence of cross ratios such that the resulting sequence of numbers is homogeneously increasing, with an average interval of 1 and a specified minimum width.
=> Generate the sequence of cross ratios stochastically, according to a probability distribution defined so that the resulting sequence of numbers has the desired properties.
=> The probability distribution can then be optimized so that the matching performance is maximized, by analyzing the statistical properties of image noise.

 



 

Source: C. Matsunaga, Y. Kanazawa, and K. Kanatani, "Optimal grid pattern for automated camera calibration using cross ratio," IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 10, pp. 1921-1928, 2000 — Fig. 8 on p. 1926, captured at 4x magnification





posted by maetel
2010. 2. 23. 00:47 Computer Vision
1> pattern identification

rough preview
1) find the boundary points between the pattern's dark and light colors (edge detection)
2) connect the found points with straight lines
3) compare the cross ratios of the detected horizontal and vertical lines with those of the actual pattern, to identify which line each one is

detailed preview
1. initial identification process (feature point recognition)

1) chroma keying: RGB -> YUV conversion

2) gradient filtering: first-order derivative Gaussian filter (length = 7)
 -1) reduce the image (by 1/4) along the vertical axis and filter
 -2) compare the magnitudes of Gx and Gy to decide between vertical and horizontal direction
 -3) then likewise along the horizontal axis

3) line fitting: fit quadratic curves, taking the lens distortion coefficient into account

4) identification
 -1) identify which line of the actual pattern each line found in the image is
 -2) the feature points can then be obtained accurately as the intersections of the fitted line equations

2. feature point tracking (run-time operation: tracking the feature point positions)
: feature point correspondence — matching the detected feature points to the intersections of the pattern

  1) detect the intersections having a local maximum or minimum with the intersection filter H

  2) classify the detected intersections into two classes by their sign

  3) for each intersection detected in the current frame, find the closest point from the previous frame, using the previous frame's intersection positions as reference (see the sketch below)

  * For feature points newly appearing in a frame, the corresponding intersections of the actual pattern can be projected into the image using the previous frame's camera parameters and used as reference points.
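A minimal sketch of the nearest-neighbor matching in step 3); Point and the arrays are illustrative structures, not from the paper.

#include <float.h>

typedef struct { double x, y; int sign; } Point;  /* sign: class from step 2) */

/* returns the index of the closest previous-frame point of the same class,
   or -1 if none lies within max_dist (a newly appeared point) */
int match_previous(const Point *cur, const Point *prev, int nprev,
                   double max_dist)
{
    int j, best = -1;
    double d2, best_d2 = DBL_MAX;

    for (j = 0; j < nprev; j++) {
        if (prev[j].sign != cur->sign) continue;   /* match within a class */
        d2 = (cur->x - prev[j].x) * (cur->x - prev[j].x)
           + (cur->y - prev[j].y) * (cur->y - prev[j].y);
        if (d2 < best_d2) { best_d2 = d2; best = j; }
    }
    return (best != -1 && best_d2 <= max_dist * max_dist) ? best : -1;
}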




2> real-time camera parameter extraction: Tsai's algorithm

1. determining the image center: zooming
: using the center of expansion as a constant image center

1) (during the initialization stage for computing lens distortion) find, identify and store the feature points seen by the stationary camera at maximum zoom-out and at maximum zoom-in

2) compute the common intersection of the line segments connecting the feature points that appear as the same point in the two frames (see the least-squares sketch below)

* In practice, zooming works through a combination of several lenses, so the image center shifts as the camera zooms; but since the standard deviation of this shift is small, the variation is ignored here.
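A sketch of step 2): the common intersection (center of expansion) of the segments joining corresponding zoom-out/zoom-in feature points, solved in least squares. Each segment (p, q) contributes the line n.x = n.p, with n the unit normal of (q - p); this is a generic formulation, not the paper's exact equations.

#include <math.h>

/* p[i], q[i]: the i-th feature point at maximum zoom-out and zoom-in */
int center_of_expansion(const double (*p)[2], const double (*q)[2], int n,
                        double *cx, double *cy)
{
    double A00 = 0, A01 = 0, A11 = 0, b0 = 0, b1 = 0, det;
    int i;

    for (i = 0; i < n; i++) {
        double dx = q[i][0] - p[i][0], dy = q[i][1] - p[i][1];
        double len = sqrt(dx * dx + dy * dy);
        double nx = -dy / len, ny = dx / len;          /* unit normal */
        double d  = nx * p[i][0] + ny * p[i][1];       /* line offset */
        A00 += nx * nx;  A01 += nx * ny;  A11 += ny * ny;
        b0  += nx * d;   b1  += ny * d;
    }
    det = A00 * A11 - A01 * A01;                       /* 2x2 normal equations */
    if (fabs(det) < 1e-12) return -1;                  /* segments ~parallel */
    *cx = (A11 * b0 - A01 * b1) / det;
    *cy = (A00 * b1 - A01 * b0) / det;
    return 0;
}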

2. computing the lens distortion coefficient
Without zooming this would be a fixed value, making the per-frame computation below unnecessary.

(1) consulting an f-k1 look-up table
: the focal length f and the lens distortion parameter k1 keep changing during zooming, so a look-up table is built in advance and consulted during actual operation
* When the feature points all lie on one plane, the focal length f and the camera's z-translation Tz are coupled, so the camera parameters are hard to compute reliably; if, as a workaround, Tz/f is used as the table index for coplanar feature points, the condition is attached that the camera must stay fixed without moving in the z direction (T1z = 0).

(2) using collinearity
: searching for the k1 which maximally preserves collinearity — among the recognized intersections, find the distortion coefficient that makes the points originally belonging to one straight line come out maximally straight after distortion compensation (a sketch follows the steps below)

  1) pick three of the intersections (Xf, Yf) that belong to the same horizontal line in the image

  2) compute the distorted image-plane coordinates (Xd, Yd) from Eq. 7

  3) compute the distortion-compensated image-plane coordinates (Xu, Yu) from Eq. 5

  4) define an error function E(k1) as in Eq. 21

  5) find the k1 that minimizes E(k1) over the N horizontal lines in the image (Eq. 23) -> a nonlinear optimization, but with a single iteration
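A generic sketch of this search; the paper's Eqs. 5, 7, 21 and 23 are not reproduced here, so the radial model xu = xd(1 + k1 r^2) and a triangle-area measure of collinearity stand in for them, with a coarse 1-D scan in place of the single-iteration optimization.

#include <math.h>

typedef struct { double x, y; } Pt;

static Pt undistort(Pt pd, double k1)          /* stand-in for Eq. 5 */
{
    double r2 = pd.x * pd.x + pd.y * pd.y, s = 1.0 + k1 * r2;
    Pt pu = { pd.x * s, pd.y * s };
    return pu;
}

/* triples[i][0..2]: three distorted points from the same horizontal line */
double best_k1(const Pt (*triples)[3], int ntriples,
               double k1_min, double k1_max, int steps)
{
    double best = k1_min, best_e = HUGE_VAL;
    int i, s;

    for (s = 0; s <= steps; s++) {
        double k1 = k1_min + (k1_max - k1_min) * s / steps;
        double e = 0.0;                        /* stand-in for E(k1), Eq. 21 */
        for (i = 0; i < ntriples; i++) {
            Pt a = undistort(triples[i][0], k1);
            Pt b = undistort(triples[i][1], k1);
            Pt c = undistort(triples[i][2], k1);
            double cross = (b.x - a.x) * (c.y - a.y)
                         - (b.y - a.y) * (c.x - a.x);
            e += cross * cross;                /* zero when exactly collinear */
        }
        if (e < best_e) { best_e = e; best = k1; }
    }
    return best;
}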
 
3. Tsai's algorithm
Once the lens distortion coefficient is known, camera calibration can be carried out by linear methods.




3> filtering
Noise introduces errors into the detected intersections, which in turn corrupt the camera parameters
(-> even with a stationary camera the parameters fluctuate, so the graphically generated virtual set appears to tremble)

averaging filter (Journal of the IEEK, Vol. 36-S, No. 7, Eq. 19)
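A sketch of such a filter; the cited Eq. 19 is not reproduced here, so this is just a plain moving average over the most recent estimates of one camera parameter — stability at the cost of a small constant delay.

double average_filter(const double *history, int count)
{
    /* history: the most recent 'count' values of one camera parameter */
    double sum = 0.0;
    int i;
    for (i = 0; i < count; i++) sum += history[i];
    return sum / count;
}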









posted by maetel
2010. 2. 22. 22:50 Computer Vision


http://en.wikipedia.org/wiki/Chroma_key
Green is currently used as a backdrop more than any other color because image sensors in digital video cameras are most sensitive to green, due to the Bayer pattern allocating more pixels to the green channel; this mimics the human eye's increased sensitivity to green light.
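A minimal chroma-key sketch along those lines (not from the quoted article): convert RGB to the YUV/YCbCr color-difference plane and treat pixels whose chroma lies close to the reference green as background. The BT.601 coefficients are standard; the hard threshold is an assumption.

#include <math.h>

/* returns matte alpha: 0 = background (key color), 1 = foreground;
   r, g, b in [0, 1], (key_u, key_v) sampled from the backdrop */
double chroma_key_alpha(double r, double g, double b,
                        double key_u, double key_v, double tol)
{
    double u = -0.169 * r - 0.331 * g + 0.500 * b;   /* Cb (BT.601) */
    double v =  0.500 * r - 0.419 * g - 0.081 * b;   /* Cr (BT.601) */
    double d = sqrt((u - key_u) * (u - key_u) + (v - key_v) * (v - key_v));
    return (d < tol) ? 0.0 : 1.0;                    /* hard matte */
}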









The Foundry
http://www.thefoundry.co.uk/

KeyLight

posted by maetel
2010. 2. 22. 19:25 Computer Vision
A survey of the UX and interactive art fields

1
Buzz 3D 사의 - 3D Interface - High Definition digital 3D Marketing solution
http://www.buzz3d.com/3d_interface_index.html
: 웹에서 실시간으로 동작 가능한 3차원 가상 현실 구현 플랫폼/애플리케이션
-> 사용자 트래킹: http://www.buzz3d.com/3d_interface_features_user.html
을 통해 행동을 분석하고
-> 햅틱: http://www.buzz3d.com/3d_interface_features_haptics.html
기능을 통해 체감형 경험을 제공함


2
HTC phone - HTC Touch Diamond
-> TouchFLO 3D: http://www.htc.com/www/product/touchdiamond/touchflo-3d.html
: menu selection, web browsing, e-mailing, etc. through finger gestures


3
CityWall
http://citywall.org/
: a large multi-touch display installed in the center of Helsinki, Finland
jointly developed by the Helsinki Institute for Information Technology and Multitouch
http://www.youtube.com/watch?v=WkNq3cYGTPE


4
Microsoft's augmented-reality mapping technology for Bing Maps
: matches the 2D images of the video a user is shooting against the 3D map on the web in real time (and, by also reflecting time information, realizes 4D)
http://www.ted.com/talks/blaise_aguera.html



5
A list of tools for building 3D UX (possibly off-topic here, since the input is not 3D)
http://www.artefactgroup.com/blog/2010/02/tools-for-building-a-3d-ux/
- Papervision and Away3D, plug-ins for Adobe Flash
- Swift 3D, developed by Electric Rain
- GFx 3.0, Scaleform's game-development solution
- Microsoft's Expression Blend
- ZAM 3D, Electric Rain's 3D XAML tool
- ATOMIC Authoring Tool, built with Processing for non-programmers: an augmented-reality authoring tool based on the ARToolKit library
- the TAT Kaster UI rendering platform
- Kanzi


6
R.U.S.E.: a game controlled through finger gestures, announced at E3
ref. http://www.artefactgroup.com/blog/2009/09/3d-ui-useful-usable-or-desirable/

7
SKT's augmented-reality service Ovjet
http://ovjet.com/
: an augmented reality (AR) service that overlays various kinds of real-time information on the live view from a phone camera
ref. http://news.cnbnews.com/category/read.html?bcode=102993


8
CATIA
a virtual-reality authoring tool for product design


9
Marisil (Mobile Augmented Reality Interface Sign Interpretation Language)
http://marisil.org/
mobile technology whose interface is hand-gesture-based augmented reality




10
http://www.engadget.com/2005/10/02/pioneer-develops-input-device-for-3d-drawing/


http://en.wikipedia.org/wiki/Gesture_recognition

http://en.wikipedia.org/wiki/Depth_perception

"human detection IP"
http://www.visionbib.com/bibliography/motion-f733.html
VideoProtein  http://www.videoprotein.com/

"depth map IP application"
http://altruisticrobot.tistory.com/219
posted by maetel
2010. 2. 22. 19:21 Computer Vision
Coelho, C., Heller, A., Mundy, J. L., Forsyth, D. A., and Zisserman, A.1992. An experimental evaluation of projective invariants. In Geometric invariance in Computer Vision, J. L. Mundy and A. Zisserman, Eds. Mit Press Series Of Artificial Intelligence Series. MIT Press, Cambridge, MA, 87-104.

posted by maetel
2010. 2. 19. 16:40 Computer Vision
[Special Issue: Image Recognition and Understanding] Augmented Reality: Technology and Trends
Yongduek Seo, Jong-Sung Kim, Ki-Sang Hong (Image Processing Lab, Dept. of Electrical and Electronic Engineering, POSTECH)
The Institute of Electronics Engineers of Korea, Journal of the IEEK, Vol. 29, No. 7, July 2002, pp. 110-120 (11 pages)



camera self-calibration
: the process of obtaining the position, orientation, focal length, etc. of the cameras used to acquire a set of images, when a 3D VRML model of an object is to be built from those images

projective geometric method

1. Augmented reality through 3D model reconstruction

SFM = structure from motion
: computes the camera parameters and the relative pose between the cameras of each frame of an image sequence, and uses this information to compute the approximate 3D structure of the objects seen in the sequence

trilinearity
: the algebraic relation among three perspective views observing an arbitrary 3D structure

trifocal tensor
: the mathematical model of trilinearity
(used for matching feature points and lines across images, computing the projective cameras, and reconstructing the projective structure)
(known to give more accurate results than methods based on epipolar geometry)



The core technology of an SFM system is computing the cameras accurately from the image sequence.

1) projective reconstruction
By accurately linking the feature points and feature lines extracted across the image sequence, the initial relation between the observed 2D features and the 3D structure we want to reconstruct is computed (by determining the parameters of the cameras that actually acquired the images and their relative poses).

Computing the trifocal tensor demands very precise values, so incorrectly matched feature points or lines must not enter it. LMedS (Least Median of Squares) or RANSAC (Random Sample Consensus) techniques are used to remove such outliers.

The trifocal tensors computed successively over the sequence in units of three views, together with the feature points and lines, are integrated through a multi-view registration step that aligns them to an arbitrary common reference frame. The integrated values then go through projective bundle adjustment, which minimizes the error incurred under projective geometry.


2) camera auto-calibration
Needed to convert projective information into Euclidean information: using the geometric properties of the 2D image information, the 3D structure is converted from projective to Euclidean while the real-world camera parameters (focal length, principal point, aspect ratio, skew) and the relative poses of the cameras are computed accurately.

In the camera auto-calibration methodology no separately designed calibration pattern is used; in that case the capability of real-time computation is lost.

3) (Euclidean) structure reconstruction
Obtaining the 3D geometric information of the model
: to turn the 3D information reconstructed through the auto-calibration step into a graphics model, the 3D data are first triangulated into a polygonal mesh model, and texture is then added to increase the realism of the model.


2. Augmented reality through camera calibration

1) The relations among the coordinate frame fixed to the calibration pattern (W), the camera frame (C), and the graphics frame (G) are established in advance.

2) The relative coordinate transformation between the camera and the calibration pattern is obtained every frame through image processing.

3) (Since the graphics frame and the pattern frame are fixed in advance, the relative positions of the virtual objects to be composited by computer graphics are already known before the calibration step.) The camera image is analyzed to recognize the calibration pattern; the 3D coordinates of the pattern and their image positions are obtained, and the relation between the two yields the camera calibration.

cross ratio

4) The trifocal tensor is computed to obtain the initial camera information, from which an initial reconstruction is built.

5) For each image of the sequence, matches with the previous image are found through image-based matching (using normalized cross-correlation (NCC)).

6) Based on the RANSAC algorithm, the camera matrix for the new image is computed, and 3D coordinates are computed for the newly appearing matches (including a step that removes wrongly obtained matches).

7) Euclidean reconstruction: the values defined in projective space are converted into values defined in Euclidean space.

Since the formulas of camera auto-calibration and of the computations in projective space are all nonlinear, the values obtained by the least-squares error method often fail to satisfy the original equations properly. A nonlinear optimization step is therefore always necessary, and it needs to be placed appropriately at the final stage of the conversion to Euclidean space and when computing the reconstructed values in projective space.










 

posted by maetel
2010. 2. 11. 03:03 Computer Vision
Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000. http://doi.ieeecomputersociety.org/10.1109/34.888718
presentation


Z. Zhang. Flexible Camera Calibration By Viewing a Plane From Unknown Orientations. International Conference on Computer Vision (ICCV'99), Corfu, Greece, pages 666-673, September 1999.


http://research.microsoft.com/en-us/um/people/zhang/calib/

MATLAB code:
http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html


Microsoft Research report


cv. Image Processing and Computer Vision — OpenCV 2.0 C Reference
-> Camera Calibration and 3D Reconstruction — OpenCV 2.0 C Reference

OpenCV: Image Processing and Computer Vision Reference Manual
file:///opt/local/share/opencv/doc/ref/opencvref_cv.htm

posted by maetel
2010. 2. 10. 18:27 Computer Vision
Sawhney, H. S. and Kumar, R. 1999. True Multi-Image Alignment and Its Application to Mosaicing and Lens Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 21, 3 (Mar. 1999), 235-243. DOI= http://dx.doi.org/10.1109/34.754589

posted by maetel
2010. 2. 10. 16:53 Computer Vision
Gibbs, S., Arapis, C., Breiteneder, C., Lalioti, V., Mostafawy, S., and Speier, J. 1998.
Virtual Studios: An Overview. IEEE MultiMedia 5, 1 (Jan. 1998), 18-35.
DOI= http://dx.doi.org/10.1109/93.664740


posted by maetel
2010. 2. 10. 15:47 Computer Vision
Seong-Woo Park, Yongduek Seo, Ki-Sang Hong: Real-Time Camera Calibration for Virtual Studio. Real-Time Imaging 6(6): 433-448 (2000)
doi:10.1006/rtim.1999.0199

Seong-Woo Park, Yongduek Seo and Ki-Sang Hong

Dept. of E.E. POSTECH, San 31, Hyojadong, Namku, Pohang, Kyungbuk, 790-784, Korea


Abstract

In this paper, we present an overall algorithm for real-time camera parameter extraction, which is one of the key elements in implementing virtual studio, and we also present a new method for calculating the lens distortion parameter in real time. In a virtual studio, the motion of a virtual camera generating a graphic studio must follow the motion of the real camera in order to generate a realistic video product. This requires the calculation of camera parameters in real-time by analyzing the positions of feature points in the input video. Towards this goal, we first design a special calibration pattern utilizing the concept of cross-ratio, which makes it easy to extract and identify feature points, so that we can calculate the camera parameters from the visible portion of the pattern in real-time. It is important to consider the lens distortion when zoom lenses are used because it causes nonnegligible errors in the computation of the camera parameters. However, the Tsai algorithm, adopted for camera calibration, calculates the lens distortion through nonlinear optimization in triple parameter space, which is inappropriate for our real-time system. Thus, we propose a new linear method by calculating the lens distortion parameter independently, which can be computed fast enough for our real-time application. We implement the whole algorithm using a Pentium PC and Matrox Genesis boards with five processing nodes in order to obtain the processing rate of 30 frames per second, which is the minimum requirement for TV broadcasting. Experimental results show this system can be used practically for realizing a virtual studio.


Journal of the Institute of Electronics Engineers of Korea, Vol. 36-S, No. 7, July 1999
Real-Time Camera Tracking for Virtual Studio
Seong-Woo Park, Yongduek Seo, Ki-Sang Hong, pp. 90-103 (14 pages)
http://uci.or.kr/G300-j12265837.v36n07p90

Bibliographic link: Korea Institute of Science and Technology Information (KISTI)
To implement a virtual studio, it is essential to determine the camera's motion in real time. To resolve the drawbacks of the mechanical camera-motion tracking methods used in existing virtual studio implementations, this paper proposes an overall algorithm that applies computer vision techniques to the images obtained from the camera to extract the camera parameters in real time, and discusses how to organize a system for an actual implementation. For real-time camera parameter extraction, we propose a method for automatically extracting and identifying feature points in the images, and a method for resolving the computational cost of calculating the lens distortion characteristics during camera calibration.



Practical ways to calculate camera lens distortion for real-time camera calibration
Pattern Recognition, Volume 34, Issue 6, June 2001, Pages 1199-1206
Seong-Woo Park, Ki-Sang Hong




generating virtual studio




Matrox Genesis boards
http://www.matrox.com/imaging/en/support/legacy/

http://en.wikipedia.org/wiki/Virtual_studio
http://en.wikipedia.org/wiki/Chroma_key

camera tracking system : electromechanical / optical
pattern recognition
2D-3D pattern matches
planar pattern


feature extraction -> image-model matching & identification -> camera calibration
: to design the pattern by applying the concept of cross-ratio and to identify the pattern automatically


To automatically identify the feature points found in an image, we need a property that takes the same value for points in space and for their corresponding points in the image — a geometric invariant. In this work, the pattern is designed using the cross-ratio, one of several such invariants, and a method is proposed for automatically finding and identifying the pattern in the image using this invariance.


Tsai's algorithm
R. Y. Tsai, A Versatile Camera Calibration Technique for High Accuracy 3-D Maching Vision Metrology Using Off-the-shelf TV Cameras and Lenses. IEEE Journal of Robotics & Automation 3 (1987), pp. 323–344.

direct image mosaic method
Sawhney, H. S. and Kumar, R. 1999. True Multi-Image Alignment and Its Application to Mosaicing and Lens Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 21, 3 (Mar. 1999), 235-243. DOI= http://dx.doi.org/10.1109/34.754589

Lens distortion
Richard Szeliski, Computer Vision: Algorithms and Applications: 2.1.6 Lens distortions & 6.3.5 Radial distortion

radial alignment constraint
"If we presume that the lens has only radial distortion, the direction of a distorted point is the same as the direction of an undistorted point."

cross-ratio  http://en.wikipedia.org/wiki/Cross_ratio
: planar projective geometric invariance
 - "pencil of lines"
http://mathworld.wolfram.com/CrossRatio.html
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MOHR_TRIGGS/node25.html
http://www.cut-the-knot.org/pythagoras/Cross-Ratio.shtml
http://web.science.mq.edu.au/~chris/geometry/


pattern identification

To determine the camera's motion, there must be recognizable objects in space: from any viewpoint, the feature points must be detectable in the image, and it must be possible to tell which point in space each one corresponds to.

For the pattern to be identifiable, a geometric invariant is needed — a quantity that keeps the same value no matter from which position and attitude the camera views it.

Coelho, C., Heller, A., Mundy, J. L., Forsyth, D. A., and Zisserman, A.1992. An experimental evaluation of projective invariants. In Geometric invariance in Computer Vision, J. L. Mundy and A. Zisserman, Eds. Mit Press Series Of Artificial Intelligence Series. MIT Press, Cambridge, MA, 87-104.


> initial identification process
extracting the pattern in an image: chromakeying -> gradient filtering: a first-order derivative of Gaussian (DoG) -> line fitting: deriving a distorted line (that is actually a curve) equation -> feature point tracking (using intersection filter)





http://en.wikipedia.org/wiki/Difference_of_Gaussians



real-time camera parameter extraction

Assuming an ideal lens whose optical axis is perpendicular to the image plane and does not move, the image center is computed as a fixed value during zooming. (In practice the image center also shifts while the camera zooms, because of imperfect lens behavior; but the variation is no more than 2 pixels within the operating range, so here it is ignored, and the image center under zooming is computed assuming an ideal lens.)

For zoom lenses, the image centers vary as the camera zooms because the zooming operation is executed by a composite combination of several lenses. However, when we examined the location of the image centers, its standard deviation was about 2 pixels; thus we ignored the effect of the image center change.


calculating lens distortion coefficient

Zoom lenses are zoomed by a complicated combination of several lenses so that the effective focal length and distortion coefficient vary during zooming operations.

When using the coplanar pattern with small depth variation, it turns out that focal length and z-translation cannot be separated exactly and reliably even with small noise.

In camera parameter extraction, when the feature points in space all lie on a single plane, the focal length and the z-translation become coupled, and the computed values easily lose stability.


collinearity

Collinearity is the property that a straight line in world coordinates also appears as a straight line in the image. This property is not preserved when the lens has distortion.


Once the lens distortion is calculated, we can execute camera calibration using linear methods.


filtering

In a virtual studio implementation it is essential that the time delay stays constant, so filtering methods that involve prediction (for example, the Kalman filter) could not be used in the actual application.

averaging filter








Orad  http://www.orad.co.il

Evans & Sutherland http://www.es.com









posted by maetel
2010. 2. 9. 21:22 Computer Vision

Foundations and Trends® in
Computer Graphics and Vision

Volume 4 Issue 4

3D Reconstruction from Multiple Images: Part 1 Principles

Theo Moons
KU Brussel

Luc Van Gool
KU Leuven and ETH Zurich

Maarten Vergauwen
GeoAutomation

Abstract

The issue discusses methods to extract 3-dimensional (3D) models from plain images. In particular, the 3D information is obtained from images for which the camera parameters are unknown. The principles underlying such uncalibrated structure-from-motion methods are outlined. First, a short review of 3D acquisition technologies puts such methods in a wider context, and highlights their important advantages. Then, the actual theory behind this line of research is given. The authors have tried to keep the text maximally self-contained, therefore also avoiding to rely on an extensive knowledge of the projective concepts that usually appear in texts about self-calibration 3D methods. Rather, mathematical explanations that are more amenable to intuition are given. The explanation of the theory includes the stratification of reconstructions obtained from image pairs as well as metric reconstruction on the basis of more than 2 images combined with some additional knowledge about the cameras used. Readers who want to obtain more practical information about how to implement such uncalibrated structure-from-motion pipelines may be interested in two more Foundations and Trends issues written by the same authors. Together with this issue they can be read as a single tutorial on the subject.

posted by maetel
2010. 2. 9. 17:50 Computer Vision

Undelayed initialization in bearing only SLAM


Sola, J.   Monin, A.   Devy, M.   Lemaire, T.  
CNRS, Toulouse, France;

This paper appears in: Intelligent Robots and Systems, 2005. (IROS 2005). 2005 IEEE/RSJ International Conference on
Publication Date: 2-6 Aug. 2005
On page(s): 2499- 2504
ISBN: 0-7803-8912-3
INSPEC Accession Number: 8750433
Digital Object Identifier: 10.1109/IROS.2005.1545392
Current Version Published: 2005-12-05


ref. http://homepages.laas.fr/jsola/JoanSola/eng/bearingonly.html




Conventional SLAM uses range-and-bearing sensors such as laser range scanners; replacing them with a camera, which provides rich information about the scene, loses one dimension — the distance (depth) to the recognized object — and yields bearing-only SLAM.

EKF requires Gaussian representations for all the involved random variables that form the map (the robot pose and all landmark's positions). Moreover, their variances need to be small to be able to approximate all the non linear functions with their linearized forms.

A sufficient viewpoint difference (baseline) between two input image frames is required before a landmark's position can be determined, so some time is needed to secure it.

http://en.wikipedia.org/wiki/Structure_from_motion
  1. Extract features from images
  2. Find an initial solution for the structure of the scene and the motion of the cameras
  3. Extend the solution and optimise it
  4. Calibrate the cameras
  5. Find a dense representation of the scene
  6. Infer geometric, textural and reflective properties of the scene.

sequential probability ratio test
http://en.wikipedia.org/wiki/Sequential_probability_ratio_test
http://www.agrsci.dk/plb/bembi/africa/sampling/samp_spr.html
http://eom.springer.de/S/s130240.htm

EKF (extended Kalman filter) - inconsistency and divergence
GSF (Gaussian sum filter) - computation load
FIS (Federated Information Sharing)


posted by maetel
2010. 2. 9. 01:34 Computation/Language
Google C++ Style Guide
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml    informed by jinho

C++ coding standards: 101 rules, guidelines, and best practices  By Herb Sutter, Andrei Alexandrescu

Code Complete by Steven C. McConnell
http://cc2e.com/


Code Craft: the practice of writing excellent code By Pete Goodliffe
http://oreilly.com/catalog/9781593271190   informed by neuralix



posted by maetel
2010. 1. 25. 17:36 Computer Vision
2-D visual SLAM with Extended Kalman Filter: practice

Assumption 1: for landmarks in a 2-D space, the camera provides a 1-D input image (observations of the landmarks).

Assumption 2: to begin with, every landmark is observed in every frame.


The following is the code run with the EKF algorithm;
the robot's heading and the 1-D input image line have been added to the SLAM window.
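The code itself is not reproduced here; below is a hedged sketch of what such an EKF loop can look like under the two assumptions above. The state is the robot pose (x, y, th) plus NL 2-D landmarks, the observation is one bearing per landmark per frame, and NL and the noise levels are made-up values.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NL 3                      /* number of landmarks (assumption) */
#define NS (3 + 2 * NL)           /* state dimension */

static double X[NS];              /* mean: x, y, th, m1x, m1y, ... */
static double P[NS][NS];          /* covariance */

static double wrap(double a)      /* keep angles in (-pi, pi] */
{ while (a > M_PI) a -= 2*M_PI; while (a <= -M_PI) a += 2*M_PI; return a; }

/* prediction: robot advances v and turns w; the motion Jacobian is identity
   except for the robot block, so only the robot rows/columns of P change */
void ekf_predict(double v, double w, double qxy, double qth)
{
    double th = X[2];
    double Fr[3][3] = {{1, 0, -v * sin(th)}, {0, 1, v * cos(th)}, {0, 0, 1}};
    double T[3][NS];
    int i, j, k;

    X[0] += v * cos(th);  X[1] += v * sin(th);  X[2] = wrap(X[2] + w);

    for (i = 0; i < 3; i++)                    /* T = Fr * (robot rows of P) */
        for (j = 0; j < NS; j++)
            for (T[i][j] = 0, k = 0; k < 3; k++) T[i][j] += Fr[i][k] * P[k][j];
    for (i = 0; i < 3; i++)                    /* P_rr = T Fr^T, P_rm = T */
        for (j = 0; j < NS; j++)
            P[i][j] = (j < 3) ? T[i][0] * Fr[j][0] + T[i][1] * Fr[j][1]
                              + T[i][2] * Fr[j][2]
                              : T[i][j];
    for (i = 3; i < NS; i++)                   /* keep P symmetric */
        for (j = 0; j < 3; j++) P[i][j] = P[j][i];
    P[0][0] += qxy;  P[1][1] += qxy;  P[2][2] += qth;
}

/* update with one bearing measurement z (the 1-D image) to landmark l */
void ekf_update_bearing(int l, double z, double r_noise)
{
    double H[NS] = {0}, PHt[NS], S = r_noise, K, nu;
    double dx = X[3 + 2*l] - X[0], dy = X[4 + 2*l] - X[1];
    double q = dx * dx + dy * dy;
    int i, j;

    H[0] =  dy / q;  H[1] = -dx / q;  H[2] = -1.0;   /* d(bearing)/d(robot) */
    H[3 + 2*l] = -dy / q;  H[4 + 2*l] = dx / q;      /* d(bearing)/d(landmark) */

    nu = wrap(z - (atan2(dy, dx) - X[2]));           /* innovation */
    for (i = 0; i < NS; i++)
        for (PHt[i] = 0, j = 0; j < NS; j++) PHt[i] += P[i][j] * H[j];
    for (i = 0; i < NS; i++) S += H[i] * PHt[i];     /* innovation variance */
    for (i = 0; i < NS; i++) {
        K = PHt[i] / S;                              /* Kalman gain row */
        X[i] += K * nu;
        for (j = 0; j < NS; j++) P[i][j] -= K * PHt[j];
    }
}

Because each measurement is a single bearing, S is a scalar and the update needs no general matrix inversion.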






Next steps:
1. initializing landmarks from a single camera
2. handling the varying identity and number of landmarks observed per frame --> dynamic data structure

 
posted by maetel
2010. 1. 25. 02:50 Computer Vision

Foundations and Trends® in
Robotics

Vol. 1, No. 1 (2010) 1–78
© 2009 D. Kragic and M. Vincze
DOI: 10.1561/2300000001

Vision for Robotics

Danica Kragic1 and Markus Vincze2
1 Centre for Autonomous Systems, Computational Vision and Active Perception Lab, School of Computer Science and Communication, KTH, Stockholm, 10044, Sweden, dani@kth.se
2 Vision for Robotics Lab, Automation and Control Institute, Technische Universitat Wien, Vienna, Austria, vincze@acin.tuwien.ac.at

SUGGESTED CITATION:
Danica Kragic and Markus Vincze (2010) “Vision for Robotics”,
Foundations and Trends® in Robotics: Vol. 1: No. 1, pp 1–78.
http://dx.doi.org/10.1561/2300000001


Abstract

Robot vision refers to the capability of a robot to visually perceive the environment and use this information for execution of various tasks. Visual feedback has been used extensively for robot navigation and obstacle avoidance. In the recent years, there are also examples that include interaction with people and manipulation of objects. In this paper, we review some of the work that goes beyond of using artificial landmarks and fiducial markers for the purpose of implementing visionbased control in robots. We discuss different application areas, both from the systems perspective and individual problems such as object tracking and recognition.


1 Introduction 2
1.1 Scope and Outline 4

2 Historical Perspective 7
2.1 Early Start and Industrial Applications 7
2.2 Biological Influences and Affordances 9
2.3 Vision Systems 12

3 What Works 17
3.1 Object Tracking and Pose Estimation 18
3.2 Visual Servoing–Arms and Platforms 27
3.3 Reconstruction, Localization, Navigation, and Visual SLAM 32
3.4 Object Recognition 35
3.5 Action Recognition, Detecting, and Tracking Humans 42
3.6 Search and Attention 44

4 Open Challenges 48
4.1 Shape and Structure for Object Detection 49
4.2 Object Categorization 52
4.3 Semantics and Symbol Grounding: From Robot Task to Grasping and HRI 54
4.4 Competitions and Benchmarking 56

5 Discussion and Conclusion 59

Acknowledgments 64
References 65


posted by maetel
2010. 1. 22. 00:20 Computer Vision
D-SLAM: A Decoupled Solution to Simultaneous Localization and Mapping  
Z. Wang, S. Huang and G. Dissanayake
ARC Centre of Excellence for Autonomous Systems (CAS), Faculty of Engineering, University of Technology, Sydney, Australia
International Journal of Robotics Research Volume 26 Issue 2 - Publication Date: 1 February 2007 (Special Issue on the Fifth International Conference on Field and Service Robotics, 2005)
http://dx.doi.org/10.1177/0278364906075173


posted by maetel