2010. 1. 14. 17:27 Footmarks
RoSEC international summer/winter school
Robotics-Specialized Education Consortium for Graduates sponsored by MKE

Organized by the Robotics-Specialized Graduate Education Consortium
2010 RoSEC International Winter School
Monday, January 11 - Saturday, January 16, 2010
Seminar Room 1 (Room 606), 6th floor, HIT (Hanyang Institute of Technology), Hanyang University



Robot mechanism
Byung-Ju Yi (Hanyang University, Korea)
Prof. Byung-Ju Yi, Human Robotics Lab, Hanyang University (bj@hanyang.ac.kr)
- Classification of robotic mechanisms and design considerations for robotic mechanisms
- Design issues and application examples of master-slave robotic systems
- Trends in robotic mechanism research

Actuator and Practical PID Control
Youngjin Choi (Hanyang University, Korea)
Prof. Youngjin Choi, Humanoid Robotics Lab, Hanyang University (cyj@hanyang.ac.kr)
- Operation Principle of DC/RC/Stepping Motors & Its Practice
- PID Control and Tuning
- Stability of PID Control and Application Examples
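As a concrete companion to the tuning topics above, here is a minimal discrete PID loop driving a first-order plant; the gains, time step, and plant are illustrative assumptions, not values from the lecture.

```python
# Minimal discrete PID controller (illustrative sketch; gains and plant
# below are assumptions, not the lecture's values).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant x' = -x + u toward the setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01
print(round(x, 3))
```

With the integral term the steady-state error vanishes; dropping ki would leave a constant offset for this plant.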

Coordination of Robots and Humans
Kazuhiro Kosuge (Tohoku University, Japan)
Prof. Kazuhiro Kosuge, System Robotics Lab, Tohoku University, Japan
- Robotics as systems integration
- Multiple Robots Coordination
- Human Robot Coordination and Interaction

Robot Control
Rolf Johansson (Lund University, Sweden)
Robotics Lab, Lund University, Sweden (Rolf.Johansson@control.lth.se)
- Robot motion and force control
- Stability of motion
- Robot obstacle avoidance

Lecture from Industry or Government
(S. -R. Oh, KIST)

Special Talk from Government
(Y. J. Weon, MKE)

Mobile Robot Navigation
Jae-Bok Song (Korea University, Korea)
Prof. Jae-Bok Song, Intelligent Robotics Lab, Korea University (jbsong@korea.ac.kr)
- Mapping
- Localization
- SLAM

3D Perception for Robot Vision
In Kyu Park (Inha University, Korea)
Prof. In Kyu Park, Visual Media Lab, Inha University (pik@inha.ac.kr)
- Camera Model and Calibration
- Shape from Stereo Views
- Shape from Multiple Views

Lecture from Industry or Government
(H. S. Kim, KITECH)

Roboid Studio
Kwang Hyun Park (Kwangwoon University, Korea)
Prof. Kwang Hyun Park, Department of Information and Control Engineering, Kwangwoon University (akaii@kw.ac.kr)
- Robot Contents
- Roboid Framework
- Roboid Component

Software Framework for LEGO NXT
Sanghoon Lee (Hanyang University, Korea)
Prof. Sanghoon Lee, Robotics Lab, Hanyang University
- Development Environments for LEGO NXT
- Programming Issues for LEGO NXT under RPF of OPRoS
- Programming Issues for LEGO NXT under Roboid Framework

Lecture from Industry or Government
(Robomation/Mobiletalk/Robotis)

Robot Intelligence : From Reactive AI to Semantic AI
Il Hong Suh (Hanyang University, Korea)
Prof. Il Hong Suh, Robot Intelligence/Communication Lab, Hanyang University
- Issues in Robot Intelligence
- Behavior Control: From Reactivity to Proactivity
- Use of Semantics for Robot Intelligence

AI-Robotics
Henrik I. Christensen (Georgia Tech., USA)

- Semantic Mapping
- Physical Interaction with Robots
- Efficient object recognition for robots

Lecture from Industry or Government
(M. S. Kim, Director of CIR, 21C Frontier Program)

HRI
Dongsoo Kwon (KAIST, Korea)

- Introduction to human-robot interaction
- Perception technologies of HRI
- Cognitive and emotional interaction

Robot Swarm for Environmental Monitoring
Nak Young Chong (JAIST, Japan)

- Self-organizing Mobile Robot Swarms: Models
- Self-organizing Mobile Robot Swarms: Algorithms
- Self-organizing Mobile Robot Swarms: Implementation


posted by maetel
2009. 7. 23. 18:53 Computer Vision
Brian Williams, Georg Klein and Ian Reid
(Department of Engineering Science, University of Oxford, UK)
Real-Time SLAM Relocalisation
In Proceedings of the International Conference on Computer Vision, Rio de Janeiro, Brazil, 2007
demo 1
demo 2


• real-time, high-accuracy localisation and mapping during tracking
• real-time (re-)localisation when tracking fails
• on-line learning of image patch appearance so that no prior training or map structure is required and features are added and removed during operation.


Lepetit's image patch classifier (feature appearance learning)
=> integrating the classifier more closely into the process of map-building
(by using classification results to aid in the selection of new points to add to the map)


> recovery from tracking failure: local vs. global
local -  particle filter -> rich feature descriptor
global - proximity using previous key frames


- based on SceneLib (Extended Kalman Filter)
- rotational (and a degree of perspective) invariance via local patch warping
- assuming the patch is fronto-parallel when first seen
http://freshmeat.net/projects/scenelib/

active search

innovation covariance

joint compatibility test

randomized lists key-point recognition algorithm
1. randomized: (2^D  - 1) tests -> D tests
2. independent treatment of classes
3. binary leaf scores (2^D * C * N bits for all scores)
4. intensity offset
5. explicit noise handling
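A toy sketch of the randomized-list idea: D pairwise intensity comparisons map a patch to one of 2^D leaves, and per-class scores accumulated at training time drive recognition. Patch size, the number of tests and classes, the synthetic data, and the use of full counts rather than the paper's binary leaf scores are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8          # D binary tests index one of 2**D leaves
C = 4          # number of keypoint classes
PATCH = 16     # patch side length

# Each test compares the intensities at two random pixel positions.
tests = rng.integers(0, PATCH * PATCH, size=(D, 2))

def leaf_index(patch):
    """Map a flattened patch to a leaf via D pairwise intensity comparisons."""
    bits = patch[tests[:, 0]] > patch[tests[:, 1]]
    return int(bits @ (1 << np.arange(D)))

# Train: count how often each class lands in each leaf (full counts here
# for simplicity; the paper compresses these to binary leaf scores).
scores = np.zeros((1 << D, C))
patches = rng.random((C, PATCH * PATCH))      # one prototype patch per class
for c in range(C):
    for _ in range(200):                       # noisy views of the prototype
        noisy = patches[c] + rng.normal(0, 0.05, PATCH * PATCH)
        scores[leaf_index(noisy), c] += 1

def classify(patch):
    return int(np.argmax(scores[leaf_index(patch)]))

query = patches[2] + rng.normal(0, 0.05, PATCH * PATCH)
print(classify(query))
```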

training the classifier

The RANSAC (Random Sample Consensus) Algorithm
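A minimal RANSAC sketch, using 2-D line fitting as the model for brevity (the paper applies the same hypothesize-and-verify loop to camera pose): sample a minimal set, fit a candidate model, count inliers, keep the best.

```python
import random
random.seed(4)

def ransac_line(points, iters=200, tol=0.1):
    """Fit y = a*x + b by RANSAC: repeatedly fit a candidate line to a
    minimal 2-point sample and keep the model with the most inliers."""
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                     # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus 5 gross outliers
points = [(x * 0.1, 2 * (x * 0.1) + 1) for x in range(20)]
points += [(0.5, 9.0), (1.2, -4.0), (0.3, 7.5), (1.5, 0.0), (0.9, -2.0)]
model, inliers = ransac_line(points)
print(model, len(inliers))
```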




ref.
Davison, A. J. and Molton, N. D. 2007.
MonoSLAM: Real-Time Single Camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29, 6 (Jun. 2007), 1052-1067. DOI= http://dx.doi.org/10.1109/TPAMI.2007.1049

Vision-based global localization and mapping for mobile robots
Se, S., Lowe, D.G., and Little, J.J. (MD Robotics, Brampton, Ont., Canada)

Lepetit, V. 2006.
Keypoint Recognition Using Randomized Trees. IEEE Trans. Pattern Anal. Mach. Intell. 28, 9 (Sep. 2006), 1465-1479. DOI= http://dx.doi.org/10.1109/TPAMI.2006.188

Lepetit, V., Lagger, P., and Fua, P. 2005.
Randomized Trees for Real-Time Keypoint Recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cvpr'05) - Volume 2 - Volume 02 (June 20 - 26, 2005). CVPR. IEEE Computer Society, Washington, DC, 775-781. DOI= http://dx.doi.org/10.1109/CVPR.2005.288

Fischler, M. A. and Bolles, R. C. 1981.
Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 6 (Jun. 1981), 381-395. DOI= http://doi.acm.org/10.1145/358669.358692
2009. 3. 31. 22:25 Computer Vision

Simultaneous localization and mapping with unknown data association using FastSLAM
Montemerlo, M. and Thrun, S.
Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, USA


This paper appears in: Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference on
Publication Date: 14-19 Sept. 2003
Volume: 2
On page(s): 1985 - 1991 vol.2
Number of Pages: 3 vol.lii+4450
ISSN: 1050-4729
ISBN: 0-7803-7736-2
INSPEC Accession Number:7877180
Digital Object Identifier: 10.1109/ROBOT.2003.1241885
Current Version Published: 2003-11-10


Michael Montemerlo @ Field Robotics Center, Carnegie Mellon University 
http://en.wikipedia.org/wiki/Michael_Montemerlo

Sebastian Thrun @ Stanford Artificial Intelligence Laboratory, Stanford University
http://en.wikipedia.org/wiki/Sebastian_Thrun

http://www.probabilistic-robotics.org/


FastSLAM
http://robots.stanford.edu/probabilistic-robotics/ppt/fastslam.ppt

Rao-Blackwellized Particle Filter
http://en.wikipedia.org/wiki/Particle_filter


I. Introduction



SLAM

mobile robotics

the problem of building a map of an unknown environment from a sequence of noisy landmark measurements obtained from a moving robot + a robot localization problem => SLAM

autonomous robots operating in environments where precise maps and positioning are not available


 

Extended Kalman Filter (EKF)
: used for incrementally estimating the joint posterior distribution over robot pose and landmark positions

limitations of EKF
1) Quadratic complexity: sensor updates require time quadratic in the total number of landmarks in the map
=> limits the number of landmarks to only a few hundred (whereas natural environment models frequently contain millions of features)
2) Data association / correspondence: mapping of observations to landmarks
=> associating even a small number of observations with incorrect landmarks can cause the filter to diverge



 

FastSLAM decomposes the SLAM problem into a robot localization problem, and a collection of landmark estimation problems that are conditioned on the robot pose estimate.

ref. Montemerlo, Thrun, Koller, and Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem," in Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002.

 

FastSLAM factors the SLAM posterior into a localization problem and K independent landmark estimation problems conditioned on the robot pose estimate.

> a modified particle filter to estimate the posterior over robot paths
> each particle possessing K independent Kalman filters that estimate the landmark locations conditioned on the particle's path
=> an instance of the Rao-Blackwellized particle filter

Representing particles as binary trees of Kalman Filters
-> Incorporating observations into FastSLAM in O(M log K) time (M: number of particles; K: number of landmarks)

Each particle represents a different robot pose hypothesis.
=> Data association can be considered separately for every particle.
=> 1) The noise of robot motion does not affect the accuracy of data association.
2) Incorrectly associated particles will receive lower probabilities and will be removed in future resampling steps.


 

On real-world data with ambiguous data association
Adding extra odometric noise
( Odometry is the use of data from the movement of actuators to estimate change in position over time. )
Estimating an accurate map without any odometry in the environment in which the Kalman Filter inevitably diverges.
How to incorporate negative information resulting in a measurable increase in the accuracy of the resulting map



 

II. SLAM Problem Definition


probabilistic Markov chain
http://en.wikipedia.org/wiki/Markov_chain

robot's position & heading orientation, s

K landmarks' locations, θ

i) The robot's current pose is a probabilistic function of the robot control and the previous pose at time t.

ii) The sensor measurement, range and bearing to landmarks, is a probabilistic function of the robot's current pose and the landmark being observed at time t.

=> SLAM is the problem of determining the locations of all landmarks and robot poses from measurements and controls.
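The two models in (i) and (ii) can be written as sampling functions; the unicycle motion model and the noise levels below are illustrative assumptions, not the paper's exact parametrization.

```python
import math, random
random.seed(0)

def motion_model(pose, control, dt=1.0, noise=0.02):
    """Sample s_t ~ p(s_t | u_t, s_{t-1}) for an assumed unicycle robot."""
    x, y, th = pose
    v, w = control
    v += random.gauss(0, noise)        # noisy translational velocity
    w += random.gauss(0, noise)        # noisy rotational velocity
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def measurement_model(pose, landmark, noise=0.01):
    """Range and bearing to a landmark, z_t = h(s_t, theta_k) + noise."""
    x, y, th = pose
    lx, ly = landmark
    r = math.hypot(lx - x, ly - y)
    b = math.atan2(ly - y, lx - x) - th
    return (r + random.gauss(0, noise), b + random.gauss(0, noise))

pose = motion_model((0.0, 0.0, 0.0), (1.0, 0.1))
z = measurement_model(pose, (2.0, 1.0))
print(pose, z)
```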


III. Data Association


uncertainty in the SLAM posterior, mapping between observations and landmarks
=> ambiguity in data association

i) measurement noise: uncertain landmark positions
<= 1) measurement ambiguity
2) confusion between nearby landmarks

ii) motion noise: robot pose uncertainty after incorporating a control
=> 1) adding a large amount of error to the robot's pose
2) causing a filter to diverge


IV. FastSLAM with Known Data Association


dynamic Bayes network
http://en.wikipedia.org/wiki/Dynamic_Bayesian_network

conditional independence
: The problem of determining the landmark locations could be decoupled into K independent estimation problems, one for each landmark.


FastSLAM estimates the factored SLAM posterior using a modified particle filter, with K independent Kalman Filters for each particle to estimate the landmark positions conditioned on the hypothesized robot paths. The resulting algorithm is an instance of the Rao-Blackwellized particle filter.
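A toy sketch of the factored representation described above: each particle holds one pose hypothesis plus its own independent per-landmark filters. The identity-Jacobian, per-axis Kalman update is a deliberate simplification of the paper's EKF equations, and all names and numbers are illustrative.

```python
import random
random.seed(1)

class Particle:
    """One FastSLAM particle: a pose hypothesis plus K independent
    per-landmark filters (2-D mean and diagonal covariance each)."""
    def __init__(self, pose):
        self.pose = pose          # hypothesized robot position (x, y)
        self.landmarks = {}       # id -> (mean, cov), independent filters
        self.weight = 1.0

def update_landmark(p, lid, z, R=0.01):
    """Kalman update of one landmark conditioned on this particle's pose.
    z is the landmark observed relative to the pose (identity Jacobian,
    per-axis update: a simplification of the paper's EKF)."""
    obs = (p.pose[0] + z[0], p.pose[1] + z[1])
    if lid not in p.landmarks:
        p.landmarks[lid] = ([obs[0], obs[1]], [1.0, 1.0])
        return
    mean, cov = p.landmarks[lid]
    for i in range(2):
        k = cov[i] / (cov[i] + R)          # Kalman gain per axis
        mean[i] += k * (obs[i] - mean[i])
        cov[i] *= (1 - k)

particles = [Particle((random.gauss(0, 0.1), random.gauss(0, 0.1)))
             for _ in range(10)]
for p in particles:
    for _ in range(5):   # repeated noisy sightings of landmark 0 at +(2, 0)
        update_landmark(p, 0, (2.0 + random.gauss(0, 0.05), 0.0))
print(particles[0].landmarks[0])
```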



A. Particle Filter Path Estimation

Monte Carlo Localization (MCL) algorithm
http://en.wikipedia.org/wiki/Monte_Carlo_localization
 
particle set, representing the posterior ("guess") of a robot path

proposal distribution of particle filtering
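Resampling by importance weight is what prunes improbable particles; a common concrete choice (an assumption here, since these notes do not specify one) is low-variance systematic resampling:

```python
import random
random.seed(2)

def systematic_resample(particles, weights):
    """Low-variance resampling: one random offset, then M evenly spaced
    pointers into the cumulative weight distribution."""
    M = len(particles)
    step = sum(weights) / M
    u = random.uniform(0, step)
    out, cum, i = [], weights[0], 0
    for m in range(M):
        target = u + m * step
        while cum < target:          # advance to the particle owning target
            i += 1
            cum += weights[i]
        out.append(particles[i])
    return out

particles = ['a', 'b', 'c', 'd']
weights = [0.1, 0.1, 0.7, 0.1]
resampled = systematic_resample(particles, weights)
print(resampled)   # the heavy particle 'c' is duplicated
```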








2009. 3. 31. 15:22 Computer Vision

VIP paper-reading
2009-03-31 @GA606


by adrift  



Robot Sensor Calibration: Solving AX = XB on the Euclidean Group
Frank C. Park and Bryan J. Martin

This paper appears in: Robotics and Automation, IEEE Transactions on
Publication Date: Oct 1994
Volume: 10,  Issue: 5
On page(s): 717-721
ISSN: 1042-296X
References Cited: 8
CODEN: IRAUEZ
INSPEC Accession Number: 4803588
Digital Object Identifier: 10.1109/70.326576
Current Version Published: 2002-08-06

Abstract
The equation AX=XB on the Euclidean group arises in the problem of calibrating wrist-mounted robotic sensors. In this article the authors derive, using methods of Lie theory, a closed-form exact solution that can be visualized geometrically, and a closed-form least squares solution when A and B are measured in the presence of noise 
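A quick numerical check of the Lie-group identity underlying the closed-form solution: if A X = X B holds for the rotation parts, then the axis-angle vector of A equals R_X applied to that of B. The rotation values below are arbitrary test data, not from the paper.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    K = hat(w / t)
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def log_so3(R):
    """Axis-angle vector of a rotation matrix (assumes angle < pi)."""
    t = np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1))
    if t < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return t / (2 * np.sin(t)) * w

# If R_A R_X = R_X R_B, then log(R_A)^vee = R_X log(R_B)^vee:
# R_X maps the rotation axis of B to that of A.
R_X = exp_so3(np.array([0.3, -0.2, 0.5]))
R_B = exp_so3(np.array([0.1, 0.7, -0.4]))
R_A = R_X @ R_B @ R_X.T            # so that R_A R_X = R_X R_B

alpha = log_so3(R_A)
beta = log_so3(R_B)
print(np.allclose(alpha, R_X @ beta))
```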




http://en.wikipedia.org/wiki/Lie_group

http://en.wikipedia.org/wiki/Lie_algebra

http://en.wikipedia.org/wiki/SO(3)

http://en.wikipedia.org/wiki/SO(4)



2009. 3. 27. 21:33 Computer Vision

Scalable Monocular SLAM
Eade, E. and Drummond, T.
Cambridge University

This paper appears in: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on
Publication Date: 17-22 June 2006
Volume: 1,  On page(s): 469- 476
ISSN: 1063-6919
ISBN: 0-7695-2597-0
Digital Object Identifier: 10.1109/CVPR.2006.263
Current Version Published: 2006-07-05

 
Ethan Eade & Tom Drummond
Machine Intelligence Laboratory
the Division of Information Engineering at Cambridge University Engineering Department




monocular SLAM
particle filter + top-down search => real-time, large number of landmarks

the first to apply this FastSLAM-type particle filter to single-camera SLAM


1. Introduction


SLAM = Simultaneous Localization and Mapping
: process of causally estimating both egomotion and structure in an online system

 SLAM using visual data in computer vision

SFM (= structure from motion): reconstructing scene geometry
+ causal or recursive estimation techniques

perspective-projection cameras

filtering methods to allow indirect observation models

Kalman filtering framework

Extended Kalman filter = EKF (-> to linearize the observation and dynamics models of the system)

causal estimation with recursive algorithms (cp. estimation depending only on observations up to the current time)
=> online operation (cp. SFM on global nonlinear optimization)


Davison's SLAM with a single camera
> EKF estimation framework
> top-down Bayesian estimation approach searching for landmarks in image regions constrained by estimate uncertainty (instead of performing extensive bottom-up image processing and feature matching)
> Bayesian partial-initialization scheme for incorporating new landmarks
- cannot scale to large environment


EKF = the Extended Kalman filter
- N*N covariance matrix for N landmarks
- updated with N*N computation cost

> SLAM system using a single camera as the only sensor
> frame-rate operation with many landmarks
> FastSLAM-style particle filter (the first use of such an approach in a monocular SLAM setting)
> top-down active search
> an efficient algorithm for discovering the depth of new landmarks that avoids linearization errors
> a novel method for using partially initialized landmarks to help constrain camera pose


FastSLAM
: based on the Rao-Blackwellized Particle Filter

2. Background

2.1 Scalable SLAM

> submap
bounded complexity -> bounded computation and space requirements

Montemerlo & Thrun
If the entire camera motion is known then the estimates of the positions of different landmarks become independent of each other.







Rao-Blackwellized Particle Filter



ZNCC = the Zero-mean Normalized Cross-Correlation function
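A straightforward ZNCC sketch (not code from the paper); subtracting the means and normalizing makes the score invariant to affine intensity changes:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.
    Returns a score in [-1, 1]."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

patch = np.random.default_rng(3).random((8, 8))
print(zncc(patch, 2.0 * patch + 5.0))   # affine intensity change -> score 1
```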


epipolar constraint

http://en.wikipedia.org/wiki/Epipolar_geometry
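For calibrated cameras the epipolar constraint reads x2^T E x1 = 0 with E = [t]_x R; a small numerical check with arbitrary pose and point values:

```python
import numpy as np

def essential_from_pose(R, t):
    """E = [t]_x R relates corresponding normalized image points by
    x2^T E x1 = 0 (the epipolar constraint)."""
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])
    return tx @ R

# Arbitrary second-camera pose and 3-D point (illustrative numbers).
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
X = np.array([0.5, -0.2, 4.0])

x1 = X / X[2]                # normalized image coordinates in camera 1
X2 = R @ X + t               # point expressed in camera 2's frame
x2 = X2 / X2[2]              # normalized image coordinates in camera 2
E = essential_from_pose(R, t)
print(x2 @ E @ x1)           # ~0: the constraint holds
```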


2009. 3. 26. 19:56 Computer Vision

Inverse Depth Parametrization for Monocular SLAM
Civera, J., Davison, A.J., and Montiel, J.M.M.


This paper appears in: Robotics, IEEE Transactions on
Publication Date: Oct. 2008
Volume: 24,  Issue: 5
On page(s): 932-945
ISSN: 1552-3098
INSPEC Accession Number: 10301459
Digital Object Identifier: 10.1109/TRO.2008.2003276
First Published: 2008-10-03
Current Version Published: 2008-10-31

Javier Civera, Departamento de Informática e Ingeniería de Sistemas, Universidad de Zaragoza

Andrew J. Davison, Reader in Robot Vision at the Department of Computing, Imperial College London

Jose Maria Martinez Montiel, Robotics and Real Time Group, Universidad de Zaragoza




monocular simultaneous localization and mapping  (SLAM)

representation of uncertainty

the standard extended Kalman filter (EKF)

direct parametrization of the inverse depth of features

feature initialization

camera motion estimates

6-D state vector --> converted to the Euclidean XYZ form

linearity index => automatic detection and conversion to maintain maximum efficiency
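The 6-D to XYZ conversion can be sketched as below; the azimuth/elevation convention for the unit ray m(theta, phi) follows the paper up to axis naming, which is an assumption here.

```python
import numpy as np

def inverse_depth_to_xyz(y):
    """Convert the 6-D inverse-depth parametrization
    y = (x0, y0, z0, theta, phi, rho) to a Euclidean XYZ point:
    X = (x0, y0, z0) + (1/rho) * m(theta, phi),
    where (x0, y0, z0) is the camera position at first observation and
    m is the unit ray direction (azimuth theta, elevation phi)."""
    x0, y0, z0, theta, phi, rho = y
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho

# A feature first seen from the origin, straight down the optical axis,
# with inverse depth rho = 0.1 (i.e. 10 units away):
print(inverse_depth_to_xyz([0, 0, 0, 0, 0, 0.1]))   # -> [0, 0, 10]
```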



I. Introduction


monocular camera
: projective sensor measuring the bearing of image features

monocular (adj.): of, with, or for a single eye

A stereo camera is a type of camera with two or more lenses. This allows the camera to simulate human binocular vision.

structure from motion = SFM
1) feature matching
2) global camera location & scene feature position estimates

sliding window processing

Sliding Window Protocol is a bi-directional data transmission protocol used in the data link layer (OSI model) as well as in TCP (transport layer of the OSI model). It is used to keep a record of the frame sequences sent and their respective acknowledgements received by both the users.

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images.

Odometry is the use of data from the movement of actuators to estimate change in position over time. Odometry is used by some robots, whether they be legged or wheeled, to estimate (not determine) their position relative to a starting location.

visual SLAM

probabilistic filtering approach

initializing uncertain depth estimates for distant features

Gaussian distributions implicit in the EKF

a new feature parametrization that is able to smoothly cope with initialization of features at all depth - even up to "infinity" - within the standard EKF framework: direct parametrization of inverse depth relative to the camera position from which a feature was first observed


A. Delayed and Undelayed Initialization

main map; main probabilistic state; main state vector

test for inclusion

delayed initialization
> treating newly detected features separately from the main map to reduce depth uncertainty before insertion into the full filter (with a standard XYZ representation)
- Features that retain low parallax over many frames (those very far from the camera or close to the motion epipole) are usually rejected completely because they never pass the test for inclusion.
> (in 2-D and simulation) Initialization is delayed until the measurement equation is approximately Gaussian and the point can be safely triangulated.
> 3-D monocular vision with inertial sensing + auxiliary particle filter (in high frame rate sequence)

undelayed initialization
> While features with highly uncertain depths provide little information on camera translation, they are extremely useful as bearing references for orientation estimation.
: a multiple hypothesis scheme, initializing features at various depths and pruning those not reobserved in subsequent images
> Gaussian sum filter approximated by a federated information sharing method to keep the computational overhead low
-> to spread the Gaussian depth hypotheses along the ray according to inverse depth

Davison's particle method --> (Sola et al.) Gaussian sum filter --> (Civera et al.) new inverse depth scheme

 

A Gaussian sum is a more efficient representation than particles (efficient enough that the separate Gaussians can all be put into the main state vector), but not as efficient as the single Gaussian representation that the inverse depth parametrization allows.



B. Points at Infinity

efficient undelayed initialization + features at all depths (in outdoor scenes)


Point at infinity: a feature that exhibits no parallax during camera motion due to its extreme depth
-> not used for estimating camera translation, but for estimating rotation

The homogeneous coordinate systems of visual projective geometry used normally in SFM allow explicit representation of points at infinity (and they have proven to play an important role during offline structure and motion estimation).

sequential SLAM system

Montiel and Davison: In the special case where all features are known to be infinite -- in very-large-scale outdoor scenes or when the camera rotates on a tripod -- SLAM in pure angular coordinates turns the camera into a real-time visual compass.


Our probabilistic SLAM algorithm must be able to represent the uncertainty in depth of seemingly infinite features. Observing no parallax for a feature after 10 units of camera translation does tell us something about its depth -- it gives a reliable lower bound, which depends on the amount of motion made by the camera (if the feature had been closer than this, we would have observed parallax).

The explicit consideration of uncertainty in the locations of points has not been previously required in offline computer vision algorithms, but is very important in a more difficult online case.



C. Inverse Depth Representation

There is a unified and straightforward parametrization for feature locations that can handle both initialization and standard tracking of both close and very distant features within the standard EKF framework.


standard tracking

An explicit parametrization of the inverse depth of a feature along a semiinfinite ray from the position from which it was first viewed allows a Gaussian distribution to cover uncertainty in depth that spans a depth range from nearby to infinity, and permits seamless crossing over to finite depth estimates of features that have been apparently infinite for long periods of time.

linearity index + inverse depth parametrization

The projective nature of a camera means that the image measurement process is nearly linear in this inverse depth coordinate.


Inverse depth appears in the relation between image disparity and point depth in stereo vision; it is interpreted as the parallax with respect to the plane at infinity. (Hartley and Zisserman)
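The stereo relation in one line of arithmetic: disparity d = f b / Z is exactly linear in inverse depth 1/Z (focal length, baseline, and depths are arbitrary illustrative numbers).

```python
# Stereo disparity is proportional to inverse depth: d = f * b / Z.
f = 500.0    # focal length in pixels (illustrative)
b = 0.12     # stereo baseline in meters (illustrative)
disparities = {Z: f * b / Z for Z in (1.0, 2.0, 10.0)}
print(disparities)   # halving the depth doubles the disparity
```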

Inverse depth is used to relate the motion field induced by scene points with the camera velocity in optical flow analysis. 

modified polar coordinates

target motion analysis = TMA

EKF-based sequential depth estimation from camera-known motion

multibaseline stereo

matching robustness for scene symmetries

sequential EKF process using inverse depth
( ref. Stochastic Approximation and Rate-Distortion Analysis for Robust Structure and Motion Estimation )

undelayed initialization for 2-D monocular SLAM 
( ref. A unified framework for nearby and distant landmarks in bearing-only SLAM )

FastSLAM-based system for monocular SLAM
( ref. Ethan Eade & Tom Drummond, Scalable Monocular SLAM )

special epipolar update step

FastSLAM

( ref. Civera, J., Davison, A.J., and Montiel, J.M.M., "Inverse Depth to Depth Conversion for Monocular SLAM";
J. Montiel and A. J. Davison, "A visual compass based on SLAM" )

loop-closing



II. State Vector Definition


handheld camera motion
> constant angular and linear velocity model

quaternion







