2009. 7. 21. 16:16 Computer Vision
임현, 이영삼 (Dept. of Electrical Engineering, Inha University)
Simultaneous Localization and Mapping (SLAM) for Mobile Robots
Journal of the Institute of Control, Robotics and Systems, Vol. 15, No. 2 (June 2009)
from kyu


> definition
mapping: converting the environment into information the robot can recognize, and
localization: estimating the robot's own position from that information

> issues
- uncertainty <= sensor error
- data association: extracting roughly 2-3 dimensional features from high-dimensional sensor data and matching them consistently over time
- how to manage the accumulated landmark observations efficiently


> localization
: estimating the robot's own position from observations of landmarks whose positions are known in advance
: estimating the robot's position at every time step k from the initial state x0, the control inputs up to step k-1, the observation vectors, and the landmarks whose positions are known a priori
- the uncertainty of the robot's position estimate originates from sensor error

> mapping
: modeling the environment the robot occupies by accumulating observations made relative to a reference point
: estimating the set of landmarks from the robot's positions, the observations, and the control inputs
- the inaccuracy of the map originates from sensor error

> Simultaneous Localization and Mapping (SLAM)
: estimating the robot's position within its environment while simultaneously building the map
: the joint probability of the landmark positions and the robot state vector xk at time k, given the landmark observation vectors, the initial state, and all applied control inputs
- recursive formulation + Bayes' theorem
- observation model + motion model (the state-space model of the robot's motion)
- the motion model assumes the state transition is a Markov process: the current state is described only by the previous state and the input vector, independently of the landmark set and the observations
- prediction (time-update) + correction (measurement-update)
- the uncertainty is caused by odometry and sensor errors
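As a sketch of the recursion (generic Bayes-filter form; the notation z_{1:k} for observations, u_{1:k} for control inputs, and m for the landmark map is mine, not the paper's):

P(x_k, m \mid z_{1:k}, u_{1:k}, x_0) \propto P(z_k \mid x_k, m) \int P(x_k \mid x_{k-1}, u_k) \, P(x_{k-1}, m \mid z_{1:k-1}, u_{1:k-1}, x_0) \, dx_{k-1}

The integral is the prediction (time-update) step, an application of the total probability theorem; multiplying by the observation likelihood is the correction (measurement-update) step, an application of Bayes' theorem. Both identities are quoted below.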


conditional Bayes rule
http://en.wikipedia.org/wiki/Bayes%27_theorem
 P(A|B \cap C) = \frac{P(A \cap B \cap C)}{P(B \cap C)} = \frac{P(B|A \cap C) \, P(A|C) \, P(C)}{P(C) \, P(B|C)} = \frac{P(B|A \cap C) \, P(A|C)}{P(B|C)}\,.

Markov process

total probability theorem: "law of alternatives"
http://en.wikipedia.org/wiki/Total_probability_theorem
\Pr(A)=\sum_{n} \Pr(A\cap B_n)\,
\Pr(A)=\sum_{n} \Pr(A\mid B_n)\Pr(B_n).\,

> Extended Kalman filter (EKF, 확장 칼만 필터)
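A minimal prediction-correction sketch of one EKF step (a generic textbook form in Python/NumPy, assuming a motion model f, an observation model h, and their Jacobians F and H; the names and structure are mine, not the paper's):

import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    # prediction (time update): propagate the state estimate and its covariance
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # correction (measurement update): fuse the observation z
    H_k = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H_k @ P_pred @ H_k.T + R           # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

In EKF-SLAM the state vector stacks the robot pose and the landmark positions, so P also carries the robot-landmark cross-covariances.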


http://en.wikipedia.org/wiki/Ground_truth

posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 10 Stochastic Processes

http://en.wikipedia.org/wiki/Stochastic_process
One approach to stochastic processes treats them as functions of one or several deterministic arguments ("inputs", in most cases regarded as "time") whose values ("outputs") are random variables: non-deterministic (single) quantities which have certain probability distributions.


stochastic processes = random functions of time
-> a time sequence of events

time structure of a process vs. amplitude structure of a random variable
(autocorrelation function and autocovariance function vs. expected value and variance)
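For reference, the standard definitions in Yates & Goodman's notation:

R_X(t, \tau) = E[X(t)\,X(t+\tau)], \qquad C_X(t, \tau) = R_X(t, \tau) - \mu_X(t)\,\mu_X(t+\tau)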

Poisson

Brownian

Gaussian

Wide sense stationary processes

cross-correlation



10.1 Definitions and Examples



NRZ waveform
http://en.wikipedia.org/wiki/Non-return-to-zero


posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 9 Estimation of a Random Variable

prediction
A predictor uses random variables produced in earlier subexperiments to estimate a random variable produced by a future subexperiment.


9.1 Optimum Estimation Given Another Random Variable


The estimate of X that produces the minimum mean square error is the expected value (or conditional expected value) of X calculated with the probability model that incorporates the available information.

Blind estimation of X

Estimation of X given an event

Minimum Mean Square Estimation of X given Y
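The corresponding optimum (minimum mean square error) estimates, as standard results stated in my notation:

\hat{x}_B = E[X] \ \text{(blind)}, \qquad \hat{x} = E[X \mid A] \ \text{(given an event } A\text{)}, \qquad \hat{x}_M(y) = E[X \mid Y = y] \ \text{(given } Y\text{)}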



9.2 Linear Estimation of X given Y
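The optimum linear estimator and its mean square error (a standard result; \rho_{X,Y} is the correlation coefficient):

\hat{X}_L(Y) = \mu_X + \rho_{X,Y}\,\frac{\sigma_X}{\sigma_Y}\,(Y - \mu_Y), \qquad e_L^* = \sigma_X^2\,(1 - \rho_{X,Y}^2)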


9.3 MAP and ML Estimation


9.4 Linear Estimation of Random Variables from Random Vectors




posted by maetel

Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 8 Hypothesis Testing


statistical inference method
1) perform an experiment
2) observe an outcome
3) state a conclusion
 
http://en.wikipedia.org/wiki/Statistical_inference


8.1 Significance Testing

http://en.wikipedia.org/wiki/Significance_testing
In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. "A statistically significant difference" simply means there is statistical evidence that there is a difference.  




http://en.wikipedia.org/wiki/Hypothesis_testing

8.2 Binary Hypothesis Testing

http://en.wikipedia.org/wiki/Binary_classification


Maximum A Posteriori Probability (MAP) Test

http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation

Minimum Cost Test

Neyman-Pearson Test

http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma

Maximum Likelihood Test

http://en.wikipedia.org/wiki/Maximum_likelihood
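All four binary tests can be written as likelihood ratio tests with different thresholds (a standard summary in my own notation: L(s) = f(s \mid H_0)/f(s \mid H_1), with C_{ij} the cost of choosing H_i when H_j is true). Choose H_0 when:

MAP: L(s) \ge P[H_1]/P[H_0]
minimum cost: L(s) \ge \frac{P[H_1]\,C_{01}}{P[H_0]\,C_{10}}
Neyman-Pearson: L(s) \ge \gamma, with \gamma chosen so the false alarm probability P[\text{choose } H_1 \mid H_0] meets the design constraint
ML: L(s) \ge 1 (MAP with equal priors)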


8.3 Multiple Hypothesis Test

posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 7 Parameter Estimation Using the Sample Mean

statistical inference

http://en.wikipedia.org/wiki/Statistical_inference
Statistical inference or statistical induction comprises the use of statistics and random sampling to make inferences concerning some unknown aspect of a population


7.1  Sample Mean: Expected Value and Variance

The sample mean converges to a constant as the number of repetitions of an experiment increases.

Although the result of a single experiment is unpredictable, predictable patterns emerge as we collect more and more data.


sample mean
= numerical average of the observations
: the sum of the sample values divided by the number of trials
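In symbols (standard definitions and results):

M_n(X) = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad E[M_n(X)] = \mu_X, \qquad \operatorname{Var}[M_n(X)] = \frac{\sigma_X^2}{n}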


7.2 Deviation of a Random Variable from the Expected Value

Markov Inequality
: an upper bound on the probability that a sample value of a nonnegative random variable exceeds the expected value by any arbitrary factor

http://en.wikipedia.org/wiki/Markov_inequality
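In symbols, for nonnegative X and any c > 0:

P[X \ge c] \le \frac{E[X]}{c}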


Chebyshev Inequality 
: The probability of a large deviation from the mean is inversely proportional to the square of the deviation

http://en.wikipedia.org/wiki/Chebyshev_inequality
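In symbols:

P[|X - \mu_X| \ge c] \le \frac{\sigma_X^2}{c^2}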


7.3 Point Estimates of Model Parameters

http://en.wikipedia.org/wiki/Estimation_theory
estimating the values of parameters based on measured/empirical data. The parameters describe an underlying physical setting in such a way that the value of the parameters affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

http://en.wikipedia.org/wiki/Point_estimation
the use of sample data to calculate a single value (known as a statistic) which is to serve as a "best guess" for an unknown (fixed or random) population parameter


relative frequency (of an event)

point estimates :
bias
consistency
accuracy

consistent estimator
: a sequence of estimates that converges in probability to the parameter of the probability model



The sample mean is an unbiased, consistent estimator of the expected value of a random variable.

The sample variance is a biased estimate of the variance of a random variable.
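Specifically (a standard result), with M_n(X) the sample mean:

E\left[ \frac{1}{n} \sum_{i=1}^{n} \big( X_i - M_n(X) \big)^2 \right] = \frac{n-1}{n}\,\sigma_X^2

so dividing by n - 1 instead of n gives an unbiased estimate.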

mean square error
: expected squared difference between an estimate and the estimated parameter

 The standard error of the estimate of the expected value converges to zero as n grows without bound.

http://en.wikipedia.org/wiki/Law_of_large_numbers


7.4 Confidence Intervals

accuracy of estimate

confidence interval
: an interval around the estimate, specified by a maximum deviation of the random variable from its expected value, claimed to contain the parameter

confidence coefficient
: probability that a sample value of the random variable will be within the confidence interval
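Combining the Chebyshev inequality from 7.2 with Var[M_n(X)] = \sigma_X^2 / n gives a concrete interval (a standard result):

P[|M_n(X) - \mu_X| \le c] \ge 1 - \frac{\sigma_X^2}{n c^2}

so [M_n(X) - c, M_n(X) + c] is a confidence interval with confidence coefficient at least 1 - \sigma_X^2/(n c^2).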

posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 6 Sums of Random Variables

central limit theorem
http://en.wikipedia.org/wiki/Central_limit_theorem


6.1 Expected Values of Sums

Theorem 6.1
The expected value of a sum equals the sum of the expected values, whether or not the variables are independent.

Theorem 6.2
The variance of the sum is the sum of all the elements of the covariance matrix.
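In symbols, for W = X_1 + \cdots + X_n (standard results):

E[W] = \sum_{i=1}^{n} E[X_i], \qquad \operatorname{Var}[W] = \sum_{i=1}^{n} \operatorname{Var}[X_i] + 2 \sum_{i=1}^{n} \sum_{j=i+1}^{n} \operatorname{Cov}[X_i, X_j]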


6.2 PDF of the Sum of Two Random Variables

Theorem 6.5
The PDF of the sum of two independent random variables is the convolution of their individual PDFs.
http://en.wikipedia.org/wiki/Convolution
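In symbols, for independent X and Y with W = X + Y (a standard result):

f_W(w) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(w - x)\, dx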

linear system
http://en.wikipedia.org/wiki/Linear_system
 

6.3 Moment Generating Functions

In linear system theory, convolution in the time domain corresponds to multiplication in the frequency domain with time functions and frequency functions related by the Fourier transform.

In probability theory, we can use transform methods to replace the convolution of PDFs by multiplication of transforms.

moment generating function
: the transform of a PDF or a PMF

http://en.wikipedia.org/wiki/Moment_generating_function
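In Yates & Goodman's convention (standard definition; the moments follow from derivatives at s = 0):

\phi_X(s) = E\!\left[ e^{sX} \right], \qquad E[X^n] = \left. \frac{d^n}{ds^n} \phi_X(s) \right|_{s=0}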

http://en.wikipedia.org/wiki/Laplace_transform

region of convergence


6.4 MGF of the Sum of Independent Random Variables

Theorem 6.8
The MGF of the sum of independent random variables is the product of the individual MGFs; moment generating functions thus provide a convenient way to study the properties of sums of independent random variables.

Theorem 6.9
The sum of independent Poisson random variables is a Poisson random variable.

Theorem 6.10
The sum of independent Gaussian random variables is a Gaussian random variable.

In general, the sum of independent random variables in one family is a different kind of random variable.

Theorem 6.11
The Erlang random variable is the sum of n independent exponential random variables.
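Via MGFs (a standard derivation): an exponential (\lambda) random variable has \phi_X(s) = \lambda/(\lambda - s), so the sum of n independent exponentials has

\phi_W(s) = \left( \frac{\lambda}{\lambda - s} \right)^{n}

which is the MGF of an Erlang (n, \lambda) random variable.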


6.5 Random Sums of Independent Random Variables

random sum
: sum of iid random variables in which the number of terms in the sum is also a random variable

It is possible to express the probability model of R as a formula for the moment generating function.
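Specifically (a standard result, obtained by conditioning on N), for R = X_1 + \cdots + X_N:

\phi_R(s) = E\!\left[ \phi_X(s)^N \right] = \phi_N\!\big( \ln \phi_X(s) \big)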


The number of terms in the random sum cannot depend on the actual values of the terms in the sum.


6.6 Central Limit Theorem

Probability theory provides us with tools for interpreting observed data.

bell-shaped curve = normal distribution


Many practical phenomena produce data that can be modeled as Gaussian random variables.


http://en.wikipedia.org/wiki/Central_limit_theorem

Central Limit Theorem
The CDF of a sum of random variables more and more resembles a Gaussian CDF as the number of terms in the sum increases.
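In symbols (standard statement, for iid X_i with mean \mu_X and variance \sigma_X^2):

\lim_{n \to \infty} P\!\left[ \frac{\sum_{i=1}^{n} X_i - n \mu_X}{\sigma_X \sqrt{n}} \le z \right] = \Phi(z)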
 
Central Limit Theorem Approximation = Gaussian approximation


6.7 Applications of the Central Limit Theorem

De Moivre-Laplace Formula

http://en.wikipedia.org/wiki/Theorem_of_de_Moivre%E2%80%93Laplace
normal approximation to the binomial distribution
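For a binomial (n, p) random variable K, the approximation with continuity correction reads (standard form):

P[k_1 \le K \le k_2] \approx \Phi\!\left( \frac{k_2 + 0.5 - np}{\sqrt{np(1-p)}} \right) - \Phi\!\left( \frac{k_1 - 0.5 - np}{\sqrt{np(1-p)}} \right)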


6.8 The Chernoff Bound

Chernoff Bound

http://en.wikipedia.org/wiki/Chernoff_bound
exponentially decreasing bounds on tail distributions of sums of independent random variables
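In terms of the MGF (standard form, optimizing over s \ge 0):

P[X \ge c] \le \min_{s \ge 0} e^{-sc}\, \phi_X(s)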


posted by maetel