
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 10 Stochastic Processes

http://en.wikipedia.org/wiki/Stochastic_process
One approach to stochastic processes treats them as functions of one or several deterministic arguments ("inputs", in most cases regarded as "time") whose values ("outputs") are random variables: non-deterministic (single) quantities which have certain probability distributions.


stochastic processes = random functions of time
-> a time sequence of events

time structure of a process vs. amplitude structure of a random variable
(autocorrelation function and autocovariance function vs. expected value and variance)
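For reference, the standard definitions behind this comparison, for a process X(t):

\mu_X(t) = E[X(t)]                                        (mean function)
R_X(t, \tau) = E[X(t) X(t + \tau)]                        (autocorrelation)
C_X(t, \tau) = R_X(t, \tau) - \mu_X(t) \mu_X(t + \tau)    (autocovariance)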

Poisson

Brownian

Gaussian

Wide sense stationary processes

cross-correlation



10.1 Definitions and Examples



NRZ waveform
http://en.wikipedia.org/wiki/Non-return-to-zero
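A minimal Python sketch of one sample path of a random NRZ process (the unit amplitude and unit bit period are simplifying assumptions, not from the book):

import numpy as np

# One sample path of a random NRZ waveform: iid equiprobable +/-1 bits,
# each held constant for one full bit period.
rng = np.random.default_rng(0)
n_bits = 20                      # bits in this sample path
samples_per_bit = 50             # time resolution within one bit period
bits = rng.choice([-1, 1], size=n_bits)         # iid equiprobable symbols
waveform = np.repeat(bits, samples_per_bit)     # hold each level for one bit
t = np.arange(waveform.size) / samples_per_bit  # time axis, in bit periods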


posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 9 Estimation of a Random Variable

prediction
A predictor uses random variables produced in earlier subexperiments to estimate a random variable produced by a future subexperiment.


9.1 Optimum Estimation Given Another Random Variable


The estimate of X that produces the minimum mean square error is the expected value (or conditional expected value) of X calculated with the probability model that incorporates the available information.

Blind estimation of X

Estimation of X given an event

Minimum Mean Square Estimation of X given Y
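In each case the optimum (minimum mean square error) estimate is an expected value computed under the available information; in standard notation:

\hat{x}_B = E[X]                 (blind estimate: no observation)
\hat{x} = E[X \mid A]            (estimate given an event A)
\hat{x}_M(y) = E[X \mid Y = y]   (minimum mean square estimate given Y)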



9.2 Linear Estimation of X given Y
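For reference, the standard linear minimum mean square error estimator of X given Y and its error (a reference formula, not quoted from the book):

\hat{X}_L(Y) = \mu_X + \rho_{X,Y} \frac{\sigma_X}{\sigma_Y} (Y - \mu_Y)

e_L^* = \sigma_X^2 (1 - \rho_{X,Y}^2)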


9.3 MAP and ML Estimation
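The two estimates named here, in standard notation:

\hat{x}_{MAP}(y) = \arg\max_x f_{X|Y}(x \mid y)
\hat{x}_{ML}(y)  = \arg\max_x f_{Y|X}(y \mid x)

The ML estimate ignores the prior on X; the two coincide when the prior is uniform.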


9.4 Linear Estimation of Random Variables from Random Vectors




posted by maetel

Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 8 Hypothesis Testing


statistical inference method
1) perform an experiment
2) observe an outcome
3) state a conclusion
 
http://en.wikipedia.org/wiki/Statistical_inference


8.1 Significance Testing

http://en.wikipedia.org/wiki/Significance_testing
In statistics, a result is called statistically significant if it is unlikely to have occurred by chance. "A statistically significant difference" simply means there is statistical evidence that there is a difference.  




http://en.wikipedia.org/wiki/Hypothesis_testing

8.2 Binary Hypothesis Testing

http://en.wikipedia.org/wiki/Binary_classification


Maximum A Posteriori Probability (MAP) Test

http://en.wikipedia.org/wiki/Maximum_a_posteriori_estimation

Minimum Cost Test

Neyman-Pearson Test

http://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma

Maximum Likelihood Test

http://en.wikipedia.org/wiki/Maximum_likelihood
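All four binary tests can be written as likelihood ratio tests that differ only in the threshold. In the standard formulation, with s the observed outcome and C_{ij} the cost of deciding H_i when H_j is true (zero cost for correct decisions):

L(s) = P[s \mid H_0] / P[s \mid H_1]

MAP:            accept H_0 if L(s) \ge P[H_1] / P[H_0]
Minimum cost:   accept H_0 if L(s) \ge (P[H_1] C_{01}) / (P[H_0] C_{10})
Neyman-Pearson: accept H_0 if L(s) \ge \gamma, with \gamma set by the false alarm constraint
ML:             accept H_0 if L(s) \ge 1   (MAP with equal priors)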


8.3 Multiple Hypothesis Test

posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 7 Parameter Estimation Using the Sample Mean

statistical inference

http://en.wikipedia.org/wiki/Statistical_inference
Statistical inference or statistical induction comprises the use of statistics and random sampling to make inferences concerning some unknown aspect of a population


7.1  Sample Mean: Expected Value and Variance

The sample mean converges to a constant as the number of repetitions of an experiment increases.

Although the result of a single experiment is unpredictable, predictable patterns emerge as we collect more and more data.


sample mean
= numerical average of the observations
: the sum of the sample values divided by the number of trials
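In symbols:

M_n(X) = \frac{1}{n} \sum_{i=1}^{n} X_i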


7.2 Deviation of a Random Variable from the Expected Value

Markov Inequality
: an upper bound on the probability that a sample value of a nonnegative random variable exceeds the expected value by any arbitrary factor

http://en.wikipedia.org/wiki/Markov_inequality


Chebyshev Inequality 
: The probability of a large deviation from the mean is inversely proportional to the square of the deviation

http://en.wikipedia.org/wiki/Chebyshev_inequality
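The two bounds in standard form, for c > 0:

P[X \ge c] \le \frac{E[X]}{c}                      (Markov; X nonnegative)

P[|X - \mu_X| \ge c] \le \frac{\sigma_X^2}{c^2}    (Chebyshev)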


7.3 Point Estimates of Model Parameters

http://en.wikipedia.org/wiki/Estimation_theory
estimating the values of parameters based on measured/empirical data. The parameters describe an underlying physical setting in such a way that the value of the parameters affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements.

http://en.wikipedia.org/wiki/Point_estimation
the use of sample data to calculate a single value (known as a statistic) which is to serve as a "best guess" for an unknown (fixed or random) population parameter


relative frequency (of an event)

point estimates :
bias
consistency
accuracy

consistent estimator
: a sequence of estimates that converges in probability to the parameter of the probability model



The sample mean is an unbiased, consistent estimator of the expected value of a random variable.

The sample variance is a biased estimate of the variance of a random variable.
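A quick numerical illustration of these two statements (a NumPy sketch, not from the book; the Gaussian source and the sample size n = 10 are arbitrary choices):

import numpy as np

# Estimate the mean and variance of N(0, 1) from n samples, many times over,
# to expose the bias of the 1/n sample variance.
rng = np.random.default_rng(1)
n, trials = 10, 100_000
samples = rng.normal(0.0, 1.0, size=(trials, n))

print(samples.mean(axis=1).mean())         # ~0.0: the sample mean is unbiased
print(samples.var(axis=1, ddof=0).mean())  # ~0.9 = (n-1)/n: the 1/n variance is biased low
print(samples.var(axis=1, ddof=1).mean())  # ~1.0: the 1/(n-1) correction removes the bias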

mean square error
: expected squared difference between an estimate and the estimated parameter

 The standard error of the estimate of the expected value converges to zero as n grows without bound.

http://en.wikipedia.org/wiki/Law_of_large_numbers


7.4 Confidence Intervals

accuracy of estimate

confidence interval
: an interval around the estimate, defined by a bound on the difference between a random variable (here, the sample mean) and its expected value

confidence coefficient
: probability that a sample value of the random variable will be within the confidence interval
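Applying the Chebyshev inequality to the sample mean ties the two definitions together:

P[|M_n(X) - \mu_X| \le c] \ge 1 - \frac{\sigma_X^2}{n c^2}

so (M_n(X) - c, M_n(X) + c) is a confidence interval whose confidence coefficient is at least 1 - \sigma_X^2 / (n c^2), and the bound tightens as n grows.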

posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 6 Sums of Random Variables

central limit theorem
http://en.wikipedia.org/wiki/Central_limit_theorem


6.1 Expected Values of Sums

Theorem 6.1
The expected value of the sum equals the sum of the expected values, whether or not the random variables are independent.

Theorem 6.2
The variance of the sum is the sum of all the elements of the covariance matrix.


6.2 PDF of the Sum of Two Random Variables

Theorem 6.5
http://en.wikipedia.org/wiki/Convolution
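The result behind the link: for W = X + Y with X and Y independent,

f_W(w) = \int_{-\infty}^{\infty} f_X(x) f_Y(w - x) \, dx

i.e., the PDF of the sum is the convolution of the two PDFs.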

linear system
http://en.wikipedia.org/wiki/Linear_system
 

6.3 Moment Generating Functions

In linear system theory, convolution in the time domain corresponds to multiplication in the frequency domain with time functions and frequency functions related by the Fourier transform.

In probability theory, we can use transform methods to replace the convolution of PDFs by multiplication of transforms.

moment generating function
: the transform of a PDF or a PMF

http://en.wikipedia.org/wiki/Moment_generating_function

http://en.wikipedia.org/wiki/Laplace_transform

region of convergence
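In symbols, for s inside the region of convergence:

\phi_X(s) = E[e^{sX}]

and moments follow by differentiating at s = 0:

E[X^n] = \left. \frac{d^n \phi_X(s)}{ds^n} \right|_{s=0}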


6.4 MGF of the Sum of Independent Random Variables

Theorem 6.8
Moment generating functions provide a convenient way to study the properties of sums of independent finite discrete random variables.
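The key fact, and the reason transforms replace convolutions: for W = X_1 + \cdots + X_n with independent X_i,

\phi_W(s) = \prod_{i=1}^{n} \phi_{X_i}(s)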

Theorem 6.9
The sum of independent Poisson random variables is a Poisson random variable.

Theorem 6.10
The sum of independent Gaussian random variables is a Gaussian random variable.

In general, the sum of independent random variables in one family is a different kind of random variable.

Theorem 6.11
The Erlang random variable is the sum of n independent exponential random variables.


6.5 Random Sums of Independent Random Variables

random sum
: sum of iid random variables in which the number of terms in the sum is also a random variable

It is possible to express the probability model of R as a formula for its moment generating function (given below).


The number of terms in the random sum cannot depend on the actual values of the terms in the sum.
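Under exactly that restriction (N independent of the iid terms X_i), the formula for R = X_1 + \cdots + X_N is

\phi_R(s) = \phi_N(\ln \phi_X(s))

which follows by conditioning on N: E[e^{sR} \mid N = n] = \phi_X(s)^n = e^{n \ln \phi_X(s)}.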


6.6 Central Limit Theorem

Probability theory provides us with tools for interpreting observed data.

bell-shaped curve = normal distribution


Many practical phenomena produce data that can be modeled as Gaussian random variables.


http://en.wikipedia.org/wiki/Central_limit_theorem

Central Limit Theorem
The CDF of a sum of random variables increasingly resembles a Gaussian CDF as the number of terms in the sum grows.
 
Central Limit Theorem Approximation = Gaussian approximation
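Stated precisely for iid X_i with mean \mu_X and variance \sigma_X^2, with \Phi the standard Gaussian CDF:

Z_n = \frac{\sum_{i=1}^{n} X_i - n \mu_X}{\sqrt{n \sigma_X^2}}, \qquad \lim_{n \to \infty} F_{Z_n}(z) = \Phi(z)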


6.7 Applications of the Central Limit Theorem

De Moivre-Laplace Formula

http://en.wikipedia.org/wiki/Theorem_of_de_Moivre%E2%80%93Laplace
normal approximation to the binomial distribution
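For K binomial (n, p) with np(1 - p) large, the approximation (with the usual \pm 0.5 continuity correction) is

P[k_1 \le K \le k_2] \approx \Phi\!\left(\frac{k_2 + 0.5 - np}{\sqrt{np(1-p)}}\right) - \Phi\!\left(\frac{k_1 - 0.5 - np}{\sqrt{np(1-p)}}\right)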


6.8 The Chernoff Bound

Chernoff Bound

http://en.wikipedia.org/wiki/Chernoff_bound
exponentially decreasing bounds on tail distributions of sums of independent random variables
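In symbols: for any s \ge 0 in the region of convergence, P[X \ge c] \le e^{-sc} \phi_X(s), and optimizing over s gives the tightest version:

P[X \ge c] \le \min_{s \ge 0} e^{-sc} \phi_X(s)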


posted by maetel

Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 3 Continuous Random Variables

3.1 The Cumulative Distribution Function



3.2 Probability Density Function



3.3 Expected Values



3.4 Families of Continuous Random Variables

http://en.wikipedia.org/wiki/Uniform_distribution_(continuous)


http://en.wikipedia.org/wiki/Exponential_random_variable


http://en.wikipedia.org/wiki/Erlang_random_variable




3.5 Gaussian Random Variables

http://en.wikipedia.org/wiki/Gaussian_random_variable


3.6 Delta Functions, Mixed Random Variables


3.7 Probability Models of Derived Random Variables



3.8 Conditioning a Continuous Random Variable



3.9 MATLAB

http://en.wikipedia.org/wiki/Quantile_function
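The quantile function (inverse CDF) is what lets a uniform random number generator produce samples of an arbitrary continuous random variable. The book's examples use MATLAB; here is the same idea as a Python sketch, with the exponential distribution and rate lam = 2.0 as arbitrary choices:

import numpy as np

# Inverse-transform sampling: if U ~ uniform(0, 1) and F is a continuous CDF,
# then X = F^{-1}(U) has CDF F. For exponential(lam), F(x) = 1 - exp(-lam*x),
# so the quantile function is F^{-1}(u) = -ln(1 - u) / lam.
rng = np.random.default_rng(2)
lam = 2.0
u = rng.uniform(size=10_000)
x = -np.log(1.0 - u) / lam

print(x.mean())  # should be close to E[X] = 1/lam = 0.5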











posted by maetel
Yates & Goodman
Probability and Stochastic Processes, 2nd ed.
Chapter 2 Discrete Random Variables

2.1 Definitions


2.2 Probability Mass Function


2.3 Families of Discrete Random Variables

http://en.wikipedia.org/wiki/Bernoulli_random_variable



2.4 Cumulative Distribution Function (CDF)



2.5 Averages

http://www.amstat.org/publications/jse/v13n2/vonhippel.html



http://en.wikipedia.org/wiki/Mode_(statistics)




2.6 Functions of a Random Variable


2.7 Expected Value of a Derived Random Variable



2.8 Variance and Standard Deviation


2.9 Conditional Probability Mass Function

posted by maetel

Yates and Goodman, Probability and Stochastic Processes,
2nd Ed. John Wiley & Sons, 2005

Authors: Roy D. Yates, David J. Goodman
Contributor: David J. Goodman
Edition: 2, illustrated, revised
Publisher: John Wiley & Sons, 2004
ISBN 0471272140, 9780471272144
544 pages


http://www.wiley.com/college/yates
Publisher's page: http://he-cda.wiley.com/WileyCDA/HigherEdTitle/productCd-0471272140.html

Dr. Roy Yates received the B.S.E. degree in 1983 from Princeton University, and the S.M. and Ph.D. degrees in 1986 and 1990 from M.I.T., all in Electrical Engineering. Since 1990, he has been with the Wireless Information Networks Laboratory (WINLAB) and the ECE department at Rutgers University. He is currently an associate professor.
 
David J. Goodman is Director of WINLAB and a Professor of Electrical and Computer Engineering at Rutgers University. Before coming to Rutgers, he enjoyed a twenty-year research career at Bell Labs, where he was a Department Head in Communications Systems Research. He has made fundamental contributions to digital signal processing, speech coding, and wireless information networks.


Study page: http://bcs.wiley.com/he-bcs/Books?action=index&itemId=0471272140&bcsId=1991



Related courses>

Stochastic Signals and Systems (332:541)
Department of Electrical and Computer Engineering
Rutgers University

CMPE 107, Fall 2001 (University of California, Santa Cruz)
http://www.soe.ucsc.edu/classes/cmpe107/Fall01/

ECE 302 Probabilistic Methods in Electrical Engineering

EE8103 – Random Processes
Department of Electrical and Computer Engineering
Ryerson University

ECE 153: Probability and Random Processes for Engineers (Fall 2008)
Department of Electrical and Computer Engineering
University of California, San Diego

ECE 316: Probability Theory and Random Processes (Spring 2007)

EECS 126: Probability and Random Processes (Spring 2004)
Dept of Electrical Engineering & Computer Sciences
University of California at Berkeley

Probabilistic Systems Analysis and Applied Probability, Fall 2002
Electrical Engineering and Computer Science
MIT OpenCourseWare

MN308 Statistics and Quality Engineering, and
EC381 Probability in Electrical and Computer Engineering (Spring 2008)
College of Engineering
Boston University

 

 

posted by maetel