2010. 12. 14. 20:02 Computer Vision
Michael I. Jordan & Christopher M. Bishop, "Neural Networks", In Tucker, A. B. (Ed.) CRC Handbook of Computer Science, Boca Raton, FL: CRC Press, 1997.
download: http://www.cs.berkeley.edu/~jordan/papers/crc.ps







1. Introduction


Neural network methods have had their greatest impact in problems where statistical issues dominate and where data are easily obtained.

"conjunction of graphical algorithms and probability theory":
A neural network is first and foremost a graph with patterns represented in terms of numerical values attached to the nodes of the graph and transformations between patterns achieved via simple message-passing algorithms. Many neural network architectures, however, are also statistical processors, characterized by making particular probabilistic assumptions about data.


Based on a source of training data, the aim is to produce a statistical model of the process from which the data are generated so as to allow the best predictions to be made for new data.

statistical modeling - density estimation (unsupervised learning), classification & regression

density estimation ("unsupervised learning")
: to model the unconditional distribution of data described by some vector
- to train samples and a network model to build a representation of the probability density 
- to label regions for a new input vector
 
classification & regression ("supervised learning")
: to distinguish between input variables and target variables
- to assign each input vector to one of classes and target variables to class labels 
-> estimation of conditional densities from the joint input-target space



2. Representation

2.1 Density estimation

To form an explicit model of the input density

Gaussian mixture distribution
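As a concrete illustration (not taken from the paper), here is a minimal C++ sketch of evaluating a one-dimensional Gaussian mixture density p(x) = sum_k pi_k N(x | mu_k, sigma_k^2); the three components and their parameters are made-up values, used only to show the form of the model.

// Sketch: evaluating a 1-D Gaussian mixture density p(x) = sum_k pi_k * N(x | mu_k, sigma_k^2).
// The three components below are arbitrary illustrative values, not fitted to any data.
#include <cmath>
#include <cstdio>
#include <vector>

struct GaussComponent { double weight, mean, sigma; };

// density of a single Gaussian component at x
double gaussian( double x, double mean, double sigma )
{
    const double PI = 3.14159265358979323846;
    const double z = (x - mean) / sigma;
    return std::exp( -0.5 * z * z ) / ( sigma * std::sqrt( 2.0 * PI ) );
}

// mixture density: weighted sum of the component densities
double mixtureDensity( double x, const std::vector<GaussComponent>& comps )
{
    double p = 0.0;
    for ( size_t k = 0; k < comps.size(); ++k )
        p += comps[k].weight * gaussian( x, comps[k].mean, comps[k].sigma );
    return p;
}

int main()
{
    std::vector<GaussComponent> comps;
    comps.push_back( GaussComponent{ 0.5, -1.0, 0.5 } );
    comps.push_back( GaussComponent{ 0.3,  0.0, 1.0 } );
    comps.push_back( GaussComponent{ 0.2,  2.0, 0.7 } );   // mixing weights sum to 1

    for ( double x = -3.0; x <= 4.0; x += 0.5 )
        std::printf( "p(%+.1f) = %.4f\n", x, mixtureDensity( x, comps ) );
    return 0;
}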







2.2 Linear regression and linear discriminants






posted by maetel
2010. 11. 11. 01:26 Computer Vision
Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification (2nd ed-4th print), Wiley-Interscience

RICHARD O. DUDA
PhD, Professor in the Electrical Engineering Department at San Jose State University, San Jose, California.

PETER E. HART
PhD, Chief Executive Officer and President of Ricoh Innovations, Inc. in Menlo Park, California.

DAVID G. STORK
PhD, Chief Scientist, also at Ricoh Innovations, Inc.

lectures: 



cf.
Computer Manual in MATLAB to accompany Pattern Classification, 2nd Edition, David G. Stork (Ricoh Silicon Valley), Elad Yom-Tov, John Wiley & Sons, 2004
loyola: Building 1, 4th floor, 006.4 S885c 2004

DAVID G. STORK
PhD, Chief Scientist at Ricoh Innovations, Inc., and Consulting Professor of Electrical Engineering at Stanford University. A graduate of MIT and the University of Maryland, he is the founder and leader of the Open Mind Initiative and the coauthor, with Richard Duda and Peter Hart, of Pattern Classification, Second Edition, as well as four other books.

ELAD YOM-TOV
PhD, research scientist at IBM Research Lab in Haifa, working on the applications of machine learning to search technologies, bioinformatics, and hardware verification (among others). He is a graduate of Tel-Aviv University and the Technion.



Preface


"(Our purpose is) to give a systematic account of the major topics  in pattern recognition, based on fundamental principles"

pattern recognition

speech recognition
optical character recognition
signal classification

pattern classification
scene analysis
machine learning

handwriting & gesture recognition
lipreading
geological analysis
document searching
recognition of bubble chamber tracks of subatomic particles
human-machine interface - eg. pen-based computing
human and animal nervous systems

neurobiology
psychology

"We address a specific class of problems - pattern recognition problems - and consider the wealth of different techniques that can be applied to it."

"We discuss the relative strengths and weaknesses of various classification techniques"


statistical methods vs. syntactic methods





posted by maetel
2010. 9. 16. 00:40

This is a protected post. Enter the password to view its content.

2010. 9. 15. 21:33

This is a protected post. Enter the password to view its content.

2010. 9. 7. 17:29 Computer Vision
Using the OpenCV functions cvFindChessboardCorners() and cvDrawChessboardCorners()


bool findChessboardCorners(const Mat& image, Size patternSize, vector<Point2f>& corners, int flags=CV_CALIB_CB_ADAPTIVE_THRESH+ CV_CALIB_CB_NORMALIZE_IMAGE)

Finds the positions of the internal corners of the chessboard.

Parameters:
  • image – Source chessboard view; it must be an 8-bit grayscale or color image
  • patternSize – The number of inner corners per chessboard row and column ( patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows) )
  • corners – The output array of corners detected
  • flags

    Various operation flags, can be 0 or a combination of the following values:

    • CV_CALIB_CB_ADAPTIVE_THRESH use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
    • CV_CALIB_CB_NORMALIZE_IMAGE normalize the image gamma with EqualizeHist before applying fixed or adaptive thresholding.
    • CV_CALIB_CB_FILTER_QUADS use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads that are extracted at the contour retrieval stage.



file:///Users/lym/opencv/src/cv/cvcalibinit.cpp
https://code.ros.org/trac/opencv/browser/tags/2.1/opencv/src/cv/cvcalibinit.cpp



void drawChessboardCorners(Mat& image, Size patternSize, const Mat& corners, bool patternWasFound)

Renders the detected chessboard corners.

Parameters:
  • image – The destination image; it must be an 8-bit color image
  • patternSize – The number of inner corners per chessboard row and column. ( patternSize = cvSize(points_per_row, points_per_column) = cvSize(columns, rows) )
  • corners – The array of corners detected
  • patternWasFound – Indicates whether the complete board was found or not. One may just pass the return value of findChessboardCorners here
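A minimal sketch putting the two functions documented above together via the OpenCV 2.x C++ API (the input file name and the 10x6 pattern size are assumptions, chosen to mirror the C-API test program further below):

// Sketch: detect and draw chessboard corners with the OpenCV 2.x C++ API.
// Assumes a 10x6 grid of inner corners, as in the C-API test program below.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat image = cv::imread( "chessboard_input.jpg", 1 );   // hypothetical input file
    if ( image.empty() ) { std::cerr << "could not load image" << std::endl; return 1; }

    cv::Size patternSize( 10, 6 );            // inner corners per row and column
    std::vector<cv::Point2f> corners;
    bool found = cv::findChessboardCorners( image, patternSize, corners,
                                            CV_CALIB_CB_ADAPTIVE_THRESH + CV_CALIB_CB_NORMALIZE_IMAGE );

    // corners is only fully populated when the whole board was found
    std::cout << "found = " << found << ", corners = " << corners.size() << std::endl;

    cv::drawChessboardCorners( image, patternSize, cv::Mat(corners), found );
    cv::imwrite( "chessboard_corners.bmp", image );
    return 0;
}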



Learning OpenCV: Chapter 11. Camera Models and Calibration: Chessboards
381p


chessboard.bmp

640x480





console:
finding chessboard corners...
what = 1
chessboard corners: 215.5, 179
#0=(215.5, 179)    #1=(237.5, 178.5)    #2=(260.5, 178)    #3=(283.5, 177.5)    #4=(307, 177)    #5=(331.5, 175.5)    #6=(355.5, 174.5)    #7=(380.5, 174)    #8=(405.5, 173.5)    #9=(430.5, 172.5)    #10=(212.5, 201.5)    #11=(235.5, 201.5)    #12=(258, 200.5)    #13=(280.5, 200.5)    #14=(305.5, 199.5)    #15=(330, 198.5)    #16=(354.5, 198)    #17=(379.5, 197.5)    #18=(405.5, 196.5)    #19=(430.5, 196)    #20=(210, 224.5)    #21=(232.5, 224.5)    #22=(256, 223.5)    #23=(280, 224)    #24=(304, 223)    #25=(328.5, 222.5)    #26=(353.5, 222)    #27=(378.5, 221.5)    #28=(404.5, 221.5)    #29=(430.5, 220.5)    #30=(207, 247.5)    #31=(230.5, 247.5)    #32=(253.5, 247.5)    #33=(277.5, 247)    #34=(303, 247)    #35=(327, 246.5)    #36=(352, 246.5)    #37=(377.5, 246)    #38=(403.5, 245.5)    #39=(430, 245.5)    #40=(204.5, 271.5)    #41=(227.5, 271.5)    #42=(251.5, 271.5)    #43=(275.5, 271.5)    #44=(300, 272)    #45=(325.5, 271.5)    #46=(351, 271)    #47=(376.5, 271.5)    #48=(403, 271.5)    #49=(429.5, 271)    #50=(201.5, 295.5)    #51=(225.5, 295.5)    #52=(249.5, 296)    #53=(273.5, 296.5)    #54=(299, 297)    #55=(324, 296)    #56=(349.5, 296.5)    #57=(375.5, 296.5)    #58=(402.5, 296.5)    #59=(429, 297)   

finished









finding chessboard corners...
what = 0
chessboard corners: 0, 0
#0=(0, 0)    #1=(0, 0)    #2=(0, 0)    #3=(0, 0)    #4=(0, 0)    #5=(0, 0)    #6=(0, 0)    #7=(0, 0)    #8=(0, 0)    #9=(0, 0)    #10=(0, 0)    #11=(0, 0)    #12=(0, 0)    #13=(0, 0)    #14=(0, 0)    #15=(0, 0)    #16=(0, 0)    #17=(0, 0)    #18=(0, 0)    #19=(0, 0)    #20=(0, 0)    #21=(0, 0)    #22=(0, 0)    #23=(0, 0)    #24=(0, -2.22837e-29)    #25=(-2.22809e-29, -1.99967)    #26=(4.2039e-45, -2.22837e-29)    #27=(-2.22809e-29, -1.99968)    #28=(4.2039e-45, 1.17709e-43)    #29=(6.72623e-44, 1.80347e-42)    #30=(0, 0)    #31=(4.2039e-45, 1.45034e-42)    #32=(-2.2373e-29, -1.99967)    #33=(4.2039e-45, 2.52094e-42)    #34=(-2.2373e-29, -1.99969)    #35=(-2.22634e-29, -1.99968)    #36=(4.2039e-45, 1.17709e-43)    #37=(6.72623e-44, 1.80347e-42)    #38=(0, 0)    #39=(0, 1.80347e-42)    #40=(3.36312e-44, 5.46787e-42)    #41=(6.45718e-42, 5.04467e-44)    #42=(0, 1.80347e-42)    #43=(6.48101e-42, 5.48188e-42)    #44=(0, 1.4013e-45)    #45=(4.2039e-45, 0)    #46=(1.12104e-44, -2.22837e-29)    #47=(-2.22809e-29, -1.99969)    #48=(4.2039e-45, 6.72623e-44)    #49=(6.16571e-44, 1.80347e-42)    #50=(0, 0)    #51=(1.4013e-45, -2.27113e-29)    #52=(4.56823e-42, -1.99969)    #53=(4.2039e-45, -2.20899e-29)    #54=(-2.2373e-29, -1.9997)    #55=(-2.22619e-29, -1.99969)    #56=(4.2039e-45, 6.72623e-44)    #57=(-1.9997, 1.80347e-42)    #58=(0, -2.22957e-29)    #59=(-2.23655e-29, -2.20881e-29)   

finished










finding chessboard corners...
what = 0
chessboard corners: 0, 0
#0=(0, 0)    #1=(0, 0)    #2=(0, 0)    #3=(0, 0)    #4=(0, 0)    #5=(0, 0)    #6=(0, 0)    #7=(0, 0)    #8=(0, 0)    #9=(0, 0)    #10=(0, 0)    #11=(0, 0)    #12=(0, 0)    #13=(0, 0)    #14=(0, 0)    #15=(0, 0)    #16=(0, 0)    #17=(0, 0)    #18=(0, 0)    #19=(0, 0)    #20=(0, 0)    #21=(0, 0)    #22=(0, 0)    #23=(0, 0)    #24=(0, -2.22837e-29)    #25=(-2.22809e-29, -1.99967)    #26=(4.2039e-45, -2.22837e-29)    #27=(-2.22809e-29, -1.99968)    #28=(4.2039e-45, 1.17709e-43)    #29=(6.72623e-44, 1.80347e-42)    #30=(0, 0)    #31=(4.2039e-45, 1.45034e-42)    #32=(-2.2373e-29, -1.99967)    #33=(4.2039e-45, 2.52094e-42)    #34=(-2.2373e-29, -1.99969)    #35=(-2.22634e-29, -1.99968)    #36=(4.2039e-45, 1.17709e-43)    #37=(6.72623e-44, 1.80347e-42)    #38=(0, 0)    #39=(0, 1.80347e-42)    #40=(3.36312e-44, 5.46787e-42)    #41=(6.45718e-42, 5.04467e-44)    #42=(0, 1.80347e-42)    #43=(6.48101e-42, 5.48188e-42)    #44=(0, 1.4013e-45)    #45=(4.2039e-45, 0)    #46=(1.12104e-44, -2.22837e-29)    #47=(-2.22809e-29, -1.99969)    #48=(4.2039e-45, 6.72623e-44)    #49=(6.16571e-44, 1.80347e-42)    #50=(0, 0)    #51=(1.4013e-45, -2.27113e-29)    #52=(4.56823e-42, -1.99969)    #53=(4.2039e-45, -2.20899e-29)    #54=(-2.2373e-29, -1.9997)    #55=(-2.22619e-29, -1.99969)    #56=(4.2039e-45, 6.72623e-44)    #57=(-1.9997, 1.80347e-42)    #58=(0, -2.22957e-29)    #59=(-2.23655e-29, -2.20881e-29)   

finished







source code:
// Test: chessboard detection

#include <OpenCV/OpenCV.h> // frameworks on mac
//#include <cv.h>
//#include <highgui.h>

#include <iostream>
using namespace std;


int main()
{

    IplImage* image = cvLoadImage( "DSCN3310.jpg", 1 );
   
/*    IplImage* image = 0;
    // initialize capture from a camera
    CvCapture* capture = cvCaptureFromCAM(0); // capture from video device #0
    cvNamedWindow("camera");
                
    while(1) {
        if ( !cvGrabFrame(capture) ){
            printf("Could not grab a frame\n\7");
            exit(0);
        }
        else {
            cvGrabFrame( capture ); // capture a frame
            image = cvRetrieveFrame(capture); // retrieve the captured frame
*/           
//            cvShowImage( "camera", image );
            cvNamedWindow( "camera" );  cvShowImage( "camera", image );
   
            cout << endl << "finding chessboard corners..." << endl;
            CvPoint2D32f corners[60];   // 10 x 6 = 60 inner corners
            int numCorners = 0;         // number of corners actually found
            // cvFindChessboardCorners( image, pattern_size, corners, corner_count, flags )
            int what = cvFindChessboardCorners( image, cvSize(10,6), corners, &numCorners, CV_CALIB_CB_ADAPTIVE_THRESH );
            cout << "what = " << what << endl;
            cout << "chessboard corners: " << corners[0].x << ", " << corners[0].y << endl;            
       
    // note: when what == 0 (pattern not found), the entries beyond numCorners are uninitialized,
    // which explains the garbage coordinates in the second and third console runs above
    for( int n = 0; n < 60; n++ )
    {
        cout << "#" << n << "=(" << corners[n].x << ", " << corners[n].y << ")\t";
    }
    cout << endl;
       
            // cvDrawChessboardCorners( image, pattern_size, corners, count, pattern_was_found )
    cvDrawChessboardCorners( image, cvSize(10,6), corners, 60, what );   
   
   
    cvNamedWindow( "chessboard" ); cvMoveWindow( "chessboard", 200, 200 ); cvShowImage( "chessboard", image );
    cvSaveImage( "chessboard.bmp", image );       
            cvWaitKey(0);
//        }
//    }
   
    cout << endl << "finished" << endl;
//   cvReleaseCapture( &capture ); // release the capture source
    cvDestroyAllWindows();
   
    return 0;
}





posted by maetel
2010. 9. 4. 05:36

This is a protected post. Enter the password to view its content.

2010. 6. 8. 14:18 Computer Vision
G. Jiang and L. Quan. Detection of concentric circles for camera calibration. Computer Vision, IEEE International Conference on, 1:333– 340, 2005.

Long Quan


matlab code

posted by maetel
2010. 2. 10. 18:27 Computer Vision
Sawhney, H. S. and Kumar, R. 1999. True Multi-Image Alignment and Its Application to Mosaicing and Lens Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 21, 3 (Mar. 1999), 235-243. DOI= http://dx.doi.org/10.1109/34.754589

posted by maetel
2010. 2. 10. 15:47 Computer Vision
Seong-Woo Park, Yongduek Seo, Ki-Sang Hong: Real-Time Camera Calibration for Virtual Studio. Real-Time Imaging 6(6): 433-448 (2000)
doi:10.1006/rtim.1999.0199

Seong-Woo Park, Yongduek Seo and Ki-Sang Hong

Dept. of E.E. POSTECH, San 31, Hyojadong, Namku, Pohang, Kyungbuk, 790-784, Korea


Abstract

In this paper, we present an overall algorithm for real-time camera parameter extraction, which is one of the key elements in implementing virtual studio, and we also present a new method for calculating the lens distortion parameter in real time. In a virtual studio, the motion of a virtual camera generating a graphic studio must follow the motion of the real camera in order to generate a realistic video product. This requires the calculation of camera parameters in real-time by analyzing the positions of feature points in the input video. Towards this goal, we first design a special calibration pattern utilizing the concept of cross-ratio, which makes it easy to extract and identify feature points, so that we can calculate the camera parameters from the visible portion of the pattern in real-time. It is important to consider the lens distortion when zoom lenses are used because it causes nonnegligible errors in the computation of the camera parameters. However, the Tsai algorithm, adopted for camera calibration, calculates the lens distortion through nonlinear optimization in triple parameter space, which is inappropriate for our real-time system. Thus, we propose a new linear method by calculating the lens distortion parameter independently, which can be computed fast enough for our real-time application. We implement the whole algorithm using a Pentium PC and Matrox Genesis boards with five processing nodes in order to obtain the processing rate of 30 frames per second, which is the minimum requirement for TV broadcasting. Experimental results show this system can be used practically for realizing a virtual studio.


Journal of the Institute of Electronics Engineers of Korea, Vol. 36, Series S, No. 7, July 1999
Real-Time Camera Tracking for Virtual Studio (가상스튜디오 구현을 위한 실시간 카메라 추적)
Seong-Woo Park, Yongduek Seo, Ki-Sang Hong, pp. 90-103 (14 pages)
http://uci.or.kr/G300-j12265837.v36n07p90

Bibliographic link: Korea Institute of Science and Technology Information (KISTI)
To implement a virtual studio, it is essential to determine the camera's motion in real time. To overcome the drawbacks of the mechanical tracking methods used in existing virtual studio implementations, this paper proposes an overall algorithm that applies computer vision techniques to the images obtained from the camera in order to extract the camera parameters in real time, and discusses how to configure a system for an actual implementation. For real-time extraction of the camera parameters, we propose a method for automatically extracting and identifying feature points in the image, and a method for resolving the computational load involved in estimating the lens distortion during camera calibration.



Practical ways to calculate camera lens distortion for real-time camera calibration
Pattern Recognition, Volume 34, Issue 6, June 2001, Pages 1199-1206
Seong-Woo Park, Ki-Sang Hong




generating virtual studio




Matrox Genesis boards
http://www.matrox.com/imaging/en/support/legacy/

http://en.wikipedia.org/wiki/Virtual_studio
http://en.wikipedia.org/wiki/Chroma_key

camera tracking system : electromechanical / optical
pattern recognition
2D-3D pattern matches
planar pattern


feature extraction -> image-model matching & identification -> camera calibration
: to design the pattern by applying the concept of cross-ratio and to identify the pattern automatically


To identify automatically the feature points found in an image, we need a quantity that takes the same value for points in space and for their corresponding points in the image; such a quantity is called a geometric invariant. In this work, among the various invariants, the cross-ratio is used to design the pattern, and a method is proposed for automatically detecting and identifying the pattern in the image by exploiting this invariance property.


Tsai's algorithm
R. Y. Tsai, A Versatile Camera Calibration Technique for High Accuracy 3-D Machine Vision Metrology Using Off-the-shelf TV Cameras and Lenses. IEEE Journal of Robotics & Automation 3 (1987), pp. 323–344.

direct image mosaic method
Sawhney, H. S. and Kumar, R. 1999. True Multi-Image Alignment and Its Application to Mosaicing and Lens Distortion Correction. IEEE Trans. Pattern Anal. Mach. Intell. 21, 3 (Mar. 1999), 235-243. DOI= http://dx.doi.org/10.1109/34.754589

Lens distortion
Richard Szeliski, Computer Vision: Algorithms and Applications: 2.1.6 Lens distortions & 6.3.5 Radial distortion

radial alignment constraint
"If we presume that the lens has only radial distortion, the direction of a distorted point is the same as the direction of an undistorted point."

cross-ratio  http://en.wikipedia.org/wiki/Cross_ratio
: planar projective geometric invariance
 - "pencil of lines"
http://mathworld.wolfram.com/CrossRatio.html
http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/MOHR_TRIGGS/node25.html
http://www.cut-the-knot.org/pythagoras/Cross-Ratio.shtml
http://web.science.mq.edu.au/~chris/geometry/
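For reference, one common convention for the cross-ratio of four collinear points A, B, C, D, written with signed distances along the line, is

$(A, B; C, D) = \frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}}$

Since this value is preserved by projective transformations (in particular, by perspective imaging), the cross-ratio measured among detected feature points in the image equals the cross-ratio of the corresponding points on the physical pattern, which is what makes automatic identification of the pattern points possible.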


pattern identification

To determine the camera's motion, there must be a recognizable object in the scene. That is, from any viewpoint the feature points appearing in the image must be detectable, and it must be possible to tell which point in space each of them corresponds to.

For the pattern to be recognizable, a geometric invariant is needed that takes the same value regardless of the position and orientation from which the camera views it.

Coelho, C., Heller, A., Mundy, J. L., Forsyth, D. A., and Zisserman, A.1992. An experimental evaluation of projective invariants. In Geometric invariance in Computer Vision, J. L. Mundy and A. Zisserman, Eds. Mit Press Series Of Artificial Intelligence Series. MIT Press, Cambridge, MA, 87-104.


> initial identification process
extracting the pattern in an image: chromakeying -> gradient filtering: a first-order derivative of Gaussian (DoG) -> line fitting: deriving a distorted line (that is actually a curve) equation -> feature point tracking (using intersection filter)
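A rough OpenCV C++ sketch of the gradient-filtering step only (Gaussian smoothing followed by differentiation, which approximates a first-order derivative-of-Gaussian filter); the input file name, kernel size, and sigma are arbitrary assumptions, and the chroma-keying, line-fitting, and tracking stages are not shown:

// Sketch: Gaussian-smoothed gradient (approximate first-order derivative of Gaussian).
// Kernel size and sigma are arbitrary illustrative choices.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat gray = cv::imread( "studio_pattern.jpg", 0 );     // hypothetical keyed pattern image
    if ( gray.empty() ) return 1;

    cv::Mat smoothed, gx, gy, mag;
    cv::GaussianBlur( gray, smoothed, cv::Size(5,5), 1.0 );   // Gaussian smoothing
    cv::Sobel( smoothed, gx, CV_32F, 1, 0 );                  // derivative in x
    cv::Sobel( smoothed, gy, CV_32F, 0, 1 );                  // derivative in y
    cv::magnitude( gx, gy, mag );                             // gradient magnitude

    cv::Mat display;
    cv::normalize( mag, display, 0, 255, cv::NORM_MINMAX, CV_8U );
    cv::imwrite( "gradient_magnitude.png", display );
    return 0;
}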


R1x = 0



http://en.wikipedia.org/wiki/Difference_of_Gaussians



real-time camera parameter extraction

Assuming an ideal lens whose optical axis is perpendicular to the image plane and does not move, the image center is computed as a fixed value during the camera's zoom operation. (In practice, because of the imperfect characteristics of a real lens, the image center also shifts during zooming, but this variation is within 2 pixels over the working range; we therefore ignore it and compute the image center under the ideal-lens assumption.)

For zoom lenses, the image centers vary as the camera zooms because the zooming operation is executed by a composite combination of several lenses. However, when we examined the location of the image centers, its standard deviation was about 2 pixels; thus we ignored the effect of the image center change.


calculating lens distortion coefficient

Zoom lenses are zoomed by a complicated combination of several lenses so that the effective focal length and distortion coefficient vary during zooming operations.

When using the coplanar pattern with small depth variation, it turns out that focal length and z-translation cannot be separated exactly and reliably even with small noise.

In camera parameter extraction, when the feature points in space all lie on a single plane, the focal length and the translation along the z direction become coupled, so the computed values tend to lack stability.


collinearity

Collinearity is the property that a line in world coordinates also appears as a line in the image. This property is not preserved when the lens has distortion.


Once the lens distortion is calculated, we can execute camera calibration using linear methods.


filtering

In a virtual studio implementation it is essential that the time delay stay constant, so in practice filtering methods that involve prediction (for example, the Kalman filter) could not be used.

averaging filter
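A minimal sketch of the kind of fixed-window averaging filter implied here, applied to a single stream of camera-parameter values; the window length is an arbitrary assumption, and the point is that such a filter introduces a constant delay, unlike a predictive filter.

// Sketch: fixed-window moving-average filter over a stream of parameter values.
// The window length (5 frames) is an arbitrary choice for illustration.
#include <cstdio>
#include <deque>
#include <numeric>

class MovingAverage {
public:
    explicit MovingAverage( std::size_t window ) : window_(window) {}

    double filter( double value )
    {
        buffer_.push_back( value );
        if ( buffer_.size() > window_ )
            buffer_.pop_front();                       // keep only the most recent samples
        return std::accumulate( buffer_.begin(), buffer_.end(), 0.0 ) / buffer_.size();
    }

private:
    std::size_t window_;
    std::deque<double> buffer_;
};

int main()
{
    MovingAverage focalFilter( 5 );                    // 5-frame averaging window
    const double raw[] = { 50.0, 50.4, 49.8, 50.2, 50.1, 50.3, 49.9 };
    for ( int i = 0; i < 7; ++i )
        std::printf( "raw %.2f -> smoothed %.3f\n", raw[i], focalFilter.filter( raw[i] ) );
    return 0;
}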








Orad  http://www.orad.co.il

Evans & Sutherland http://www.es.com









posted by maetel
350p
7.1 Introduction

search for finding acceptable model parameters

stochastic methods rely on randomness
(1) Boltzmann learning (from statistical mechanics in physics)
(2) genetic algorithms (from the mathematical theory of evolution in biology)
: preferable in complex problems
: high computational burden

351p
7.2 Stochastic Search

If each pair of magnets has an associated interaction energy, then optimizing the energy of the full system means finding the configuration of magnet states that minimizes it.
As the temperature is lowered, the system has an increased probability of finding the optimum configuration.

This approach is successful for a wide range of energy functions, or energy landscapes,
but is unlikely to succeed in cases such as the "golf course landscape".


352p
7.2.2 The Boltzmann Factor

Boltzmann factor
http://en.wikipedia.org/wiki/Boltzmann_factor

partition function for a normalization constant
http://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)

http://en.wikipedia.org/wiki/Boltzmann_distribution


the # of configurations = 2^N

Fig.7.1 - A Boltzmann network, indicating the state of each node
: The optimization problem is to find a configuration (i.e., assignment of all s_i) that minimizes the energy.

Fig.7.2
: Simulated annealing method uses randomness, governed by a control parameter or "temperature" T.

The number of states declines exponentially with increasing energy.

Because of the statistical independence of the magnets, for large N the probability of finding the state in energy E also decays exponentially.

The dependence of the probability upon T in the Boltzmann factor:
At high T, the probability is distributed roughly evenly among all configurations, while at low T it is concentrated at the lowest-energy configurations.

In the case of large N, the number of configurations decays exponentially with the energy of the configuration.


> Simulated Annealing Algorithm
1) initiate randomized states & select a high initial temperature T
(in the simulation, T is a control parameter that will control the randomness)
2) choose randomly a node i with its state s_i = +1
3) calculate the system energy in this configuration
4) recalculate the energy for a candidate new state s_i = -1
5) accept this change in state if this candidate state has a lower energy, or if the energy is higher, accept this with a probability from the Boltzmann factor.
6) poll (select and test) the nodes randomly several times and set their states
7) lower the temperature and repeat the polling
8) simulated annealing terminates when the temperature is very low (near zero)
(if the cooling has been sufficiently slow, the system has a high probability of being in a low-energy state)

The occasional acceptance of a state that is energetically less favorable allows the system to jump out of unacceptable local energy minima.

 
Algorithm 1. Stochastic Simulated Annealing
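A compact C++ sketch of the stochastic simulated annealing loop summarized above, for a toy network of binary-state nodes with a random symmetric weight matrix; the network size, temperatures, and cooling schedule are arbitrary illustrative choices, not the textbook's.

// Sketch: stochastic simulated annealing on a small network of binary nodes s_i in {-1,+1}
// with energy E = -1/2 * sum_{i,j} w_ij s_i s_j. Sizes and schedule are illustrative only.
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <ctime>
#include <vector>

static double energy( const std::vector<std::vector<double> >& w, const std::vector<int>& s )
{
    double e = 0.0;
    for ( size_t i = 0; i < s.size(); ++i )
        for ( size_t j = 0; j < s.size(); ++j )
            e += w[i][j] * s[i] * s[j];
    return -0.5 * e;
}

int main()
{
    std::srand( static_cast<unsigned>( std::time(0) ) );
    const int N = 20;

    // random symmetric weights with zero diagonal
    std::vector<std::vector<double> > w( N, std::vector<double>( N, 0.0 ) );
    for ( int i = 0; i < N; ++i )
        for ( int j = i + 1; j < N; ++j )
            w[i][j] = w[j][i] = 2.0 * std::rand() / RAND_MAX - 1.0;

    // 1) random initial states and a high initial temperature T
    std::vector<int> s( N );
    for ( int i = 0; i < N; ++i ) s[i] = ( std::rand() % 2 ) ? 1 : -1;
    double T = 10.0;

    while ( T > 0.01 ) {                          // 8) stop when the temperature is near zero
        for ( int poll = 0; poll < 10 * N; ++poll ) {   // 2)-6) poll nodes randomly several times
            int i = std::rand() % N;              // 2) choose a node at random
            double e_before = energy( w, s );     // 3) energy of the current configuration
            s[i] = -s[i];                          // 4) candidate state: flip node i
            double dE = energy( w, s ) - e_before;
            // 5) keep the flip if energy decreased; otherwise keep it with Boltzmann probability
            if ( dE > 0.0 && std::exp( -dE / T ) < static_cast<double>( std::rand() ) / RAND_MAX )
                s[i] = -s[i];                      // reject: undo the flip
        }
        T *= 0.9;                                  // 7) lower the temperature and repeat polling
    }

    std::printf( "final energy: %f\n", energy( w, s ) );
    return 0;
}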





posted by maetel
2008. 11. 25. 22:10 @GSMC/박래홍: Computer Vision
380p

pattern recognition
: classifying based on the features

http://en.wikipedia.org/wiki/Pattern_recognition


Knowledge about objects and their classes gives the necessary information for object classification.

381p
9.1 Knowledge representation

artificial intelligence (AI)
http://en.wikipedia.org/wiki/Artificial_intelligence

http://en.wikipedia.org/wiki/Knowledge_representation


knowledge representation design
- simple control strategies
- a rich, well structured representation of a large set of a priori data and hypotheses

syntax
-> the symbols that may be used and the ways that they may be arranged

semantics
-> how meaning is embodied in the symbols and the symbol arrangement allowed by the syntax

representation
-> a set of syntactic and semantic conventions that make it possible to describe things

formal grammars and languages
predicate logic
production rules
semantic nets
frames
-> data structures (lists, trees, graphs, tables, hierarchies, sets, rings, nets, and matrices)


Descriptions, features
: scalar properties of objects -> feature vectors

Grammars, languages
- structures from primitives (elementary structural properties of the objects) and the relations between them
-> rules defining how the chains, trees, or graphs can be constructed from a set of symbols (primitives)

Predicate logic
- combinations of logic variables (true/false), quantifiers, and logic operators
- modus ponens and resolution
- PROLOG
- 'pure truth'
-> logic conditions and constraints into knowledge processing

Production rules
- condition action pairs
- procedural character of knowledge
=> handling uncertainty information -> expert systems

Fuzzy logic
- some reasonable threshold
- fuzzy rule
- linguistic variables

Semantic nets
- objects, their description, and a description of relations between objects
- local information and local knowledge
- relations between partial representations described at all appropriate hierarchical levels (nodes: objects / arcs: relations between objects)
- degrees of closeness

Frames, scripts
- a substitute for missing information
-> to overcome the lack of continuous information
- a tool for organizing knowledge in prototypical objects, and for description of mutual influences of objects using stereotypes of behavior in specific situations


386p
9.2 Statistical pattern recognition


object recognition -- classifier --> assigning classes to objects

pattern
: formal description of sensed objects

The classes form clusters in the feature space, which can be separated by a discrimination curve (or hyper-surface in a multi-dimensional feature space).

If the discrimination hyper-surfaces are hyper-planes, it is called a linearly separable task.


387p
9.2.1 Classification principles

input - information about features from an object
output - decision about the class of the object
class identifiers - generated output symbols
decision rule - the function describing relations between the classifier inputs and the output
discrimination functions - scalar functions defining the discrimination hyper-surfaces 

The determination of discrimination hyper-surfaces (or definition of the decision rule) is the goal of classifier design.

decision rule of a classifier
(1) find a maximum discrimination function -> linear classifier
(2) calculate the minimum distance from exemplars (sample patterns) -> piecewise linear discrimination hyper-planes

Φ-classifier
non-linear discrimination hypersurfaces of the original feature space -- non-linear transformation --> hyper-planes in the transformed feature space


390p
9.2.2 Classifier setting

Settings of the optimal classifier should be probabilistic.

optimality criteria: the value of the mean loss caused by classification

optimum decision rule:
minimum mean loss + the vector of optimal parameters

(1) minimum error criterion (Bayes criterion, maximum likelihood)

minimum-loss optimality criterion

Bayes formula
- a posteriori probabilities of classes, the conditional probability densities of objects, a priori probability, the mixture density

theoretical optimum
: "no other classifier setting can give a lower probability of the decision loss."

posteriori probabilities -- a classifier --> corresponding discrimination hyper-surfaces


(2) best approximation criterion
: the best approximation of discrimination functions by linear combinations of pre-determined functions

classifier learning: training set --> classification parameters


The information obtained from the elements of the training set must be generalized to cover the whole feature space.
(=> The classifier should be able to recognize even those objects that it has never seen before.)

sequential increase in training set size -> sequential processing of information -> classification correctness

learning: the process of automated system optimization based on the sequential presentation of examples, to minimize the optimality criterion


object description
- permissible classification error
- complexity of the classifier construction
- time required for classification
-> determination of informativity and discriminativity of measured features


393p
9.2.3 Classifier learning

(1) probability density estimation
-> minimum error criterion -> discriminant functions

priori information -> the shape of probability density functions => parametric learning

eg. normal distribution - dispersion matrix, mean value vector
+ priori estimate + confidence (weight)
 

(2) direct loss minimization
-> best approximation criterion -> decision rule

no prior information -> stochastic approximations
http://en.wikipedia.org/wiki/Stochastic_approximation

 Jack Sklansky
Pattern Classifiers and Trainable Machines

minimum distance classifier


396p
9.2.4 Support Vector Machines

http://en.wikipedia.org/wiki/Support_vector_machines

maximizing the width of the empty area (margin) between the two classes

The margin width is defined as the distance between the discrimination hypersurface in n-dimensional feature space and the closest training patterns called support vectors.
-> An optimal hypersurface has the lowest capacity.

The support vectors specify the discrimination function.


To maximize the margin, we need to minimize ∥w∥ for a quadratic programming optimization problem.
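Written out in the usual textbook form for linearly separable training data $(x_i, y_i)$ with $y_i \in \{-1,+1\}$, the margin maximization becomes the quadratic program

\begin{align}
\text{minimize }   & \tfrac{1}{2}\|w\|^2 \\
\text{subject to } & y_i \left( w^\top x_i + b \right) \geq 1, \quad i = 1,\dots,n
\end{align}

so that the margin width equals $2/\|w\|$, and the training patterns for which the constraint holds with equality are the support vectors.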

( not http://en.wikipedia.org/wiki/Lagrangian but this one: http://en.wikipedia.org/wiki/Lagrange_multipliers )
 
If we want to solve:

maximize $f(x, y)$
subject to $g(x, y) = c$

We introduce a new variable ($\lambda$) called a Lagrange multiplier to rewrite the problem as:

maximize $f(x, y) + \lambda \left( g(x, y) - c \right)$

Solving this new unconstrained problem for x, y, and λ will give us the solution (x, y) for our original constrained problem. The gradients of f and g are parallel vectors at the maximum, since the gradients are always normal to the contour lines. This occurs just when the gradient of f has no component tangential to the level sets of g. This condition is equivalent to $\nabla_{x,y} f(x,y) = \lambda \nabla_{x,y} g(x,y)$ for some λ.

$F(x, y, \lambda) = f(x, y) + \lambda \left( g(x, y) - c \right)$,

and solve

$\nabla_{x,y,\lambda} F(x, y, \lambda) = 0$.

Lagrange duality

Given a convex optimization problem in standard form

\begin{align}
\text{minimize }   & f_0(x) \\
\text{subject to } & f_i(x) \leq 0, \quad i \in \{1,\dots,m\} \\
                   & h_i(x) = 0, \quad i \in \{1,\dots,p\}
\end{align}

with the domain $\mathcal{D} \subset \mathbb{R}^n$ having non-empty interior, the Lagrangian function $L: \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as

$L(x,\lambda,\nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x)$.

The vectors λ and ν are called the dual variables or Lagrange multiplier vectors associated with the problem. The Lagrange dual function $g: \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as

$g(\lambda,\nu) = \inf_{x\in\mathcal{D}} L(x,\lambda,\nu) = \inf_{x\in\mathcal{D}} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right)$.

The dual function g is concave, even when the initial problem is not convex. The dual function yields lower bounds on the optimal value $p^*$ of the initial problem; for any $\lambda \geq 0$ and any ν we have $g(\lambda,\nu) \leq p^*$. If a constraint qualification such as Slater's condition holds and the original problem is convex, then we have strong duality, i.e. $d^* = \max_{\lambda \ge 0,\, \nu} g(\lambda,\nu) = \inf f_0 = p^*$.




ref.
Lagrange Multipliers without Permanent Scarring Dan Klein (Assistant Professor, Computer Science Division, University of California at Berkeley)

http://en.wikipedia.org/wiki/Karush-Kuhn-Tucker_conditions
a generalization of the method of Lagrange multipliers to inequality constraints.


Each of the Lagrange multipliers shares the corresponding support vectors.

The discrimination hyperplane in the feature space can be obtained from vectors in the input space and dot products in the feature space.


ref.
Dmitriy Fradkin and Ilya Muchnik "Support Vector Machines for Classification" in J. Abello and G. Carmode (Eds) "Discrete Methods in Epidemiology", DIMACS Series in Discrete Mathematics and Theoretical Computer Science, volume 70, pp. 13-20, 2006.
Succinctly describes theoretical ideas behind SVM. 
 

soft margin -> support vector machine

kernel trick to determine the similarity of two patterns
http://en.wikipedia.org/wiki/Kernel_trick

(eg. Gaussian radial basis fn. kernel -> Hilbert space)


402p
9.2.5 Cluster analysis

http://en.wikipedia.org/wiki/Cluster_analysis

1) MacQueen k-means
http://en.wikipedia.org/wiki/K-means_algorithm

within-cluster variance (written out below, after this list)

2) ISODATA cluster analysis

3) mean shift algorithm

4) fuzzy c-means algorithm

5) expectation-maximization algorithm
http://en.wikipedia.org/wiki/Expectation-maximization_algorithm
EM alternates between performing an expectation (E) step, which computes an expectation of the likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes the maximum likelihood estimates of the parameters by maximizing the expected likelihood found on the E step. The parameters found on the M step are then used to begin another E step, and the process is repeated.
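As a note on item 1 above, the within-cluster variance that k-means minimizes can be written, for clusters $C_1,\dots,C_k$ with means $\mu_j$, as

$J = \sum_{j=1}^{k} \sum_{x_i \in C_j} \| x_i - \mu_j \|^2$

The algorithm alternates between assigning each sample to its nearest cluster mean and recomputing the means, in the same spirit as the E and M steps of the EM algorithm in item 5.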



404p
9.3 Neural nets

 
http://en.wikipedia.org/wiki/Neural_net

ref.
Ben Krose & Patrick van der Smagt <An introduction to Neural Networks>


9.3.1 Feed-forward networks

9.3.2 Unsupervised learning

9.3.3 Hopfield neural nets

http://en.wikipedia.org/wiki/Hopfield_neural_network


410p
9.4 Syntactic pattern recognition


9.4.1 Grammars and languages

formal language

9.4.2 Syntactic analysis, syntactic classifier

9.4.3 Syntactic classifier learning, grammar inference


418p
9.5 Recognition as graph matching


9.5.1 Isomorphism of graphs and sub-graphs

9.5.2 Similarity of graphs


424p
9.6 Optimization techniques in recognition


9.6.1 Genetic algorithms

Reproduction
Crossover
Mutation

9.6.2 Simulated annealing


430p
9.7 Fuzzy systems


9.7.1 Fuzzy sets and fuzzy membership functions


posted by maetel
215p
5.1 Introduction

http://en.wikipedia.org/wiki/Statistical_classification

http://en.wikipedia.org/wiki/Linear_classifier

http://en.wikipedia.org/wiki/Linear_discriminant_analysis


linear discriminant functions
1) linear in the components of a feature vector
2) linear in some given set of functions of a feature vector

finding a linear discriminant function
-> minimizing a criterion function - sample risk / training error
: "A small training error does not guarantee a small test error."
=> to derive the minimum-risk linear discriminant

-> "convergence properties & computational complexities" of gradient descent procedures for minimizing criterion functions


216p
5.2 Linear Discriminant Functions and Decision Surfaces

weight vector
bias / threshold weight

5.2.1 The Two-Category Case

"The effective input is the sum of each input feauture value multiplied by its corresponding weight."

http://en.wikipedia.org/wiki/Hyperplane

"The weight vector is normal to any vector lying in the hyperplane."

A linear discriminant function divides the feature space by a hyperplane decision surface.

The orientation of the surface is determined by the normal vector w.
The location of the surface is determined by the bias w_0.

The discriminant function g(x) is proportional to the signed distance from x to the hyperplane.
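In symbols (following the chapter's notation), the two-category linear discriminant and the signed-distance statement read

$g(\mathbf{x}) = \mathbf{w}^\top \mathbf{x} + w_0, \qquad r = \frac{g(\mathbf{x})}{\|\mathbf{w}\|}$

where $r$ is the signed distance from $\mathbf{x}$ to the hyperplane $g(\mathbf{x}) = 0$: positive on the side toward which the normal vector $\mathbf{w}$ points, negative on the other side.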


218p
5.2.2 The Multicategory Case

A linear machine divides the feature space into c decision regions.

If the decision regions are contiguous, the boundary between them is a portion of the hyperplane.

With the linear machine, the decision boundary is determined by the differences of the weight vectors.

There are c(c-1)/2 pairs of regions.


The linear machines are most suitable for problems for which the conditional densities are unimodal.


219p
5.3 Generalized Linear Discriminant Functions
 

http://en.wikipedia.org/wiki/Scaling_(geometry)
if the scaling matrix is
i) a positive multiple of the identity matrix
    => hypersphere
ii) positive definite
    => hyperellipsoid
iii) some of its eigenvalues are positive and others negative
    => various hyperhyperboloids
==> general multivariate Gaussian case

 
polynomial discriminant function
-> generalized linear discriminant function
: not linear in the feature vector x but linear in arbitrary functions y (phi function) of x

The mapping from x-space to y-space reduces the problem to finding a homogeneous linear discriminant function.

Even with relatively simple functions y(x), decision surfaces induced in an x-space can be fairly complex.

the curse of dimensionality
-> the number of samples should be not less than the number of degrees of freedom. (-> ch.9)

http://en.wikipedia.org/wiki/Big_O_notation


http://en.wikipedia.org/wiki/Margin_(machine_learning)
The margin of a single data point is defined to be the distance from the data point to a decision boundary.
cf. VC dimension Tutorial Slides by Andrew Moore


http://en.wikipedia.org/wiki/Artificial_neural_network

http://en.wikipedia.org/wiki/Multilayer_perceptron


augmented feature vector, y
augmented weight vector, a

The hyperplane decision surface in y-space passes through the origin in y-space.

The mapping from d-dim x-space to (d+1)-dim y-space preserves all distance relationships among samples.
-> The distance from y to the transformed hyperplane is less than or equal to the distance from x to the original hyperplane.



223p
5.4 The Two-Category Linearly Separable Case


separating vector / solution vector (to normalize samples in the feature space)

http://en.wikipedia.org/wiki/Weight_(representation_theory)

5.4.1 Geometry and Terminology


The linear discriminant equation defines a hyperplane through the origin of weight space having y_i as a normal vector.

The solution vector must be on the positive side of every hyperplane and lie in the intersection of n half-spaces. -> solution region

-> To find a solution vector closer to the "middle" of the solution region
-> margin


224p
5.4.2 Gradient Descent Procedures

To define a criterion function J(a) that is minimized if a is a solution vector

(1) BGD = Basic Gradient Descent
The next weight vector is obtained by moving some distance from the former one in the direction of steepest descent, along the negative of the gradient of a criterion function.

(2) Newton's algorithm
- To minimize the second-order expansion for the quadratic error functions
- leads to greater improvement per step
- computational burden of inverting the Hessian matrix

learning rate (to set the step size)
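In update form, with $\eta(k)$ the learning rate and $H$ the Hessian matrix of $J$, the two rules read

$\mathbf{a}(k+1) = \mathbf{a}(k) - \eta(k)\,\nabla J(\mathbf{a}(k)) \qquad \text{(basic gradient descent)}$

$\mathbf{a}(k+1) = \mathbf{a}(k) - H^{-1}\,\nabla J(\mathbf{a}(k)) \qquad \text{(Newton's algorithm)}$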

http://en.wikipedia.org/wiki/Gradient_descent
The (negative) gradient at a point is orthogonal to the contour line going through that point.

http://en.wikipedia.org/wiki/Hessian_matrix

http://en.wikipedia.org/wiki/Taylor_series
The Taylor series is a representation of a function as an infinite sum of terms calculated from the values of its derivatives at a single point.
(As the degree of the Taylor polynomial rises, it approaches the correct function.)

http://en.wikipedia.org/wiki/Newton%27s_method

http://en.wikipedia.org/wiki/Quasi-Newton_method



227p
5.5 Minimizing The Perceptron Criterion Function

5.5.1 The Perceptron Criterion Function

http://en.wikipedia.org/wiki/Perceptron

Perceptron criterion function
- never negative, being zero only if a is a solution vector or if a is on the decision boundary.
- proportional to the sum of the distances from the misclassified samples to the decision boundary

Perceptron criterion is piecewise linear and acceptable for gradient descent.


batch training
batch Perceptron algorithm:
The next weight vector is obtained by adding some multiple of the sum of the misclassified samples to the present weight vector.
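With $\mathcal{Y}_k$ denoting the set of samples misclassified by $\mathbf{a}(k)$ and $\eta(k)$ the learning rate, the batch Perceptron update is

$\mathbf{a}(k+1) = \mathbf{a}(k) + \eta(k) \sum_{\mathbf{y} \in \mathcal{Y}_k} \mathbf{y}$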


229p
5.5.2 Convergence Proof for Single-Sample Correction

http://en.wikipedia.org/wiki/Convergence_of_random_variables

We shall modify the weight vector whenever it misclassifies a single sample.
For the purposes of the convergence proof, repeat the samples cyclically.
: We must store and revisit all of the training patterns.
: We shall only change the weight vector when there is an error

Fixed-Increment Single-Sample Perceptron
The correction is moving the weight vector in a good direction, until the samples are linearly separable.

Fig.5.13
A correction which is proportional to the pattern vector is added to the weight vector.
Each correction brings the weight vector closer to the solution region.

Theorem 5.1. Perceptron Convergence
If training samples are linearly separable, then the sequence of weight vectors given by Fixed-Increment Single-Sample Perceptron algorithm will terminate at a solution vector.


232p
5.5.3 Some Direct Generalizations

Algorithm 5: Variable-Increment Perceptron with Margin
correction with a variable increment and a margin

Algorithm 6: Batch Variable Increment Perceptron
the trajectory of the weight vector is smoothed

Algorithm 7: Balanced Winnow
for separable data the gap determined by the two constituent weight vectors can never increase in size. -> a convergence proof
http://en.wikipedia.org/wiki/Winnow_(algorithm)


235p
5.6 Relaxation Procedures

5.6.1 The Descent Algorithm

a squared error function:
- The gradient is continuous (whereas the gradient of the Perceptron criterion function is not)
- to search a smoother surface
- so smooth near the boundary of the solution region that the sequence of weight vectors can converge to a point on the boundary
- the gradient reaches the boundary point when the weight vector is zero vector
- dominated by the longest sample vectors

a squared error with margin:
- never negative, (zero iff weighted samples misclassified by the weight is equal to or more than the margin)

Algorithm 8: Batch Relaxation with Margin

Algorithm 9: Single-Sample Relaxation with Margin

relaxation
under-relaxation
over-relaxation


237p
5.6.2 Convergence Proof


238p
5.7 Nonseparable Behavior


error-correcting procedures to call for a modification of the weight vector when and only when an error is encountered

The lengths of the weight vectors produced by the fixed-increment rule are bounded, fluctuating near some limiting value.

If the components of the samples are integer-valued, the fixed-increment procedure yields a finite-state process.

Averaging the weight vectors can reduce the risk of obtaining a bad solution.



5.8 Minimum Squared-error Procedures


5.8.4 The Widrow-Hoff or LMS Procedure

http://en.wikipedia.org/wiki/Least_mean_squares_filter


5.8.5 Stochastic Approximation Methods

http://en.wikipedia.org/wiki/Stochastic_gradient_descent


5.9 The Ho-Kashyap Procedures


5.11 Support Vector Machines


5.12 Multicategory Generalizations

5.12.1 Kesler's construction

to convert many multicategory error-correction procedures to two-category procedures for the purpose of obtaining a convergence proof

5.12.2 Convergence of the Fixed-Increment Rule

5.12.3 Generalizations for MSE Procedures


posted by maetel