cf. A find from another flare-up of my search habit: Jeffrey Hass (Indiana University)'s Introduction to Computer Music: Volume One
(A perfect fit for someone like me trying to understand sound and sound waves for the first time. The explanatory animated graphics are lovingly made, and above all its greatest virtue is brevity. It reads like a set of key-point summary notes.)
- low-level process: both inputs and outputs are images
eg. image preprocessing to reduce noise, contrast enhancement, image sharpening
- mid-level process: inputs are generally images, but outputs are attributes extracted from them, like edges, contours, and the identities of individual objects
eg. segmentation, description of the objects, classification (recognition)
- higher-level process: making sense of an ensemble of recognized objects, i.e., performing the cognitive functions associated with vision
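Of the three levels, a low-level process is the easiest to make concrete. Below is a minimal sketch (my own toy example, not code from the text) of contrast stretching, a simple enhancement where both input and output are images; a grayscale image is represented as a list of rows of pixel values.

```python
# Low-level process sketch: linear contrast stretching.
# Input and output are both images (lists of rows of grayscale values).

def contrast_stretch(image, out_min=0, out_max=255):
    """Linearly rescale pixel intensities to span [out_min, out_max]."""
    pixels = [p for row in image for p in row]
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat image: nothing to stretch
        return [[out_min] * len(row) for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round((p - lo) * scale) + out_min for p in row] for row in image]

# A dim, low-contrast 2x3 image: intensities only span 50..75
dim = [[50, 60, 70],
       [55, 65, 75]]
stretched = contrast_stretch(dim)
print(stretched)  # intensities now span the full 0..255 range
```

The same skeleton (image in, image out) fits other low-level operations such as noise reduction or sharpening; only the per-pixel or neighborhood rule changes.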
"modern digital computer"
with John von Neumann's introduction of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching, which together form the foundation of the CPU
+
mass storage & display systems
=> digital image processing
> the birth of digital image processing
- space probe
Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964 when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera.
- medical diagnosis
Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object. Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention.
Section 1 - Defines digital image processing and discusses where to draw its boundaries with the related fields of image analysis and computer vision.
digital images + digital computers => digital image processing
image processing -> image analysis -> computer vision
Section 2 - Traces the development of the digital image through examples, and introduces the birth of the computer and what qualifies as a digital computer in the modern sense. The history of digital image processing began in the 1960s with space exploration and medical diagnosis. Since then it has found wide application (1) as an aid to human interpretation, in biology, geography, archaeology, experimental physics (high-energy plasmas and electron microscopy), astronomy, nuclear medicine, law enforcement, and the defense industry; and (2) as a means of machine perception, in automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic fingerprint processing, screening of X-rays and blood samples, and aerial and satellite image processing for weather prediction and environmental assessment.
Section 3 - Today images originate chiefly from the electromagnetic spectrum. Electromagnetic waves can be thought of either as (1) propagating sinusoidal waves of varying wavelength or (2) a stream of massless particles traveling at the speed of light. Application examples are given for gamma rays, X-rays, ultraviolet, visible/infrared light, microwaves, and radio waves. Other sources include acoustics, ultrasound, and electron beams.
Section 4 - Briefly introduces the fundamental steps in digital image processing: image acquisition, image enhancement, image restoration, color image processing, wavelets (-> image data compression / pyramidal representation), compression, morphological processing, segmentation, (boundary/regional) representation & description (feature selection), recognition
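Among these stages, segmentation lends itself to a tiny illustration. The sketch below (my own toy example) labels each pixel as object (1) or background (0) using a single global threshold; using the mean intensity as the threshold is purely an illustrative choice, where a real system would use something like Otsu's method.

```python
# Segmentation-stage sketch: global thresholding.
# Pixels brighter than the threshold are labeled object (1), the rest background (0).

def threshold_segment(image):
    pixels = [p for row in image for p in row]
    t = sum(pixels) / len(pixels)   # toy global threshold: mean intensity
    return [[1 if p > t else 0 for p in row] for row in image]

# A dark scene with a bright object in the upper-right corner
scene = [[10,  12, 200],
         [11, 205, 210],
         [ 9,  13,  12]]
mask = threshold_segment(scene)
print(mask)  # binary mask marking the bright region
```

The binary mask produced here is exactly the kind of intermediate result the later representation & description stage would consume (e.g., tracing the object's boundary).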
Section 5 - An image processing system consists of sensing, specialized image processing hardware, a general-purpose computer, software, mass storage (short-term / on-line / archival), frame buffers (with zoom/scroll/pan), image displays (color monitors), hardcopy devices (laser printers, film cameras, heat-sensitive devices, inkjet units, digital media, etc.), and networking.
Topic: Optoelectronics for Brain and Cognitive Engineering
Speaker: Prof. Jae-Ho Han (Department of Brain and Cognitive Engineering, Korea University)
Date & time: Monday, June 13, 2011, 5:00 PM
Venue: Room 633, Asan Science Building, Science Campus
Host: Department of Brain and Cognitive Engineering, College of Information and Communications, Korea University
Sponsors: College of Information and Communications, Korea University; Interdisciplinary Program in Brain and Cognitive Sciences; WCU Brain Engineering Research Group; Brain Engineering Research Institute
Inquiries: 02)3290-5920
Abstract: Optoelectronics for Brain and Cognitive Engineering
The first half of this talk gives an overview of various cutting-edge optoelectronic imaging and sensing technologies for high-resolution brain imaging. The latter half presents recent studies, focusing mainly on optical coherence tomography (OCT). OCT has emerged as a promising imaging modality that can provide non-invasive, high-resolution tomographic imaging in real time. Novel fiber-optic probes, image processing methods, near-infrared sources for sensing, and surgical robot applications that could make OCT a practical system for high-resolution endoscopic imaging will be presented.
APS(Association for Psychological Science): Robert L. Solso (1933-2005)
- To develop connections between cognitive psychology and the related fields of anthropology, computer science, education, linguistics, neuroscience, and philosophy
“Art and cognition, and the brain, and consciousness, and evolution have all stood as complex mirrors, all reflecting and amplifying each other.”
In searching for a rational connection between consciousness and art, it was necessary to examine the evolution of the human brain and cognition. Out of these scientific explorations, I have developed a new model describing the evolution of consciousness and its relationship to the emergence of art.
conscious AWAREness
We have a pretty good idea, for example, as to when and how the human brain evolved and when early art emerged, and we have a sound understanding of the workings of the sensory-cognitive system. With this knowledge in hand, it is propitious to consider the evolution of the human brain and the emergence of AWAREness, as they might be related to art. As the brain increased in size and capacity during the upper Pleistocene, additional components of consciousness were added or developed. People became more AWARE in the sense that they were more cognizant, not only of a world that existed in contemporaneous actuality, but of a world that could be imaged. That change took humankind on a wondrous voyage. Men and women could imagine nonpresent things such as what might be behind a bush, where fresh water might be found, and what a nonpresent bull might look like. While other animals had some forms of consciousness, the visionary aptitude of humans to extend consciousness beyond responding to moment-to-moment sensory experiences was spinning into new possibilities previously unseen on this earth. Equipped with expanded conscious AWAREness, people first created art and then technology. The beginning of art is a clear manifestation of the brain’s capacity for imaginative behavior.
All factors—brain, anthropology, cognition, and art—were tied together by human consciousness.
Invited by: Neural Interface Lab (Director: Sung-Phil Kim)
Date & time: Wednesday, June 8, 2011, 5:00 PM
Venue: Room 111, Future Convergence Technology Building, Science Campus, Korea University
Speaker: Prof. Joo-Hyun Song
(Department of Cognitive, Linguistic and Psychological Sciences, Brown University)
Title: How do perception, cognition, and action interact in a complex visual environment?
Cloud computing has recently come to occupy a growing share of the IT industry. Nevertheless, the concept and content of cloud computing remain unsettled. To overcome this uncertainty of concept and scope in the discussion around cloud computing, this paper derives the attributes that can be understood as most essential to it. Centering on three issues, hardware consolidation, data migration, and architecture, it summarizes these essential attributes: the minimal common ground that any discussion of cloud computing must include, regardless of service or deployment model.
The term "cloud computing", which first appeared at a Google conference in 2006, has established itself over the past two to three years as a major topic in the IT industry. Cloud computing topped Gartner's list of ten strategic technologies in 2011, as it had in 2010, and global IT companies such as Google, IBM, and Microsoft have made it a core business. Reflecting this intense interest, the Korean government has also moved to foster cloud computing. As part of this effort, at the end of 2009 three ministries (the Ministry of Public Administration and Security, the Ministry of Knowledge Economy, and the Korea Communications Commission) jointly established the "Pan-Government Comprehensive Plan for Promoting Cloud Computing", which set out to cultivate the domestic cloud computing industry with the goal of making Korea a cloud computing powerhouse holding a 10% share of the world market by 2014.
Michael I. Jordan & Christopher M. Bishop, "Neural Networks", In Tucker, A. B. (Ed.) CRC Handbook of Computer Science, Boca Raton, FL: CRC Press, 1997.
Neural network methods have had their greatest impact in problems where statistical issues dominate and where data are easily obtained.
"conjunction of graphical algorithms and probability theory":
A neural network is first and foremost a graph with patterns represented in terms of numerical values attached to the nodes of the graph and transformations between patterns achieved via simple message-passing algorithms. Many neural network architectures, however, are also statistical processors, characterized by making particular probabilistic assumptions about data.
Based on a source of training data, the aim is to produce a statistical model of the process from which the data are generated so as to allow the best predictions to be made for new data.
statistical modeling - density estimation (unsupervised learning), classification & regression
density estimation ("unsupervised learning")
: to model the unconditional distribution of data described by some vector
- use training samples and a network model to build a representation of the probability density
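As a toy stand-in for a density-estimating network, the sketch below fits a normalized histogram to unlabeled one-dimensional samples and then evaluates the estimated density at new points. The function names and the histogram model are my own illustrative choices; the point is only the unsupervised-learning shape of the problem: samples in, density estimate out.

```python
# Density estimation sketch (unsupervised learning):
# fit a histogram to unlabeled samples, then query the estimated density.

def fit_histogram(samples, n_bins=10):
    """Return (lo, bin_width, densities) normalized so the density integrates to 1."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in samples:
        i = min(int((x - lo) / width), n_bins - 1)  # clamp the max sample into the last bin
        counts[i] += 1
    total = len(samples) * width                    # normalizing constant
    return lo, width, [c / total for c in counts]

def density(model, x):
    """Estimated probability density at x (0 outside the observed range)."""
    lo, width, dens = model
    i = int((x - lo) / width)
    return dens[i] if 0 <= i < len(dens) else 0.0

samples = [0.1, 0.2, 0.25, 0.3, 0.9]
model = fit_histogram(samples, n_bins=5)
print(density(model, 0.15), density(model, 0.5))  # high near the cluster, low in the gap
```

Classification and regression differ from this in modeling a *conditional* distribution (outputs given inputs) rather than the unconditional distribution of the data.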
Michael I. Jordan, Generic constraints on underspecified target trajectories, Proceedings of international conference on neural networks, (1989), 217-225
"The state space approach is more general than the "classical" Laplace and Fourier transform theory. Consequently, state space theory is applicable to all systems that can be analyzed by integral transforms in time, and is applicable to many systems for which transform theory breaks down"
(1) Linear systems with time-varying parameters can be analyzed in essentially the same manner as time-invariant linear systems.
(2) Problems formulated by state space methods can easily be programmed on a computer.
(3) High-order linear systems can be analyzed.
(4) Multiple input - multiple output systems can be treated almost as easily as single input - single output linear systems.
(5) State space theory is the foundation for further study in such areas as nonlinear systems, stochastic systems, and optimal control.
"Because state space theory describes the time behaviors of physical systems in a mathematical manner, the reader is assumed to have some knowledge of differential equations and of Laplace transform theory."
FFMV-03M2M-CS
6-pin right angle IEEE-1394 Connector
Max 752*480 at 60 FPS
1/3" Micron CMOS , BW
Progressive Scan
Plastic Case Included
FFMV Metal Case
LM5NCL - F1.4~1.6, C-Mount
Adaptor for NMV-4/5WA Lens Filter
IR Longpass - 830nm
1394a PCI Adapter
1394b FWB-LDR-CAT5 Repeater SET
The Point Grey Image Filter Driver (PGRGIGE.sys) was developed for use with GigE Vision cameras. This driver operates as a network service between the camera and the Microsoft built-in UDP stack to filter out GigE vision stream protocol (GVSP) packets.
The filter driver is installed and enabled by default as part of the FlyCapture SDK installation process. Use of the filter driver is recommended, as it can reduce CPU load and improve image streaming performance.
Point Grey GigE Vision cameras can operate without the filter driver, by communicating directly with the Microsoft UDP stack. GigE Vision cameras operating on Linux systems can communicate directly with native Ubuntu drivers.
Location of the image-saving example code in the FlyCapture SDK: Program Files > Point Grey Research Inc. > FlyCapture2 > Examples > SaveImageToAviEx. Description: Demonstrates saving a series of images to an AVI file
You can install the CMU 1394 driver for your camera and then use the API of this driver to capture video from the camera. In this way, you can avoid the use of DirectX. See example for details.