2010. 2. 22. 19:25 Computer Vision
A survey of the UX and interactive art fields

1
Buzz 3D's 3D Interface: a high-definition digital 3D marketing solution
http://www.buzz3d.com/3d_interface_index.html
: a platform/application for 3D virtual reality that runs in real time on the web
-> User tracking: http://www.buzz3d.com/3d_interface_features_user.html
analyzes users' behavior, and
-> Haptics: http://www.buzz3d.com/3d_interface_features_haptics.html
delivers a tactile, hands-on experience


2
HTC's mobile phone HTC Touch Diamond
-> TouchFLO 3D: http://www.htc.com/www/product/touchdiamond/touchflo-3d.html
: menu selection, web browsing, emailing, and more via finger gestures
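To make the interaction concrete, a gesture like this can be reduced to classifying a touch stroke by its endpoints. The sketch below is a hypothetical illustration (the function and thresholds are invented here, not HTC's TouchFLO implementation):

```python
# Hypothetical swipe classifier, not HTC's TouchFLO code: turn the start and
# end coordinates of a touch stroke into a gesture label for menu navigation.

def classify_stroke(x0, y0, x1, y1, min_dist=50):
    """Label a stroke by its dominant axis of motion (pixel coordinates)."""
    dx, dy = x1 - x0, y1 - y0
    if dx * dx + dy * dy < min_dist * min_dist:
        return "tap"                                  # too short to be a swipe
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

print(classify_stroke(10, 200, 180, 210))             # -> swipe_right
```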


3
CityWall
http://citywall.org/
: a large multi-touch display installed in the center of Helsinki, Finland
Jointly developed by the Helsinki Institute for Information Technology and MultiTouch Ltd.
http://www.youtube.com/watch?v=WkNq3cYGTPE


4
Microsoft's augmented-reality mapping technology in Bing Maps
: matches 2D images from the video a user is shooting against a 3D map on the web in real time (and by folding in time information as well, it realizes 4D)
http://www.ted.com/talks/blaise_aguera.html
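The matching step described above can be sketched with standard computer-vision tools. Below is a minimal, hedged sketch, not Microsoft's actual pipeline: ORB features from the live frame are matched against a geo-registered reference view whose keypoints carry known 3D map coordinates, and the camera pose is recovered with RANSAC PnP. The file names, intrinsics, and the per-keypoint 3D lookup are assumptions for illustration.

```python
# Hedged sketch of 2D-to-3D matching for AR mapping (not Microsoft's pipeline).
# Assumed inputs: a live frame, a geo-registered reference image, and a 3D map
# point for every reference keypoint.
import numpy as np
import cv2

frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)      # live video frame
ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)    # reference view
ref_points3d = np.load("ref_points3d.npy")                 # 3D point per reference keypoint (assumed)

orb = cv2.ORB_create(2000)
kp_f, des_f = orb.detectAndCompute(frame, None)
kp_r, des_r = orb.detectAndCompute(ref, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_f, des_r), key=lambda m: m.distance)[:100]

points_3d = np.float32([ref_points3d[m.trainIdx] for m in matches])
points_2d = np.float32([kp_f[m.queryIdx].pt for m in matches])

K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
ok, rvec, tvec, inliers = cv2.solvePnPRansac(points_3d, points_2d, K, None)
if ok:
    print("camera pose in map coordinates:", rvec.ravel(), tvec.ravel())
```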



5
A list of tools for building 3D UX (since the input is not 3D, this may be somewhat off-topic)
http://www.artefactgroup.com/blog/2010/02/tools-for-building-a-3d-ux/
- Papervision3D and Away3D: plug-ins for Adobe Flash
- Swift 3D, developed by Electric Rain
- GFx 3.0, Scaleform's game-development solution
- Expression Blend, from Microsoft
- ZAM 3D, Electric Rain's 3D XAML tool
- ATOMIC Authoring Tool, built with Processing for non-programmers: an augmented-reality authoring tool based on the ARToolKit library
- TAT Kastor UI rendering platform
- Kanzi


6
R.U.S.E.: a game controlled with finger gestures, announced at E3
ref. http://www.artefactgroup.com/blog/2009/09/3d-ui-useful-usable-or-desirable/

7
SKT's augmented-reality service Ovjet
http://ovjet.com/
: an augmented reality (AR) service that overlays various kinds of information in real time on the live scene seen through the phone's camera
ref. http://news.cnbnews.com/category/read.html?bcode=102993
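At its core, an overlay like this is a projection problem: once the phone's pose is known, each geo-located point of interest is projected into pixel coordinates and its label is drawn there. A minimal sketch of that step (the pose, intrinsics, and POI position are made-up values, not Ovjet's code):

```python
# Minimal pinhole-projection sketch for an AR overlay (made-up values, not
# Ovjet's code): place a point of interest into pixel coordinates.
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])  # assumed intrinsics

def project(point_world, R, t):
    """Project a world point into pixels given camera rotation R, translation t."""
    p_cam = R @ point_world + t              # world frame -> camera frame
    if p_cam[2] <= 0:
        return None                          # behind the camera, nothing to draw
    u, v, w = K @ p_cam
    return u / w, v / w

R = np.eye(3)                                # camera aligned with world (assumed)
t = np.zeros(3)
poi = np.array([2.0, -0.5, 10.0])            # a POI 10 m ahead, 2 m to the right
print(project(poi, R, t))                    # pixel location for the label
```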


8
CATIA
A virtual-reality authoring tool for product design


9
Marisil (Mobile Augmented Reality Interface Sign Interpretation Language)
http://marisil.org/
A mobile technology whose interface is hand-gesture-based augmented reality




10
http://www.engadget.com/2005/10/02/pioneer-develops-input-device-for-3d-drawing/


http://en.wikipedia.org/wiki/Gesture_recognition

http://en.wikipedia.org/wiki/Depth_perception

"human detection IP"
http://www.visionbib.com/bibliography/motion-f733.html
VideoProtein  http://www.videoprotein.com/

"depth map IP application"
http://altruisticrobot.tistory.com/219
posted by maetel
2009. 11. 8. 16:31 Computer Vision
Branislav Kisačanin & Vladimir Pavlović & Thomas S. Huang
Real-Time Vision for Human-Computer Interaction
(RTV4HCI)
Springer, 2005
(Google Books overview)

2004 IEEE CVPR Workshop on RTV4HCI - Papers
http://rtv4hci.rutgers.edu/04/


Computer vision and pattern recognition continue to play a dominant role in the HCI realm. However, computer vision methods often fail to become pervasive in the field for lack of real-time, robust algorithms and of novel, convincing applications.

Keywords:
head and face modeling
map building
pervasive computing
real-time detection

Contents:
RTV4HCI: A Historical Overview.
- Real-Time Algorithms: From Signal Processing to Computer Vision.
- Recognition of Isolated Fingerspelling Gestures Using Depth Edges.
- Appearance-Based Real-Time Understanding of Gestures Using Projected Euler Angles.
- Flocks of Features for Tracking Articulated Objects.
- Static Hand Posture Recognition Based on Okapi-Chamfer Matching.
- Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features.
- Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters.
- A Real-Time Vision Interface Based on Gaze Detection -- EyeKeys.
- Map Building from Human-Computer Interactions.
- Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures.
- Epipolar Constrained User Pushbutton Selection in Projected Interfaces.
- Vision-Based HCI Applications.
- The Office of the Past.
- MPEG-4 Face and Body Animation Coding Applied to HCI.
- Multimodal Human-Computer Interaction.
- Smart Camera Systems Technology Roadmap.
- Index.




RTV4HCI: A Historical Overview
Matthew Turk (mturk@cs.ucsb.edu)
University of California, Santa Barbara
http://www.stanford.edu/~mturk/
http://www.cs.ucsb.edu/~mturk/

The goal of research in real-time vision for human-computer interaction is to develop algorithms and systems that sense and perceive humans and human activity, in order to enable more natural, powerful, and effective computer interfaces.

Computers in the Human Interaction Loop (CHIL)

perceptual interfaces
multimodal interfaces
post-WIMP (windows, icons, menus, pointer) interfaces

implicit user awareness or explicit user control

The user interface
- the software and devices that implement a particular model (or set of models) of HCI

Computer vision technologies must ultimately deliver a better "user experience".

B Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Third Edition, Addison-Wesley, 1998.
: 1) time to learn 2) speed of performance 3) user error rates 4) retention over time 5) subjective satisfaction

- Presence and location (Face and body detection, head and body tracking; see the face-detection sketch after this list)
- Identity (Face recognition, gait recognition)
- Expression (Facial feature tracking, expression modeling and analysis)
- Focus of attention (Head/face tracking, eye gaze tracking)
- Body posture and movement (Body modeling and tracking)
- Gesture (Gesture recognition, hand tracking)
- Activity (Analysis of body movement)
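As a hedged, minimal example of the first capability above (presence via face detection), here is a sketch using OpenCV's stock Haar cascade; this particular API postdates the book and just serves as an easy-to-run stand-in:

```python
# Minimal presence detection: find faces in a webcam stream with OpenCV's
# bundled Haar cascade (a stand-in, not the book's own method).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:               # box every detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("presence", frame)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```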

eg.
VIDEOPLACE (M W Krueger, Artificial Reality II, Addison-Wesley, 1991)
Magic Morphin Mirror / Mass Hallucinations (T Darrell et al., SIGGRAPH Visual Proc, 1997)

Principal Component Analysis (PCA) - sketched after this list
Linear Discriminant Analysis (LDA)
Gabor Wavelet Networks (GWNs)
Active Appearance Models (AAMs)
Hidden Markov Models (HMMs)
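Of these, PCA is the simplest to sketch. A minimal eigenfaces-style example (NumPy only, with random data standing in for a real face dataset) in the spirit of Turk and Pentland's classic formulation:

```python
# Eigenfaces-style PCA sketch: learn a low-dimensional face subspace and
# project a new image into it. Random data stands in for real face images.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))           # 100 flattened 32x32 "faces" (fake data)

mean = faces.mean(axis=0)
centered = faces - mean
# SVD of the centered data yields the principal components (eigenfaces).
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:16]                         # keep the top 16 components

new_face = rng.random(32 * 32)
weights = eigenfaces @ (new_face - mean)     # projection onto the face subspace
reconstruction = mean + weights @ eigenfaces
print("reconstruction error:", np.linalg.norm(new_face - reconstruction))
```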

Identix Inc.
Viisage Technology Inc.
Cognitec Systems


- MIT Media Lab
ALIVE system (P Maes et al., The ALIVE system: wireless, full-body interaction with autonomous agents, ACM Multimedia Systems, 1996)
PFinder system (C R Wren et al., Pfinder: Real-time tracking of the human body, IEEE Trans PAMI, pp 780-785, 1997)
KidsRoom project (A Bobick et al., The KidsRoom: A perceptually-based interactive and immersive story environment, PRESENCE: Teleoperators and Virtual Environments, pp 367-391, 1999)




Flocks of Features for Tracking Articulated Objects
Mathias Kolsch (kolsch@nps.edu)
Computer Science Department, Naval Postgraduate School, Monterey
Matthew Turk (mturk@cs.ucsb.edu)
Computer Science Department, University of California, Santa Barbara
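As I read the chapter title, the core idea is to track a "flock" of KLT features and keep it cohesive on the articulated object. The sketch below is a simplified, assumption-laden rendering, not the authors' code: pyramidal Lucas-Kanade tracks the features, strays far from the flock median are culled, and the flock is re-seeded when it thins out; the thresholds are invented and the chapter's color cues are omitted.

```python
# Simplified flock-of-features tracker (webcam): pyramidal LK plus a median
# cohesion rule. Thresholds are assumptions; color cues are omitted.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                              minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if pts is None or len(pts) < 10:              # flock too thin: re-seed
        pts = cv2.goodFeaturesToTrack(gray, 50, 0.01, 7)
    if pts is None:                               # nothing to track this frame
        prev_gray = gray
        continue
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    flock = new_pts[status.ravel() == 1].reshape(-1, 2)
    if len(flock) > 0:
        median = np.median(flock, axis=0)         # flock center
        dist = np.linalg.norm(flock - median, axis=1)
        flock = flock[dist < 2.0 * dist.mean()]   # cull strays (simplified rule)
        for x, y in flock:
            cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    pts = flock.reshape(-1, 1, 2).astype(np.float32) if len(flock) else None
    prev_gray = gray
    cv2.imshow("flock of features", frame)
    if cv2.waitKey(1) == 27:                      # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```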




Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features
Guangqi Ye (grant@cs.jhu.edu), Jason J. Corso, Gregory D. Hager
Computational Interaction and Robotics Laboratory
The Johns Hopkins University



Map Building from Human-Computer Interactions
http://groups.csail.mit.edu/lbr/mars/pubs/pubs.html#publications
Artur M. Arsenio (arsenio@csail.mit.edu)
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology



Vision-Based HCI Applications
Eric Petajan (eric@f2f-inc.com)
face2face animation, inc.



The Office of the Past
Jiwon Kim (jwkim@cs.washington.edu), Steven M. Seitz (seitz@cs.washington.edu)
University of Washington
Maneesh Agrawala (maneesh@microsoft.com)
Microsoft Research
Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), Vol. 10, p. 157, 2004
http://desktop.google.com
http://grail.cs.washington.edu/projects/office/
http://www.realvnc.com/



Smart Camera Systems Technology Roadmap
Bruce Flinchbaugh (b-flinchbaugh@ti.com)
Texas Instruments

posted by maetel