2009. 11. 8. 16:31 Computer Vision
Branislav Kisačanin & Vladimir Pavlović & Thomas S. Huang
Real-Time Vision for Human-Computer Interaction
(RTV4HCI)
Springer, 2005
(google book's overview)

2004 IEEE CVPR Workshop on RTV4HCI - Papers
http://rtv4hci.rutgers.edu/04/


Computer vision and pattern recognition continue to play a dominant role in the HCI realm. However, computer vision methods often fail to become pervasive in the field due to the lack of real-time, robust algorithms, and novel and convincing applications.

Keywords:
head and face modeling
map building
pervasive computing
real-time detection

Contents:
RTV4HCI: A Historical Overview.
- Real-Time Algorithms: From Signal Processing to Computer Vision.
- Recognition of Isolated Fingerspelling Gestures Using Depth Edges.
- Appearance-Based Real-Time Understanding of Gestures Using Projected Euler Angles.
- Flocks of Features for Tracking Articulated Objects.
- Static Hand Posture Recognition Based on Okapi-Chamfer Matching.
- Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features.
- Head and Facial Animation Tracking Using Appearance-Adaptive Models and Particle Filters.
- A Real-Time Vision Interface Based on Gaze Detection -- EyeKeys.
- Map Building from Human-Computer Interactions.
- Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures.
- Epipolar Constrained User Pushbutton Selection in Projected Interfaces.
- Vision-Based HCI Applications.
- The Office of the Past.
- MPEG-4 Face and Body Animation Coding Applied to HCI.
- Multimodal Human-Computer Interaction.
- Smart Camera Systems Technology Roadmap.
- Index.




RTV4HCI: A Historical Overview
Matthew Turk (mturk@cs.ucsb.edu)
University of California, Santa Barbara
http://www.stanford.edu/~mturk/
http://www.cs.ucsb.edu/~mturk/

The goal of research in real-time vision for human-computer interaction is to develop algorithms and systems that sense and perceive humans and human activity, in order to enable more natural, powerful, and effective computer interfaces.

Computers in the Human Interaction Loop (CHIL)

perceptual interfaces
multimodal interfaces
post-WIMP (windows, icons, menus, pointer) interfaces

implicit user awareness or explicit user control

The user interface
- the software and devices that implement a particular model (or set of models) of HCI

Computer vision technologies must ultimately deliver a better "user experience".

B Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Third Edition, Addison-Wesley, 1998.
: 1) time to learn 2) speed of performance 3) user error rates 4) retention over time 5) subjective satisfaction

- Presence and location (Face and body detection, head and body tracking)
- Identity (Face recognition, gait recognition)
- Expression (Facial feature tracking, expression modeling and analysis)
- Focus of attention (Head/face tracking, eye gaze tracking)
- Body posture and movement (Body modeling and tracking)
- Gesture (Gesture recognition, hand tracking)
- Activity (Analysis of body movement)

eg.
VIDEOPLACE (M W Krueger, Artificial Reality II, Addison-Wesley, 1991)
Magic Morphin Mirror / Mass Hallucinations (T Darrell et al., SIGGRAPH Visual Proc, 1997)

Principal Component Analysis (PCA)
Linear Discriminant Analysis (LDA)
Gabor Wavelet Networks (GWNs)
Active Appearance Models (AAMs)
Hidden Markov Models (HMMs)

Identix Inc.
Viisage Technology Inc.
Cognitec Systems


- MIT Media Lab
ALIVE system (P Maes et al., The ALIVE system: wireless, full-body interaction with autonomous agents, ACM Multimedia Systems, 1996)
PFinder system (C R Wren et al., Pfinder: Real-time tracking of the human body, IEEE Trans PAMI, pp 780-785, 1997)
KidsRoom project (A Bobick et al., The KidsRoom: A perceptually-based interactive and immersive story environment, PRESENCE: Teleoperators and Virtual Environments, pp 367-391, 1999)




Flocks of Features for Tracking Articulated Objects
Mathias Kolsch (kolsch@nps.edu)
Computer Science Department, Naval Postgraduate School, Monterey
Matthew Turk (mturk@cs.ucsb.edu)
Computer Science Department, University of California, Santa Barbara




Visual Modeling of Dynamic Gestures Using 3D Appearance and Motion Features
Guangqi Ye (grant@cs.jhu.edu), Jason J. Corso, Gregory D. Hager
Computational Interaction and Robotics Laboratory
The Johns Hopkins University



Map Building from Human-Computer Interactions
http://groups.csail.mit.edu/lbr/mars/pubs/pubs.html#publications
Artur M. Arsenio (arsenio@csail.mit.edu)
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology



Vision-Based HCI Applications
Eric Petajan (eric@f2f-inc.com)
face2face animation, inc.



The Office of the Past
Jiwon Kim (jwkim@cs.washington.edu), Steven M. Seitz (seitz@cs.washington.edu)
University of Washington
Maneesh Agrawala (maneesh@microsoft.com)
Microsoft Research
Proceedings of the 2004 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'04), Volume 10, p. 157, 2004
http://desktop.google.com
http://grail.cs.washington.edu/projects/office/
http://www.realvnc.com/



Smart Camera Systems Technology Roadmap
Bruce Flinchbaugh (b-flinchbaugh@ti.com)
Texas Instruments

posted by maetel
2009. 7. 14. 21:23 Computer Vision
ISMAR 2008
7th IEEE/ACM International Symposium on Mixed and Augmented Reality, 2008


Proceedings
State of the Art Report

Trends in Augmented Reality Tracking, Interaction and Display
: A Review of Ten Years of ISMAR
Feng Zhou (Center for Human Factors and Ergonomics, Nanyang Technological University, Singapore)
Henry Been-Lirn Duh (Department of Electrical and Computer Engineering/Interactive and Digital Media Institute, National University of Singapore)
Mark Billinghurst (The HIT Lab NZ, University of Canterbury, New Zealand)


Tracking

1. Sensor-based tracking -> ubiquitous tracking and dynamic data fusion

2. Vision-based tracking: feature-based and model-based
1) feature-based tracking techniques:
- To find correspondences between 2D image features and their 3D world frame coordinates.
- Then to find the camera pose by projecting the 3D coordinates of the features into the observed 2D image coordinates and minimizing the distance to their corresponding 2D features.
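The reprojection step above can be sketched in a few lines. This is a generic pinhole-camera illustration, not any particular ISMAR system; the intrinsics K and the pose R, t below are made-up toy values:

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of 3D world points into pixel coordinates."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # world frame -> camera frame
    uv = (K @ cam.T).T                            # camera frame -> homogeneous pixels
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

def reprojection_error(points_3d, points_2d, K, R, t):
    """Mean image-plane distance between observed and projected features."""
    diff = project(points_3d, K, R, t) - points_2d
    return np.linalg.norm(diff, axis=1).mean()

# toy data: identity pose, two points in front of the camera
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts3 = np.array([[0.0, 0.0, 2.0], [0.5, -0.2, 3.0]])
pts2 = project(pts3, K, R, t)
```

A pose estimator searches over R and t for the minimum of this error.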

2) model-based tracking techniques:
- To explicitly use a model of the features of tracked objects, such as a CAD model or a 2D template of the object based on its distinguishable features.
- A visual servoing approach adapted from robotics calculates camera pose from a range of model features (lines, circles, cylinders and spheres).
- Knowledge about the scene helps by predicting hidden movement of the object and reducing the effects of outlier data.

3. Hybrid tracking
- closed-loop-type tracking based on computer vision technologies
- motion prediction
- SFM (structure from motion)
- SLAM (simultaneous localization and mapping)


Interaction and User Interfaces

1. Tangible
2. Collaborative
3. Hybrid


Display

1. See-through HMDs
1) OST = optical see-through
: the user sees the real world directly through semi-transparent optics, with virtual imagery optically superimposed on it
2) VST = video see-through
: the real world is captured by camera, combined with graphical information, and shown on the display
2. Projection-based Displays
: graphical information is projected directly onto real objects or even everyday surfaces
3. Handheld Displays


Limitations of AR

> tracking
1) complexity of the scene and the motion of target objects, including the degrees of freedom of individual objects and their representation
=> correspondence analysis: Kalman filters, particle filters
2) how to find distinguishable objects for "markers" outdoors
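The correspondence-analysis filters mentioned in 1) can be illustrated in their simplest form by a scalar Kalman filter. This is a generic textbook sketch, not a tracking system; the process and measurement noises q and r are arbitrary values:

```python
def kalman_1d(z_seq, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a constant state: predict, then correct
    with each new measurement z."""
    x, p, out = x0, p0, []
    for z in z_seq:
        p = p + q                 # predict: state model is identity, uncertainty grows
        k = p / (p + r)           # Kalman gain balances prediction vs. measurement
        x = x + k * (z - x)       # correct the estimate toward the measurement
        p = (1.0 - k) * p
        out.append(x)
    return out

# noise-free constant signal: the estimate converges to the measurement
est = kalman_1d([1.0] * 50)
```

In a tracker the state would be the camera or marker pose rather than a scalar, but the predict/correct loop is the same.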

> interaction
ergonomics, human factors, usability, cognition, HCI (human-computer interaction)

> AR displays
- HMDs - limited FOV, image distortions
- projector-based displays - lack mobility, self-occlusion
- handheld displays - tracking with markers to limit the work range

Trends and Future Directions

1. Tracking
1) RBPF (Rao-Blackwellized particle filters) -> automatic recognition systems
2) SLAM, ubiquitous tracking, sensor network -> free from prior knowledge
3) pervasive middleware <- information fusion algorithms

2. Interaction and User Interfaces
"Historically, human knowledge, experience and emotion are expressed and communicated in words and pictures. Given the advances in interface and data capturing technology, knowledge, experience and emotion might now be presented in the form of AR content."

3. AR Displays





Studierstube Augmented Reality Project
: software framework for the development of Augmented Reality (AR) and Virtual Reality applications
Graz University of Technology (TU Graz)

Sharedspace project
The Human Interface Technology Laboratory (HITLab) at the University of Washington and ATR Media Integration & Communication in Kyoto, Japan joined forces at SIGGRAPH 99

The Invisible Train - A Handheld Augmented Reality Game

AR Tennis
camera based tracking on mobile phones in face-to-face collaborative Augmented Reality

Emmie - Environment Management for Multi-User Information Environments

VITA: visual interaction tool for archaeology

HMD = head-mounted displays

OST = optical see-through

VST = video see-through

ELMO: an Enhanced optical see-through display using an LCD panel for Mutual Occlusion

FOV
http://en.wikipedia.org/wiki/Field_of_view_(image_processing)

HMPD = head-mounted projective displays

The Touring Machine

MARS - Mobile Augmented Reality Systems
    
Klimt - the Open Source 3D Graphics Library for Mobile Devices

AR Kanji - The Kanji Teaching application


references  
Ronald T. Azuma  http://www.cs.unc.edu/~azuma/
A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), 355-385. An earlier version appeared in Course Notes #9: Developing Advanced Virtual Reality Applications, ACM SIGGRAPH '95 (Los Angeles, CA, 6-11 August 1995), 20-1 to 20-38.

Ronald Azuma, Yohan Baillot, Reinhold Behringer, Steven Feiner, Simon Julier, Blair MacIntyre
Recent Advances in Augmented Reality. IEEE Computer Graphics and Applications 21, 6 (Nov/Dec 2001), 34-47.

Ivan E. Sutherland
The Ultimate Display, IFIP `65, pp. 506-508, 1965

Kato, H., Billinghurst, M., Poupyrev, I., Imamoto, K., Tachibana, K. (Hiroshima City Univ.)
Virtual object manipulation on a table-top AR environment

Sandor, C., Olwal, A., Bell, B., and Feiner, S. 2005.
Immersive Mixed-Reality Configuration of Hybrid User Interfaces.
In Proceedings of the 4th IEEE/ACM International Symposium on Mixed and Augmented Reality (October 05 - 08, 2005). IEEE Computer Society, Washington, DC, 110-113. DOI=http://dx.doi.org/10.1109/ISMAR.2005.37

An optical see-through display for mutual occlusion with a real-time stereovision system
Kiyoshi Kiyokawa, Yoshinori Kurata and Hiroyuki Ohno
Computers & Graphics Volume 25, Issue 5, October 2001, Pages 765-779

Bimber, O., Fröhlich, B., Schmalstieg, D., and Encarnação, L. M. 2005.
The virtual showcase. In ACM SIGGRAPH 2005 Courses (Los Angeles, California, July 31 - August 04, 2005). J. Fujii, Ed. SIGGRAPH '05. ACM, New York, NY, 3. DOI= http://doi.acm.org/10.1145/1198555.1198713

Bimber, O., Wetzstein, G., Emmerling, A., and Nitschke, C. 2005.
Enabling View-Dependent Stereoscopic Projection in Real Environments. In Proceedings of the 4th IEEE/ACM international Symposium on Mixed and Augmented Reality (October 05 - 08, 2005). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 14-23. DOI= http://dx.doi.org/10.1109/ISMAR.2005.27

Cotting, D., Naef, M., Gross, M., and Fuchs, H. 2004.
Embedding Imperceptible Patterns into Projected Images for Simultaneous Acquisition and Display. In Proceedings of the 3rd IEEE/ACM international Symposium on Mixed and Augmented Reality (November 02 - 05, 2004). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 100-109. DOI= http://dx.doi.org/10.1109/ISMAR.2004.30

Ehnes, J., Hirota, K., and Hirose, M. 2004.
Projected Augmentation - Augmented Reality using Rotatable Video Projectors. In Proceedings of the 3rd IEEE/ACM international Symposium on Mixed and Augmented Reality (November 02 - 05, 2004). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 26-35. DOI= http://dx.doi.org/10.1109/ISMAR.2004.47

Arango, M., Bahler, L., Bates, P., Cochinwala, M., Cohrs, D., Fish, R., Gopal, G., Griffeth, N., Herman, G. E., Hickey, T., Lee, K. C., Leland, W. E., Lowery, C., Mak, V., Patterson, J., Ruston, L., Segal, M., Sekar, R. C., Vecchi, M. P., Weinrib, A., and Wuu, S. 1993.
The Touring Machine system. Commun. ACM 36, 1 (Jan. 1993), 69-77. DOI= http://doi.acm.org/10.1145/151233.151239

Gupta, S. and Jaynes, C. 2006.
The universal media book: tracking and augmenting moving surfaces with projected information. In Proceedings of the 2006 Fifth IEEE and ACM international Symposium on Mixed and Augmented Reality (Ismar'06) - Volume 00 (October 22 - 25, 2006). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 177-180. DOI= http://dx.doi.org/10.1109/ISMAR.2006.297811


Klein, G. and Murray, D. 2007.
Parallel Tracking and Mapping for Small AR Workspaces. In Proceedings of the 2007 6th IEEE and ACM international Symposium on Mixed and Augmented Reality - Volume 00 (November 13 - 16, 2007). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 1-10. DOI= http://dx.doi.org/10.1109/ISMAR.2007.4538852

Neubert, J., Pretlove, J., and Drummond, T. 2007.
Semi-Autonomous Generation of Appearance-based Edge Models from Image Sequences. In Proceedings of the 2007 6th IEEE and ACM international Symposium on Mixed and Augmented Reality - Volume 00 (November 13 - 16, 2007). Symposium on Mixed and Augmented Reality. IEEE Computer Society, Washington, DC, 1-9. DOI= http://dx.doi.org/10.1109/ISMAR.2007.4538830

2007. 7. 23. 19:44 Method/VFX

<A Survey on Hair Modeling: Styling, Simulation, and Rendering>

Kelly Ward, Florence Bertails, Tae-Yong Kim, Stephen R. Marschner, Marie-Paule Cani, Ming C. Lin

Walt Disney Features Animation
EVASION-INRIA, Grenoble, France
Rhythm & Hues Studios
Cornell University
EVASION/INRIA & INP Grenoble, France
University of North Carolina at Chapel Hill


I. A. Hair Modeling Overview

hair modeling - hairstyling / hair simulation / hair rendering
    hairstyling: modeling the shape of the hair
           - geometry, density, distribution, orientation
    hair simulation: dynamic motion of hair
          - collision, mutual interactions
    hair rendering: visual depiction of hair
          - color, shadows, light scattering, transparency, anti-aliasing
ref. Magnenat-Thalmann

- geometric complexity and thin nature of an individual strand coupled with the complex collisions and shadows that occur among the hairs

GPU


I. B. Applications and Remaining Problems

cosmetic product
entertainment industry (feature animation)
interactive systems (virtual environments, videogames)


II. HAIRSTYLING
II. A. Hair Structural and Geometric Properties

hair types - Asian / African / Caucasian
- Asian: smooth, regular, a circular cross-section
- African: irregular, an elliptical cross-section
- Caucasian: ranged from smooth to curly

Visual styling techniques are for applications that want a visually plausible solution, matching the final results with the appearance of real-world hair.


II. B. Attaching Hair to the Scalp

1) 2D Placement
    : spherical mapping of 2D map to the 3D contour of the scalp
    * TY Kim - 2D space defined by the two parametric coordinates of the patch wrapped interactively over the head model
    * Bando - harmonic mapping, compensation based on a Poisson disc distribution

2) 3D Placement
    : interactively selecting the triangles that define the scalp

3) Distribution of Hair Strands on the Scalp
    : uniform distribution over the scalp
    * the user's painting local hair density as color levels and further characteristics such as length or curliness
ref. B. Hernandez & I. Rudomin, <Hair Paint>,
Computer Graphics International (CGI), June 2004, pp. 578-581


II. C. Global Hair Shape Generation

1) Geometry-Based Hairstyling
    : a parametric representation of hair in the form of trigonal prisms or generalized cylinders
    a) Parametric Surface: A patch of a parametric surface such as a NURBS surface ("hair strips") is given a location on the scalp, an orientation, and weightings for knots to define a desired hair shape.
       * U-shape
       * Thin Shell Volume (TSV)
       * key hair curves generated along the isocurves of the NURBS volume
    b) Wisps and Generalized Cylinders: The positioning of one general space curve serves as the center of a radius function defining the cross-section of a generalized cylinder ("a hair cluster").
    c) Multi-resolution Editing: A hierarchy of generalized cylinders allows users to select a desired level of control in shape modeling.

2) Physically-based Hairstyling
    a) The Cantilever Beam: A cantilever beam is defined as a straight beam embedded in a fixed support at one end only. Gravity and external forces are then applied to deform the strand.
    b) Fluid Flow: The idea that static hair shapes resemble snapshots of fluid flow around obstacles.
    c) Styling Vector and Motion Fields: Given a global field generated by superimposing procedurally defined vector field primitives, hair strands are extracted by tracing the field lines of the vector field. Hair deformation is computed by using the previous algorithm applied on the modified vector field. (Three types of constraints: point / trajectory / direction)
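The field-line extraction in c) can be sketched as forward-Euler tracing of a styling vector field. The constant field below is an invented stand-in for the survey's superimposed field primitives:

```python
import numpy as np

def trace_field_line(field, root, step=0.05, n=100):
    """Grow a strand from a scalp root by forward-Euler tracing of the
    field lines of a styling vector field."""
    p = np.array(root, dtype=float)
    pts = [p.copy()]
    for _ in range(n):
        v = field(p)
        norm = np.linalg.norm(v)
        if norm < 1e-9:               # stagnation point: stop the strand
            break
        p = p + step * v / norm       # unit-speed step along the field
        pts.append(p.copy())
    return np.array(pts)

# a constant "sideways plus gravity" field stands in for superimposed primitives
strand = trace_field_line(lambda p: np.array([0.3, -1.0]), [0.0, 1.0])
```

Constraints (point, trajectory, direction) would enter as local modifications of the field before tracing.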

3) Generation of Hairstyles from Images
    : the automatic reconstruction of hair from images
    a) Hair Generation from Photographs: Building a 3D hair volume from photographs of the subject's hair taken from various viewpoints works for simple hairstyles.
    b) Hair Generation from Sketches: In a sketch-based system dedicated to modeling cartoon hairstyles, curves representing clusters of hair are generated between the silhouette surface and the scalp.

4) Evaluation
  

II. D. Managing Finer Hair Properties

1) Details of Curls and Waves
    - a class of trigonometric offset functions with random terms
    - a breakaway behaviour from the fluid flow based on a probability function
    - the degree of similarity among the master strands controlled by a length distribution, a deviation radius function and a fuzziness value
    * a Markov chain process
    * a Gibbs distribution
    * the Kirchhoff model (: a mechanically accurate model for static elastic rods)
   
2) Producing Hair Volume
    - for hair self-collisions, hair can be viewed as a continuous medium
    - the idea that hair strands with pores at higher latitudes on the head cover strands with lower pores
    - a hair-head collision detection & response algorithm
    * a quasi-static head

3) Modeling Styling Products and Water Effects
    - A styling force is used to enable hairstyle recovery as the hair moves due to external force or head movement. (1)The desire is to retain the deformed hairstyle rather than returning to the initial style. (2)Breakable static links or dynamic bonds can be used to capture hairstyle recovery by applying extra spring forces between nearby sections of hair (to mimic the extra clumping of hair created by styling products).
    - Using a dual-skeleton model for simulating the stiffness of hair motion, separated spring forces can be used to control the bending of hair strands versus the stretching of curls.
    - "As water is absorbed into hair the mass of the hair increases up to 45%, while its elasticity modulus decreases by a factor of 10." And the volume of the hair decreases due to the bonding nature of water.
       * Young's modulus of each fiber
    - An interactive virtual hairstyling system introduced by Ward et al.

ref.
K. Ward, N. Galoppo, M. C. Lin <Modeling hair influenced by water and styling products>, International Conference on  Computer Animation and Social Agents (CASA), May 2004, pp. 207-214
K. Ward, N. Galoppo, M. Lin <Interactive virtual hair salon>, PRESENCE: Teleoperators & Virtual Environments (to appear), 2007
  

III.  HAIR  SIMULATION
III. A. The Mechanics of Hair

- The irregular surface of individual hair strands causes anisotropic friction inside hair, with an amplitude that strongly depends on the orientation of the scales and on the direction of motion.
- Hair-hair friction results in triboelectricity.
- The more intricate the hair's geometry is, the fewer degrees of freedom it has during motion.


III. B. Dynamics of Individual Hair Strands

1) Mass-Spring Systems
    : A single hair strand is modeled as a set of particles connected with stiff springs and hinges.
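The particle-and-spring chain can be sketched as a toy explicit-Euler integrator. This is a generic illustration, not any cited author's method; the stiffness, damping, rest length and time step below are arbitrary:

```python
import numpy as np

def step_strand(pos, vel, dt=1e-3, k=500.0, rest=0.1, damp=0.5, g=(0.0, -9.8)):
    """One explicit-Euler step of a strand modeled as particles joined by
    stiff springs; pos and vel are (n, 2) arrays, particle 0 is the root."""
    f = np.tile(np.array(g), (len(pos), 1))       # gravity on every particle
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        fs = k * (length - rest) * d / length     # Hooke force along the segment
        f[i] += fs                                # pulls i toward i+1 when stretched
        f[i + 1] -= fs
    vel = (vel + dt * f) * (1.0 - damp * dt)      # integrate and damp velocities
    pos = pos + dt * vel
    pos[0] = 0.0                                  # root pinned at the scalp
    vel[0] = 0.0
    return pos, vel

# hang a five-particle strand at rest length and let it settle under gravity
pos = np.array([[0.0, -0.1 * i] for i in range(5)])
vel = np.zeros_like(pos)
for _ in range(5000):
    pos, vel = step_strand(pos, vel)
```

The stiffness of real hair is what forces the small time steps (or implicit integration) the survey discusses; hinge (bending) springs are omitted here.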

2) One Dimensional Projective Equations
    : The statics of a cantilever beam is simulated to get an initial plausible configuration of each hair strand. Then each hair strand is considered as a chain of rigid sticks.

3) Rigid multi-body serial chain
    : Each hair strand can be represented as a serial, rigid, multi-body open chain using the reduced or spatial coordinates formulation, in order to keep only the bending and twisting degrees of freedom of the chain.
    * Articulated-Body Method
    * DOF
    * multi-pass forward dynamics algorithm

4) Dynamic Super-Helices
    * Kirchhoff theory for elastic rods
    : The curvatures and the twist of the rod are assumed to remain constant over each predefined piece of the rod. As a result, the shape of the hair strand is a piecewise helix, with a finite number of degrees of freedom. This model is then animated using the principles of Lagrangian mechanics, accounting for the typical nonlinear behavior of hair, as well as for its bending and twisting deformation modes.
   
5) Handling external forces

ref.
D. W. Lee & H. S. Ko <Natural hairstyle modeling and animation>, Graphical Models, vol. 63, no. 2, pp. 67-85, March 2001
[39]

6) Evaluation
(insert Table II here)


III. C. Simulating the Dynamics of a Full Hairstyle

detection and response => computing hair contacts and collisions

1) Hair as a Continuous Medium
    : hair as an anisotropic continuous medium
    a) Animating Hair Using Fluid Dynamics: Interaction dynamics, including hair-hair, hair-body, and hair-air interactions, are modeled using fluid dynamics. Individual hair strands are kinematically linked to fluid particles in their vicinity. The density of the hair medium is defined as the mass of hair per unit of volume occupied. The pressure and viscosity represent all of the forces due to interactions between hair strands.
Hair-body interactions are modeled by creating boundary fluid particles around solid objects. A fluid particle, as in Smoothed Particle Hydrodynamics (SPH), exerts a force on the neighboring fluid particles based on its normal direction. The viscous pressure of the fluid, which is dependent on the hair density, accounts for the frictional interactions between hair strands.
    b) Loosely Connected Particles: Each particle represents a certain amount of hair material with a local orientation (the orientation of a particle being the mean orientation of every hair strand covered by the particle). Initially, connected chains are established between neighboring particles aligned with the local hair orientation: two neighboring particles having similar directions and being aligned with this direction are linked. -> breakable links between close particles
    c) Interpolation between Guide Hair Strands: Using multiple guide hair strands for the interpolation of a strand alleviates local clustering of strands. A collision among hair strands is detected by checking for intersections between two hair segments and between a hair vertex and a triangular face.
    d) Free Form Deformation (FFD): A mechanical model is defined for a lattice surrounding the head. The lattice is then deformed as a particle system and hair strands follow the deformation by interpolation.
        * metaball
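The guide-strand interpolation in c) amounts to a convex blend of guide polylines. A minimal sketch, with invented weights and guide geometry:

```python
import numpy as np

def interpolate_strand(guides, weights):
    """Blend guide strands (each an (n, 3) polyline with the same vertex
    count) into one rendered strand using normalized convex weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize so the blend stays convex
    return sum(wi * g for wi, g in zip(w, guides))

# three guide strands; the interpolated strand lies between them
g1 = np.array([[0.0, 0.0, -float(i)] for i in range(4)])
g2 = g1 + np.array([1.0, 0.0, 0.0])
g3 = g1 + np.array([0.0, 1.0, 0.0])
s = interpolate_strand([g1, g2, g3], [1.0, 1.0, 2.0])
```

Using several guides per rendered strand is what avoids the local clustering that single-guide interpolation produces.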

2) Hair as Disjoint Groups
    : to capture local discontinuities observed inside long hair during fast motion
    a) Real-time Simulation of Hair Strips: The projective angular dynamics method is applied to the control point mesh of the NURBS surface. The strips of texture-mapped hair are simulated using a mass-spring model and 3D morphing.
    b) Simulation of Wisps: During motion, the shape of a wisp is approximated by parabolic trajectories of fictive particles initially located near the root of each wisp.


III. D. Multi-resolution Methods

1) Level-of-Detail Representations
    : Three different levels of detail (LODs) for modeling hair - individual strands, clusters and strips - represented by subdivision curves, subdivision swept volumes, and subdivision patches. The family of swept sphere volumes (SSVs) as bounding volumes encapsulates the hair.
   
2) Adaptive Clustering
    : The Adaptive Wisp Tree (AWT) represents at each time step the wisps segments of the hierarchy that are actually simulated (called active segments). The AWT dynamically splits or merges hair wisps while always preserving a tree-like structure.


IV. HAIR RENDERING
IV. A. Representing Hair for Rendering

explicit models - line or triangle-based renderers
volumetric models - volume renderers, or rendering algorithms

1) Explicit Representation
    - curved cylinders / trigonal prisms with three sides / ribbon-like connected triangle strips / tessellating a curved hair geometry into polygons

2) Implicit Representation
    - volumetric textures (texels) / the cluster hair model


IV. B. Light Scattering in Hair

: The first requirement for any hair rendering system is a model for the scattering of light by individual fibers of hair.

1) Hair Optical Properties
    - A hair fiber is composed of three structures: the cortex, the cuticle, and the medulla.
    - A hair is composed of amorphous proteins that act as a transparent medium with an index of refraction of 1.55.
    - The cortex and medulla contain pigments that absorb light, often in a wavelength-dependent way; these pigments are the cause of the color of hair.

2) Notation and Radiometry of Fiber Reflection
    Because fibers are usually treated as one-dimensional entities, light reflection from fibers needs to be described somewhat differently from the more familiar surface reflection.


Light scattering at a surface is conventionally described using the bidirectional reflectance distribution function (BRDF). The BRDF gives the density with respect to the projected solid angle of the scattered flux that results from a narrow incident beam. It is defined as the ratio of surface radiance (intensity per unit projected area) exiting the surface in direction w_r to surface irradiance (flux per unit area) falling on the surface from a differential solid angle in the direction w_i:

f_r(w_i, w_r) = dL_r(w_r) / dE_i(w_i)


The scattering function f_s for a fiber is "the ratio of curve radiance (intensity per unit projected length) exiting the curve in direction w_r to curve irradiance (flux per unit length) falling on the curve from a differential solid angle in the direction w_i."

f_s = Curve Radiance / Curve Irradiance
     = (Intensity/Length)(w_r) / (Flux/Length)(w_i)
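In standard notation the two ratios above read as follows (a transcription of the plain-text formulas; overbars mark the per-unit-length curve quantities):

```latex
f_r(\omega_i,\omega_r)=\frac{\mathrm{d}L_r(\omega_r)}{\mathrm{d}E_i(\omega_i)},
\qquad
f_s(\omega_i,\omega_r)=\frac{\mathrm{d}\bar{L}_r(\omega_r)}{\mathrm{d}\bar{E}_i(\omega_i)}
```

where L_r and E_i are surface radiance and irradiance, and the barred quantities are curve radiance (intensity per unit projected length) and curve irradiance (flux per unit length).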

The curve radiance due to illumination from an incoming radiance distribution L_i is obtained by integrating over the incident directions, scaled by the fiber diameter D:

L_r(w_r) = D ∫ f_s(w_i, w_r) L_i(w_i) cos(θ_i) dω_i

ref.
S. Marschner, H. W. Jensen, M. Cammarano, S. Worley, and P. Hanrahan <Light scattering from human hair fibers>, ACM Transactions on Graphics, vol. 22, no. 3, pp. 780-791, July 2003, proceedings of ACM SIGGRAPH 2003

Curve irradiance measures the radiant power intercepted per unit length of fiber and therefore increases with the fiber's width.
    => Given two fibers with identical properties but different widths, the wider fiber will produce a brighter curve in a rendered image because the wider fiber intercepts more incident light. This definition is consistent with the behavior of real fibers: very fine hairs do appear fainter when viewed in isolation.


3) Reflection and Refraction in Cylinders
Bravais Law: The frequency with which a given face is observed is roughly proportional to the number of nodes it intersects in the lattice per unit length. (© 1996-2007 Eric W. Weisstein)
cf. wikipedia: crystal system

Snell's Law: The boundary condition that a wave be continuous across a boundary requires that the phase of the wave be constant on any given plane.
cf. wikipedia: Snell's Law

Light transmitted through a smooth cylinder will emit on the same cone as the surface reflection, no matter what sequence of refractions and internal reflections it may have taken.

 
4) Measurements of Hair Scattering
There are two specular peaks, one on either side of the specular direction, and a sharp true specular peak emerges at grazing angles.

5) Models for Hair Scattering
Fermat's Principle: A light ray, in going between two points, must traverse an optical path length which is stationary with respect to variations of the path.
cf. wikipedia: Fermat's Principle

Fresnel factor
Fresnel diffraction or near-field diffraction is a process of diffraction which occurs when a wave passes through an aperture and diffracts in the near field, causing any diffraction pattern observed to differ in size and shape, relative to the distance. It occurs due to the short distance in which the diffracted waves propagate.

6) Light Scattering on Wet Hair
When objects become wet they typically appear darker and shinier.
As hair becomes wet, a thin film of water is formed around the fibers, forming a smooth, mirror-like surface on the hair. This smoother surface creates a shinier appearance of the hair due to increased specular reflections. Light rays are subject to total internal reflection inside the film of water around the hair strands, contributing to the darker appearance wet hair has over dry hair.
Water is absorbed into the hair fiber, increasing the opacity value of each strand leading to more aggressive self-shadowing.


IV. C. Hair Self-Shadowing and Multiple Scattering

Self-shadowing creates crucial visual patterns that distinguish  one hairstyle from another.

1) Ray-casting through a Volumetric Representation

2) Shadow Maps
The shadow map is a depth image of hair rendered from the light's point of view. Each point to be shadowed is projected onto the light's camera and the point's depth is checked against the depth in the shadow map.

The transmittance function accounts for two important properties of hair.
Fractional Visibility: If more hair fibers are seen along the path from the light, the light gets more attenuated (occluded), resulting in less illumination (shadow).
Translucency

> Deep shadow maps
> Opacity shadow maps
> Photon mapping methods
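The transmittance idea behind these maps can be sketched by counting occluding fibers along the light ray. This is a toy model, not any of the cited algorithms; the per-fiber opacity sigma is an assumed constant:

```python
import numpy as np

def transmittance(fiber_depths, sigma, z):
    """Fraction of light reaching depth z along a light ray, given the
    depths of hair fibers crossing that ray (opacity-map-style counting)."""
    occluders = np.sum(np.asarray(fiber_depths) < z)   # fibers in front of z
    return float(np.exp(-sigma * occluders))           # each fiber attenuates the beam

# deeper points see more occluding fibers, hence stronger self-shadowing
d = [0.1, 0.2, 0.3, 0.5]
```

Deep and opacity shadow maps store such a transmittance function per light-space pixel instead of a single binary depth test, which is what makes hair shadows soft and translucent.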


IV. D. Rendering Acceleration Techniques

1) Approximating Hair Geometry
> texture mapping
> alpha mapping
> Level of detail (LOD) representations

2) Interactive Volumetric Rendering
> hair modeling as a set of connected particles (<= fast cloud rendering techniques)
> accumulating transmittance values through a light-oriented voxel grid (-> interactive results for animated hair)

3) Graphics Hardware
Graphics processor units (GPUs)
languages such as Cg


V.

> physically-based realism (for cosmetic prototyping)
> visual realism with a high user control (for feature films)
> computation acceleration (for virtual environments and videogames)


V. A. Hairstyling

- haptic techniques for 3D user input

V. B. Animation

V. C. Rendering

- simulating accurate models for both the scattering of individual hair fibers and the computation of self-shadows at interactive rates





2007. 2. 13. 02:13 Method/Cognition
Kristina Niedderer


Design Issues
Winter 2007, Vol. 23, No. 1, Pages 3-17
Posted Online December 11, 2006.
(doi:10.1162/desi.2007.23.1.3)

2006-05-02 @Art Center Nabi
Understanding Computer Languages for Artists

7. Processing and Interaction (2)

Sung Ki-won (성기원)

ref.
Theory: media and design from a rhetorical perspective
Practice: creating your own clock with time functions
http://sidi.hongik.ac.kr/~ipp/




http://processing.org/reference/PFont.html
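The practice topic of this lecture (a clock driven by time functions) can be sketched outside Processing as well; the hypothetical Python analogue below computes the hand angles such a sketch would draw:

```python
import math
import datetime

def hand_angles(now=None):
    """Angles of the hour, minute and second hands, in radians measured
    clockwise from 12 o'clock."""
    now = now or datetime.datetime.now()
    sec = now.second
    minute = now.minute + sec / 60.0        # minute hand creeps with the seconds
    hour = (now.hour % 12) + minute / 60.0  # hour hand creeps with the minutes
    return (2 * math.pi * hour / 12.0,
            2 * math.pi * minute / 60.0,
            2 * math.pi * sec / 60.0)

h, m, s = hand_angles(datetime.datetime(2006, 5, 2, 3, 0, 0))
```

In a Processing sketch the same angles would come from hour(), minute() and second(), fed into sin/cos to place the hand endpoints.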

2006-04-25 @Art Center Nabi
Understanding Computer Languages for Artists

6. Processing and Interaction (1)

Sung Ki-won (성기원)
key1sung@kaist.ac.kr

He studied visual communication design at Hongik University and industrial design at KAIST, received a best paper award at the HCI2004 conference and a grand academic award from the Korean Society of Design Science. He has taught Processing and interface design at Hongik University's visual design department and graduate school of film and digital media, and at Yonsei University's graduate school of communication, and also translates and writes.


As the design paradigm shifts, the role of visual design is evolving from its old passive function of announcing and delivering information to an active function of experiencing and choosing information. Interaction design in this networked era urgently demands of today's designers creative thinking that breaks from traditional concepts, logical programming ability, and knowledge of interface design and usability analysis.
The aim of this course is to develop the creativity and logic the digital age demands and to train effective interaction design and communication skills. The course studies conceptually how a change of delivery medium affects and transforms content, and explores this intuitively through experience and design. Through it, students should come to use new media freely as a designer's expressive tool, complementing the limits of the traditional brush and pencil.

Theory: Kandinsky's point, line and plane; abstract art; and Korean character and identity
Practice: geometric interaction using trigonometric functions



Lo-Tech / Hi-Touch


> History

- 1999 xD_1 John Maeda
: Design by Numbers

- 2003 0707-11 xD_5 Casey Reas (from an art-school background)
: Java-based, open source, for education

: Next Design generation(?)
lectured with Roy Hamilton of ICE on the 12th floor of the SK headquarters building in Korea
(Jay Lee(?) of Samsung Electronics, a Media Lab graduate)

- 2003 1211-16 xD_6 Peter Cho
# Get into the habit of presenting ideas by drawing them.

- 2004 0305-0611 Hongik University Dept. of Visual Communication Design, Computing Design

- 2004 0902-1209 Hongik University Graduate School of Film and Digital Media, Interactive Graphics 1+2
: Prof. Nam Tek-jin (KAIST), Prof. Kim Young-soo (Yonsei)

- 2004 0911- Art Center Nabi, Interactivity & Practice

- 2004 1113 Seoul, Hongik University Information & Communication Center, Q building, 9th floor

- 2004 1217 Seoul, Art Center Nabi, 21st floor, meeting room C2

- 2005 0811 Daejeon, KAIST Dept. of Industrial Design, 3rd-floor seminar room


(career directions?
# a future as a professor?
# founding an interaction design company, like Yang Min-ha?
# interactive advertising - the film Minority Report
# interactive artist - selling work commercially - bitforms
# bitforms Seoul - NYU graduate Lee Hoon-song(?))


> computing | design

computation | aesthetic : number | beauty
-> expressing design through the computer (binary numbers)

# What is a master? Someone who makes the tools they use.
: Being tied to a given (even standardized) interface erases individuality.
eg. Kai Krause(?)
eg. John Knoll and Thomas Knoll - creators of Photoshop

numbers | beauty

design by numbers

# Goals
1. Content and philosophy matter more than coding and functions.
2. Mathematics and nature are closely related.
3. Realizing Korean identity: Leibniz devised binary arithmetic inspired by the Eastern concepts of yin and yang.


reason | emotion : mathematics | calculus : designing with both halves of the brain
left brain | right brain : analysis | synthesis : logical thought | intuitive feeling


> Brain

- nerve cells 90%, supporting (basal) cells 10%
- whole-Brain Learning


> face+action+face = interface

<The Work of Art in the Age of Mechanical Reproduction>, Walter Benjamin
: Humans communicate through machines, without meeting face to face.
: A copy (simulation) whose 'aura' has disappeared is better than a tangled original.
=> When the machine disappears, the relationship disappears too.

<Simulacres et Simulation>, Jean Baudrillard
: 'I log in, therefore I am.'

<The Death of the Author>, Roland Barthes
: Those who use a thing are greater than the one who made it.
: Design is impossible without knowing the user.

<The process of visual perception>, Rudolf Arnheim
- Eyegaze HW/SW
- Gestalt Theory
0: the difference between seeing what is shown and seeing what one wants to see
1: simplicity
2: connectivity
Illusion: optical illusion
eg. A well-designed image map can lead users to follow the menu tree naturally.

stimulus (information) -(divergent and convergent search)-> gaze -(visual thinking: Gestalt theory)-> interpretation -(cognitive thinking)-> evaluation -(problem analysis)-> judgment -(proposed solution)-> plan -> choice -> execution

Information processing:
the evaluation side: perceive the external state > interpret the percept > evaluate the interpretation > set a goal
the execution side: set a goal > form an intention > plan the sequence of actions > execute


http://sidi.hongik.ac.kr/~ipp/

> point-line-plane + abstract art: the foundations of formal language, formal play
ref.
<Concepts and Principles of Design>, Charles Wallschlaeger, Ahn Graphics
<Point and Line to Plane>, Kandinsky, Youlhwadang
<A History of Abstract Art>, Mijinsa; <Abstract Art and the Knowing Self>, Ingansarang

<In Search of the Matrix of Our Culture>

<Principles of Mathematics in Three Days>, Kobayashi Michimasa, Seoul Manhaksa
<A Mathematics Book That Builds Thinking Skills>, Okabe Tsuneharu, Eulji
<Being Digital>
<Culture and Art in the Digital Age>, ed. Choi Hye-sil, Moonji Publishing


Study C!


Lee Se-ok (이세옥), first-semester MA student in media design
sayok.aye@gmail.com

posted by maetel