Title page for etd-0729100-130432

URN etd-0729100-130432
Author Yen-Chun Yu
Author's Email Address yanjy@cse.nsysu.edu.tw
Statistics This thesis has been viewed 5343 times and downloaded 1796 times.
Department Computer Science and Engineering
Year 1999
Semester 2
Degree Master
Type of Document
Language zh-TW.Big5 Chinese
Title Human Facial Animation Based on Real Image Sequence
Date of Defense 2000-07-14
Page Count 74
Keyword
  • FACS
  • DELAUNAY
  • STEREO
  • MOTION CAPTURE
  • Keyframing
  • MOTION FIELD
Abstract
    3D animation is now used widely in multimedia applications such as computer games, virtual reality, and film. How to construct a 3D model that is true to life, especially in its facial expressions, and that can move vividly is therefore a significant issue. At present, methods for constructing a 3D facial model fall into two categories: one is based on computer graphics techniques such as geometric functions, polygons, or simple geometric shapes; the other measures a real face with hardware such as a laser scanning system or a three-dimensional digitizer. The principal methods for acquiring 3D facial expressions are keyframing, motion capture, and simulation.
    This research covers two areas:
    1. Two CCD cameras digitize the facial expressions of a real person simultaneously from the left and right, and the captured standard images are saved. Matching feature points are then extracted from the two standard images in the spatial domain, and stereo vision is used to obtain the depth information needed to build the 3D facial model.
    2. One CCD camera continuously digitizes two facial expressions, and the coordinates of the matched feature points in the time domain are used to calculate the motion vectors.
    By combining the depth information from the spatial domain with the motion vectors from the time domain, the motion sequence of the 3D facial model can be obtained.
    If enough digitized facial expressions are processed into 3D facial motion sequences, a database can be built. By matching feature points between a 2D test image and a 2D standard image in the database, the standard image's depth information and motion vectors can be applied to turn the test image into a 3D model that imitates the facial expressions of the standard image sequences. The matching of feature points between the test image and the standard images in the database can be performed entirely by computer, eliminating unnecessary manual work.
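    The stereo step described in area 1 can be illustrated with a minimal sketch. The abstract does not give the camera model used in the thesis; the code below assumes a simple rectified parallel-camera setup, where the depth of a matched feature point follows from its horizontal disparity as Z = f * b / d. The function name and all numeric values are illustrative, not taken from the thesis.

    ```python
    # Sketch: depth from a matched feature pair under a rectified
    # parallel-camera model (focal length f and baseline b in
    # consistent units). Values below are made-up examples.

    def depth_from_disparity(x_left, x_right, focal_length, baseline):
        """Depth Z = f * b / d, where d = x_left - x_right is the disparity."""
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("a valid match must have positive disparity")
        return focal_length * baseline / disparity

    # Feature seen at x=320 px (left image) and x=300 px (right image),
    # f = 800 px, baseline = 0.1 m:  Z = 800 * 0.1 / 20 = 4.0 m
    z = depth_from_disparity(320.0, 300.0, 800.0, 0.1)
    print(z)  # 4.0
    ```

    Applying this to every matched feature point of the two standard images yields the per-point depth information used to build the 3D facial model.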
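    The time-domain step in area 2 reduces to per-feature displacements between two frames. A minimal sketch, with made-up feature coordinates (the thesis's actual tracking data is not reproduced here):

    ```python
    # Sketch: motion vectors from feature correspondences between two
    # frames captured by one camera. Coordinates are illustrative.

    def motion_vectors(points_t0, points_t1):
        """Per-feature displacement (dx, dy) between two frames."""
        return [(x1 - x0, y1 - y0)
                for (x0, y0), (x1, y1) in zip(points_t0, points_t1)]

    frame0 = [(100, 120), (200, 180)]   # feature points at time t
    frame1 = [(103, 118), (205, 185)]   # the same features at time t+1
    print(motion_vectors(frame0, frame1))  # [(3, -2), (5, 5)]
    ```

    Combining such motion vectors with the per-point depth from the spatial domain is what lets the 2D displacements drive the 3D facial model's motion sequence.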
    Advisory Committee
  • Chung-Nan Lee - chair
  • B.S. Chou - co-chair
  • Yun-Lung Chang - co-chair
  • Yung-Jae Chuo - co-chair
  • John y. Chiang - advisor
Files
  • 8724615論文.pdf (access: worldwide)
    Date of Submission 2000-07-29
