Facial Muscle Parameter Decision from 2D Frontal Image


Shigeo MORISHIMA*, Takahiro ISHIKAWA*, Demetri TERZOPOULOS**
*3-3-1 Kichijoji-kitamachi, Musashino, Tokyo
**University of Toronto, 6 King's College Road, Toronto

The facial muscle model is a realistic approach to realizing a life-like agent in a computer. The model is composed of facial tissue elements and muscles. Forces are calculated from each muscle's contraction strength, so the combination of muscle parameters decides a specific facial expression. At present, each muscle parameter is decided by a trial-and-error procedure, comparing the synthesized image with a photograph. In this paper, we propose a strategy for automatic estimation of facial muscle parameters from 2D marker movements. This is also 3D motion estimation from 2D point or flow information in a captured image, under the restriction of the muscle model.

1 Introduction
Recently, research into creating friendly human interfaces has flourished remarkably. In human-to-human communication, facial expression is an important means of transmitting non-verbal information and promoting friendliness between the participants. The facial muscle model is composed of facial tissue elements and muscle strings. Forces exerted by the contracting muscles deform the tissue to produce a specific facial image. Currently, however, we have to decide the parameters by trial and error, comparing the synthesized image to a photograph. Instead, we measure the deformation of the face when an expression appears, find the difference between any specific expression and the neutral face, and convert that difference into muscle parameters. A neural network realizes this conversion from 2D feature-point movement in the display, so this is also 3D motion estimation.
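The force calculation sketched above can be illustrated as follows. This is a minimal sketch, not the paper's exact formulation: it assumes a single linear muscle string pulling nearby skin nodes toward its bone attachment, with a simple linear distance falloff; the function name, node layout, and falloff radius are hypothetical.

```python
import numpy as np

def muscle_forces(nodes, attach, insert, contraction, radius=1.0):
    """Forces a single linear muscle exerts on skin nodes (illustrative).

    Each node within `radius` of the muscle's insertion point is pulled
    toward the attachment point, scaled by the contraction strength and
    a linear falloff with distance.
    """
    direction = attach - insert                 # pull toward the bone attachment
    direction = direction / np.linalg.norm(direction)
    forces = np.zeros_like(nodes)
    for i, p in enumerate(nodes):
        d = np.linalg.norm(p - insert)
        if d < radius:
            falloff = 1.0 - d / radius          # linear influence falloff
            forces[i] = contraction * falloff * direction
    return forces

# Example: three skin nodes, one muscle contracting at strength 0.5
nodes  = np.array([[0.1, 0.0], [0.5, 0.0], [2.0, 0.0]])
attach = np.array([0.0, 1.0])                   # fixed end (bone)
ins    = np.array([0.0, 0.0])                   # mobile end (skin)
f = muscle_forces(nodes, attach, ins, contraction=0.5)
```

Summing such force contributions over all muscle strings, and integrating the resulting tissue dynamics, is what makes one combination of contraction parameters correspond to one facial expression.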

2-1 Feature Points
We attach markers to the subject's face to measure and model facial expression while controlling each muscle. As a result, 16 feature points are tracked. Each muscle contraction between neutral and maximum strength is quantized into 11 steps. We also capture the 6 basic expressions, consisting of anger, disgust, fear, happiness, sadness and surprise, and quantize the difference between neutral and each of these into 11 steps as well.

The face is divided into sub-areas, such as the forehead sub-area. In the mouth area, each muscle contraction does not happen independently, so the learning data for this area are produced from combined mouth shapes. The 6 basic expressions are appended to the learning data and are also quantized into 11 steps. Each pattern is composed of a data pair: a muscle parameter vector and a feature-point movement vector.

3-1 Open Test
We attach markers to a real human's face and capture the expression. After normalization, the movement values of the feature points for an expression not contained in our model's learning data are given as the test sample. Table 1 shows the error between the captured face and the synthesized face; the first column is the marker-coordinate error in the display. To evaluate subjective similarity, 7 people first create the face whose impression is closest to the true expression, an averaged face is formed (Figure 3 shows the averaged face for Anger), and the error between the synthesized face and this averaged face is defined; the standard deviation (s) is also calculated. For example, in the case of Anger, the error is 5.01, which degrades the impression of the synthesized face; Anger is the worst case.

Figure 2. Open Test for Surprise: a) Original, b) Average, c) Synthesized
Figure 3. Open Test for Anger: a) Original, b) Average, c) Synthesized

4 To Omit The Marker Location
The marker-tracking method suffers from a strong dependence on the initial position of the markers, so a new method is introduced here. This method uses optical flow: the flow field is computed and each flow vector is averaged within masks on the face, and the averaged flow is given to a neural network whose output is the muscle parameters. This network has learned pairs of optical-flow features and the muscle parameters corresponding to the original facial images. As a result, expressions can be synthesized from optical flow instead of marker tracking.

Parameter conversion from 2D to 3D works well when the model is fitted to the target person's face, for both expressions and vowel pronunciations. We currently fit the model by hand, and the result strongly depends on the model's initial location and on the target's individual effects; automating this fitting is the next problem to be solved. Our facial muscle model also requires a long computation time, so faster formulations for a new muscle model are under examination. More delicate facial expressions can be captured by optical flow than by markers; this evaluation is the next subject.

Table 1. Estimated Error for Open Test
Figure: (a) Optical Flow, (b) Masks for Averaging
Figure 5. Synthesis by Optical Flow: Captured vs. Synthesized

References
[1] Yuencheng Lee, Demetri Terzopoulos and Keith Waters, "Realistic Modeling for Facial Animation", Proc. SIGGRAPH '95, 1995.
[2] Demetri Terzopoulos and Keith Waters, "Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, No. 6, 1993.
[3] Paul Ekman and Wallace V. Friesen, "Facial Action Coding System", Consulting Psychologists Press, 1978.
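As a rough illustration of the data layout in Section 2-1, the sketch below quantizes a contraction into the 11 steps the paper uses and runs an untrained two-layer perceptron from feature-point movements to muscle parameters. The muscle count, hidden-layer size, and random weights are assumptions for illustration only; the paper's trained network would replace them.

```python
import numpy as np

def quantize(strength, steps=11):
    """Quantize a contraction in [0, 1] (neutral..maximum) into 11 steps, 0..10."""
    return int(round(strength * (steps - 1)))

# Hypothetical learning-pattern layout: each pattern is a data pair of a
# muscle-parameter vector and a feature-point movement vector.  The 16
# feature points follow the paper; the muscle count is an assumption.
N_MUSCLES, N_POINTS = 14, 16
muscle_vec = np.zeros(N_MUSCLES)
muscle_vec[0] = quantize(0.73) / 10        # one muscle at step 7 of 10
displacement = np.zeros(2 * N_POINTS)      # (dx, dy) per feature point

# Untrained two-layer perceptron mapping movements -> muscle parameters,
# standing in for the paper's trained network.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 2 * N_POINTS))
W2 = rng.normal(size=(N_MUSCLES, 32))
hidden = np.tanh(W1 @ displacement)
estimated_muscles = W2 @ hidden            # would approximate muscle_vec after training
```

Training on (muscle vector, movement vector) pairs sampled at the 11 quantization levels is what lets the network invert the 2D displacements back to 3D muscle parameters.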
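The mask-averaging step of Section 4 might be computed as below. This is a minimal sketch assuming a dense flow field and boolean region masks; the two-region layout here is illustrative, not the paper's actual face masks.

```python
import numpy as np

def average_flow_in_masks(flow, masks):
    """Reduce a dense optical-flow field (H x W x 2) to one mean (dx, dy)
    per masked face region, concatenated into a feature vector."""
    feats = []
    for mask in masks:
        region = flow[mask]                # all flow vectors inside the region
        feats.append(region.mean(axis=0))  # mean (dx, dy) for that region
    return np.concatenate(feats)

# Toy example: the top half of a 4x4 field moves right by 1 pixel.
H, W = 4, 4
flow = np.zeros((H, W, 2))
flow[:2, :, 0] = 1.0
top = np.zeros((H, W), bool); top[:2] = True
bot = ~top
v = average_flow_in_masks(flow, [top, bot])
```

The resulting per-region vector plays the same role as the normalized marker displacements: a fixed-length input that a network can map to muscle parameters without any marker placement.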