FIPA - FOUNDATION FOR INTELLIGENT PHYSICAL AGENTS
nyws015, 96/06/03 09:38
Source: Eric Petajan (Lucent Technologies)
Video-Driven Face Animation in FIPA
The use of video messaging and conferencing is currently impeded by two basic problems. First and foremost, the data rate of the switched communications infrastructure is inadequate for good-quality video conferencing. Second, consumers appear reluctant to be viewed by strangers, especially given the distortions introduced by video processing and compression. Both problems are eliminated by using a synthetic graphical agent to represent the speaker at the receiver. The data rate required to articulate the graphical agent model (including face, body, and hands) can be very low, and visual privacy is maintained while visual speech gestures (primarily lip movements) and other gestures (e.g., head and eye movements, and hand gestures) are still communicated. Eventually, each speaker will be able to control how closely the agent's appearance resembles their own, depending on the privacy and data rate required.
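As a rough, hypothetical illustration of how low this data rate can be, the sketch below quantizes a plausible set of articulation parameters to 8 bits each and computes the resulting stream rate. The specific parameter set, frame rate, and quantization are assumptions chosen only for the estimate, not the actual Bell Labs parameter format.

    /* Hypothetical estimate: bitrate of a facial animation parameter
     * stream.  The parameter set and 8-bit quantization below are
     * illustrative assumptions, not the actual Bell Labs format. */
    #include <stdio.h>
    #include <stdint.h>

    /* One frame of agent articulation parameters, 8-bit quantized. */
    typedef struct {
        uint8_t head_rotation[3];   /* pitch, yaw, roll                    */
        uint8_t eye_gaze[2];        /* horizontal, vertical                */
        uint8_t eyelid[2];          /* left, right openness                */
        uint8_t lip_params[8];      /* mouth width, height, protrusion ... */
        uint8_t eyebrow[2];         /* left, right raise                   */
        uint8_t jaw;                /* jaw rotation                        */
    } AgentFrame;                   /* 18 bytes, no padding needed         */

    int main(void)
    {
        const double frame_rate     = 30.0;  /* assumed frames per second */
        const double bits_per_frame = 8.0 * sizeof(AgentFrame);
        const double stream_bps     = bits_per_frame * frame_rate;

        printf("parameters per frame : %zu bytes\n", sizeof(AgentFrame));
        printf("parameter stream     : %.1f kbit/s\n", stream_bps / 1000.0);
        printf("H.261 video, compare : >= 64 kbit/s\n"); /* p x 64 kbit/s */
        return 0;
    }

At roughly 4-5 kbit/s under these assumptions, such a parameter stream fits comfortably within ordinary switched-network modem rates, well below the 64 kbit/s floor of contemporary H.261 video conferencing.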
We have recently developed the technology for robust acquisition of facial features from talking faces; these features are then used to control a 3D synthetic talking-head model. Our contribution to the FIPA meeting at IBM will be to describe this system and show video demonstrations. Other advantages of this approach to personal visual communication include robust visual/acoustic automatic speech recognition (locally and in the network) and content retrieval, both using only the transmitted parameter stream.
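As a sketch of how the receiver side might be organized under the assumptions above, the fragment below shows the same decoded parameter stream both driving the synthetic head renderer and being logged for later visual speech recognition or content retrieval. The type, function names, and file name are illustrative placeholders, not an actual API.

    /* Hypothetical receiver-side sketch: one decoded parameter stream
     * feeds the synthetic head renderer and a retrieval/recognition log.
     * All names are illustrative placeholders. */
    #include <stdio.h>
    #include <stdint.h>

    /* The 18 parameter bytes sketched above, treated as an opaque frame. */
    typedef struct { uint8_t params[18]; } AgentFrame;

    static void render_head(const AgentFrame *f) {
        (void)f;
        /* ...deform the 3D talking-head mesh according to f and draw... */
    }

    static void index_frame(const AgentFrame *f, FILE *log) {
        /* Store the raw parameters so speech recognition or content
           retrieval can later run on the stream alone, with no video. */
        fwrite(f->params, sizeof f->params, 1, log);
    }

    int main(void) {
        AgentFrame f = {{0}};
        FILE *log = fopen("agent_stream.bin", "wb");  /* placeholder name */
        if (!log) return 1;

        /* In a real system, frames would arrive from the network at the
           video frame rate; a single dummy frame stands in here. */
        render_head(&f);
        index_frame(&f, log);

        fclose(log);
        return 0;
    }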
Eric Petajan
Bell Labs - Lucent Technologies
Murray Hill, NJ 07974
edp@bell-labs.com