Iterative Text-based Editing of Talking-heads Using Neural Retargeting

Xinwei Yao¹, Ohad Fried², Kayvon Fatahalian¹, Maneesh Agrawala¹

¹Stanford University   ²The Interdisciplinary Center Herzliya

Abstract

We present a text-based tool for editing talking-head video that enables an iterative editing workflow. On each iteration, users can edit the wording of the speech, further refine mouth motions if necessary to reduce artifacts, and manipulate non-verbal aspects of the performance by inserting mouth gestures (e.g. a smile) or changing the overall performance style (e.g. energetic, mumble). Our tool requires only 2–3 minutes of video of the target actor and synthesizes the video for each iteration in about 40 seconds, allowing users to quickly explore many editing possibilities as they iterate. Our approach is based on two key ideas. (1) We develop a fast phoneme search algorithm that quickly identifies the phoneme-level subsequences of the source repository video that best match a desired edit; this enables our fast iteration loop. (2) We leverage a large repository of video of a source actor and develop a new self-supervised neural retargeting technique for transferring the mouth motions of the source actor to the target actor. This allows us to work with relatively short target actor videos, making our approach applicable in many real-world editing scenarios. Finally, our refinement and performance controls give users the ability to further fine-tune the synthesized results.
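To give a flavor of what phoneme-level subsequence matching involves, the sketch below finds the span of a phoneme-labeled repository whose phoneme string best matches the phonemes of an edited word, scored by edit distance. This is a minimal illustration only, not the paper's algorithm; the function names, the brute-force window scan, and plain Levenshtein scoring are all assumptions for exposition (the actual method would also account for timing, co-articulation, and visual similarity).

```python
def edit_distance(a, b):
    """Levenshtein distance between two phoneme-token sequences (classic DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete x
                           cur[j - 1] + 1,       # insert y
                           prev[j - 1] + (x != y)))  # substitute / match
        prev = cur
    return prev[-1]

def best_matching_span(repo, query):
    """Scan windows of lengths near len(query) over the repository's phoneme
    sequence; return (start, end, cost) of the cheapest-matching span."""
    best = None
    n, m = len(repo), len(query)
    for w in range(max(1, m - 1), m + 2):  # allow slightly shorter/longer spans
        for i in range(n - w + 1):
            cost = edit_distance(repo[i:i + w], query)
            if best is None or cost < best[2]:
                best = (i, i + w, cost)
    return best

# Repository phonemes for "hello world"; query is the edited word "world".
repo = ["HH", "EH", "L", "OW", "W", "ER", "L", "D"]
print(best_matching_span(repo, ["W", "ER", "L", "D"]))  # → (4, 8, 0)
```

A real system would index the repository (e.g. by phoneme n-grams) rather than scan every window, which is presumably part of what makes the paper's search fast enough for a 40-second iteration loop.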

Video

Details

Paper on arXiv · Supplemental Materials

Citation

@misc{yao2020talkinghead,
    author = {Xinwei Yao and Ohad Fried and Kayvon Fatahalian and Maneesh Agrawala},
    title = {Iterative Text-based Editing of Talking-heads Using Neural Retargeting},
    year = {2020},
    eprint = {2011.10688},
    archivePrefix = {arXiv},
}