Deepfake technology allows anyone to dance like a pro
Researchers created a program that copies the dance moves of one person and 'gifts' them to another, using neural-network software to make it look as if the second person can dance.
It took the team several months to map the movements of one person onto the entire body of another. The video shows the original source performing, followed by shots of other people who never actually performed those moves but appear to do so perfectly thanks to deepfake technology. The process uses two generative adversarial neural networks, a video of one person dancing, and a video of the target subject moving. The source dancer's movements are transferred to the second person, and additional software then refines the routine and ensures the face appears clear.
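One step in pipelines like this is smoothing the detected pose keypoints over time so the synthesized dancer does not jitter between frames. The sketch below illustrates that idea with a simple exponential moving average over 2D keypoints; the function name, the filter choice, and the sample values are assumptions for illustration, not the researchers' exact method.

```python
# Illustrative sketch of temporal keypoint smoothing (an assumed
# exponential moving average, not the authors' exact filter).
def smooth_keypoints(frames, alpha=0.5):
    """Smooth a sequence of pose keypoints across frames.

    frames: list of frames, each a list of (x, y) keypoint tuples.
    alpha:  weight of the current frame (lower = smoother motion).
    """
    if not frames:
        return []
    smoothed = [list(frames[0])]
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            (alpha * x + (1 - alpha) * px, alpha * y + (1 - alpha) * py)
            for (x, y), (px, py) in zip(frame, prev)
        ])
    return smoothed

# Example: one jittery keypoint tracked across three frames.
poses = [[(100.0, 200.0)], [(110.0, 198.0)], [(104.0, 202.0)]]
print(smooth_keypoints(poses))
```

The smoothed stick-figure poses, rather than the raw detections, would then be fed to the generative network that renders the target subject frame by frame.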
The UC Berkeley researchers published a paper on "do as I do" motion transfer. They say the project 'hit a nerve with folks', especially disabled people, such as quadriplegics, who wanted to watch themselves move in ways they were physically unable to. Because such a simple method produced compelling results, the researchers were motivated to provide a forensics tool for reliably detecting synthetic content. They also released an open-source dataset of videos for anyone interested in motion transfer.
Who created the deepfake?
UC Berkeley researchers
Caroline Chan
Shiry Ginosar
Tinghui Zhou
Alexei Efros
Was the content disclosed as a deepfake?
Yes
Was the deepfake consensual?
Yes
How was the deepfake created?
Deepfake Software
Video Editing
File Footage
Motion transfer
Target subject performing
Spatio-temporal smoothing
Realistic face synthesis
Year
2018