http://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/ai-creates-fake-obama
You can’t trust what you see or hear in videos anymore.
Artificial intelligence software can generate highly realistic fake videos of former president Barack Obama using existing audio and video clips of him, a new study [PDF] finds. Essentially, the researchers synthesized videos in which Obama lip-synced words he had spoken up to decades earlier.
Such work could one day help generate digital models of a person for virtual reality or augmented reality applications, researchers say.
Example video: [embedded in the original article]
5 Comments
Tomi Engdahl says:
Adrienne LaFrance / The Atlantic:
New tech allows realistic superimposing of people’s mouth movements in video, making them appear to say something they didn’t, which could be used to deceive
The Technology That Will Make It Impossible for You to Believe What You See
https://www.theatlantic.com/technology/archive/2017/07/what-do-you-do-when-you-cannot-believe-your-own-eyes/533154/
With these techniques, it’s difficult to discern between videos of real people and computerized impostors that can be programmed to say anything.
Tomi Engdahl says:
This New Algorithm Can Literally Put Words In Your Mouth
http://www.iflscience.com/technology/this-new-algorithm-can-literally-put-words-in-your-mouth/
We are already well into the era of “fake news”, and things might be about to get a whole lot murkier. Researchers at the University of Washington have shown how they can create video clips of Barack Obama by using audio from other speeches.
Tomi Engdahl says:
Technology That Turns Obama’s Words Into Lip-Synced Videos to Be Featured at SIGGRAPH
http://variety.com/2017/digital/news/obama-videos-lip-syncing-audio-1202509988/
A paper set to be delivered at next week’s SIGGRAPH 2017 conference has garnered a lot of pre-confab attention because the technology could possibly be used to produce fake news videos. But the technology described in the paper, “Synthesizing Obama: Learning Lip Sync From Audio,” could have many more beneficial uses, especially in the entertainment and gaming industries.
Researchers from the University of Washington have developed the technology to photorealistically put different words into former President Barack Obama’s mouth, based on several hours of video footage from his weekly addresses. They used a recurrent neural network to learn how Obama’s mouth moves, then manipulated his mouth and head motions so as to sync them with rearranged words, creating new sentences.
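The core idea described above, a recurrent network that maps a sequence of audio features to per-frame mouth shapes, can be sketched in a few lines. This is a minimal illustration with made-up dimensions and random (untrained) weights, not the actual network or feature set from the “Synthesizing Obama” paper:

```python
import numpy as np

# Hypothetical dimensions: 13 audio feature coefficients (e.g. MFCCs)
# per audio frame, and 18 (x, y) mouth landmark points per video frame.
N_AUDIO_FEATURES = 13
N_HIDDEN = 64
N_MOUTH_COORDS = 18 * 2

rng = np.random.default_rng(0)

# Randomly initialised weights stand in for a trained model.
W_xh = rng.normal(0, 0.1, (N_HIDDEN, N_AUDIO_FEATURES))
W_hh = rng.normal(0, 0.1, (N_HIDDEN, N_HIDDEN))
W_hy = rng.normal(0, 0.1, (N_MOUTH_COORDS, N_HIDDEN))

def audio_to_mouth_shapes(audio_frames):
    """Map a sequence of audio feature vectors to one mouth-landmark
    vector per frame with a vanilla recurrent network."""
    h = np.zeros(N_HIDDEN)
    outputs = []
    for x in audio_frames:
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent state update
        outputs.append(W_hy @ h)          # per-frame mouth shape
    return np.array(outputs)

# 100 frames of (fake) audio features -> 100 mouth-shape vectors.
features = rng.normal(size=(100, N_AUDIO_FEATURES))
shapes = audio_to_mouth_shapes(features)
print(shapes.shape)  # (100, 36)
```

In the full pipeline, the predicted mouth shapes are then used to synthesize photorealistic mouth texture that is composited into target video of the speaker.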
Tomi Engdahl says:
Nvidia and Remedy use neural networks for eerily good facial animation
The neural network just needs a few minutes of video, or even just an audio clip.
https://arstechnica.com/gaming/2017/08/nvidia-remedy-neural-network-facial-animation/
Remedy, the developer behind the likes of Alan Wake and Quantum Break, has teamed up with GPU-maker Nvidia to streamline one of the more costly parts of modern games development: motion capture and animation. As showcased at Siggraph, by using a deep learning neural network—run on Nvidia’s costly eight-GPU DGX-1 server, naturally—Remedy was able to feed in videos of actors performing lines, from which the network generated surprisingly sophisticated 3D facial animation. This, according to Remedy and Nvidia, removes the hours of “labour-intensive data conversion and touch-ups” that are typically associated with traditional motion-capture animation.
Aside from cost, facial animation, even when motion captured, rarely reaches the same level of fidelity as other animation. That odd, lifeless look seen in even the biggest blockbuster games often comes down to the limits of facial animation. Nvidia and Remedy believe their neural network solution is capable of producing results as good as, if not better than, what’s produced by traditional techniques. It’s even possible to skip the video altogether and feed the neural network a mere audio clip, from which it’s able to produce an animation based on its prior training.
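The audio-only path means animation parameters are driven directly from the sound signal. As a deliberately crude stand-in for what a trained network does, the sketch below derives a single hypothetical “jaw open” parameter from per-frame loudness; a real system like the one described would regress a full set of facial parameters instead:

```python
import numpy as np

SAMPLE_RATE = 16_000
FRAME_LEN = 640  # 40 ms of audio per animation frame (25 fps)

def jaw_open_from_audio(samples):
    """Drive a single 'jaw open' animation parameter from per-frame
    RMS loudness, normalised to [0, 1]. A trained model would instead
    predict many facial parameters from richer audio features."""
    n_frames = len(samples) // FRAME_LEN
    frames = samples[:n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    peak = rms.max()
    return rms / peak if peak > 0 else rms

# One second of a 220 Hz tone with a linearly rising volume envelope.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) * t
jaw = jaw_open_from_audio(audio)
print(len(jaw))  # 25 animation frames
```

Because the envelope rises monotonically, the jaw parameter opens gradually over the clip, peaking at the loudest frame.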
Tomi Engdahl says:
A Whole New Game: NVIDIA Research Brings AI to Computer Graphics
https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/
The same GPUs that put games on your screen could soon be used to harness the power of AI to help game and film makers move faster, spend less and create richer experiences.
At SIGGRAPH 2017 this week, NVIDIA is showcasing research that makes it far easier to animate realistic human faces, simulate how light interacts with surfaces in a scene and render realistic images more quickly.
NVIDIA is combining our expertise in AI with our long history in computer graphics to advance 3D graphics for games, virtual reality, movies and product design.
Generating Expressive 3D Facial Animations From Audio
https://news.developer.nvidia.com/generating-expressive-3d-facial-animations-from-audio/