In December 2017 a Reddit user known as "deepfakes" posted a series of pornographic videos with the faces of female celebrities superimposed onto adult film performers. They weren't bad Photoshop jobs; the faces moved naturally and blended seamlessly with the other performers' bodies. The films were created using machine learning techniques to realistically superimpose one face onto another, and the implications of this technology are all too apparent.

What started with female celebrities superimposed onto porn films has developed into demonstrations that we can control what presidents and world leaders appear to say, making them seem to say things that they never did, as manipulated videos of Barack Obama and Donald Trump have shown.

But until recently the technology was still relatively crude: you could accurately map how individual features of the face behaved, copying when the eyebrows were raised, for example, or how the lips moved. Recent research from Stanford University shows how you can now create photo-realistic reanimations of portrait videos that copy the position and full movement of the head, the eye gaze and all facial expressions. The result is significantly more realistic fake videos, much closer to what might previously have required expensive and complicated CGI, but at a fraction of the cost.

There are obvious threats posed by such videos, from the porn and politician examples already discussed to the ability to put anybody's face on another body, or to control what that face appears to say. Sites and platforms from Pornhub to Twitter have already taken steps to find and remove deepfakes from their video content.

But how will the rise of deepfakes develop? Will we all find our faces superimposed onto the bodies of porn stars or politicians?

With current technology you still need many hours of footage of both faces: the face being replaced and the face replacing it. This puts deepfakes out of reach for most people, simply because hours and hours of footage of them does not exist. But as machine learning techniques improve, the volume of footage needed to train the process will shrink, and the opportunity to superimpose the faces of ordinary people will become more of a threat.
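To see why so much footage is needed, here is a minimal sketch (in PyTorch, purely illustrative and not the code behind any particular deepfake tool) of the shared-encoder, two-decoder setup that early face-swap tools popularised: one encoder learns a general representation of a face, while each decoder learns to rebuild one specific person, something it can only do well after seeing that person from many angles and with many expressions.

```python
# Purely illustrative sketch (PyTorch assumed) of the shared-encoder /
# two-decoder idea behind many face-swap tools. It is not the code of any
# specific deepfake app; image sizes and layer widths are arbitrary.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Learns a generic "face" representation shared by both identities.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder per person: it can only learn to rebuild a face it has
    # seen in many frames, hence the need for hours of footage.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

# Stand-ins for cropped, aligned video frames of person A and person B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training objective: each decoder reconstructs its own person's frames.
loss = nn.MSELoss()
loss_a = loss(decoder_a(encoder(faces_a)), faces_a)
loss_b = loss(decoder_b(encoder(faces_b)), faces_b)

# The swap: encode frames of person B, decode with A's decoder, giving
# person A's face with person B's pose and expression.
swapped = decoder_a(encoder(faces_b))
```

The more varied the footage each decoder is trained on, the more convincing the swap; as models become more data-efficient, that requirement shrinks.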

But there are also huge opportunities that this technology will bring, including:

• One actor can be replaced with another in a film by superimposing their face in every scene. The cast could even be changed for different target markets, so the same film features different actors for different audiences.
• Brands can license celebrity faces for their campaigns and photoshoots: rather than paying for a celebrity to appear in a new video, simply license their face to be part of it.
• Foreign films or adverts can be dubbed into a new language, with the facial movements of that language (most notably the way the mouth moves to pronounce words) copied over so that the film looks seamless in the new language.

The dangers of deepfake videos are clear: the ability to portray somebody negatively by making them appear to do or say something they never would, or to place them in a scenario they would never normally be in. We will need to be increasingly aware of this technology and of the fact that what we see in videos online might not be the truth, even if it looks very, very real. But the possibilities this technology brings to video production are also exciting: the ability to efficiently and effectively change what is said, and who says it, has many applications in film, advertising and beyond.

But we all need to become increasingly aware of a world in which what we see may not be the truth.

Matt Rhodes

About Matt Rhodes

Head of Digital Strategy for work. Marathon runner and charity trustee for fun.
