Deepfakes are videos or pictures, typically of celebrities but also of politicians and other public figures, that show them saying or doing things they have never said or done. They usually work by transposing a famous face onto existing video or images, making it look as though the person said or did something they did not.
What are deepfakes?
Deepfakes are a type of AI-generated fake media that can be used to impersonate someone. The underlying technology has been around since 2014, but deepfakes have only recently become a household name.
The term “deepfake” is a portmanteau of “deep learning” and “fake”; it was popularised in 2017 by an anonymous Reddit user of the same name who shared face-swapped videos. Deepfakes are generated using machine-learning algorithms that learn from real videos and can then synthesise new video based on this training data.
How do deepfakes work?
Deepfakes use a machine-learning algorithm to take existing videos and make them appear to feature a chosen person. The process works by training the system on images of the target person, then using this learned model to map their face onto someone else.
This type of AI is known as a generative adversarial network (GAN), first developed in 2014 by Ian Goodfellow and his colleagues at the University of Montreal. A GAN pairs two neural networks: one (the generator) creates fake images, while the other (the discriminator) tries to tell them apart from real ones.
The discriminator attempts to work out whether an image is real or fake by checking how closely it resembles known real images. The generator, in turn, uses the discriminator’s verdicts on its output as feedback, gradually learning to produce images convincing enough to fool the discriminator into judging them genuine.
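To make that adversarial loop concrete, here is a minimal toy sketch in PyTorch. It is not a deepfake system: the layer sizes, the random stand-in batch of “real” images and the training length are all illustrative placeholders.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: the generator turns random noise into flat 64x64 "images",
# and the discriminator scores how real an image looks (1 = real, 0 = fake).
latent_dim, img_dim = 100, 64 * 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(32, img_dim)  # placeholder for a batch of real face crops

for step in range(1000):
    # Train the discriminator: real images should score 1, generated ones 0.
    fakes = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (bce(discriminator(real_images), torch.ones(32, 1))
              + bce(discriminator(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(discriminator(generator(torch.randn(32, latent_dim))),
                 torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```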
To create a deepfake video, you need two things:
A source video: any video that contains footage of someone’s face. This could be a movie, a TV show or even your own home movies.
A deepfake algorithm: software that automatically transfers someone’s facial expressions from one video to another (a sketch of the first data-gathering step follows below).
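Most deepfake pipelines begin with exactly this data-gathering step: extracting face crops from the source video, frame by frame. The sketch below shows one plausible way to do that with OpenCV’s bundled Haar-cascade face detector; the video path, output folder and detector parameters are assumptions made purely for illustration.

```python
import os
import cv2

# Walk through a source video frame by frame and save every detected
# face crop; these crops would later serve as training data.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
video = cv2.VideoCapture("source_video.mp4")  # hypothetical input path
os.makedirs("faces", exist_ok=True)

saved = 0
while True:
    ok, frame = video.read()
    if not ok:  # end of video (or read error)
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.3,
                                                  minNeighbors=5):
        cv2.imwrite(f"faces/face_{saved:05d}.png", frame[y:y + h, x:x + w])
        saved += 1
video.release()
print(f"Saved {saved} face crops")
```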
How widespread is the problem?
There are several ways to create fake videos with deepfake technology. One is to use open-source software like FakeApp; another is to build on general-purpose machine-learning frameworks. While most of these tools are available on GitHub, there are also proprietary solutions that companies can purchase from private vendors or develop in-house (e.g., Adobe).
The availability of these tools makes deepfakes accessible even to non-technical users, who may use them for malicious purposes without realizing the consequences of their actions. For example, someone could use deepfake software to edit your face onto another person’s body in a pornographic video, entirely without your knowledge or consent.
Are Deepfakes legal?
It depends on where you live and what you do with them. Deepfakes have been banned in some countries, but not others.
In the US, deepfake creators are generally protected by free-speech laws. However, a deepfake that contains copyrighted material or infringes someone’s privacy rights (for example, by impersonating them) can be illegal under federal law, whether or not it was intended as pornography or political propaganda.
How can we spot deepfakes?
The most obvious way to spot a fake is to look closely at the video itself. This isn’t always easy, because the AI-generated face is blended into existing footage, so inspection often has to be done frame by frame, looking for artefacts such as blurring around the edges of the face, mismatched lighting or unnatural blinking.
The other option is to look at the metadata associated with the image or video file. This includes details such as the camera model, lens type and focal length used, as well as traces of re-processing, such as resolution changes left behind when an image is saved from RAW format to JPEG.
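As a rough illustration, the snippet below dumps whatever EXIF metadata an image carries, using Pillow; the filename is a placeholder. Bear in mind that missing metadata is only a weak signal, since many platforms strip EXIF data on upload anyway.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Print an image's EXIF metadata; editing tools often strip or
# rewrite these fields, so oddities here can warrant a closer look.
img = Image.open("suspect_image.jpg")  # hypothetical file
exif = img.getexif()

if not exif:
    print("No EXIF metadata found (possibly stripped or re-encoded).")
else:
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```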
More sophisticated methods of detecting fakes check for inconsistencies between frames as an object or person moves through space and time, but these techniques are still far from perfect.
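A very crude version of that idea can be sketched with OpenCV: measure how much each frame differs from the previous one and flag sudden jumps, which can appear where a synthesised face has been blended in. The threshold below is an illustrative guess, not a calibrated detector; real systems rely on far richer temporal features.

```python
import cv2
import numpy as np

# Flag frames whose pixel content jumps sharply relative to the previous
# frame, a crude proxy for the temporal inconsistencies mentioned above.
video = cv2.VideoCapture("suspect_video.mp4")  # hypothetical file
prev, diffs = None, []

while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if prev is not None:
        diffs.append(float(np.mean(np.abs(gray - prev))))
    prev = gray
video.release()

diffs = np.array(diffs)
threshold = diffs.mean() + 3 * diffs.std()  # illustrative, not calibrated
suspect_frames = np.where(diffs > threshold)[0] + 1
print(f"Frames with unusually large jumps: {suspect_frames.tolist()}")
```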
Why are they dangerous?
Deepfakes are more believable than traditional photoshopped images because they are built from real video and audio recordings, so convincing results can be produced simply by splicing different clips together. For example, footage of a politician answering questions in one interview could be combined with audio taken from an entirely different speech, making them appear to say something they never actually said.
The danger with deepfakes is that they could be used to distort reality — manipulating politicians’ statements to make them look bad, or even creating fake news stories in which real people say things they never did.
What can we do about it?
There are several ways you can protect yourself from deepfakes. The first step is to check the source of any image or video you see online — if it’s not from a trusted news site or publisher, it may be fake.
If you see an image that seems too good to be true (for example, someone apparently giving a speech at a conference with no other evidence that they were there), try searching for the word “deepfake” plus the name of the person in the image, or related terms. You might find reports tracing where the image originally came from before it was manipulated.
Deepfakes are a democratised version of the decades-old practice of image manipulation. They have brought that practice to the masses, and the resulting wave of deepfake videos uploaded to the internet has had a real effect on public perception. At this stage, humans cannot reliably spot which videos and images are deepfakes. The best defence available right now is to raise awareness amongst social media users, and to encourage platforms like Facebook and Twitter to be more aggressive in taking down deepfake accounts and content.