HOW DEEPFAKE TECHNOLOGY ACTUALLY WORKS


Mon 24 February 2020:

Everyone knows enough about deepfake videos to find them either creepy or entertaining, but few people know the impressive tech behind them.

As coverage of deepfake technology becomes more prevalent, it’s reasonable to wonder how these videos even work. Advancements in motion capture and facial recognition over the past decade have been staggering – and terrifying. What was once limited to well-funded computer scientists and movie studios is now a tool in the hands of comedy outlets and state-run media.

By definition, deepfakes are videos in which a person’s face and/or voice are replaced with someone else’s by an AI. The underlying machine learning techniques grew out of academic research dating back to the 1990s. The term itself, a combination of the words “deep learning” and “fake”, originated on Reddit in 2017. After a controversial stint as a way to fabricate pornographic videos on Reddit, deepfakes emerged as a source of entertainment online, and a frightening reminder of the dangers of the internet.


The process of creating a deepfake has changed as various apps and free software have entered the public space, but the more elaborate deepfake videos still follow the same underlying principles. There is usually an autoencoder and a generative adversarial network (GAN). In extremely simple terms, the autoencoder is a computer’s way of seeing a face and learning all the ways it can “animate” – how that face would blink, smile, grin, and so on. The GAN is a system through which the images produced from the autoencoder are compared to real images of the targeted person. It rejects inaccurate images, prompting new attempts to be generated, and the cycle repeats, inching ever closer to a “perfect” recreation of the person. In sum: one program generates images of a person’s facial expressions, another tells it when the expressions look fake, and they argue until the images are nearly perfect.
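The generate-compare-reject cycle described above can be sketched in a few lines of code. This is a deliberately toy illustration, not a real deepfake pipeline: the faces are random vectors, the autoencoder is linear, and the "discriminator" is just a distance score standing in for a learned network. All names and dimensions here are invented for illustration.

```python
# Toy sketch of the two components: an autoencoder that compresses a
# "face" into a small latent code and reconstructs it, and a GAN-style
# loop in which a discriminator rejects poor reconstructions until one
# passes. Illustrative only -- real systems use deep convolutional nets.
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """A minimal linear autoencoder: encode to a small latent, decode back."""
    def __init__(self, face_dim=8, latent_dim=3):
        self.enc = rng.normal(size=(latent_dim, face_dim)) * 0.1
        self.dec = rng.normal(size=(face_dim, latent_dim)) * 0.1

    def encode(self, face):
        return self.enc @ face          # compress face to a latent "expression" code

    def decode(self, latent):
        return self.dec @ latent        # reconstruct a face from that code

    def train_step(self, face, lr=0.05):
        # One gradient-descent step on reconstruction error ||recon - face||^2.
        latent = self.encode(face)
        err = self.decode(latent) - face
        self.dec -= lr * np.outer(err, latent)
        self.enc -= lr * np.outer(self.dec.T @ err, face)
        return float(np.mean(err ** 2))

def discriminator(candidate, real):
    # Stand-in for a learned discriminator: distance from the real image
    # (lower score = more convincing fake).
    return float(np.mean((candidate - real) ** 2))

face = rng.normal(size=8)               # a stand-in for a real face image
ae = TinyAutoencoder()

# The adversarial cycle: keep improving and regenerating until the
# discriminator stops rejecting the output.
threshold = 0.01
for step in range(5000):
    ae.train_step(face)
    score = discriminator(ae.decode(ae.encode(face)), face)
    if score < threshold:
        break

print(f"accepted after {step} steps, score = {score:.4f}")
```

In a real deepfake pipeline the key trick is that one shared encoder is trained on both people's faces while each person gets their own decoder; feeding person A's latent code into person B's decoder is what performs the face swap.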

How Deepfakes Are Used Today

In its infancy, public use of deepfake technology was exclusive to a now-banned subreddit, but it quickly became another way to make videos for the internet. The most common deepfakes these days are videos in which the lead actor of a famous movie is replaced by someone else. The majority of viral deepfakes are harmless videos doing things like morphing Bill Hader into Tom Cruise.

However, more nefarious deepfakes have gained popularity in recent years. Viral deepfaked videos of political leaders like US House Speaker Nancy Pelosi and Russian President Vladimir Putin have shown the potential dangers of convincing fake videos. These incidents triggered numerous campaigns to outlaw – or at least raise awareness of – deepfake technology. Social media is no stranger to such concerns after suspicions arose over FaceApp, last year’s trendy program that altered people’s photos to make them look older. While most claims that the app was harmful were overblown, the public outcry over a neural network having access to everyone’s faces clearly illustrates the perceived threat of deepfake technology.

As deepfakes continue to be used for entertainment, harassment, and academic study, it’s unlikely that we’ve seen the last viral deepfake story. Hopefully, the technology that detects deepfakes advances at an equivalent pace.

-Source: https://screenrant.com


