Researchers rally to thwart deepfakes

Facebook has just officially launched its “Deepfake Detection Challenge” to advance research in the field … and stamp out this new plague on its social networks. The hunting season is open.

Not all technological advances are good, and “deepfake” audio-visual forgery, based on neural networks, is particularly toxic. It can swap new faces onto the stars of pornographic movies, falsify the speeches of politicians or celebrities, and even power scams. Welcome to the world of lies and camouflage.

No wonder Facebook, which has already faced waves of trolling and disinformation, is keen to block such content. The social network took advantage of the NeurIPS conference in Vancouver, held from December 8th to 15th, to officially launch its Deepfake Detection Challenge.

A $10 million reward
The idea is to give AI researchers a corpus of more than 100,000 deepfake videos and encourage them to develop detection technologies. Google has created a similar corpus, but on a much smaller scale. Facebook has also gone further, offering $10 million in grants and prizes to the best teams … as long as they release their code as open source.

More than 150 teams have already registered. They must submit their solutions by March 2020, and these will be evaluated against a dataset that has not been made public. “This is a huge challenge for Facebook,” Antoine Bordes, co-director of Facebook AI Research, told a press conference in Paris. “We are already developing detection methods internally, but we cannot do everything alone. That’s why we are trying to openly rally the community around this issue.”

To create this dataset, Facebook hired actors and filmed them in various situations. The company then produced deepfakes from these recordings using the best available technology. In the example provided by Facebook, can you tell which is the original video? (Tip: look closely at the eyes and their outlines.)

How does Deepfake work?

Deepfakes are built from two types of neural networks used in cascade: “autoencoders” and “generative adversarial networks” (GANs). An autoencoder is an algorithm made up of an “encoder” and a “decoder.” The former reduces an image to a compact mathematical representation called a “latent representation.” The latter reconstructs an image from that representation. To replace Clara Morgane’s head with Scarlett Johansson’s, all you have to do is pair the first person’s encoder with the second person’s decoder. And voila.
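To make the encoder/decoder idea concrete, here is a minimal sketch in PyTorch. The layer sizes, the 64×64 grayscale input and the untrained models are illustrative assumptions, not the architecture of any real deepfake tool; the point is simply that encoding a frame of person A and decoding it with person B’s decoder produces the swap described above.

```python
# Minimal autoencoder face-swap sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

LATENT_DIM = 128  # size of the "latent representation" (assumed value)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                        # 64x64 grayscale face -> 4096 values
            nn.Linear(64 * 64, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM),          # compress to the latent representation
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Sigmoid(),   # rebuild pixel values in [0, 1]
        )
    def forward(self, z):
        return self.net(z).view(-1, 1, 64, 64)

# One encoder/decoder pair per identity (reconstruction training not shown).
encoder_a, decoder_a = Encoder(), Decoder()    # would be trained on faces of person A
encoder_b, decoder_b = Encoder(), Decoder()    # would be trained on faces of person B

# The swap: encode a frame of person A, decode it with person B's decoder.
frame_of_a = torch.rand(1, 1, 64, 64)          # stand-in for a real video frame
swapped = decoder_b(encoder_a(frame_of_a))     # face of A rendered "as" B
print(swapped.shape)                           # torch.Size([1, 1, 64, 64])
```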


Autoencoders can also be variational. Rather than simply reconstructing known images, they sample the latent features from a probability distribution, which lets them generate new images that resemble the originals.
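The sketch below shows the sampling step that makes an autoencoder variational, using the standard reparameterization trick. The `VariationalHead` name and feature size are hypothetical; the takeaway is that each call draws a slightly different latent code, hence similar-but-new outputs.

```python
# Variational sampling sketch (names and sizes are assumptions).
import torch
import torch.nn as nn

LATENT_DIM = 128

class VariationalHead(nn.Module):
    """Turns encoder features into a distribution, then samples from it."""
    def __init__(self, feature_dim=512):
        super().__init__()
        self.to_mu = nn.Linear(feature_dim, LATENT_DIM)      # mean of each latent feature
        self.to_logvar = nn.Linear(feature_dim, LATENT_DIM)  # log-variance of each feature

    def forward(self, features):
        mu, logvar = self.to_mu(features), self.to_logvar(features)
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)        # random draw: this is what makes outputs vary
        return mu + eps * std              # a sampled latent code, different every call

head = VariationalHead()
features = torch.rand(1, 512)              # stand-in for encoder output
z1, z2 = head(features), head(features)    # two different-but-similar latent codes
```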

The drawback of the encoding-decoding process is that some information is always lost. When generating a face swap, blurry areas also appear. The researchers therefore came up with the idea of combining an autoencoder with a GAN, whose job is to let through only images of sufficient quality.

A generative adversarial network actually consists of two competing neural networks. The first generates images; the second must decide, based on its training, whether those images are real or fake. The harder the discriminator is to fool, the more realistic the generated images become. For a deepfake generator, the autoencoder’s decoder can simply be replaced by a GAN generator.
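Here is a toy version of that two-network competition. The tiny fully-connected models, batch size and step count are assumptions for readability; real GANs use convolutional architectures and train far longer. What it shows is the alternation: the discriminator learns to separate real from generated images, then the generator learns to fool it.

```python
# Toy GAN training loop (illustrative sketch, not a production setup).
import torch
import torch.nn as nn

LATENT_DIM = 128

generator = nn.Sequential(                 # maps a latent code to a fake 64x64 image
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, 64 * 64), nn.Sigmoid(),
)
discriminator = nn.Sequential(             # scores an image: closer to 1 = "looks real"
    nn.Linear(64 * 64, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_faces = torch.rand(16, 64 * 64)       # stand-in for a batch of real face crops
for step in range(100):                    # toy loop; real training runs much longer
    # 1) Train the discriminator to tell real images from generated ones.
    fake_faces = generator(torch.randn(16, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real_faces), torch.ones(16, 1)) + \
             bce(discriminator(fake_faces), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake_faces = generator(torch.randn(16, LATENT_DIM))
    g_loss = bce(discriminator(fake_faces), torch.ones(16, 1))  # want a "real" verdict
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```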

To detect deepfakes, researchers will once again rely on deep neural networks, which they will configure and train to tell genuine footage from forgeries. Hence the interest in having a large dataset like the one Facebook is offering. Deepfake detection is an emerging field, although a dozen or so different methods already exist. Some researchers focus on eye blinking, others on temporal inconsistencies between frames, and still others on texture and resolution.
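In its simplest form, such a detector is just a classifier trained on labeled real and fake frames. The small CNN below is a hypothetical sketch of that idea; challenge entries use far larger models and exploit temporal, blink and texture cues as well.

```python
# Minimal frame-level deepfake detector sketch (architecture is an assumption).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),        # per-frame probability of being fake
)

frames = torch.rand(8, 3, 224, 224)        # 8 RGB frames sampled from one clip
scores = detector(frames)                  # one score per frame
video_score = scores.mean().item()         # aggregate over frames for a clip verdict
print(f"probability the clip is a deepfake: {video_score:.2f}")
```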

Towards an endless race?

This work will obviously lead to an endless game of one-upmanship: as detectors improve, so will the quality of generators. Can this race ever be won? Facebook believes so. “Look at what happened with spam. The filters work and the problem has been resolved. With deepfakes, the situation will be the same. The more sophisticated detection technology becomes, the higher the cost of an undetectable forgery. You would need to rent computing clusters and real expertise to make good deepfake videos, so people will stop doing it,” Antoine Bordes said.

In other words, in the Deepfake competition, the winner will eventually be the one with the most computing power. It’s hard to beat Facebook in this regard.
