Facebook: Detecting the Models Behind Deepfakes

Deepfakes have become so believable in recent years that it can be difficult to tell them apart from real images. As they become more convincing, it’s important to expand our understanding of deepfakes and where they come from. In collaboration with researchers at Michigan State University (MSU), we’ve developed a method of detecting and attributing deepfakes. It relies on reverse engineering, working back from a single AI-generated image to the generative model used to produce it.

Within the scientific community, much of the focus with deepfakes is on detection — telling whether an image is real or a deepfake. Beyond detecting deepfakes, researchers are also able to perform what’s known as image attribution, that is, determining what particular generative model was used to produce a deepfake. Image attribution can identify a deepfake’s generative model if it was one of a limited number of generative models seen during training. But the vast majority of deepfakes — an infinite number — will have been created by models not seen during training. During image attribution, those deepfakes are flagged as having been produced by unknown models, and nothing more is known about where they came from, or how they were produced. 
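To make the distinction concrete, closed-set image attribution behaves roughly like the sketch below: a classifier trained on a fixed set of known generative models, with a confidence threshold that flags everything else as coming from an unknown model. The network, names, and threshold here are illustrative assumptions, not the detector used in this research.

```python
# Illustrative sketch of closed-set image attribution with an "unknown model"
# fallback. AttributionNet and the 0.9 threshold are hypothetical, chosen only
# to show the idea; they are not the architecture described in the research.
import torch
import torch.nn as nn

class AttributionNet(nn.Module):
    """Maps an RGB image to logits over a fixed set of known generative models."""
    def __init__(self, num_known_models: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_known_models)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

def attribute(net: AttributionNet, image: torch.Tensor, threshold: float = 0.9) -> int:
    """Returns the index of the predicted known model for an image of shape
    (3, H, W), or -1 ("unknown model") when confidence falls below the threshold."""
    with torch.no_grad():
        probs = torch.softmax(net(image.unsqueeze(0)), dim=1)
        confidence, predicted = probs.max(dim=1)
    return predicted.item() if confidence.item() >= threshold else -1
```

The limitation described above follows directly from this setup: any deepfake produced by a model outside the training set can at best be lumped into that single "unknown" bucket.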

Our reverse engineering method takes image attribution a step further by helping to deduce information about a particular generative model just based on the deepfakes it produces. It’s the first time that researchers have been able to identify properties of a model used to create a deepfake without any prior knowledge of the model. 

Through this groundbreaking model parsing technique, researchers will now be able to obtain more information about the model used to produce particular deepfakes. Our method will be especially useful in real-world settings where the only information deepfake detectors have at their disposal is often the deepfake itself. In some cases, researchers may even be able to use it to tell whether certain deepfakes originate from the same model, regardless of differences in their outward appearance or where they show up online. 
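For a sense of what model parsing means in practice, the sketch below shows the general shape of the idea: one network estimates the subtle generation fingerprint left in an image, and a second network predicts properties of the generative model (for example, rough architecture descriptors and the type of loss it was trained with) from that fingerprint. The layer sizes, output heads, and hyperparameter counts are assumptions made for illustration; they are not the architecture published by Facebook and MSU.

```python
# Illustrative sketch of model parsing: image -> estimated fingerprint ->
# predicted generative-model properties. All sizes and heads are hypothetical.
import torch
import torch.nn as nn

class FingerprintEstimator(nn.Module):
    """Estimates a fingerprint (generation artifacts) with the same shape as the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class HyperparameterParser(nn.Module):
    """Predicts model properties from a fingerprint: a continuous vector
    (e.g. proxies for network depth and width) and a discrete label
    (e.g. which family of training loss was used)."""
    def __init__(self, num_continuous: int = 8, num_loss_types: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.continuous_head = nn.Linear(32, num_continuous)
        self.discrete_head = nn.Linear(32, num_loss_types)

    def forward(self, fingerprint: torch.Tensor):
        h = self.encoder(fingerprint)
        return self.continuous_head(h), self.discrete_head(h)

# Usage: a single deepfake image yields estimated model properties.
image = torch.rand(1, 3, 128, 128)
fingerprint = FingerprintEstimator()(image)
arch_estimate, loss_logits = HyperparameterParser()(fingerprint)
```

Because the estimated fingerprint depends on the generating model rather than on the image content, comparing fingerprints is also one plausible way to check whether two visually unrelated deepfakes came from the same source.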

To read the full story, visit Facebook AI.
