Meta Outlines Proactive Approach to Labeling AI-Generated Content

Content created with Meta’s generative AI tools already automatically receives an “Imagined with AI” label, but the company has shared how it is working to create technical standards and signals that will notify users on its platforms when any content they see has been created using AI.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” said Nick Clegg, President of Global Affairs at Meta, in a blog post. “People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

To do this, Meta is working with industry partners to align on a “common set of technical standards and signals” that make it possible for the company to automatically label AI-generated images that users post to Facebook, Instagram and Threads, whether they were created in Meta’s generative AI tools or not.

This collaborative effort is happening through forums like the Partnership on AI (PAI), said Clegg. The invisible markers Meta uses for images created by its tools, which include IPTC metadata and invisible watermarks, are in line with PAI’s best practices.
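To make the idea of an “invisible watermark” concrete, here is a toy sketch: hiding a small bit pattern in the least significant bits of pixel values, where it is imperceptible to viewers but machine-readable. This is purely illustrative and assumes NumPy; it is not Meta’s watermarking scheme, which is engineered to survive compression, resizing and cropping in ways a naive approach like this does not.

# Toy illustration only: hide a small bit pattern in the least significant
# bits (LSBs) of pixel values. Invisible to viewers, trivially machine-readable.
import numpy as np

PAYLOAD = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit tag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write PAYLOAD into the LSBs of the first few pixel values."""
    out = pixels.copy().ravel()
    out[: len(PAYLOAD)] = (out[: len(PAYLOAD)] & 0xFE) | PAYLOAD
    return out.reshape(pixels.shape)

def extract(pixels: np.ndarray) -> np.ndarray:
    """Read the hidden bits back out of the pixel LSBs."""
    return pixels.ravel()[: len(PAYLOAD)] & 1

image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # stand-in for real pixels
marked = embed(image)
assert np.array_equal(extract(marked), PAYLOAD)  # marker recovered; pixels changed by at most 1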

Meta also is working to build tools that will be able to identify invisible markers on AI-generated content at scale — specifically the “AI generated” information in the C2PA and IPTC technical standards — so that images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock, all of which are working to add metadata to images created by their tools, can be identified as AI-generated when posted to a Meta platform. Meta already automatically applies visible markers, invisible watermarks and embedded metadata to photorealistic images created using its own generative AI tools.
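As an illustration of what such a metadata check involves — a minimal sketch, not Meta’s detection pipeline — the following Python snippet scans a JPEG’s embedded XMP packet for the IPTC “trainedAlgorithmicMedia” digital source type, the standard value generators embed to flag AI-created images. The file name is hypothetical, and the snippet assumes a recent version of Pillow that exposes the raw XMP segment.

# Minimal sketch, not Meta's pipeline: look for the IPTC digital source
# type that flags AI-generated media in an image's embedded XMP metadata.
from PIL import Image  # assumes Pillow is installed

# IPTC DigitalSourceType value used to mark AI-generated images
AI_MARKER = "trainedAlgorithmicMedia"

def has_iptc_ai_marker(path: str) -> bool:
    """Return True if the image's XMP packet declares the IPTC
    'trained algorithmic media' digital source type."""
    with Image.open(path) as im:
        xmp = im.info.get("xmp", b"")  # raw XMP packet, if the file carries one
    if isinstance(xmp, bytes):
        xmp = xmp.decode("utf-8", errors="ignore")
    return AI_MARKER in xmp

print(has_iptc_ai_marker("example.jpg"))  # "example.jpg" is a hypothetical file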

Clegg pointed out that the practice of including these invisible identifiers is more advanced in image generation tools than in AI tools that generate audio or video, so automatically labeling video and audio content as AI-generated isn’t yet possible. In the meantime, Meta is adding a feature for users to disclose when they share AI-generated video or audio so that it can be labeled accordingly.

Users will be required to use this disclosure and labeling tool whenever they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and Clegg said the company may apply penalties if they fail to do so. Additionally, if Meta determines that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, the company may add a more prominent label to give viewers more information and context.

Clegg also was careful to point out that it is not yet possible to identify all AI-generated content, and that there are ways people can strip out the invisible markers. To that end, Meta is working to develop “classifiers” that can help its platforms automatically detect AI-generated content even if it lacks markers, and is looking for ways to make it more difficult to remove or alter invisible watermarks.

“Generative AI tools offer huge opportunities, and we believe that it is both possible and necessary for these technologies to be developed in a transparent and accountable way,” said Clegg. “That’s why we want to help people know when photorealistic images have been created using AI, and why we are being open about the limits of what’s possible too.”

Clegg noted that these proactive measures will be particularly important in the coming year with a number of critical elections taking place around the world: “This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” he said. “People and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.”

Until identification standards and practices become widespread, Clegg encouraged users to approach online content with a questioning eye, advising them to check whether the account sharing a piece of content is trustworthy and to look for details that look or sound unnatural.

“These are early days for the spread of AI-generated content,” he said. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content. Industry and regulators may move toward ways of authenticating content that hasn’t been created using AI, as well as content that has. What we’re setting out today are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do. We’ll keep collaborating with our industry peers. And we’ll remain in a dialogue with governments and civil society.” 
