Meta to Label AI-Generated Content on Facebook, Instagram Amid Rise in Misinformation

Meta's Oversight Board urged the expansion of the Manipulated Media policy.

Meta will introduce a tool to identify AI-generated images shared on Facebook and Instagram amid a global rise in synthetic content spreading misinformation.

Because AI-generated images now come from many different tools, the Mark Zuckerberg-owned company also plans to label images created with systems from other companies, including Google, OpenAI, Microsoft, and Adobe.

Meta To Label AI-Generated Content on Media Platforms

In an interview, Nick Clegg, Meta's president of global affairs, said that these actions are intended to "galvanize" the tech industry as it gets harder to distinguish artificial intelligence (AI)-generated content from reality. The White House has firmly pushed businesses to watermark AI-generated content.

Clegg claimed that Meta is developing tools to identify synthetic media, even if its metadata has been changed to hide AI's involvement in its production.

Meta plans to roll out the labeling tool fully in the coming months and to add a flag feature that lets users report AI-generated content.

But with the US presidential election campaign already underway, some question whether the labels will arrive in time to curb the spread of false information.

The move comes after Meta's Oversight Board urged the company to identify and label any altered audio and video that might deceive users.

A representative for the Oversight Board, Dan Chaison, told Dailymail.com, "The Board's recommendations go further in that it advised the company to expand the Manipulated Media policy to include audio, clearly state the harms it seeks to reduce, and begin labeling these types of posts more broadly than what was announced."

He continued that labeling allows Meta to leave more content up while protecting free expression. He added that it is important for the company to clearly define the harms it aims to tackle, since not all altered posts violate policy unless they pose an immediate risk of real-world harm.

Furthermore, he explained that those harms include inciting violence or misleading people about their right to vote.

On Tuesday, Meta announced that it is developing technical standards with industry partners to facilitate the identification of photos and, eventually, audio and video produced by artificial intelligence tools.
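The provenance standards Meta refers to (such as IPTC photo metadata and C2PA "Content Credentials") work by embedding machine-readable markers in an image file that declare it AI-generated. As a purely illustrative sketch, assuming a detector that simply scans a file's raw bytes for two well-known marker strings (the helper and sample data here are hypothetical, not Meta's actual implementation):

```python
# Hypothetical sketch: scan an image file's raw bytes for common
# AI-provenance markers. Real detectors parse the metadata properly;
# this only illustrates what such markers look like.
AI_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI-generated media
    b"c2pa",                     # label used by C2PA Content Credentials manifests
]

def has_ai_provenance_marker(data: bytes) -> bool:
    """Return True if any known provenance marker appears in the bytes."""
    return any(marker in data for marker in AI_MARKERS)

# Example: a fragment of XMP metadata declaring the image AI-generated
sample = (
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
)
print(has_ai_provenance_marker(sample))  # True
print(has_ai_provenance_marker(b"plain photo bytes"))  # False
```

This also illustrates the weakness Clegg alluded to: anyone who strips or rewrites the metadata removes the marker, which is why Meta says it is working on detection that survives such tampering.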

[Photo: the logo of Meta, the US company that owns and operates Facebook, Instagram, Threads, and WhatsApp, photographed during the World Economic Forum (WEF) annual meeting in Davos on January 18, 2024. FABRICE COFFRINI/AFP via Getty Images]

Taylor Swift Explicit AI-generated Photos Pop Up Online

Pornographic, AI-generated images of the world's most famous star went viral on social media this week, highlighting the damaging potential of mainstream artificial intelligence technology and its ability to produce convincingly authentic and negative images.

The fake photos of Swift mainly circulated on X, formerly Twitter. Tens of millions of people saw the explicit and sexually provocative images of the singer before they were taken off social media.

However, nothing on the internet truly disappears, and the images continued to circulate on other, less regulated platforms.

The policies of X, like those of most major social media platforms, prohibit the sharing of "synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm."
