The European Union is urging signatories to its Code of Practice on Disinformation to identify and label deepfakes and other content created using artificial intelligence. The EU's commissioner for values and transparency, Vera Jourova, announced the push during a meeting with the Code's more than 40 signatories, advising them to put measures in place to detect AI-generated content and make it easily recognizable to users.
The EU revised the Code last summer and plans to turn the voluntary instrument into a recognized mitigation measure under the legally binding Digital Services Act (DSA), as reported by TechCrunch. The current version of the Code, however, contains no requirement to identify and label deepfakes; the Commission aims to amend it to add such provisions.
Two Ways to Address the Issue
The commissioner suggested two ways the Code could address AI-generated content. The first is ensuring that services integrating generative AI, such as Microsoft's new Bing or Google's Bard AI-augmented search, build in safeguards to prevent them from being used to spread disinformation. The second is requiring signatories to deploy technology that detects AI-generated disinformation and labels it clearly for users.
Jourova said she had spoken with Google CEO Sundar Pichai, who told her that Google has technology capable of identifying AI-generated text content, though it is still working to improve it. During a press Q&A, Jourova said the EU wants clear and fast labels for deepfakes and other AI-generated content, so that ordinary users can immediately tell whether what they are seeing was created by a machine or a person. She emphasized that the Commission wants platforms to implement such labeling immediately.
The DSA already contains rules requiring big online platforms to label manipulated audio and imagery. Jourova suggested, however, that adding a labeling requirement to the disinformation Code could take effect even before the August 25 deadline for DSA compliance.
The Commission expects progress next month on reporting the risks of AI-generated disinformation. Jourova suggested that the relevant signatories use their July reports to inform the public about the safeguards they are putting in place to prevent generative AI from being misused to spread disinformation.
The disinformation Code now has 44 signatories in all - including tech giants like Google, Facebook, and Microsoft, as well as smaller adtech firms and civil society organizations - up from the 34 that had signed up to the commitments as of June 2022. Late last month, however, Twitter took the unusual step of withdrawing from the voluntary EU Code.
A Warning to Social Media Platforms
The EU has warned social media platforms, including Facebook and Twitter, to do more to combat disinformation and propaganda, especially from Russian sources. Jourova also called for consistent moderation and fact-checking, better election security, and increased access to data for researchers.
The commissioner criticized fact-checking efforts for not covering all the languages spoken in EU member states and warned that smaller nations are particularly vulnerable to disinformation. She also cautioned that Twitter, following Elon Musk's decision to pull the platform out of the Code, could face enforcement action under the Digital Services Act.
The EU has criticized Twitter for withdrawing from the Code of Practice on Disinformation and warned that the platform will be assessed for compliance with the Digital Services Act (DSA) in August. The DSA imposes a legal requirement on Very Large Online Platforms (VLOPs) to assess and mitigate societal risks such as disinformation, and failure to comply can result in fines of up to 6% of global annual turnover. While Twitter's Community Notes approach may be taken into account, it will be up to the Commission's enforcers to determine whether the platform complies with the DSA.