EU urges online platforms to label deepfakes and AI-generated content


The European Union is putting pressure on the signatories of its Code of Practice on Online Disinformation, calling on them to label deepfakes and other AI-generated content, as reported by TechCrunch. Following a recent meeting with the more than 40 signatories, EU values and transparency commissioner Vera Jourova argued that participants need to put in place technology that can identify AI-produced content and clearly mark it for users.

While acknowledging the positive potential of AI technologies, Jourova also warned of their dark side, particularly their capacity to propagate disinformation. Advanced chatbots like ChatGPT can fabricate intricate and convincing content and visuals within seconds, image generators can produce plausible pictures of events that never happened, and voice-generating software can mimic a person's voice from a sample lasting only a few seconds. These technologies introduce novel challenges in the fight against disinformation.

The commissioner called on the signatories to establish a dedicated track within the code to discuss these issues. The current version of the code, which was strengthened last summer, does not yet commit signatories to identifying and labelling deepfakes, but the Commission is keen to change this.

Jourova revealed two principal discussion angles for incorporating mitigation measures for AI-generated content into the code. The first pertains to services that use generative AI, such as Microsoft’s New Bing or Google’s Bard AI-augmented search services. These should pledge to integrate necessary safeguards against potential misuse by malicious actors to spread disinformation. The second angle concerns obliging signatories with services that can disseminate AI-generated disinformation to develop and implement technology that can identify such content and label it clearly for users.

Jourova said she had spoken with Google's CEO, Sundar Pichai, who confirmed that Google already has technology capable of detecting AI-generated text content and is working to improve it further.

Further comments made during a press Q&A session revealed that the EU desires clear and quick labels for deepfakes and AI-generated content. These would enable everyday users to instantly discern when content has been generated by a machine rather than a person. The Commission is pushing for immediate implementation of such labelling.

Under the Digital Services Act (DSA), very large online platforms (VLOPs) are required to label manipulated audio and imagery. However, incorporating labelling into the disinformation Code would allow it to be introduced ahead of the August 25, 2023 DSA compliance deadline for VLOPs.

Jourova underlined the importance of protecting freedom of speech but emphasised that AI does not possess this right, so efforts to address AI-driven disinformation will continue under the Code of Practice. The Commission also expects reports next month from relevant signatories on the measures they are taking to prevent generative AI from being used to spread disinformation.

The disinformation Code now boasts 44 signatories, including tech giants Google, Facebook, and Microsoft, smaller adtech entities, and civil society organisations. This is a significant increase from the 34 signatories in June 2022.
