The European Union is exerting pressure on the signatories of its Code of Practice on Online Disinformation, calling on them to label deepfakes and other AI-generated content, as reported by TechCrunch. Following a recent meeting with over 40 signatories, the EU's values and transparency commissioner, Vera Jourova, argued that participants need to put in place technology that can identify AI-produced content and clearly mark it for users.
While acknowledging the positive potential of AI technologies, Jourova also warned of their dark side, particularly their ability to propagate disinformation. Advanced chatbots like ChatGPT can produce intricate and convincing text and visuals within seconds. Image generators can fabricate plausible pictures of events that never transpired, while voice-generating software can mimic a person's voice from a sample lasting only a few seconds. These technologies introduce novel challenges in the fight against disinformation.
The commissioner called on the signatories to establish a dedicated track within the code to discuss these issues. The current version of the code, which was strengthened last summer, does not commit signatories to identifying and labelling deepfakes. However, the Commission is keen to change this.
Jourova outlined two principal discussion angles for incorporating mitigation measures for AI-generated content into the code. The first pertains to services that use generative AI, such as Microsoft's new Bing or Google's Bard AI-augmented search services. These should pledge to build in safeguards against their misuse by malicious actors to spread disinformation. The second angle concerns obliging signatories whose services can disseminate AI-generated disinformation to develop and implement technology that can identify such content and label it clearly for users.
Jourova reported that she had spoken with Google's CEO, Sundar Pichai, who confirmed that Google's existing technology can detect AI-generated text content and that ongoing efforts are being made to enhance these capabilities.
Further comments made during a press Q&A session revealed that the EU desires clear and quick labels for deepfakes and AI-generated content. These would enable everyday users to instantly discern when content has been generated by a machine rather than a person. The Commission is pushing for immediate implementation of such labelling.
Under the Digital Services Act (DSA), very large online platforms (VLOPs) are required to label manipulated audio and imagery. However, incorporating labelling into the disinformation Code would enable it to be introduced ahead of the August 25 DSA compliance deadline for VLOPs.
Jourova underlined the importance of protecting freedom of speech but emphasised that AI does not possess this right. As such, efforts to address AI disinformation will continue under the Code of Practice. The Commission is also looking forward to next month's reports from relevant signatories on their measures to prevent generative AI from being used to spread disinformation.
The disinformation Code now boasts 44 signatories, including tech giants Google, Facebook, and Microsoft, as well as smaller adtech entities and civil society organisations. This is a significant increase from the 34 signatories in June 2022.