Meta updates AI deepfake policy for political ads
Meta will require political advertisers to disclose any use of AI or digital manipulation in their ads on Facebook and Instagram. The social media company already has policies covering deepfakes, but the new requirement goes a step further.
Starting in January, advertisements related to politics, elections, or social issues must declare any digitally altered images or videos. The global policy will be enforced by a combination of human and AI fact-checkers. Meta clarified that this covers alterations such as changing what someone said in a video, modifying images or footage of real events, and depicting realistic-looking people who do not exist.
Users will receive notifications when ads are flagged for digital alterations, but Meta did not elaborate on how this information will be presented. Advertisers are exempt from declaring minor changes like cropping or colour correction unless such modifications are significant to the claims, assertions, or issues raised in the ad. Meta already has existing policies, applicable to all users, regarding the use of deepfakes in videos. Deepfakes are removed if they could potentially mislead an average person into believing that the video’s subject uttered words they did not actually say.
Under the new rules, any such digital alteration, whether made by a human or by AI, must be disclosed before the ad goes live on Facebook or Instagram. Meta’s other social media platform, Threads, follows the same policies as Instagram. Ads uploaded without declaring digital alterations may be rejected, and repeated non-disclosure could lead to penalties for the advertiser.
Google has recently announced a similar policy for its platforms, while TikTok does not permit any political advertising. As major democracies, including India, Indonesia, the US, and the UK, anticipate general elections in 2024, the issue of deepfakes becomes increasingly pertinent.
Deepfakes, in which AI is used to manipulate someone’s words or actions in a video, have become a serious concern in politics. Instances of misleading content, such as a fabricated image of former US President Donald Trump being arrested and a deepfake video of Ukrainian President Volodymyr Zelensky discussing surrender to Russia, underscore the potential impact of digitally altered media on public perception.