AI, political campaigns, and the future of democracy

Deepfake and AI manipulation

In the 2016 U.S. election, social media platforms served as conduits for spreading false information and deepening divisions, with far-right activists, foreign influence campaigns, and fake news sites all playing a role. Four years later, the 2020 election was marred by conspiracy theories and unfounded claims of voter fraud that reached millions of people and fueled an anti-democratic movement attempting to overturn the results.

Experts caution that advances in artificial intelligence (AI) could revive and amplify the disinformation tactics of those years. AI-generated disinformation threatens not only to deceive the public but also to inundate an already troubled information ecosystem with falsehoods, eroding trust and making the work of journalists and others who disseminate accurate information even harder.

AI tools that can generate photorealistic images, clone human voices, and produce convincing human-like text have gained prominence recently as companies like OpenAI have made the technology widely available. This technology, which has already disrupted various industries and exacerbated existing inequalities, is increasingly being employed to create political content.

In recent months, AI-generated content has repeatedly made waves: a fabricated image of an explosion at the Pentagon caused a brief stock market dip, AI audio parodies of U.S. presidents playing video games went viral, and AI-generated images depicting Donald Trump in confrontations with law enforcement officers were widely shared on social media. Additionally, the Republican National Committee released an entirely AI-generated ad illustrating imagined disasters that would follow if Joe Biden were re-elected, while the American Association of Political Consultants warned that video deepfakes pose a threat to democracy.

In some respects, these AI-generated images and advertisements are not radically different from the manipulated images, deceptive videos, misleading messages, and robocalls that have circulated in politics for years. But disinformation campaigns once faced logistical hurdles: crafting customized messages for social media, editing images, and manipulating videos were all time-consuming work.

Now, with generative AI, creating such content has become accessible to individuals with basic digital skills, and there are limited safeguards or effective regulations to curb its use. Experts caution that the potential result is the democratization and acceleration of propaganda, precisely when several countries are entering major election years.

The potential threats posed by AI to elections encompass a range of concerns seen in past decades of election interference. These include social media bots impersonating real voters, manipulated videos or images, and deceptive robocalls—all easier to produce and harder to detect with the assistance of AI tools.

Josh Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, notes, “If you’re sitting in a troll farm in a foreign country, you no longer need to be fluent to produce an article that sounds fluent in the language of your target audience. You can simply have a language model generate an article with the grammar and vocabulary of a fluent speaker.”

AI technology could also intensify voter suppression campaigns targeting marginalized communities. Generating letter-writing campaigns or fake engagement could likewise manufacture a false appearance of constituent support, obscuring how voters genuinely respond to issues. In one experiment, two Cornell University researchers sent tens of thousands of emails, purportedly from concerned voters, to more than 7,000 state legislators in the U.S. The emails were split between AI-generated letters and ones written by humans, and reply rates were nearly identical: human-written emails received only a 2% higher response rate than the AI-generated ones.

Political campaigns have already started using AI-generated content for their purposes. For instance, after Florida’s Governor Ron DeSantis announced his candidacy in May, Donald Trump mocked his opponent using a parody video featuring AI-generated voices of DeSantis, Elon Musk, and Adolf Hitler. Trump’s previous campaigns leaned heavily on memes and videos created by his supporters, including deceptively edited videos to portray Joe Biden negatively. Observers warn that this AI-infused strategy is gaining ground.

The role of artificial intelligence in upcoming elections remains uncertain. Simply creating misleading AI-generated content does not guarantee it will affect an election, and assessing the effects of disinformation campaigns is notoriously difficult. Monitoring engagement with fake materials is one thing; measuring the secondary effects on the information ecosystem, where people may increasingly distrust online information, is another. There are worrying signs, however. Just as generative AI use is growing, many of the social media platforms that bad actors rely on to disseminate disinformation have been rolling back content moderation measures: YouTube reversed its election integrity policy, Instagram allowed the anti-vaccine conspiracy theorist Robert F. Kennedy Jr. back on its platform, and Twitter's head of content moderation left the company amid a general decline in standards under Elon Musk's leadership.

Whether media literacy and traditional fact-checking can counter a flood of misleading text and images remains to be seen; the sheer scale of generated content poses a new challenge. When AI-generated images and videos can be produced far faster than fact-checkers can review and debunk them, AI can erode public trust simply by making people believe that virtually anything could be artificially generated. And while some generative AI services, including ChatGPT, have policies and safeguards against generating misinformation, their effectiveness varies, and several open-source models lack such guardrails entirely.
