Misinformation, Deepfakes and the Threat to Global Democracy
The U.S. presidential election is quickly approaching and will cap off an unusually busy year for democracy. More than 2 billion voters in 50 countries headed to the polls in 2024.
With that many people voting, it comes as no surprise that nefarious players are using whatever tools they can get their hands on to influence outcomes, including deepfakes: AI-generated audio, image, or video content that mimics real people.
Election officials are sounding the alarm over the use of generative AI to create phony but convincing political ads. In January 2024, registered Democratic voters in New Hampshire received robocalls featuring a fake, AI-generated Joe Biden voice telling them not to vote in the primary so that they could "save" their vote for November.
It's not just the U.S. where AI is shaping elections. In Argentina, during the final weeks of campaigning, President-elect Javier Milei shared a fabricated image portraying his Peronist opponent, Sergio Massa, as an old-fashioned communist in military attire, his hand raised in a salute.
In response, Sergio Massa's campaign team released a deepfake video in which Milei appears to discuss the profits to be made from selling human organs and suggests that having children could be considered a "long-term investment." Although the video was clearly labeled as AI-generated, it quickly circulated across various platforms without that disclaimer.
In Turkey, President Recep Tayyip Erdoğan's staff shared a video depicting his main rival, Kemal Kılıçdaroğlu, receiving the endorsement of the Kurdistan Workers' Party, a designated terrorist organization. Although the video was clearly fabricated, that didn't stop voters from viewing it and sharing it widely.
Threats to Democracy
Sharing such fabricated videos is a threat to democracy. Deepfakes spread rapidly on social media platforms, making it difficult to debunk them before they reach a wide audience. Generative AI has already been called a political superweapon, and for good reason: it allows disinformation to be created, shared, and believed at scale.
Voter suppression is another key concern. AI-generated content can be used to spread false information about voting procedures or discourage people from voting, as seen in the fake robocall impersonating President Biden during the New Hampshire primary.
As deepfakes become more sophisticated and widespread, voters' ability to distinguish authentic from manipulated content diminishes, potentially undermining trust in political institutions and the democratic process itself. That growing uncertainty may erode confidence in elections and governance.
Many democracies already struggle with hyper-polarization, and as people find it harder to distinguish truth from fiction, they may retreat further into partisan echo chambers, deepening existing political divides.
What the Ad-Tech Space Can Do to Preserve Democracy
To combat disinformation and deepfakes in political ads, clear policies must be developed and enforced. Many companies currently lack specific guidelines or have ambiguous ones, which leaves room for misleading content to spread unchecked. Requiring disclosure for AI-generated and digitally altered ads is an essential first step in maintaining transparency and trust in the electoral process.
Admittedly, that may be a big ask, considering that both Elon Musk and candidate Donald Trump have shared AI-generated images on X.
Additionally, advanced AI and machine learning technologies should be deployed to detect deepfakes and manipulated media in political ads before they are published. This proactive approach can help identify disinformation early and prevent it from reaching voters.
Collaborating with independent fact-checking organizations is another critical measure. These partnerships can help verify claims in political ads and flag potentially misleading content. Moreover, working across platforms and ad networks to share information about identified disinformation can create a more unified response, further reducing the spread of harmful content.