Earlier this year, watchdogs and technologists warned that artificial intelligence would disrupt the 2024 U.S. election, spreading misinformation through deepfakes and personalized political ad campaigns. Such concerns are widespread: a recent Pew poll found that more than half of American adults are extremely or very concerned about AI’s negative impact on the election.
But with the election just a week away, fears that it would be derailed or dictated by AI now appear to have been exaggerated. Political deepfakes have circulated on social media, but they have been just one part of larger misinformation campaigns. The U.S. intelligence community said in September that foreign powers like Russia are using generative AI to “improve and accelerate” their efforts to influence voters, but that the technology has not revolutionized those operations.
Tech insiders acknowledge that 2024 was not a breakthrough year for generative AI in politics. “There are a number of campaigns and organizations that are using AI in some way, but in my view, they haven’t reached the level of impact that people expected or feared,” says Betsy Hoover, founder of Higher Ground Labs, a venture fund that invests in political technology.
At the same time, researchers caution that the impact of generative AI on this election cycle is not yet fully understood, especially where it is deployed on private messaging platforms. And while AI’s influence on this campaign may seem underwhelming, they argue it is likely to grow in future elections as the technology improves and its use spreads among the general public and political operatives. “I’m confident that in another year or two, AI models will get better,” says Sunny Gandhi, vice president of political affairs at Encode Justice. “So I’m pretty worried about what’s going to happen in 2026 and certainly 2028.”
The rise of political deepfakes
Generative AI has already had a clear impact on politics around the world. In countries across South Asia, candidates used AI to bombard voters with articles, images, and deepfake videos. In February, an audio deepfake went viral that purported to capture London Mayor Sadiq Khan making inflammatory comments ahead of a large pro-Palestinian march; Khan said the clip incited violent clashes between demonstrators and counter-protesters.
There were American examples, too. In February, New Hampshire residents received a deepfake robocall in which a voice resembling Joe Biden’s appeared to discourage them from voting. The FCC swiftly banned robocalls containing AI-generated voices, and the Democratic political consultant who created the calls was criminally charged, sending a strong warning to anyone who might attempt similar tactics.
Still, political deepfakes were wielded by major political figures, including former President Donald Trump. In August, Trump posted AI-generated images that depicted Taylor Swift and her fans endorsing him and Kamala Harris dressed in communist attire. In September, a video tied to a Russian disinformation campaign that falsely accused Harris of involvement in a hit-and-run accident was viewed millions of times on social media.
Russia has been a particular hotbed for malicious uses of AI, with state actors generating text, images, audio, and video aimed at U.S. audiences, often to amplify fears about immigration. It’s unclear whether these campaigns have had much impact on voters. The Justice Department announced in September that it had disrupted one such operation, known as Doppelganger, and U.S. intelligence agencies wrote the same month that foreign actors face several challenges in disseminating this material, including the need to “overcome limitations built into many AI tools.”
Independent researchers have also worked to track the prevalence and impact of AI creations. Earlier this year, a group of Purdue University researchers created a database of political deepfake incidents, which has since recorded more than 500 of them. Perhaps surprisingly, the majority of these videos were not created to deceive, but as satire, education, or political commentary, according to researcher Christina Walker. But Walker says that as these videos spread across the political spectrum, their meaning often changes for viewers. “One person posts a deepfake and writes, ‘This is a deepfake; I created it to show X, Y, and Z,’” Walker says. “Twenty retweets later, someone else is sharing it as if it’s real.”
Daniel Schiff, another researcher on the project, says many deepfakes are likely designed to reinforce the opinions of people already inclined to believe their message. Other research suggests that the effects of most political persuasion efforts are very small at best, and that voters actively dislike political messages tailored to them, which could blunt one of AI’s key capabilities: crafting targeted messages cheaply. In August, Meta reported that generative AI-driven tactics have provided “only incremental productivity and content-generation gains” to influence campaigns, and concluded that the tech industry’s strategies to neutralize their spread “appear to be effective at this time.”
Other researchers are less confident. Mia Hoffman, a researcher at Georgetown’s Center for Security and Emerging Technology, says it is difficult to assess how AI is affecting voters, for several reasons. For one, big tech companies are limiting the amount of data they share about posts: Twitter ended free access to its API, and Meta recently shut down CrowdTangle on Facebook and Instagram, making it harder for researchers to track hate speech and misinformation across those platforms. “We’re at the mercy of what these companies share with us,” Hoffman says.
Hoffman is also concerned that AI-generated misinformation is spreading on closed messaging platforms like WhatsApp, which are especially popular among diaspora communities in the U.S., including voters in battleground states. Potent AI-driven influence efforts may already be underway there, she says, but we may not learn of their existence or their effectiveness until after the election. “As these groups grow in electoral importance, they are increasingly targeted by influence campaigns aimed at suppressing their votes and swaying their opinions,” Hoffman says. “And because these apps are encrypted, the misinformation is further hidden from fact-checking efforts.”
AI tools in political campaigns
Other political actors are also attempting to leverage generative AI tools in more everyday ways. Campaigns can use AI tools to scour the web for how candidates are perceived in different social and business circles, conduct opposition research, summarize dozens of news articles, and craft social media copy tailored to different audiences. Many campaigns are understaffed, underfunded, and working against tight deadlines; the theory is that AI can take over some of the low-level work typically done by interns.
A Democratic National Committee spokesperson told TIME that the organization’s staff are using generative AI to “make our operations more efficient while maintaining strong safeguards,” including helping officials draft fundraising emails, write code, and spot unusual patterns of voter removals in public data records. A spokesperson for the Republican National Committee did not respond to a request for comment.
Various startups have begun offering AI tools for political campaigns, including BattleGroundAI, which says it can create hundreds of pieces of political ad copy “in minutes,” and Grow Progress, which offers a chatbot tool that helps craft and tailor persuasion tactics and messages for potential voters. Grow Progress co-founder Josh Berezin says dozens of campaigns have “experimented” with using the chatbot to create ads this year.
But Berezin says adoption of these AI tools has been slow. Political campaigns are often risk-averse, and many strategists are hesitant to embrace the technology, especially given the public’s negative perception of generative AI in politics. The New York Times reported in August that only a handful of candidates were using AI, and that some of those who were wanted to hide that fact from the public. “If someone says, ‘This is an AI election,’ I haven’t actually seen it,” Berezin says. “I’ve seen some people happily using these new tools, but it’s not universal.”
Still, the role of generative AI is likely to expand in future elections. Improving technology will allow campaigns to craft messages and raise money faster and more cheaply. AI could also help with the bureaucratic side of election administration: automatic signature verification, which matches a mail-in voter’s signature against the one on file, was used in several counties in 2020.
But improvements in AI are also likely to produce more believable deepfake videos and audio clips, spreading misinformation and deepening distrust in all political messages and their veracity. “This is a growing threat,” says Hoffman, the Georgetown researcher. “Identifying and debunking these influence campaigns will consume even more time and resources.”