2024 is the year of elections. It’s estimated that more than 2 billion voters will head to the polls this year across 50 countries, including the US, the UK, India, and the European Union.
But a climate of doubt and disinformation threatens to overshadow 2024’s elections. Many voters already harbor growing skepticism about political content published online. Results from a recent UK-wide survey reveal that 23% of Brits no longer trust any political content they see on social media, and 29% only trust political content published from a verified source such as an official news outlet*.
Given this growing skepticism around the legitimacy of political content, generative artificial intelligence (GenAI) and deepfakes have the potential to make this year’s election cycles unpredictable.
Real versus fake? The threat of deepfakes
Easy access to cheap, online GenAI tools and apps has made it easier than ever for people to generate fake content. A few prompts and mere seconds is all it takes to create realistic images or videos of “voters being turned away from polling stations” or “a senior political official engaging in unethical activities”. Indeed, there are already several examples of deepfakes featuring politicians, including a fake recording of Joe Biden’s voice, sent by telephone to New Hampshire voters in an attempt to persuade them not to take part in the primary. In the UK, both Prime Minister Rishi Sunak and Labour Leader Keir Starmer have had their identities spoofed in fake video or audio clips.
Not all content created using AI-assisted tools is meant for malicious purposes. But when false content is almost indistinguishable from real-world photos and videos, deepfakes have the potential to sow doubt and spread significant misinformation. And there are valid concerns about the influence deepfakes and other misinformation could have on public perception. The term ‘fake news’ itself has become politicized and sometimes used to discredit opposing viewpoints.
GenAI not only makes such creations easier to produce, but also highly scalable. With the use of deepfakes on the rise — Onfido (an Entrust company) found a 3,000% year-on-year increase in the volume of deepfake attempts — there’s a risk that false information could flood the internet during election seasons. Some experts predict that up to 90% of online content could be synthetically generated within a few years, which will only deepen confusion over what is real and what is fake.
The impact of disinformation on democracy
It’s important to acknowledge that many people’s political views can be hard to change, no matter how compelling the made-up photo or video. And despite the increasing realism of many deepfakes, the fake videos and images that end up online are often quickly debunked.
Part of the problem with disinformation is that measuring its impact on election cycles is very difficult. While ‘fake news’ is often quickly identified and called out, it’s hard to know what impact the information would have had before it was debunked. Or indeed the effect it has on people’s general sentiment towards online sources.
Average online users also struggle to identify deepfakes. Only 20% of respondents felt confident they could differentiate between legitimate and deepfaked content; 60% weren’t confident, and 16% weren’t sure*.
The real impact of this false information is the seeds of doubt and confusion it has the potential to sow among an already disillusioned election population. 42% of Brits believe that deepfakes and fake news could influence the election outcome, and 57% believe they could in particular influence the way younger people vote*.
The doubt spread throughout election cycles could even have longer-lasting impacts on election results. The worst-case scenario from disinformation campaigns is to completely delegitimize elections, either by impacting the perceived legitimacy of those elections or the trust people have in the outcome. In fact, 67% of Brits believe that deepfakes and fake news could seriously harm future democracy in the UK*.
What is being done about the spread of disinformation and deepfakes?
Despite the attention harmful AI-generated content receives, many regulators and governments are unprepared for the potential threats. These threats include the spread of disinformation as well as the role AI-generated content could play in scams and fraud.
The companies behind the technology typically mandate in their policies that it must not be used to create political, sexual, personal, criminal, or discriminatory content. Many leading tech companies, including Google and Meta, have also pledged to manage the risks arising from deceptive AI election content in line with their own policies and practices.
But according to a recent study, the world’s most popular text-to-image generators accepted on average over 85% of prompts that sought to generate fake political news. Not only are there problems with the tools themselves, but it’s also difficult to monitor, prevent and penalize the use of content once it’s out in the world.
Current regulation surrounding AI and deepfakes only addresses the tip of the iceberg, and it seems Brits are aware of this. There is an appetite amongst the general population for stricter regulation around the use of AI (49%), better education on what deepfakes are and how to spot them (44%), holding online outlets and social media platforms more accountable for malicious deepfakes (43%), and progressive government regulation on the creation and distribution of deepfakes (41%)*.
The use and development of generative AI technology has ballooned in recent months. So much so that it has outpaced existing regulations aimed at safeguarding against malicious or unethical uses.
How to spot deepfakes and disinformation during election cycles
With so much information freely available online, it can be hard to know what is real and what is fake. But there are some steps people can take to help identify potential misinformation.
- Check whether the information has a named, reliable source or author
- Review any quotes or stats and check they have been represented accurately
- Consider whether any important information has been left out of the story
To help identify deepfakes in particular:
- Pay attention to video and image quality: Quick reactions or movements often look less lifelike in videos. In images, people can look highly filtered, with unnaturally perfect skin texture. Small details such as hands can also look unusual.
- Look for unnatural eye movements: Because of the way that AI builds deepfake videos, natural eye movements and blinking are hard to recreate perfectly.
- Listen for sync between audio and video: See if the person’s lip movements sync up with the audio, or compare it with an original video of the person talking.
- Check for inconsistencies in colors and shadows: Look for shadows in the wrong place, or color inconsistencies with the background and the person, especially when they move.
- Monitor for abnormalities: Sometimes body shape or facial expressions can become distorted with movement in deepfake videos.
It’s hard to know exactly what impact deepfakes and disinformation will have on 2024’s elections. As mentioned, while they’re likely to exacerbate an environment of doubt and distrust, measuring the actual fallout will be extremely difficult.
What we do know is that deepfakes (and their role in spreading disinformation more generally) are a sign of things to come. More needs to be done to help protect online users from the effects of disinformation campaigns. Arguably, while deepfakes pose a potential threat, they are just one small part of a wider problem of disinformation contributing to a general erosion of trust among voters.
Tools like encrypted communications, digitally signed media verification, and decentralized identity management can help preserve trust and authenticity in online information flows. No solution is perfect, but better regulation, combined with the tools already at our disposal, will go a long way.
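To make the signed-media idea concrete, here is a minimal Python sketch of how a publisher could attach a verification tag to a media file and how a consumer could check it. This is purely illustrative: it uses an HMAC with a hypothetical shared key to keep the example standard-library-only, whereas real media-provenance schemes (such as the C2PA standard) use asymmetric signatures from a trusted publisher key. All names and the key below are assumptions, not any particular product’s API.

```python
import hashlib
import hmac

# Hypothetical publisher key. Real provenance systems use asymmetric
# key pairs so consumers never hold a signing secret; HMAC is used here
# only to keep the sketch self-contained.
PUBLISHER_KEY = b"example-shared-secret"

def sign_media(media_bytes: bytes, key: bytes = PUBLISHER_KEY) -> str:
    """Publisher side: produce a tag that travels with the media file."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, key: bytes = PUBLISHER_KEY) -> bool:
    """Consumer side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"\x00\x01 example video bytes"
tag = sign_media(original)
print(verify_media(original, tag))          # True: untouched media verifies
print(verify_media(original + b"x", tag))   # False: any tampering breaks it
```

The key property for election content is the second check: any edit to the media, however small, invalidates the tag, so a verified tag tells a viewer the file is exactly what the named source published.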
*Findings from the Onfido (an Entrust company) online survey of 2,052 UK adults. Conducted by Opinium. More information available on request.