The conversation around artificial intelligence (AI) technology, AI-generated content, and regulation has been a hot topic over the last few months. At Onfido (an Entrust company), we’ve discussed the importance of regulating deepfakes as part of the wider conversation around AI regulation.
With upcoming elections in the UK (4th July) — as well as the US later in the year (5th November) — those in the technology sector are keeping a close eye on what’s next for AI.
The renewed focus on AI regulation is welcome, as regulatory momentum in the UK has stalled since last year's AI Safety Summit. After all, if regulations fail to keep up while applications and technology continue to develop at pace, this poses risks to both the wider tech industry and customer safety.
UK technology policy commitments
Ahead of July’s election, all major UK political parties (Labour, Conservative and Liberal Democrats) have now released their manifestos. Common themes on technology include: investment into research and development, improving public services through digital transformation, and bolstering existing online safety measures and regulations.
Labour Party Manifesto
Labour’s commitment to tech goes beyond AI, highlighting that the Party’s plans for growth and economic stability must be underpinned by technology and innovation. They propose new plans for the Department for Science, Innovation and Technology (DSIT) to bring more innovation and a greater focus on delivery to public services, as well as proposals to expand the Online Safety Act, and commitments to open banking and open finance. They will legislate to regulate the most powerful AI models but have stopped short of broader regulation.
Some of the key commitments in their manifesto include:
- Regulatory reform: Launch a Regulatory Innovation Office (RIO), bringing together existing functions across government to help regulators update sectoral regulation to reflect the challenges of AI.
- Research and development: Set ten-year budgets for key research and development (R&D) institutions, supporting spinouts in partnership with universities.
- Online safety: Bring forward provisions to expand the Online Safety Act, and explore further measures to boost online safety, especially on social media. They also propose working with tech companies to stop their platforms being used by fraudsters.
- Artificial intelligence: They propose introducing binding regulation on companies developing the most powerful AI models, as well as banning the creation of sexually explicit deepfakes. Labour also plans to create a National Data Library to help deliver data-driven public services and public data to support machine learning, and will remove planning barriers for new data centres to support the development of the AI sector.
- Competition: Ensure a pro-business environment with a competition and regulatory framework that supports innovation and investment.
- Financial services: Commitment to support innovation and growth in the sector by leveraging new technologies like open banking and open finance.
Conservative Party Manifesto
The Conservatives’ manifesto follows their current positioning on tech, maintaining the wait-and-see approach to AI regulation they have taken in government. Commitments on technology policy are generally spread across different policy areas. Some of the key takeaways include:
- Research and development: Plan to increase R&D spending by £2bn per year, maintain the R&D tax relief, and introduce more technological capability into public service delivery including through AI.
- Online safety: Introduce a statutory ban on mobile phones being used in schools, and open an urgent consultation on increased parental controls over social media access to build on existing responsibilities for social media companies under the Online Safety Act.
- Public services: Invest in technology to improve public service delivery by doubling digital and AI expertise in the civil service and implementing a new medtech pathway for rapid adoption of AI in the NHS. They propose plans for a £3.4bn investment in new technology across the NHS, including the NHS App, increased use of AI to free up staff time, and improved IT infrastructure.
What does this mean for businesses?
Both major political parties have signaled their openness to working with the private sector on topics like AI, technology, and fraud. For businesses operating in these areas, this offers an opportunity to open up future conversations, because getting AI regulation right is key to balancing innovation and security.
Both parties have also acknowledged the need for targeted legislation to address gaps in the current regulatory framework, particularly regarding the risks AI poses and the direction it gives to the key players involved in its development. Organizations should prepare for increased UK AI regulation over the next few years, regardless of the outcome of July’s election.
The importance of AI regulation
While the UK’s Online Safety Act, passed in 2023, made it illegal to share explicit images or videos that have been digitally manipulated, the Act focuses mostly on the sharing of deepfakes, and doesn’t make it an offense to create this type of AI-generated media without the subject’s consent.
Legislation in general must go a step further than only addressing explicit images. Why? Because the harmful implications of deepfakes don’t end there. Deepfake laws should also look to prevent:
- The spread of misinformation: 2024 is a year of global elections, and convincing deepfake videos during election cycles could erode trust in an online ecosystem already rife with disinformation.
- The erosion of trust: Some experts predict that up to 90% of online content could be synthetically generated within a few years. The complexities surrounding online content could soon blur the lines of misinformation further, exacerbating an already extremely problematic issue.
- Identity fraud and scams: Fraudsters are increasingly using AI tools and content as a way to commit fraud. At Onfido, we’ve seen a 3,000% increase in deepfakes as part of fraudulent account onboarding attempts. There’s also been a rise in scams where fraudsters pose as family, friends or colleagues to get individuals to hand over money.
Looking to the future of regulation, the UK has the opportunity to learn from and take the best aspects of the EU AI Act, the world’s first comprehensive legal framework for AI. The AI Act aims to address both the risks of AI for health, safety, fundamental rights, democracy, and rule of law while also fostering innovation, growth and competitiveness.
Deepfakes need regulating — but why now? Learn more in this blog.