
The US Department of Homeland Security warns that AI models pose multiple threats to the wellbeing of nations, but officials also believe AI itself can be used to counteract these threats.

The meteoric rise of Artificial Intelligence (AI) has brought immense promise for breakthroughs and innovation across industries. However, these advances are not restricted to virtuous uses: fraudsters and other bad actors benefit from the same capabilities. As AI continues to develop, it may be the only solution to the threats it poses.

6 core threats AI poses to national security

The Department of Homeland Security (DHS), which has been helping author United States AI policy, has identified potential threats posed by AI models should they be leveraged for wrongdoing. In a recent talk at the AI Summit in New York City, Noah Ringler, an AI policy specialist at DHS, summarized six core threats:

  1. Cyberattacks: AI models can be trained to identify and exploit vulnerabilities in software and systems, potentially leading to major breaches.
  2. Deepfakes: The ability to create realistic audio, photo, and video forgeries through AI, or deepfakes, threatens not only biometric-based systems but also public trust.
  3. Fraud/scam/manipulation: AI can generate synthetic data to perpetuate financial scams, manipulate online interactions, and negatively impact individuals and groups.
  4. Bias, inaccuracy, or unintentional harm: AI models trained on biased data can lead to discriminatory outcomes, jeopardizing fairness and justice and further marginalizing already disempowered groups.
  5. Coordinated Inauthentic Behavior (CIB): AI can be used to create and manipulate online accounts, spread misinformation, and influence public opinion through bots and other automated methods.
  6. Internet of Things (IoT)/Critical infrastructure operations: Malicious actors can leverage AI to disrupt critical infrastructure operations, such as power grids and transportation systems.

The United States Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, signed by President Biden in late October 2023, is a direct acknowledgement of these threats. It aims to reassure the public in the face of perceived existential risks by establishing standards and best practices for detecting AI-generated content and authenticating official content.

AI as a solution to AI threats

Similar to fighting fire with fire, AI threats can be thwarted with AI solutions. The thoughtful development of AI to prevent these threats from becoming reality is key. Fortunately, companies like Onfido are actively working to mitigate these threats through responsible AI development and deployment. Atlas™, Onfido’s award-winning AI, automates identity verification without compromising speed, allowing companies across industries to protect against bad actors looking to conduct cyberattacks, Coordinated Inauthentic Behavior, or scams of any size. 

Atlas™ was built using diverse datasets and refined over more than 10 years, undergoing a continuous process of testing, evaluation, validation, and verification (TEVV) to mitigate AI bias in identity verification. As a result of these anti-bias practices, Onfido was recognized in the CogX Awards for ‘Best Innovation in Algorithmic Bias Mitigation’ and ‘Outstanding Leader in Accessibility’.

Today Atlas™ powers The Onfido Real Identity Platform using a unique micro-model architecture that combines over 10,000 models trained to detect specific fraud markers. Each micro-model can be trained and tuned in days, allowing users to react quickly to unprecedented threats. In addition, to keep ahead of rapidly-evolving fraud, Onfido's Fraud Lab creates deepfakes and 3D masks to replicate the most advanced attacks. It also generates thousands of fraud samples and synthetic identities to train Atlas™ AI faster than possible using real fraud data alone.
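The micro-model idea described above can be sketched in a few lines: many small, independently trained detectors each score one specific fraud marker, and their outputs are combined into a single decision. This is a hypothetical illustration only; the model names, scoring functions, and threshold below are assumptions for demonstration, not Onfido's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MicroModel:
    # The specific fraud marker this model targets (illustrative names).
    name: str
    # A scoring function returning a risk score in [0, 1] for a document.
    score: Callable[[dict], float]

def verify(document: dict, models: list[MicroModel], threshold: float = 0.5) -> dict:
    """Run every micro-model and flag the document if any marker fires."""
    scores = {m.name: m.score(document) for m in models}
    flagged = [name for name, s in scores.items() if s >= threshold]
    return {"approved": not flagged, "flagged_markers": flagged, "scores": scores}

# Two toy micro-models standing in for thousands of specialized detectors.
models = [
    MicroModel("font_mismatch", lambda d: 0.9 if d.get("font_anomaly") else 0.1),
    MicroModel("face_liveness", lambda d: 0.8 if d.get("replayed_video") else 0.05),
]

clean = verify({"font_anomaly": False, "replayed_video": False}, models)
suspect = verify({"font_anomaly": True, "replayed_video": False}, models)
```

Because each detector is small and independent, a new one can be trained and deployed without retraining the rest, which is what makes reacting to a novel fraud pattern in days plausible.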

In conclusion, AI can be used responsibly and ethically to counteract threats posed to national security. By powering open, secure, and inclusive relationships between businesses and their customers around the world, Onfido slows the advancement of AI used for wrongdoing and helps ensure a bright, secure future. Companies building AI should take active measures to join the fight against AI-posed threats, and customers of AI products should ensure they contract with responsibly built AI companies.

See how it works

Take an interactive tour of the Onfido Real Identity Platform, including a hands-on workflow builder in Onfido Studio. 

Take the tour