AI Innovations in Homeland Security: Expanding Applications and Oversight Challenges

ICE Drives AI Use Case Growth Within Homeland Security

The Department of Homeland Security (DHS) is actively expanding its use of artificial intelligence (AI) and is now working on more than 200 AI use cases, a nearly 37% increase since July 2025. A key driver of this growth is Immigration and Customs Enforcement (ICE), which has added 25 new AI use cases since its last report.

New AI Applications by ICE

The newly introduced use cases include:

  • Processing tips
  • Reviewing mobile device data relevant to investigations
  • Confirming identities of individuals through biometric data
  • Detecting intentional misidentification

Among these, three are products from Palantir, a technology contractor for the U.S. government that has faced scrutiny over its controversial history. One notable application is Enhanced Lead Identification & Targeting for Enforcement (ELITE), which uses generative AI to help officers extract information from records.

Concerns Over Civil Liberties

As AI use expands, concerns regarding civil liberties and privacy have emerged. Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, noted that the inventory raises more questions than it answers about the legality of DHS's actions.

The annual inventory process, initiated by a 2020 executive order, aims to track AI use within federal agencies. However, previous iterations faced criticism for being incomplete or inaccurate. Efforts to enhance this process were implemented in 2024, but delays are anticipated due to the longest federal government shutdown in history.

New Technologies and Their Implications

Palantir’s technology is being applied for:

  • Tip processing using large language models
  • Mobile Fortify, an application for identity verification that compares biometric data with agency records

Mobile Fortify, which ICE began using in May 2025, has also drawn attention from lawmakers concerned about potential misuse.
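At a conceptual level, identity verification of the kind described for Mobile Fortify relies on one-to-many biometric matching: a captured biometric template is compared against a gallery of enrolled records and flagged as a match only above a confidence threshold. The sketch below illustrates that general technique; the template values, record IDs, threshold, and function names are all hypothetical and are not drawn from any DHS system.

```python
# Illustrative sketch only: one-to-many template matching via cosine
# similarity. Real systems derive high-dimensional embeddings from a
# face-recognition model; the 3-element vectors here are placeholders.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_identity(query_template, gallery, threshold=0.85):
    """Return (record_id, score) for the best match above threshold,
    or (None, score) if no enrolled record clears it."""
    best_id, best_score = None, -1.0
    for record_id, template in gallery.items():
        score = cosine_similarity(query_template, template)
        if score > best_score:
            best_id, best_score = record_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical gallery of enrolled templates.
gallery = {"A-001": [0.9, 0.1, 0.2], "A-002": [0.1, 0.8, 0.5]}
record_id, score = match_identity([0.88, 0.12, 0.18], gallery)
```

The threshold is the policy-relevant knob: set too low, it produces the misidentifications critics worry about; set too high, legitimate matches are missed, which is why oversight of such parameters matters.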

Risk Management Practices

DHS’s inventory categorizes certain use cases as “presumed high-impact but determined not high-impact.” This classification allows agencies to bypass additional risk management practices, prompting concerns among experts about the adequacy of oversight. Palantir’s ELITE tool, for instance, falls into this category; DHS argues its outputs do not significantly affect decisions about individuals.

Former advisers and experts have expressed alarm over the categorization of risks and the lack of established risk management tactics.

High-Impact Use Cases and Oversight Challenges

Among the high-impact use cases, Mobile Fortify has yet to complete the required minimum risk management practices. Despite being actively deployed, ICE has not established fail-safes or appeal processes for affected individuals.

Additionally, ICE is using an AI-Assisted Resume Screening Tool powered by OpenAI’s GPT-4 for HR tasks; it is also labeled high-impact and is currently in pre-deployment testing.

Conclusion

The expansion of AI use within DHS represents a significant technological advance but raises critical questions about governance, oversight, and the protection of civil liberties. Continued transparency and rigorous risk management will be essential as these technologies evolve.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...
Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...