ICE Drives AI Use Case Growth Within Homeland Security
The Department of Homeland Security (DHS) is actively expanding its use of artificial intelligence (AI) and is currently working on over 200 AI use cases, a nearly 37% increase since July 2025. A key contributor to this growth is U.S. Immigration and Customs Enforcement (ICE), which has added 25 new AI use cases since its last report.
New AI Applications by ICE
The newly introduced use cases include:
- Processing tips
- Reviewing mobile device data relevant to investigations
- Confirming identities of individuals through biometric data
- Detecting intentional misidentification
Among these, three are products from Palantir, a technology partner to the U.S. government that has faced scrutiny over its controversial track record. One notable application is Enhanced Lead Identification & Targeting for Enforcement (ELITE), which uses generative AI to help officers extract information from records.
Concerns Over Civil Liberties
As AI use expands, concerns about civil liberties and privacy have emerged. Quinn Anex-Ries, a senior policy analyst at the Center for Democracy and Technology, noted that the inventory raises more questions than it answers about the legality of DHS's actions.
The annual inventory process, initiated by a 2020 executive order, is meant to track AI use across federal agencies, but previous iterations were criticized as incomplete or inaccurate. The process was strengthened in 2024, though further delays are expected because of the longest federal government shutdown in history.
New Technologies and Their Implications
Palantir’s technology is being applied for:
- Tip processing using large language models
- Mobile Fortify, an application for identity verification that compares biometric data with agency records
Mobile Fortify in particular has drawn attention from lawmakers concerned about potential misuse. ICE began using the application in May 2025.
Risk Management Practices
Notably, DHS's inventory categorizes certain use cases as "presumed high-impact but determined not high-impact." This classification lets agencies bypass additional risk management practices, prompting concerns among experts about the adequacy of oversight. Palantir's ELITE tool, for instance, falls into this category, with DHS arguing that its outputs do not significantly influence decisions affecting individuals.
Former advisers and other experts have expressed alarm over how risks are categorized and over the lack of established risk management practices.
High-Impact Use Cases and Oversight Challenges
Among the high-impact use cases, Mobile Fortify has yet to complete the minimum required risk management practices. Despite being actively deployed, the tool lacks appropriate fail-safes and appeal processes for affected individuals.
Additionally, ICE is adopting an AI-Assisted Resume Screening Tool powered by OpenAI's GPT-4 for HR tasks; it is also labeled high-impact and is currently in pre-deployment testing.
Conclusion
The expansion of AI use within DHS represents a significant technological advance but raises critical questions about governance, oversight, and the protection of civil liberties. Continued transparency and rigorous risk management will be essential as these technologies evolve.