DHS Expands AI Surveillance Despite Court Rulings

Last week, in an order concerning a detained individual, a federal judge in Minnesota listed 96 court orders that Immigration and Customs Enforcement (ICE) has violated across 74 cases. The judge emphasized, “This list should give pause to anyone—no matter his or her political beliefs—who cares about the rule of law,” and noted that “ICE has likely violated more court orders in January 2026 than some federal agencies have violated in their entire existence.”

Despite this judicial defiance, ICE continues to acquire sophisticated surveillance tools powered by artificial intelligence and is rapidly deploying them across American cities. The latest Department of Homeland Security (DHS) AI inventory, released on January 28, reveals more than 200 AI use cases currently deployed or in development by DHS and its component agencies—marking a nearly 40% increase since the last disclosure in July 2025. Much of this growth is driven by ICE, which has added 24 new AI applications, including tools for processing tips, reviewing social media data, and confirming identities with facial recognition.

Evidence of Alleged Lawlessness

The evidence of alleged lawlessness by DHS component agencies is substantial, particularly concerning digital rights and surveillance. The ACLU has filed a lawsuit documenting ICE and CBP’s pattern of suspicionless stops, warrantless arrests, and racial profiling of Minnesotans, including the use of facial recognition technology. The New York Times reported on how tech companies are facilitating these surveillance applications, building a for-profit infrastructure that targets individuals and strains constitutional principles.

A Growing AI Arsenal

According to a FedScoop analysis, the newly disclosed inventory details several applications that have raised concerns among experts. Here are five notable examples:

  • ELITE: A Palantir tool using generative AI to assist ICE officers in extracting information from records and warrants. This tool creates a map populated with potential deportation targets, providing dossiers on individuals with an address “confidence score.”
  • Mobile Fortify: A facial recognition and fingerprint matching application used by both CBP and ICE since May 2025, which has been documented to misidentify individuals during immigration raids.
  • AI-enhanced tip processing: Utilizing Palantir technology and large language models to efficiently review and categorize incoming tips from the public.
  • Hurricane Score: A predictive risk model assessing the likelihood that non-citizens in Alternatives to Detention (ATD) programs will fail to comply with check-in requirements (a rough sketch of this class of model follows the list).
  • Open Source and Social Media Analysis: Developed by NexusXplore, this tool enhances social media searches with AI modules for text detection, translation, and image recognition.
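
The internal workings of Hurricane Score have not been published. Purely as an illustration of how predictive compliance-risk scores of this kind are commonly built, the sketch below fits a logistic regression to hypothetical check-in history features; every feature name, label, and data point here is an assumption for illustration, not a description of the actual DHS model.

```python
# Illustrative only: a toy compliance-risk model in the general style of
# predictive scores like Hurricane Score. All features and labels are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per enrollee: [missed_checkins, months_in_program, prior_contacts]
X = np.array([
    [0, 12, 1],
    [3,  4, 0],
    [1,  8, 2],
    [5,  2, 0],
    [0, 20, 3],
    [2,  6, 1],
])
# 1 = failed to comply with check-in requirements, 0 = complied (invented labels)
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Score a new hypothetical case: probability of non-compliance in [0, 1]
new_case = np.array([[2, 5, 0]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Predicted non-compliance risk: {risk:.2f}")
```

The point of the sketch is the pattern, not the particulars: a handful of behavioral features collapsed into a single probability, which is precisely why critics worry about opaque scores driving enforcement decisions.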

At least 23 applications utilize some form of facial recognition or biometric identification. Among these, some tools are designed to evaluate user-uploaded ID photos for suitability in employment authorization applications, while others scour public internet images for matches. DHS has issued a $3.8 million contract to Clearview for these purposes.
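
DHS has not disclosed how these matching systems are implemented. As a rough sketch of the approach most face-recognition pipelines describe, images are mapped to embedding vectors and compared by cosine similarity against a gallery of enrolled photos; in the example below the embedding function is a random stand-in rather than any vendor’s model, and the match threshold is an arbitrary assumption.

```python
# Illustrative only: embedding-based face matching as commonly implemented.
# Real systems produce embeddings with a trained face-recognition network;
# here they are random stand-ins, so no actual matching occurs.
import numpy as np

rng = np.random.default_rng(0)

def embed(image_id: str) -> np.ndarray:
    """Stand-in for a face-embedding model; ignores its input and returns a random unit vector."""
    vec = rng.standard_normal(128)
    return vec / np.linalg.norm(vec)

# Hypothetical gallery of enrolled identities
gallery = {name: embed(name) for name in ["person_a", "person_b", "person_c"]}

def best_match(probe: np.ndarray, threshold: float = 0.6):
    """Return the closest gallery identity by cosine similarity, or None if below threshold."""
    scores = {name: float(probe @ vec) for name, vec in gallery.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else (None, score)

probe = embed("field_photo_001")
print(best_match(probe))
```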

Concerns and Challenges

Of the 238 use cases in the latest inventory, 55 are deemed “high-impact.” However, the decision not to classify tools like ELITE as high-impact raises eyebrows: DHS claims they do not have “significant effects” on individuals’ rights or safety, a characterization that allows ICE to use advanced technology for neighborhood raids without substantial oversight.

Past analyses by organizations such as Just Futures Law have described the DHS inventory as “scattered, misleading, and incomplete,” with critical procurement information missing. The latest version similarly lacks substantial data, particularly in risk management fields.

Automating Authoritarianism

DHS seems to be using cloud capabilities and AI to automate monitoring and expand repression. The consolidation of data from various sources—including license plate readers and social media—has created what some experts term a surveillance panopticon.

Critics argue that this technological expansion represents a significant win for Big Tech companies like Palantir, which have positioned themselves to secure lucrative DHS contracts. The result is an extensive surveillance infrastructure that allows the agency to compile detailed dossiers on individuals, raising concerns about civil liberties and personal privacy.

The implications of these new surveillance technologies on U.S. constitutional rights remain uncertain. Polling indicates that public sentiment is shifting; more Americans now support abolishing ICE than oppose it. The coming weeks are expected to see discussions on reforms aimed at curtailing the agency’s surveillance capabilities.

Conclusion

The DHS’s increasing reliance on AI surveillance technologies poses significant challenges to civil rights and personal privacy. As the agency continues to expand its arsenal, the public must remain vigilant and advocate for transparency and accountability in the use of these powerful tools.
