The American Immigration Council does not endorse or oppose candidates for elected office. We aim to provide analysis of the impact of elections on the U.S. immigration system.
On April 30, the Department of Homeland Security (DHS) released an updated 2024 inventory of unclassified, non-sensitive AI use cases across the agency. The public data reveals something striking: artificial intelligence is not just a future possibility in immigration enforcement. It’s already here. In fact, the inventory lists 105 active DHS AI use cases deployed by the major immigration agencies, spanning applications that affect asylum screening, border surveillance, fraud detection, and other key DHS functions.
DHS has made its AI use case inventory public since 2022, but the 2024 inventory provides the most comprehensive disclosure to date. The 2023 list included only 39 use cases from immigration agencies within DHS. In 2024, however, the Office of Management and Budget (OMB) issued new guidance requiring agencies to disclose more of their AI use cases.
Of the 105 AI applications deployed by immigration agencies in the 2024 inventory:
Customs and Border Protection (CBP) leads with 59 AI use cases.
Immigration and Customs Enforcement (ICE) reports 23.
U.S. Citizenship and Immigration Services (USCIS) reports 18.
DHS headquarters lists five AI applications that apply across all component agencies.
The inventory organizes use cases into topic areas. While many fall under internal agency support or government services, the majority of these AI use cases, 61%, are tied to “law and justice.” They include the following tools:
Biometric identity verification
Example: CBP’s unified processing/mobile intake system uses facial recognition to match individuals against a photo repository. Integrated with the Traveler Verification Services, the system helps agents quickly identify people with prior security concerns or flags and speeds up processing at the border.
Screening
Example: CBP uses a tool called Babel to collect and analyze open-source and social media content related to a particular traveler. Using AI for translation, image recognition, and text detection, Babel helps analysts identify potential threats or people who need additional vetting, supplementing the manual review process and, in some cases, reducing the need for additional screening.
Investigative support
Example: ICE’s Homeland Security Investigations unit generates leads using an AI tool that analyzes email data gathered in investigations. The system uses natural language processing and pattern detection to sift through large volumes of email, video, and audio data and identify communications that may be related to criminal activity. Based on these patterns, ICE identifies particular individuals or networks for further action by investigators and analysts.
Of CBP’s 59 use cases, 71% are related to law and justice, while 65% of ICE’s AI projects fall into the same topic area. USCIS has the lowest share of law and justice cases, at 39%.
The DHS inventory also identifies use cases that rely on facial recognition and facial capture technology. Sixteen of the immigration-related AI use cases include facial recognition or facial capture, and most of them are deployed by CBP and ICE.
According to the inventory, 27 of the 105 use cases are labeled as “rights-impacting.” These are cases that OMB, under the Biden administration, identified as affecting individuals’ rights, liberties, privacy, equal access to opportunities, or their ability to apply for government benefits and services.
Last month, however, OMB under the Trump administration released two new memos addressing federal use of AI. Some attorneys have pointed out that the minimum risk standards for “rights-impacting” AI are missing from the two new memos.
CBP has the highest number of rights-impacting cases, followed by USCIS (7) and ICE (5). Furthermore, 28 cases are identified as “too new to assess,” suggesting that these tools will still need to be evaluated for whether they are “rights-impacting” before their initial phase is complete.
Many immigration attorneys already navigate complex and opaque processes. Now they must also contend with the growing use of AI in decision-making. For example, if an AI-powered system flags an asylum application as potentially fraudulent, how does that flag factor into the decision? Is it disclosed to the applicant? How can applicants appeal an AI-driven decision?
There are many unanswered questions about how DHS uses AI in immigration enforcement. But one thing is clear: AI is already shaping immigration outcomes. The question is whether these systems can be held accountable, and whether the agencies that deploy them are prepared for that accountability.
This post is the first in a series exploring how DHS is integrating AI into immigration enforcement. Future posts will examine how AI informs decision-making in immigration enforcement, the use of surveillance tools at the border, the risks of bias, the potential for greater efficiency, and ultimately how to build systems that are more transparent and accountable.
Filed under: Customs and Border Protection, Department of Homeland Security, Immigration and Customs Enforcement