Yves here. AI that promises “better” surveillance is a prospect likely to further erode riders’ trust in this technology. But with cameras already widely deployed on subways and buses, New York City may not be the last city to face these questions, and it is at least being a little transparent about how it moves forward. Officials are wary because a past AI experiment in weapons detection, supposedly a relatively simple task, produced a large number of false positives.
Also note that the city is not considering the use of facial recognition technology. If you’re concerned about such things, there’s another argument for wearing a mask.
The MTA has begun exploring the potential use of artificial intelligence within its transit system to detect weapons, monitor abandoned items, and even predict potentially dangerous behavior.
Officials said an unspecified number of technology providers and system integrators responded to a request for information issued by the transit authority, meeting the Dec. 30 deadline.
“There is interest across the board,” MTA Chief Security Officer Michael Kemper told THE CITY. “That’s not just coming from the MTA, but from the business community that works with us, the AI business community.”
The request details the early stages of the MTA’s move toward potentially using AI to perform complex public safety tasks, such as analyzing real-time video feeds from subways and buses and predicting potentially dangerous behavior through transit system cameras.
“This is not only the norm, it’s expected. AI is here and AI is the future,” Kemper said. “It would be negligent on our part not to look into it, investigate it and explore it.”
But technology watchdog groups have warned that the AI boom comes with privacy risks and that tracking capabilities could expand beyond what the MTA needs for video analysis.
Jerome Greco, supervising attorney for the Legal Aid Society’s digital forensics division, said the technology’s potential ability to monitor “unusual” or “dangerous” behavior within a transit environment comes with a number of potential problems, including “very negative” interactions with police.
“This use of AI is not like Netflix telling you what movie to watch next,” Greco said. “If it’s wrong, the consequences can be pretty significant, and I don’t think the MTA should be too cavalier about that.”
William Owen, communications director for the Surveillance Technology Oversight Project, likened the transit agency’s efforts to a pilot program for weapons detectors that then-Mayor Eric Adams and the New York City Police Department introduced in 2024 on subways. During the month-long test, which included more than 3,000 searches at 20 stations, the AI-powered scanners detected 12 knives but no guns, and generated more than 100 false positives.
“It actually turned out to be just a metal detector, which flagged a number of umbrellas and other items, not actual weapons,” Owen said.
Kemper said the MTA understands the issues being raised regarding the use of AI video analytics in transit systems, calling it a “tool” that augments human decision-making.
“People have concerns and questions about it, and it’s our job to be transparent and answer those questions,” he said. “But we need to move forward and explore these technologies to keep riders safe.”
Security cameras installed in subway cars, Jan. 6, 2026. Credit: Alex Krales/THE CITY
Notably, the request makes no mention of facial recognition technology, which the NYPD controversially used in an April 2024 incident and which critics have urged law enforcement to reconsider. A 2021 Amnesty International investigation found that the NYPD was able to feed images from more than 15,000 cameras in Brooklyn, the Bronx, and Manhattan into facial recognition software.
The MTA says its AI research focuses on leveraging current technology for public safety.
The technology providers’ response to the MTA’s request is the latest move by the nation’s largest mass transit system to adapt burgeoning technology for security purposes. More than 15,000 cameras are installed throughout the transit system and on more than 6,000 subway cars.
Artificial intelligence is already being tested elsewhere in the city’s transport network.
Last year, the authority mounted Google Pixel smartphones on some subway cars along the A line, pairing them with advanced artificial intelligence to detect and analyze potential track defects. The MTA is also testing new AI-enabled fare gates at some stations.
The safety-focused initiative aims to leverage the existing camera network, including at the 36th Street station in Sunset Park, where the streaming video feed failed during the April 2022 subway shooting.
A December 2022 report on the outage by the MTA inspector general found that the video feeds at the Brooklyn station and two other stops had gone down four days before the shooting.
In its request for information, the MTA acknowledged some of the limitations of its current use of cameras in the transit system.
“With more than 15,000 cameras deployed across approximately 472 subway stations, current surveillance methods remain manual, reactive, and resource-intensive,” the document said.
The document adds that the MTA aims to evolve its oversight structure into a “proactive intelligence-driven ecosystem capable of action alerting, risk assessment, and incident response.”
The effort will be based on advanced video analytics and AI technology, but will be guided by insights from certified experts in behavioral science and psychology who have “a deep understanding of human behavior in the transportation environment,” according to the MTA.
No timeline has been set for the project, but the next step will be to review proposals from stakeholders to determine what is feasible in a 24-hour transit system that carries nearly 4 million subway riders every day.
The MTA’s chief security officer said its potential value to passengers is “immeasurable.”
“As soon as we find something we’re happy with, we want to move forward quickly,” Kemper said.
Legal Aid’s Greco countered that the MTA needs to proceed with caution when it comes to predictive technology for “unusual” or “dangerous” behavior within the subway system.
“How does that work? Who gets to make that decision? And what are the consequences of that decision?” he said. “If we determine that something dangerous is going to happen, who knows what will be used to determine that, but what happens next?
“Are we essentially policing people for being weirdos?”
