The Trump administration plans to use artificial intelligence to create federal transportation regulations, according to Department of Transportation records and interviews with six agency officials.
The plan was presented to DOT officials last month during a demonstration of how AI “has the potential to revolutionize the way we make rules,” as Daniel Cohen, an attorney for the agency, put it in a letter to colleagues. Cohen wrote that the demonstration would introduce “new AI tools available to DOT rulemakers to help us do our jobs better and faster.”
Discussions about the plan continued last week among agency leaders, according to meeting records reviewed by ProPublica. Gregory Zerzan, the agency’s general counsel, said at the meeting that President Donald Trump is “very excited about this initiative.” Zerzan seemed to suggest that the DOT is spearheading a broader federal effort, calling the department the “tip of the spear” and “the first agency to be able to fully use AI in drafting regulations.”
Zerzan seemed primarily interested in the quantity rather than the quality of regulation that AI could produce. “We don’t need perfect rules about XYZ. We don’t need very good rules about XYZ,” he said, according to the meeting transcript. “We’re hoping for something good enough,” Zerzan said, adding: “The area is flooded.”
These developments are worrying some at the DOT. The agency’s rules touch on virtually every aspect of transportation safety, including regulations that keep planes in the air, prevent gas pipelines from exploding, and stop freight trains carrying toxic chemicals from sliding off the rails. Some officials wondered why the federal government would entrust the creation of such important standards to an emerging technology notorious for making mistakes.
For the plan’s proponents, the answer is simple: speed. Creating and revising complex federal regulations can take months or even years. But using the DOT’s version of Google Gemini, employees were able to generate rule proposals in minutes, even seconds, two DOT staffers who attended the December demo recalled a presenter saying. Besides, most of what goes into the preamble of DOT’s regulatory documents is just “word salad,” one staffer remembered the presenter saying, and Gemini can churn out word salad.
At last week’s meeting, Zerzan reiterated his ambition to accelerate rulemaking with AI. The goal, he said, is to dramatically shorten the timeline for developing transportation regulations, allowing a rule to go from idea to a completed draft ready for review by the Office of Information and Regulatory Affairs in as little as 30 days. “It should take less than 20 minutes to submit a draft rule from Gemini,” he said.
The DOT’s plan, which has not previously been reported, is the latest step in the Trump administration’s push to incorporate artificial intelligence into federal operations. The administration is not the first to use AI: over the years, federal agencies have gradually adopted the technology for tasks such as translating documents, analyzing data and categorizing public comments. But the current administration is particularly enthusiastic about it. Trump signed several executive orders last year supporting AI. In April, Russell Vought, director of the Office of Management and Budget, circulated a memo calling for the federal government to accelerate its use of the technology. Three months later, the administration published its AI Action Plan, which included similar directives. But none of those documents explicitly called for using AI to write regulations, as the DOT now plans to do.
Those plans are already in motion. A DOT official briefed on the matter said the agency has used AI to draft a Federal Aviation Administration rule that has not yet been made public.
Skeptics argue that so-called large language models such as Gemini and ChatGPT should not be entrusted with complex and consequential governance tasks because they are error-prone and incapable of humanlike reasoning. But proponents see AI as a way to automate mindless tasks and wring efficiency out of a slow-moving federal bureaucracy.
That optimism was on display earlier this month in a windowless conference room in northern Virginia, where federal technology officials convened for an AI summit to discuss building an “AI culture” within the government and “upskilling” federal workers to take advantage of the technology. Among them was Justin Hubert, director of the DOT’s Cybersecurity and Operations Division, who spoke on a panel about the department’s plans for “rapid adoption” of artificial intelligence. He noted that many see humans as a “challenge” that slows AI down. But eventually, Hubert predicted, humans will be relegated to a supervisory role, monitoring “interactions between AIs.” Hubert declined to discuss his remarks with ProPublica.
Similar optimism about AI’s potential permeated the December presentation at DOT, which was attended by more than 100 officials, including department heads, high-ranking attorneys and staff from the offices that write rules. One attendee recalled the presenter enthusiastically saying that Gemini could handle 80% to 90% of the work of rulemaking, with DOT staff doing the rest.
To illustrate, the presenters asked the audience to suggest topics for which the DOT might need to create a Notice of Proposed Rulemaking, a public document that describes an agency’s plans to introduce new regulations or change existing ones. They then fed the suggested topic into Gemini, which produced a document resembling a notice of proposed rulemaking. But the actual regulatory text that would go into the Code of Federal Regulations appeared to be missing, one staff member recalled.
According to three attendees, the presenters expressed little concern that regulatory documents created by AI could contain so-called hallucinations, the erroneous text frequently generated by large language models such as Gemini. Catching those, the lead presenter suggested, is what DOT staff would be there for. “His vision for the future of rulemaking at DOT seemed to be that our job was to calibrate this mechanical product,” one employee said. “He was very excited.” (The three attendees couldn’t remember the name of the lead presenter but said they believed it was Brian Brossos, the agency’s acting chief AI officer. Brossos declined to comment, referring questions to the DOT press office.)
A DOT spokesperson did not respond to a request for comment. Cohen and Zerzan also did not respond to messages seeking comment. A Google spokesperson had no comment.
The December presentation left some DOT staff deeply skeptical. Rulemaking, they say, is a complex task that requires expertise not only in the subject matter at hand but also in existing statutes, regulations and case law. Errors and oversights in DOT regulations can lead to lawsuits, as well as to injuries and deaths in transportation systems. Some rule writers have decades of experience. But the presenter seemed to ignore all of that, attendees said. “It seems so irresponsible,” said one person, who like others requested anonymity because they were not authorized to speak publicly about the issue.
Mike Horton, the DOT’s former acting chief artificial intelligence officer, criticized the plan to use Gemini to write regulations, calling it akin to “asking a high school intern to write the rules.” (He said the plan wasn’t in place when he left the agency in August.) Noting that transportation safety regulations are a matter of life and death, Horton said agency leaders “want to go fast and break things, but going fast and breaking things means people get hurt.”
Scholars and researchers who track the use of AI in government expressed mixed opinions about the DOT plan. If agency rulemakers use the technology as a kind of research assistant, with sufficient oversight and transparency, some said, it could be useful and save time. But handing too much responsibility to AI could produce serious regulatory deficiencies and run afoul of the requirement that federal regulations be the product of reasoned decision-making.
“Just because these tools can generate a lot of words doesn’t mean those words will lead to high-quality government decisions,” said Bridget Dooling, a professor at Ohio State University who studies administrative law. “It’s very tempting to want to know how to use these tools, and I think it makes sense to try them out, but I think it should be done with a healthy dose of skepticism.”
Ben Winters, director of AI and privacy at the Consumer Federation of America, said the plan is especially problematic given the exodus of subject matter experts from the government as a result of the administration’s cuts to the federal workforce last year. Since President Trump returned to the White House, the Department of Transportation has had a net loss of nearly 4,000 of its 57,000 employees, including more than 100 lawyers, according to federal data.
Elon Musk’s Department of Government Efficiency has been a major champion of AI adoption in government. In July, the Washington Post reported on a leaked DOGE presentation that called for leveraging AI to eliminate half of all federal regulations, in part by having AI draft regulatory documents. “Writing is automated,” the presentation said. DOGE’s AI program “automatically drafts all filings for attorneys to edit.” DOGE and Musk did not respond to requests for comment.
The White House did not respond to questions about whether the administration plans to use AI in rulemaking at other agencies. Four of the administration’s top technology officials said they were not aware of any such plans. As for the DOT’s “tip of the spear” claim, two of the officials expressed skepticism. “We often see an attitude of, ‘We want to be seen as a leader in federal AI adoption,’” one person said. “I think this is really a marketing issue.”
