U.S. government officials who work on AI issues say the reporting requirements are essential for alerting the government to potentially dangerous new capabilities in increasingly powerful AI models. One official, who requested anonymity to speak freely, pointed to OpenAI’s acknowledgment of its latest model’s “inconsistent refusal of requests to synthesize nerve agents.”
The official said the reporting requirements are not overly burdensome, arguing that unlike the European Union’s and China’s AI regulations, Biden’s executive order reflects “a very broad-based, light-touch approach that continues to foster innovation.”
Nick Reese, who served as the Department of Homeland Security’s first director of emerging technology from 2019 to 2023, rejects conservative claims that the reporting requirements put companies’ intellectual property at risk. And he said the requirements could actually benefit startups by encouraging them to develop AI models that are “more computationally efficient” and less data-heavy, keeping them below the reporting threshold.
Ami Fields-Meyer, a White House technology official who helped draft Biden’s executive order, says the power of AI makes government oversight essential.
“We’re talking about a company that claims to be building the most powerful system in the history of the world,” Fields-Meyer says. “The government’s first duty is to protect people. ‘Trust me, we’ve got this’ is not a particularly convincing argument.”
Experts praise NIST’s security guidance as a vital resource for building protections into new technologies. They point out that flawed AI models can cause serious social harms, such as rental and lending discrimination and the wrongful loss of government benefits.
President Trump’s own first-term AI order required federal AI systems to respect civil rights, a mandate that would require research into social harms.
The AI industry has largely welcomed Biden’s safety agenda. “What we’re hearing is that it’s broadly helpful to have this spelled out,” the U.S. official says. For startups with smaller teams, “it expands the capacity of their employees to address these concerns.”
Michael Daniel, a former presidential cyber adviser who now leads the Cyber Threat Alliance, an information-sharing nonprofit, said repealing Biden’s executive order would send “a worrying signal that the U.S. government is going to take a hands-off approach to AI security.”
When it comes to competition with China, advocates of the executive order argue that its safety rules could actually help the U.S. win by ensuring that American AI models perform better than their Chinese rivals and are protected from Chinese economic espionage.
Two completely different paths
If President Trump wins the White House next month, we can expect major changes in how the government approaches AI safety.
Helberg said Republicans want to prevent harm from AI by applying “existing tort and statutory law” rather than enacting sweeping new restrictions on the technology, and that they favor maximizing the opportunities AI presents rather than placing too much emphasis on risk mitigation. That could doom some of the reporting requirements and perhaps the NIST guidance.
Reporting requirements could also face legal challenges now that the Supreme Court has weakened the deference courts used to give government agencies when evaluating regulations.
And Republican pushback could even jeopardize NIST’s voluntary AI testing partnerships with major companies. “What will happen to those commitments under a new administration?” the U.S. official asks.
This polarization over AI has frustrated technologists who worry that President Trump will undermine the pursuit of safer models.
“The promise of AI also comes with risks, and it is critical that the next president continue to ensure these systems are safe and secure,” says Nicole Turner Lee, director of the Center for Technology Innovation at the Brookings Institution.