As a cybersecurity reporter for ProPublica, much of my work over the past two years has focused on how the federal government and IT contractors like Microsoft have navigated major technology transitions. Artificial intelligence is now in the news every day.
This emerging technology has everyone hooked: home users, businesses, and the federal government are all rushing to adopt it. President Donald Trump and his Cabinet say that if we deploy AI quickly enough, it will transform the country, making us more prosperous, efficient, and secure.
But this message is not new. President Barack Obama’s administration used much the same language a decade and a half ago, when the United States entered the cloud computing technological revolution in earnest.
I have studied how the federal government has responded to that transition over the past two decades, and how it has mishandled it. My reporting offers several warnings and valuable lessons as policymakers encourage the use of AI and federal agencies deploy AI technologies.
Lesson 1: There’s no such thing as a free lunch.
Then: In the early 2020s, the federal government was rocked by a series of cyberattacks linked to Russia, China, and Iran. The Biden administration called on big tech companies to help strengthen the country’s defenses. In response, Microsoft CEO Satya Nadella pledged $150 million in technology services to help the government improve its digital security. The company also offered “free” security upgrades to government customers.
Now: Last year, the Trump administration announced a number of agreements with technology companies aimed at helping federal agencies “purchase enterprise AI tools at government-friendly prices.” Agencies can use OpenAI’s ChatGPT for $1. Google’s Gemini costs 47 cents. xAI’s Grok is 42 cents. The government hoped the low-cost pricing would “make it easier for federal teams to acquire powerful AI capabilities to improve mission execution and operational efficiency.”
Lesson learned: Beware of freebies. Our investigation into Microsoft’s seemingly straightforward offer revealed a more complex, profit-driven plan. Installing the free upgrades effectively locked federal customers in: once the trial period ended, switching to a competitor was cumbersome and costly, leaving agencies little choice but to pay steep subscription fees. The plan worked. As one former Microsoft sales representative told me, “It was more successful than any of us could have imagined.” In response to questions about the effort, Microsoft said its “single goal during this period was to support the government’s urgent request to strengthen the security posture of federal agencies that continue to be targeted by advanced nation-state threat actors.”
Agencies looking to purchase AI tools at a discount must consider how their costs will increase in the future. The General Services Administration warns that AI “can rapidly increase the cost of use without proper oversight and administrative controls,” and advises agencies to “set usage limits and regularly review usage reports.”
Lesson 2: A monitoring program is only as effective as its resources.
Then: During the Obama administration, the federal government moved its sensitive information and computing needs to data centers owned and operated by private companies. Recognizing the potential risks, the government created the Federal Risk and Authorization Management Program (FedRAMP) in 2011 to ensure the security of the cloud computing services it encourages U.S. government agencies to use.
However, my recent reporting on the program shows it was no match for Microsoft. The company effectively wore down the FedRAMP team over five years as it sought approval for a major cloud product known as GCC High. Despite serious cybersecurity concerns, FedRAMP ultimately approved the product, in part because it lacked the resources to keep pushing back. In response to my questions, Microsoft said: “We stand behind our products and the comprehensive measures we have taken to ensure that all FedRAMP-certified products meet the necessary security and compliance requirements.”
Now: Today, this small hub within the General Services Administration has even fewer resources to oversee the cloud technologies, including AI, that the government relies on. FedRAMP says it is currently operating with “skeletal support staff” and “limited customer service.” The program was an early target of the Trump administration’s Department of Government Efficiency.
The bottom line: FedRAMP, which a 2024 White House memo said “must be an expert program capable of analyzing and verifying security claims” by cloud providers, is now little more than a rubber stamp for the tech industry, former employees said. The impact of this downsizing on federal cybersecurity will be far-reaching as federal agencies deploy AI tools that tap into large amounts of sensitive information. A GSA spokesperson defended the program, saying FedRAMP now “operates with enhanced oversight and accountability mechanisms.”
Lesson 3: An “independent” review isn’t always independent.
Then: The government has long relied on so-called third-party assessors to verify the security claims of cloud service providers like Microsoft and Google. In theory, these firms are independent experts who advise FedRAMP on whether a product meets federal standards. In reality, their independence comes with an asterisk: they are paid by the very companies they evaluate.
My reporting has shown that this setup creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessment firms recommended the product even though they were unable to fully vet it, according to a former FedRAMP reviewer. One of the firms did not respond to my questions; the other disputed that account.
We found that FedRAMP is well aware of how financial arrangements between cloud companies and their assessors can skew official findings on cybersecurity. The program even created a “back channel” encouraging assessors to share concerns they might not raise in official reports for fear of angering their technology customers and losing business.
Now: As FedRAMP has become a “paper pusher,” as one former GSA official put it, these third-party assessment firms have taken on even more importance in the review process. In response to ProPublica’s questions, GSA said FedRAMP’s system “does not pose an inherent conflict of interest for professional auditors to meet ethical and contractual performance expectations.” The agency did not respond to questions about the program’s back channel.
The bottom line: The pendulum has essentially swung back to the pre-FedRAMP days, when each federal agency was individually responsible for vetting the products it uses. GSA told me that FedRAMP’s job is to “ensure that government agencies have sufficient information to make decisions about risk.” The problem is that agencies often lack the staff and resources to conduct thorough reviews, meaning the entire system rests on the claims of cloud companies and the assessments of third-party firms that those same companies pay.
