OpenAI has lost another longtime AI safety researcher and has been accused by a different former researcher of violating copyright law in the training of its models. Both cases raise serious questions about OpenAI’s methods, culture, direction, and future.
On Wednesday, Miles Brundage, who led the team responsible for thinking about policies to prepare both the company and society as a whole for the emergence of “artificial general intelligence” (AGI), announced that he would be leaving the company on Friday. He said that doing so would allow him to continue his work with fewer constraints.
In a lengthy post on Substack, Brundage said that OpenAI had placed increasingly strict limits on what he could say in published research. He also said that by founding or joining an AI policy nonprofit, he hoped to warn people more effectively about the urgency of AI’s dangers, since “when claims to this effect come from industry, they are often dismissed as hype.”
“Consideration for safety”
Brundage’s post was not an overt criticism of his soon-to-be former employer; indeed, he listed CEO Sam Altman as one of many people who provided “comments on earlier versions of this draft.” But it did contain a lengthy complaint that AI companies in general are “not necessarily [giving] AI safety and security the attention it deserves by default.”
“There are many reasons for this, one of which is a misalignment between private and societal interests, which regulation can help reduce. There are also difficulties around credible commitments to and verification of safety levels, which further incentivize corner-cutting,” Brundage wrote. “Corner-cutting is occurring across a wide range of areas, from preventing harmfully biased and hallucinated outputs to investing to prevent near-term catastrophic risks.”
Brundage’s resignation extends a string of high-profile departures from OpenAI this year, including Chief Technology Officer Mira Murati and co-founder and former chief scientist Ilya Sutskever, many of which are clearly or possibly related to the company’s shifting stance on AI safety.
OpenAI was originally founded as a research organization for the safe development of AI, but over time its need for substantial outside funding grew; the company recently raised $6.6 billion at a valuation of $157 billion. The balance has gradually tipped toward the profit-making side of the operation, which may soon officially become the dominant part of OpenAI’s structure.
Co-founders Sutskever and John Schulman both left OpenAI this year to focus more on safe AI development. Sutskever founded his own company, and Schulman joined Anthropic, OpenAI’s biggest rival. Jan Leike, a key colleague of Sutskever’s, also departed, saying that at OpenAI safety had “taken a backseat to shiny products.”
It had already been reported in August that roughly half of OpenAI’s safety-focused staff had left in recent months, and that was before Murati’s dramatic departure. As Fortune reported, Murati frequently found herself having to adjudicate arguments between the company’s safety-first researchers and its more gung-ho commercial team. For example, OpenAI staff had just nine days to test the safety of the company’s powerful GPT-4o model before its launch, according to people familiar with the situation.
In a further sign of OpenAI’s shifting stance on safety, Brundage said his AGI Readiness team will be disbanded and its staff “distributed among other teams.” The team’s economic research unit will now report to OpenAI’s new chief economist, Ronnie Chatterji, he said. He did not say how the other staff would be reassigned.
It’s also worth noting that Brundage isn’t the first person to run into problems over research published at OpenAI. Last year, when OpenAI’s safety-focused board dramatically but briefly fired Altman, it emerged that he had put pressure on then board member Helen Toner over her co-authorship of an AI safety paper that implicitly criticized the company.
An unsustainable model
Concerns about OpenAI’s culture and methods were also raised in another article on Wednesday, when The New York Times published a significant piece about Suchir Balaji, an AI researcher who spent nearly four years at OpenAI before leaving in August.
Balaji said he resigned because he had concluded that OpenAI was violating copyright law in the way it trained its models on copyrighted data from the web, and because he deemed chatbots like ChatGPT to be more harmful than beneficial to society.
Once again, the focus here is OpenAI’s transformation from a research organization into a money-making company. “In a research project, generally speaking, you can train on any data. That was the thinking at the time,” Balaji told the Times. He argues that AI models now threaten the commercial viability of the businesses that generated that data in the first place, saying, “This is not a sustainable model for the internet ecosystem as a whole.”
OpenAI and many of its peers have been sued by copyright holders over their training processes, which involve copying vast amounts of data so the companies’ systems can ingest and learn from it. The resulting AI models are not thought to contain complete copies of the data itself, and they are unlikely to output exact copies in response to a user’s prompt; the lawsuits generally target that initial copying.
The standard defense in such cases is for the company accused of infringement to argue that its use of the copyrighted material constitutes “fair use”: that it is not infringing copyright because it has transformed the copyrighted works into something else in a non-exploitative way, has used them in a manner that does not directly compete with the original copyright holders or prevent them from exploiting their works in a similar fashion, or is serving the public interest. The defense is easier to apply to non-commercial use cases, and it is always decided by judges on a case-by-case basis.
In a blog post on Wednesday, Balaji delved into the relevant U.S. copyright law and assessed how the tests for establishing “fair use” relate to OpenAI’s data practices. He argued that ChatGPT’s advent has hurt traffic to destinations such as the developer Q&A site Stack Overflow, and that in some cases ChatGPT’s output can substitute for the information found on such sites. He also presented mathematical reasoning that he claims can be used to determine the relationship between an AI model’s output and its training data.
Balaji is a computer scientist, not a lawyer, and many copyright lawyers believe that the fair-use defense for training AI models on copyrighted works stands a good chance of succeeding. Even so, Balaji’s intervention is sure to attract the attention of the lawyers representing the publishers and book authors who are suing OpenAI for copyright infringement. His insider analysis may well play a role in those cases, and their outcomes could determine the future economics of generative AI, and perhaps the future of companies like OpenAI.
It’s unusual for employees at AI companies to go public with their copyright concerns. Perhaps the most significant case to date is that of Ed Newton-Rex, who was Stability AI’s head of audio until his departure last November, when he argued that “today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.”
“We build our AI models using publicly available data, in a manner protected by fair use and related principles, and supported by long-standing and widely accepted case law,” an OpenAI spokesperson said in a statement. “We believe these principles are fair to creators, necessary for innovators, and important to America’s competitiveness.”
“We’re excited to follow its impact”
Meanwhile, an OpenAI spokesperson said Brundage’s “plan to commit to independent research on AI policy gives him the opportunity to have a broader impact, and we look forward to learning from his work and following its impact.”
“We are confident that Miles will continue to raise the bar for the quality of policymaking in industry and government in his new role,” they added.
During his career at OpenAI, Brundage’s research ranged from developing methods for testing AI safety and studying current national and international AI governance issues to preparing for potentially superhuman AGI, rather than AI’s nearer-term safety risks. Over time, though, he found the scope of his work at OpenAI narrowing.
Meanwhile, OpenAI has been adding a number of heavyweight policy experts with extensive experience in politics, national security, or foreign affairs to lead teams that consider various aspects of AI governance and policy. The company hired former Obama administration national security adviser Anna Makanju, who previously worked in policy roles at SpaceX’s Starlink and Facebook, to oversee its initial outreach to government officials in Washington, D.C., and around the world; she now serves as OpenAI’s vice president of global impact. It recently hired Chris Lehane, a veteran political operative who previously held communications and policy roles at Airbnb, as vice president of global affairs. Chatterji, who will take over the economics team that previously reported to Brundage, has held various advisory roles in the White Houses of Presidents Joe Biden and Barack Obama, and also served as chief economist at the Commerce Department.
In fast-growing technology companies, it’s not uncommon for early employees to see their roles narrowed by the later arrival of senior staff, a phenomenon often referred to in Silicon Valley as “layering.” And although Brundage’s blog post does not say so explicitly, the earlier loss of some of his near-term AI policy research to Makanju and Lehane, followed by the handover of the economics team to Chatterji, may have been the final straw. Brundage did not immediately respond to a request for comment on this story.
Brundage used his post to highlight the issues he intends to focus on. These include assessing and forecasting progress in AI; frontier AI safety and security regulation; the economic impacts of AI; accelerating beneficial use cases for AI; AI hardware distribution policy; and, at a higher level, a comprehensive “AI grand strategy.”
He warned that “neither OpenAI nor any other frontier lab” is really prepared for the emergence of AGI, and neither is the outside world. “To be clear, I don’t think this is a controversial statement among OpenAI’s leadership,” he stressed, adding that people should only continue working at the company insofar as they take seriously the fact that their actions and words shape its culture, and that this can create positive or negative path dependencies as the organization begins to steward increasingly advanced capabilities.
Brundage said OpenAI has offered to support his future research with funding, computing credits, and even early access to its models. However, he said he has not yet decided whether to accept these offers, because they “could undermine the reality or perception of independence.”