Google DeepMind executives are seeking consensus on what constitutes safe, responsible, and human-centered artificial intelligence after California’s governor vetoed sweeping AI safety legislation.
“My hope for this space is that we can achieve consistency and understand the full benefits of this technology,” said Terra Terwilliger, director of strategic initiatives at Google DeepMind, the company’s AI research division. She spoke Wednesday at Fortune’s Most Powerful Women Summit, alongside January AI CEO and co-founder Nousheen Hashemi, Eclipse Ventures general partner Aidan Madigan-Curtis, and Dipti Gulati, CEO of Audit & Assurance at Deloitte & Touche LLP US.
The women addressed the much-debated California bill SB-1047, which would have required developers of the largest AI models to meet certain safety-testing and risk-mitigation requirements. Madigan-Curtis suggested that if companies like OpenAI are really building models as powerful as they claim, they should have some legal obligation to build them safely.
“That’s how our system works, right? It’s push and pull,” Madigan-Curtis said. “Part of what makes being a doctor scary is the possibility of being sued for medical malpractice.”
She pointed to the now-defunct California bill’s “kill switch” provision, which would have required companies to create a way to shut down their models if they were somehow being used for something catastrophic, such as building weapons of mass destruction.
“If your model is being used to terrorize certain people, shouldn’t we be able to disable it or prevent its use?” she asked.
DeepMind’s Terwilliger wants regulations that take into account the different levels of the AI stack. She said that the underlying model has different responsibilities than the applications that use it.
“It is critical that we all help regulators understand these differences in order to achieve stable and meaningful regulation,” she said.
But Terwilliger said it doesn’t have to be the government that drives responsible AI development. Even amid shifting regulatory requirements, building AI responsibly will be key to the technology’s long-term adoption, she added, and that applies to every level of the technology, from making sure data is clean to setting guardrails for models.
“I think we have to believe that responsibility is a competitive advantage, so understanding how to be responsible at every level of that stack is going to make a difference,” she said.