Google CEO Sundar Pichai revealed Tuesday that AI systems now generate more than a quarter of new code for the company’s products, with human programmers overseeing the computer-generated contributions. The statement, made during Google’s Q3 2024 earnings call, shows that AI tools are already having a sizable impact on software development.
“We also use AI internally to improve our coding processes, which increases productivity and efficiency,” Pichai said on the call. “Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.”
Google developers aren’t the only programmers using AI to assist with coding tasks. While hard numbers are difficult to come by, Stack Overflow’s 2024 Developer Survey found that more than 76 percent of all respondents said they are using or plan to use AI tools in their development process this year, with 62 percent actively using them. A 2023 GitHub study found that 92 percent of US-based software developers “already use AI coding tools both in and outside of work.”
AI-assisted coding first debuted in a big way with GitHub Copilot’s preview in 2021, and the tool became widely available in June 2022. It was powered by OpenAI’s Codex, an AI model specialized for code that could suggest continuations of existing code and create new code from scratch based on English instructions. Since then, AI-based coding has expanded significantly, with constantly improving offerings from Anthropic, Meta, Google, OpenAI, and Replit.
GitHub Copilot itself has also expanded. Just yesterday, the Microsoft-owned subsidiary announced that, for the first time, developers can choose non-OpenAI models, such as Anthropic’s Claude 3.5 and Google’s Gemini 1.5 Pro, to generate code within the application.
While some tout the benefits of using AI in coding, critics worry that future software produced partially or largely by AI will be riddled with bugs and errors that are difficult to detect.
A 2023 Stanford University study found that developers who use AI coding assistants paradoxically tend to believe their code is more secure, even as it contains more bugs. Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, highlighted this finding, telling Wired that AI-assisted coding probably involves “both benefits and risks,” adding that “more code isn’t better code.”