Hello and welcome to Eye on AI! In this newsletter… Elon Musk reportedly raises xAI funding at a $40 billion valuation… Microsoft’s GitHub Copilot expands beyond OpenAI to Anthropic and Google models… AI ‘slop’ abounds on Medium… Apple Intelligence receives mixed reviews… US finalizes rules to rein in AI investment in China
A research paper claiming that 40% of U.S. adults have used generative AI at work or at home is making the rounds on social media as evidence that people are rapidly adopting generative AI tools. The study notes that this figure is higher than the PC adoption rate in the early ’80s, when only 20% of people were using the then-new computers three years after their introduction.
The paper, titled “Rapid Adoption of Generative AI” and published last month by the National Bureau of Economic Research, is by researchers at the Federal Reserve Bank of St. Louis, Vanderbilt University, and Harvard Kennedy School. It is based on responses from just over 5,000 people, a nationally representative sample of the U.S. population.
But Arvind Narayanan, a Princeton University computer science professor and co-author of the recently published AI Snake Oil, called the paper a “hype case study” in a post on the social media service X. He pointed out that the 40% figure includes anyone who did something as trivial as asking ChatGPT to write a limerick once in the past month. In fact, the paper found that generative AI assistance accounts for only 0.5% to 3.5% of working hours, that only 24% of workers had used it at least once in the week before the survey, and that only about one in nine used it every workday.
“Compared to what AI boosters were predicting after the release of ChatGPT, this is a glacial pace of adoption,” Narayanan wrote, noting that people who spent thousands of dollars on early PCs weren’t using them just once a month.
Based on what I’ve seen in my own world, I lean toward the more sober view of AI adoption. Most people I know don’t use generative AI tools at all, and many don’t know what I’m talking about when I mention anything other than ChatGPT. In fact, my husband is one of the few people I know who fits into the “superuser” category the Washington Post reported on yesterday: people who regularly use tools like ChatGPT, Google’s Gemini, and Anthropic’s Claude to learn new skills, create reports, analyze data, and research topics.
However, I believe this will change relatively quickly, because it is becoming nearly impossible for consumers to avoid AI text, image, audio, and video generation tools. If you use Google, you’ll see an AI Overview at the top of many searches. Google Docs has been nudging me to use its AI assistant for months. Apple Intelligence just arrived on the iPhone, and Microsoft Copilot is built into everything from Word to Excel. As for Meta, its AI assistant is unavoidable on Facebook, Instagram, and WhatsApp. Every day, consumers try one of these tools, get intrigued by the results, and come back to try another task.
Take my own experience. One of the most common questions people ask me is whether I use generative AI tools. The answer is yes, but so far I’ve found them useful only for certain tasks. Article headlines, for example, are always difficult to write, and sometimes I want feedback. For a long time, I would paste a draft headline into ChatGPT, let the AI suggest a few alternatives, and edit from there.
The results were okay, but never great. Then, a few months ago, I tried a new approach: I simply asked ChatGPT, “What do you think of this headline?” The results have been consistently satisfying. For this essay, for example, I asked ChatGPT what it thought of my draft headline, which read in part: “…but almost everyone still experiences it.”
ChatGPT responded with thoughts on what was working in my headline (contrast, intrigue, relevance), suggestions for improvement (something punchier, with better flow), and a few alternative options. I didn’t love any of them, so we went back and forth several times, and each time I got closer to what I would eventually publish.
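For readers who would rather script this kind of feedback loop than use the chat interface, here is a minimal sketch of the same idea using the OpenAI Python SDK. It is not my actual workflow; the model name, system prompt, and placeholder headline are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

draft_headline = "Your draft headline goes here"  # placeholder, not a real headline

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice; any chat model works
    messages=[
        {"role": "system", "content": "You are a candid editor. Critique headlines and suggest improvements."},
        {"role": "user", "content": f"What do you think of this headline?\n\n{draft_headline}"},
    ],
)

# Print the critique; in practice you would revise the draft and repeat the loop.
print(response.choices[0].message.content)
```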
It remains to be seen whether generative AI will reach the rapid mass adoption that businesses and investors have predicted. Some tools have been criticized as “half-baked” (as the New York Times reported yesterday about Apple Intelligence’s new features), and plenty of generative AI products will fall away because users don’t find them useful. But good luck avoiding generative AI in the meantime: it is already everywhere, whether you’re using it yet or not, and companies want you to keep trying it, again and again.
So, here’s more AI news for you.
Sharon Goldman
sharon.goldman@fortune.com
@SharonGoldman
Request an invitation to the Fortune Global Forum in New York City on November 11 and 12. Speakers will include Honeywell CEO Vimal Kapur and Lumen CEO Kate Johnson, who will discuss the impact of AI on work and the workforce, and Qualtrics CEO Zig Serafin and McKinsey Senior Partner and Chairman of North America Eric Kutcher, who will discuss how companies can build the data pipelines and infrastructure they need to compete in the AI era.
AI in the news
Elon Musk’s xAI is reportedly seeking funding at a $40 billion valuation. Just a few months ago, the startup raised $6 billion at a $24 billion valuation. According to the Wall Street Journal, the talks are at a very early stage, but people familiar with them said xAI hopes to raise several billion dollars in the new round.
Microsoft’s GitHub Copilot goes beyond OpenAI. Here’s another sign that Microsoft’s relationship with OpenAI, in which it is a major investor, is becoming less exclusive: GitHub Copilot, one of Microsoft’s most successful AI products and a popular code-generation tool, will give developers the option to use models not only from OpenAI but also from Anthropic and Google. “There is no single model that governs all scenarios, and developers expect the agency to build using the model that works best for them,” GitHub CEO Thomas Dohmke said in a blog post.
AI “slop” is flooding Medium. “Slop” has become a popular term for low-quality AI-generated content and images: think clickbait articles, keyword-stuffed blog posts, and photos of people with six fingers. According to Wired, this kind of slop abounds on the blogging platform Medium, far more than on other websites. An analysis by AI-detection startup Pangram Labs, commissioned by Wired, found that 47% of roughly 275,000 Medium posts sampled over a six-week period were likely AI-generated. “This is orders of magnitude larger than what we see in other parts of the internet,” said Pangram CEO Max Spero.
Apple Intelligence is here, but people want it to get smarter. Most of the publications covering Apple Intelligence’s new iPhone features had a reaction similar to The Verge’s: “Like most AI in smartphones to date, it’s mostly underwhelming.” I have a Samsung Galaxy, so I wasn’t able to test Apple Intelligence myself, but it’s clear that more features are on the way. In yesterday’s launch announcement, Apple teased features that haven’t yet debuted, such as using Siri to take actions inside apps. For now, users get features such as AI summaries, email writing tools, and call transcription.
US finalizes rules to curb AI investment in China. According to Reuters, the Biden administration said yesterday it is finalizing rules that would limit U.S. investment in Chinese AI and other technology fields deemed a potential threat to national security. The rules, first proposed by the U.S. Treasury Department in June, follow an executive order signed by President Joe Biden in August 2023 and target three key areas: semiconductors and microelectronics, quantum information technologies, and certain AI systems.
Fortune on AI
Exclusive: “AI Colleagues” startup raises $8.7 million in seed round led by General Catalyst — by Sheryl Estrada
Citi moves critical infrastructure to Google Cloud as part of broader AI push — by Michael Del Castillo
Salesforce CEO Marc Benioff claims Microsoft has done the AI industry a ‘huge disservice’ — by Chloe Berger
Nvidia’s billionaire CEO says AI can do more than take your job — by Chloe Berger
OpenAI suffers from the departure of yet another AI safety expert and new claims of copyright infringement — by David Meyer
AI calendar
October 28-30: Voice & AI, Arlington, VA.
November 19-22: Microsoft Ignite, Chicago
December 2-6: AWS re:Invent, Las Vegas
December 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
December 9-10: Fortune Brainstorm AI, San Francisco (register here)
Eye on AI research
Opportunities for LLMs in lesser-used languages. In February, Cohere for AI, the nonprofit research institute founded by Cohere in 2022, announced Aya, an open-source large language model that supports 101 languages, more than double the number covered by existing open-source models. The researchers also released the Aya dataset, a corresponding collection of human annotations gathered from a community of “language ambassadors” around the world. That matters because one of the obstacles for less common languages is a lack of source material for training AI models.
Last week, the organization released Aya Expanse, another family of models aimed at helping researchers close the AI language gap. “We are a small institute, but this is based on years of dedicated research to connect the world with language,” said Sara Hooker, a former Google Brain researcher who has led Cohere for AI since its founding.
Brain food
Can AI models predict “gray swan” weather events like Category 5 tropical cyclones? AI models have been used for years to improve weather and climate predictions. But can they predict “gray swan” weather extremes such as Category 5 tropical cyclones, events that are possible but so rare that they are often missing from the training data? That was the question researchers at the University of Chicago wanted to address in a recent paper. They trained one version of an AI model on all of the data and another version with Category 3 to 5 tropical cyclones removed.
The researchers wanted to know whether the model with those cyclones removed could extrapolate stronger, unseen extreme events from the weaker events present in its training set. Unfortunately, the answer was no. “Our research shows that new learning strategies are needed for AI weather/climate models to provide early warning and inferential statistics for the rarest and most impactful extreme weather events,” the researchers wrote.
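To make the experimental setup concrete, here is a minimal, hypothetical sketch of that kind of data ablation in Python. The file name and column names are assumptions for illustration; this is not the researchers’ actual pipeline, only the idea of preparing one training set with all storms and another with Category 3 to 5 cyclones filtered out.

```python
import pandas as pd

def saffir_simpson_category(max_wind_kt: float) -> int:
    """Rough Saffir-Simpson category from maximum sustained wind in knots."""
    for threshold, category in [(137, 5), (113, 4), (96, 3), (83, 2), (64, 1)]:
        if max_wind_kt >= threshold:
            return category
    return 0  # below hurricane strength

# Hypothetical storm-track table with a "max_wind_kt" column.
storms = pd.read_csv("storm_tracks.csv")
storms["category"] = storms["max_wind_kt"].apply(saffir_simpson_category)

full_training_set = storms                             # version 1: all events kept
ablated_training_set = storms[storms["category"] < 3]  # version 2: Cat 3-5 removed
```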