In 2014, Oxford philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title “Superintelligence: Paths, Dangers, Strategies”. It proved highly influential in promoting the idea that advanced AI systems, or “superintelligences” more capable than humans, might one day take over the world and destroy humanity.
Ten years later, Sam Altman, the head of OpenAI, says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI co-founder Ilya Sutskever set up a team inside the company to focus on “safe superintelligence”, but he and his team have since raised US$1 billion to create a startup of their own to pursue that goal.
What on earth are they talking about? Broadly speaking, superintelligence refers to something that is more intelligent than humans. But it can be a little difficult to understand what that actually means.
Different types of AI
In my view, the most useful way to think about the different levels and types of intelligence in AI was developed by American computer scientist Meredith Ringel Morris and her colleagues at Google.
Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso and superhuman. It also makes an important distinction between narrow systems, which can carry out only a small range of tasks, and more general systems.
A narrow, non-AI system is something like a calculator: it performs various mathematical tasks according to a set of explicitly programmed rules.
There are already many highly successful narrow AI systems. Morris points to the Deep Blue chess program, which famously defeated world champion Garry Kasparov in 1997, as an example of a narrow virtuoso-level AI system.
Some narrow systems even have superhuman capabilities. One example is AlphaFold, which uses machine learning to predict the structure of protein molecules and whose creators won this year’s Nobel Prize in Chemistry.
What about general systems? These are systems that can tackle a much wider range of tasks, including things such as learning new skills.
A general system without AI might be something like Amazon’s Mechanical Turk. It can carry out a wide variety of tasks, but it does so by asking real people to do them.
Overall, general AI systems are far less advanced than their narrow counterparts. According to Morris, the state-of-the-art language models behind chatbots such as ChatGPT are general AI, but so far only at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”), and not yet “competent” (matching at least the 50th percentile of skilled adults).
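To make the two axes concrete, here is a minimal sketch (my own illustration, not code from Morris and her colleagues) that places the examples from this article onto the framework’s grid of performance level and breadth:

```python
# Toy representation of the Morris et al. framework, using the examples
# discussed in this article. The level names come from the framework;
# the placements are illustrative only.
from enum import IntEnum

class Level(IntEnum):
    NO_AI = 0
    EMERGING = 1
    COMPETENT = 2
    EXPERT = 3
    VIRTUOSO = 4
    SUPERHUMAN = 5

# (system, breadth, performance level)
examples = [
    ("pocket calculator",      "narrow",  Level.NO_AI),
    ("Deep Blue (chess)",      "narrow",  Level.VIRTUOSO),
    ("AlphaFold",              "narrow",  Level.SUPERHUMAN),
    ("Amazon Mechanical Turk", "general", Level.NO_AI),
    ("ChatGPT-style chatbots", "general", Level.EMERGING),
]

for name, breadth, level in examples:
    print(f"{name:24s} {breadth:8s} {level.name.lower()}")
```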
So, by this reckoning, we are still a long way from general superintelligence.
How intelligent is AI today?
As Morris points out, determining exactly where a particular system stands requires reliable testing and benchmarking.
Depending on the benchmarks we use, an image-generating system such as DALL-E might be at virtuoso level (because it can produce images that 99% of humans could not draw or paint), or it might be merely emerging (because it produces errors no human would make, such as mutant hands or physically impossible objects).
There is significant debate even about the capabilities of current systems. One notable 2023 paper argued that GPT-4 showed “sparks of artificial general intelligence”.
OpenAI says its latest language model, o1, can perform “complex reasoning” and “matches the performance of human experts” on many benchmarks.
However, a recent paper from Apple researchers found that o1 and many other language models struggle with genuine mathematical reasoning problems. Their experiments suggest the outputs of these models resemble sophisticated pattern matching rather than true advanced reasoning, which indicates that superintelligence is not as imminent as many have suggested.
Will AI continue to get smarter?
Some believe that the rapid advances in AI over the past few years will continue or even accelerate. This doesn’t seem impossible, as tech companies are investing hundreds of billions of dollars in AI hardware and capabilities.
If this happens, general superintelligence may indeed arrive within the “few thousand days” suggested by Sam Altman (in less science-fiction terms, a few thousand days is roughly a decade). Sutskever and his team mentioned a similar timeframe in their superalignment article.
Much of the recent success in AI has come from the application of a technique called “deep learning”, which, in simple terms, finds associative patterns in large collections of data. Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and the “godfather of AI” Geoffrey Hinton for inventing the Hopfield network and the Boltzmann machine, which underpin many of the powerful deep learning models in use today.
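To give a flavor of what “finding associative patterns” means, here is a minimal, illustrative Hopfield network in Python. It is a toy sketch of the idea the prize recognized, not the laureates’ code, and the eight-unit pattern below is made up for the demo: the network stores the pattern in a weight matrix and recovers it from a corrupted copy.

```python
# Toy Hopfield network: an associative memory that stores +/-1 patterns in a
# weight matrix (Hebbian learning) and recalls them from corrupted inputs.
import numpy as np

def train(patterns):
    """Build the weight matrix from an array of +/-1 patterns (one per row)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)        # Hebbian rule: strengthen co-active units
    np.fill_diagonal(W, 0)         # no self-connections
    return W / len(patterns)

def recall(W, state, steps=5):
    """Repeatedly update the state until it settles on a stored pattern."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])  # one 8-unit pattern (illustrative)
W = train(stored)

noisy = stored[0].copy()
noisy[[1, 4]] *= -1                # flip two units to corrupt the pattern
print(recall(W, noisy))            # prints the original stored pattern
```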
Popular systems such as ChatGPT rely on human-generated data, often in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data they are trained on.
However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data and improve the transfer of skills between different domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.
A recent paper suggests that an essential feature of superintelligence, at least from a human perspective, is open-endedness. It must be able to continually generate output that human observers can perceive as novel and learn from.
Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. The paper also emphasizes that novelty or learnability alone is not enough: a new type of open-ended foundation model is needed to achieve superintelligence.
What are the risks?
So what does all this mean for AI risk? At least in the short term, we don’t need to worry about superintelligent AI taking over the world.
But that is not to say AI poses no risks. Again, Morris and her colleagues have thought this through: as AI systems gain greater capability, they may also gain greater autonomy, and different levels of capability and autonomy present different risks.
For example, when AI systems have little autonomy and people use them as a kind of consultant, such as asking ChatGPT to summarize a document or letting YouTube’s algorithm shape their viewing habits, there is a risk of over-trusting or over-relying on them.
Meanwhile, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with them to mass job displacement and a general societal malaise.
What’s next?
Let’s assume that one day we will have superintelligent, fully autonomous AI agents. Would we then face the risk of them concentrating power or acting against humanity’s interests?
Not necessarily. Autonomy and control can go hand in hand: a system can be highly automated while still providing a high degree of human control.
Like many people in the AI research community, I believe that safe superintelligence is possible. However, building it will be a complex and interdisciplinary task, and researchers will have to tread uncharted roads to get there.