In graduate school, I remember a professor suggesting that the rational expectations revolution would eventually lead to better models of macroeconomics. I was skeptical, and in my view that never happened.
That is not because there is anything wrong with the rational expectations approach to macro; I strongly support it. Rather, I believe the gains from this theoretical innovation occurred very rapidly. By the time I made this argument (around 1979), people like John Taylor and Stanley Fischer had already grafted rational expectations onto sticky wage and price models, which contributed to the New Keynesian revolution. Since then, macro seems to have been stuck in a rut, apart from some later innovations by the Princeton School related to the zero lower bound problem.
In my view, the most useful applications of new conceptual approaches tend to be realized quickly in highly competitive fields such as economics, science, and the arts.
Over the past few years, I’ve had many interesting conversations with young people working in the field of artificial intelligence. These people know far more about AI than I do, so I’d advise readers to take the following with a grain of salt. In these discussions, I sometimes expressed skepticism about the pace of future improvement in large language models like ChatGPT. My argument was that there are pretty sharply diminishing returns to exposing an LLM to additional data.
Consider someone who has read and understood ten carefully selected books on economics (say, micro and macro principles texts, plus intermediate and advanced textbooks). Anyone who fully understood that material would actually know quite a bit of economics. Now have them read another 100 carefully selected textbooks. How much more economics do they know? Probably not 10 times as much. Indeed, I doubt they know even twice as much. I suspect the same is true of other fields such as biochemistry and accounting.
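To see how sharply this sort of diminishing returns can bite, here is a toy back-of-the-envelope sketch. It is purely my own illustration, not a model of LLM training: I simply assume, for the sake of argument, that “knowledge” grows with the logarithm of the number of books read. Under that assumption, going from 10 books to 110 books, an elevenfold increase in reading, only roughly doubles one’s knowledge.

```python
# Toy illustration of diminishing returns (an assumed functional form,
# not anyone's actual model of learning or of LLM scaling).
import math

def knowledge(books_read: int) -> float:
    """Hypothetical knowledge index, assuming logarithmic returns to reading."""
    return math.log(books_read + 1)

base = knowledge(10)    # after 10 carefully selected books
more = knowledge(110)   # after 100 additional books

print(f"10 books:  {base:.2f}")
print(f"110 books: {more:.2f}")
print(f"ratio:     {more / base:.2f}x")  # roughly 2x, not 11x
```

The logarithm here is just a placeholder; the point is only that when returns diminish sharply, piling on more textbooks (or more training data) buys less and less.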
This Bloomberg article caught my attention:
OpenAI was on the verge of a milestone. In September, the startup finished an initial round of training for a massive new artificial intelligence model that it hoped would significantly outperform prior versions of the technology behind ChatGPT and move it closer to its goal of AI more powerful than humans. But the model, known internally as Orion, fell short of the performance the company wanted. Indeed, Orion stumbled when trying to answer coding questions it hadn’t been trained on. And OpenAI isn’t the only company hitting stumbling blocks lately. After years of rolling out increasingly sophisticated AI products, the three largest AI companies are now seeing diminishing returns from their hugely expensive efforts to build new models.
Please don’t take this to mean I’m an AI skeptic. I believe the recent advances in LLMs are very impressive, and that AI will ultimately transform the economy in some fundamental ways. Rather, my point is that progress toward some sort of super general intelligence may occur more slowly than some of its proponents expect.
How could I be wrong? Perhaps artificial intelligence can be improved in ways other than exposing models to ever larger data sets; in other words, the so-called “data wall” might be overcome through other methods of boosting intelligence. But if Bloomberg is right, LLM development has recently stalled a bit due to diminishing returns from additional data.
Is this good news or bad news? It depends on how much weight you place on the risks associated with developing ASI (Artificial Super Intelligence).
Update: Tyler Cowen offers some closely related views on this topic.