We’d like to welcome a new voice here at Econlib, Sam Enright. Sam works on innovation policy at Progress Ireland, an independent policy think tank in Dublin, and runs the publication Fitzwilliam. Most relevant to us, he writes a popular round-up of links on his personal blog, with short commentary on the most interesting things he has read, seen, and heard in the last month. His “link posts” are sometimes affectionately mocked for their astonishing length. Below is a condensed version of his links from October.
Blog posts and short links
1. Eva Huang’s theory of friendship. (I agree with this theory.)
2. We don’t have to choose between the environment and economic growth.
3. Free-market economics works surprisingly well. As Noah Smith points out in this article, the gains Argentina’s economy has seen so far under Milei are probably due mostly to orthodox macroeconomic stabilization; it is too early to tell whether his other reforms will succeed. An alternative title might have been “We All Owe the IMF an Apology.”
4. The only countries that tax their non-resident citizens on worldwide income are the United States and Eritrea. This is a wiki on the other economic and legal restrictions American citizens face after moving abroad, including not being able to invest in Individual Savings Accounts (ISAs), the main tax-sheltered savings vehicle in the UK. It’s from Bogleheads, a website for people who really like John Bogle. [1]
5. Eventually, we’ll all learn to love congestion pricing.
6. A whirlwind tour of Chile’s economic history by Sebastian Galen. More on this from us soon.
“I would like to thank Sam Enright and Fitzwilliam for leading me on this quest.”
Music and podcasts
7. Chakravarti Rangarajan talks about what has happened to Indian monetary policy since liberalization in 1991. I had no idea how much of an issue fiscal dominance was (or even what it was) in India before the 1990s.
8. Dmitri Shostakovich, Symphony No. 8, and a related Sticky Notes episode. It’s darker and more complicated than the triumphant Symphony No. 7, so the Seventh is the better place to start. I think I can hear some cautious optimism about the advance of the Red Army in it, but in general I’m more interested in how composers become associated with specific historical episodes (No. 8 premiered in 1943, No. 7 in 1942).
9. Tabla Beat Science, Tala Matrix. This is another of Zakir Hussain’s bands. If you haven’t read Shruti Rajagopalan’s obituary of Zakir, it’s the best thing I’ve found written about Indian music.
10. Richard Sutton, the father of reinforcement learning, on why he thinks LLMs are a dead end. When will I learn the bitter lesson that I’m not smart enough to follow these podcasts on audio alone and need to switch to reading the transcripts?
Papers
11. P. W. Anderson, “More Is Different: Broken Symmetry and the Nature of the Hierarchical Structure of Science.” I had heard the title of this paper many times, but it never occurred to me to actually read it. Anderson argues for a kind of anti-reductionist pluralism, similar (I think?) to what Daniel Dennett says in “Real Patterns.” It’s been a while since I thought about these issues, but from what I remember I was struck by how philosophically confused the claim that “chemistry is just applied physics” is. I also read a fifty-year retrospective by Steven Strogatz and others. Sociologically, it is fascinating that a non-philosopher could write such a widely discussed piece of philosophy in just four pages.
12. Richard Sutton, “The Bitter Lesson.” Since I was reading Sutton anyway, I thought I might as well read this famous essay. The lesson in question:
“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin… We have to learn the bitter lesson that building in how we think we think does not work in the long run.”
One thing I learned from Sutton is that the more general approaches to building AI (scaling up computation and avoiding the hand-crafted symbolic representations of GOFAI) used to be called “weak methods.” People really did believe that scaling wouldn’t work, and honestly, who could blame them?
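To make the contrast concrete, here is a toy sketch of my own (nothing from Sutton’s essay): a tic-tac-toe player with hand-coded knowledge of the sort the bitter lesson warns against, next to a generic brute-force search that knows nothing beyond the rules and simply spends computation. The move preferences, function names, and the negamax formulation are all my own choices for the example.

```python
from functools import lru_cache

# Winning lines for a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic_move(board):
    """'Built-in knowledge': take the centre, then a corner, then whatever is left."""
    for i in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[i] == ".":
            return i

@lru_cache(maxsize=None)
def search_value(board, player):
    """General method: exhaustive negamax search, no game-specific knowledge at all."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    moves = [i for i, c in enumerate(board) if c == "."]
    if not moves:
        return 0  # draw
    opponent = "O" if player == "X" else "X"
    return max(-search_value(board[:i] + player + board[i + 1:], opponent) for i in moves)

def search_move(board, me):
    """Pick the move with the best search value for `me`."""
    opponent = "O" if me == "X" else "X"
    moves = [i for i, c in enumerate(board) if c == "."]
    return max(moves, key=lambda i: -search_value(board[:i] + me + board[i + 1:], opponent))

board = "OX..X...."                 # O to move; X (squares 1 and 4) threatens to win at square 7
print(heuristic_move(board))        # 2 -- the hand-coded rule grabs a corner and ignores the threat
print(search_move(board, "O"))      # 7 -- brute-force search finds the only move that doesn't lose
```

On the sample position at the end, the hand-coded player cheerfully ignores an immediate threat, while the search player finds the only move that does not lose on the spot. The search approach carries over to any game you can enumerate; the heuristic has to be re-authored every time.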
13. David Silver and Richard Sutton, “Welcome to the Era of Experience.” [2] I read this accessible essay as part of a machine learning reading group with the kind folks at the coworking space Mox. They have a cool group called the 90/30 Club, which works through Ilya Sutskever’s list of roughly 30 AI papers, one each week: “if you really learn all of these, you’ll understand 90% of what’s important today.” At some point they seem to have finished the list and moved on to other papers. I didn’t think I’d be able to keep up with legendary “cracked” (is that the right word?) San Francisco engineers, but thankfully I had also listened to Sutton on the Dwarkesh podcast in preparation.
To be honest, I found the intense excitement of the Bay Area overstimulating, and I ended up feeling depressed and having trouble concentrating during my visit. What I love about Dublin is that you feel like you can meet just about anyone with a particular interest. Small ponds are underrated.
In any case, the basic argument of Silver and Sutton’s paper is that AI has reached the limit of what it can learn from human-generated data, and that in the future AI will learn primarily from experience: trial and error, interaction with the world, and so on. On this view, achieving superintelligence would require the fabled paradigm shift and would lean heavily on reinforcement learning. The important graph is on page 6:
Figure 1: Schematic timeline of major AI paradigms. The y-axis shows the share of the field’s total effort and computation devoted to reinforcement learning. From Silver and Sutton, “Welcome to the Era of Experience.”
They have a more detailed story about state-of-the-art AI being steered by human preferences and feedback, which I couldn’t quite follow. The paper came out in April and will (eventually) appear in a book called Designing an Intelligence, which I’ll pre-order as soon as I know the release date.
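Since “learning from experience” can sound vague, here is a minimal sketch of the underlying idea, entirely my own (a textbook epsilon-greedy bandit loop rather than anything from the paper); the function name, parameters, and reward model are invented for illustration. The agent improves its value estimates purely from the rewards its own actions produce, with no human-labelled data in the loop.

```python
import random

def run_bandit(true_means, steps=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error on a toy multi-armed bandit."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms   # the agent's learned value of each action
    counts = [0] * n_arms
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])
        # The "experience": a noisy reward from the environment, not a human label.
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean
    return estimates

print(run_bandit([0.1, 0.5, 0.9]))  # the estimate for the last arm should come out highest
```

Run it and the estimate for the best arm ends up on top purely through trial and error; scaling that idea up to rich environments is, very roughly, the bet Silver and Sutton are describing.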
This is all pretty heavy stuff and it’s starting to make my head hurt, so I’ll end this section with some recent wisdom from my friend David:
“They should call the opposite of AI doomers ‘sloptimists.’”
You can read the full version of this post here.
[1] Reading this reminded me of a 2023 Marginal Revolution post about how John Bogle should have won a (hypothetical) Nobel Prize for the practice of economics.
[2] The name David Silver didn’t ring a bell when I heard it, but I just realized I saw him in the great documentary about AlphaGo.
