ALISON BEARD: I’m Alison Beard, and this is the HBR IdeaCast. Harvard Business Review recently hosted the HBR Strategy Summit 2026, a day filled with expert advice and guidance from executives and academics. We’re sharing the highlights of the event in this special IdeaCast series.
Today, our final episode of the series is a masterclass from Andy McAfee, principal research scientist at MIT and cofounder and codirector of the MIT Initiative on the Digital Economy at its Sloan School of Management. In the session, he explains how businesses are building the right cultures to truly succeed with AI. He explains the rise of what he calls geek organizations and the gap that AI is creating between the best and the rest. He also takes audience questions with the help of HBR editor at large, Adi Ignatius. Here’s that masterclass.
ANDREW MCAFEE: I want to talk about a top-of-mind topic for a lot of people these days. Who’s going to succeed? What’s going to distinguish successful AI adopters from the also-rans in this era, right where we are now, where there’s just substantial uncertainty about what’s going on with AI. I want to grab some headlines from just the past couple days that underscore how profound this uncertainty is. I’m going to start with a headline from my cofounder, colleague, very good economist, Erik Brynjolfsson, who wrote an editorial in the Financial Times a while back saying, “The AI productivity takeoff is already here.” We’ve been waiting for it. We have been relying on this insight from Robert Solow from a while back: “We see evidence of the computer age everywhere, except in the productivity statistics.” That has been true for AI. Erik says that that era is finally over, and now we can anticipate moving into an era of really high productivity growth because of AI.
However, a recent survey of lots of CEOs around the world says, “Not so fast.” We’re actually not seeing it. When we look around at the ROI of our AI investments, at the productivity boost or the KPI boost that our organizations have received from AI, it appears to not be there yet. Another very good economist who is still at MIT, Daron Acemoglu … One sign that he’s a pretty good economist is that he’s got the Nobel Prize. He was just in an interview with MIT Technology Review saying that there’s not any productivity benefit from AI.
We’re in this era of deep, deep uncertainty. There was one chart that appeared in the FT a little while back that I think summarizes where we are beautifully. It said, “Okay, look, here are three possibilities.” Either AI is going to end scarcity and bring us into an era of perpetual abundance. That’s that crazy red line that’s going up. Or the Terminator’s going to come and drive us to extinction. That’s the line going down. Or maybe it’s just going to improve the normal march of productivity growth by some relatively modest amount. That’s that continuation of the trend line. This is a very, very wide range of uncertainty, to say the least.
I love the way that Derek Thompson framed it in his Substack just yesterday. He said, “Look, nobody knows anything. It is too early in the rollout of modern AI, of generative AI. It’s clearly a powerful technology. It can clearly do a lot. There’s reason to be optimistic about it. There’s reason to be pessimistic about it. It’s just too early to know for sure.” I love that way of thinking about it because I do think we’re extremely early in this era of AI diffusion, and its changes are not clear yet, especially when you look hard at the evidence.
What do you do in an era where there’s potentially a big deal, where a transformative technology perhaps has appeared and is starting to diffuse, in many cases rapidly, throughout organizations and throughout the economy, but nobody knows anything definitive about where this is taking us?
I want to propose a very simple three-part playbook for succeeding with AI in the circumstances that we’re in now where nobody knows anything. The first part of the playbook is you got to make a bet. You’ve got to commit. I strongly suggest you commit in the pro-AI direction. Stop equivocating. Stop sitting on the fence. Commit as an organization to AI. And one of the ways you show that commitment is by making AI an OKR. In other words, make AI something that you expect out of the people in your organization. Communicate that expectation. Make it clear. Repeat it over and over again. Eliminate the idea that AI is one of the many things that are going on or that it’s a flavor-of-the-month technology. Communicate the expectation. And then the other thing that good OKR companies do, they measure progress toward the goal. If you have a goal about time savings or value savings or just raw amount of AI use, great. Set that goal, and then measure progress toward it over time.
The second really important part of the AI playbook when nobody knows anything was driven home to me by Steve Jurvetson, who is a very good venture capitalist and was an early investor in both SpaceX and Tesla. And when I was researching The Geek Way, he drove home to me the importance of the difference between two schools of thought about how to execute complicated, interdependent projects in an environment of high uncertainty. And to oversimplify a little bit, those two schools are the waterfall method and the agile method. And waterfall is an upfront, planning-heavy method for executing big projects successfully. What waterfall says is, look, we have to get this right. This is a high-stakes environment. And the way to do that is to sit around with a lot of smart people, anticipate how it’s going to unfold, anticipate every contingency, plan for those contingencies, game it out, and make sure when you start that you know how it’s going to go.
The main problem with waterfall is that it doesn’t work. And the reason it doesn’t work was beautifully summarized by Clay Shirky a while back when he said, “Waterfall is a pledge on the part of everybody involved not to learn anything while doing the actual work.” When nobody knows anything, when the future is that uncertain, you cannot plan your way to success. You can’t anticipate all the contingencies that are going to come up and what the right response is to all those forks, all those branches, all those contingencies.
When I interviewed Steve for The Geek Way … And I was incredibly eager to interview him because about 10 years ago, I met him in Silicon Valley at a conference. And when I said, “What’s up?” Steve said, “I’ll tell you what’s up. SpaceX is going to bathe the world in abundant, cheap internet connectivity from space.” They had not yet launched their first Starlink satellite, and I thought Jurvetson was nuts. I thought he had been drinking too much of the Silicon Valley Kool-Aid because my naive thinking was, we’re 60 years into the age of space communications. If it were really possible to bathe the world in abundant, cheap bandwidth from space, wouldn’t one of the incumbents in the aerospace and communications industry have figured that out? Well, none of them did, and this complete upstart organization did instead. So, it occurred to me that Jurvetson was seeing some things and was aware of some things that I wasn’t. So, when I researched The Geek Way, I asked him for an interview. He very kindly consented. I want to give you a quote that he laid on me that has stuck with me ever since. And it’s about this distinction between a planning-heavy, upfront-heavy approach like waterfall and what the geeks do, which is to take a very agile approach to try things, to get rapid, valid feedback, to incorporate that feedback as quickly as possible, and then to go back out in the world and try things again. It’s kind of the ultimate industrial-strength learning-by-doing approach.
And what Steve said to me is, he said, “Andy, you have to understand, the companies in my ecosystem, the way that we learned to build software, the agile way that we started to build software about 25 years ago, that’s now the agile way that we build everything. Hardware, software, it doesn’t matter. Atoms and bits, it doesn’t matter. We take an agile approach because that’s how we maximize our learning.” And I love this. Jurvetson said, “I sometimes think I have a sixth sense. I can see dead companies. They don’t know it yet. They’re dead companies walking, but they’re the walking dead because they’re not responsive enough.” And he said, “The companies that are part of my ecosystem, the ones that I try to help out, the ones that I invest in, we run circles around these waterfall-heavy, planning-heavy incumbents. We innovate every couple years on things that take them close to a decade to do.”
As many of us know, we entered this very new era of software engineering, of coding, just a couple months ago, with the release of some of these amazing models that are so good at coding. I think if I interviewed Jurvetson again today, he would not talk in the timescale of a couple years. He’d probably talk in the timescale of a couple months. So, the premium on learning by doing just becomes higher, because the amount of doing that you can do has increased, and the amount of learning that goes along with that is absolutely going through the roof as well. So, part two of the playbook is to set up a fast-cadence feedback cycle of learning by doing. And then the third part is to spread the best practices that are already in your organization. There was an article from late last year about the power users in any organization.
And at Workhelix, what we’ve found with every single company that we have worked with, is that the distribution of AI use follows a very, very particular shape. There’s a small number of people who are using it a lot, and then most of the people are using it sparingly. And there are a lot of people in every organization we’ve looked at who are still on the sidelines. They’re experimenting, they’re dipping a toe in, or maybe they’re not even doing that much yet. So, there’s a small group of power users. And there’s a large body of people who really could use some help understanding how to put this technology to work.
Now, to us, that implies a gigantic opportunity. Harness what those power users are already doing. Harness what they know, harness the best practices that they’ve figured out, and start to share and spread them, diffuse them as widely as possible. We are in the early stages of an era of very deep re-imagination, re-engineering. Use the term that you want, but we’re going to change the way that work gets done inside companies and between companies. There are a couple schools of thought about how you accomplish that re-imagination. And here again, there’s a contrast between a planning-heavy approach and an iteration-heavy approach.
The planning-heavy approach to re-imagination or re-engineering says, “Look, get a bunch of the smartest people you can find in a room with a whiteboard. Diagram out how work is going to get done in the new world of AI.” And the geeks say, “No.” Let geeks do what they do, which is grab the powerful new tools, look at problems that need to be solved, put those tools to work, and come up with something that does the work in a profoundly different way, a much more enabled way. The geeks say that is actually where the re-imagination’s going to come from.
It’s primarily a bottom-up process supported from the top. So, you’ve already got people starting that re-imagination work. I think one of the goals of an organization in this era of profound uncertainty is to identify those folk, learn what they’re doing, and spread their good ideas as broadly as possible. Adi, let me stop there. I would love to hear what kinds of questions and comments that prompts.
ADI IGNATIUS: Andy, that was great and very provocative. I’m going to start with a question or two of my own, but we do have a lot of questions coming in from participants. I guess with AI, as with any new technology, the obvious question is: is it over-hyped? Is it under-hyped? With this one, for the most part, people are… It’s like number one, two, or three on their strategy list. It’s like, we’re doing this.
ANDREW MCAFEE: We’re doing this.
ADI IGNATIUS: We may experiment our way into it. We may go big time, but we’re doing this. And you just said, even Derek Thompson, I love “nobody knows anything,” but you were kind of urging people to commit. At a certain point, is it an article of faith that AI is worth the commitment, or are you a techno-optimist? I mean, how should we think about this? Because you can argue it both ways.
ANDREW MCAFEE: I am a techno-optimist, Adi, as you know, but we don’t just have to take AI on faith. Economists have come up with three ways to identify what they would call a general-purpose technology, which is economist-speak for a gigantic big deal for the economy, at the level of the steam engine or the internal combustion engine. One of these technologies that comes along and just accelerates the overall growth of the economy. There are three ways to recognize them, and you can recognize them in advance.
Number one, does the technology itself get better quickly? Check, with AI. Number two, does it spawn other complementary innovations? We’ve already got AI-powered self-driving cars. We are very rapidly trying to build and diffuse AI robots that have AI at the core of what they’re able to do. So, I think that’s probably a check as well.
Then the third criterion is, does it remain confined to one or two sectors, one or two economies, or do you start to see it being used all throughout the economy? We’re already seeing AI, generative AI, used throughout the economy. So, when I think about those three criteria and this new chapter of AI, yes, yes, and yes. That’s where my faith… And it’s not faith, that’s where the evidence falls that I rely on to say this actually is a big deal.
ADI IGNATIUS: All right. So, let’s project forward. So, if AI does what it can do, there are two big questions that occur to me and to a lot of people. If these tools are accessible to everybody, how do we think about durable competitive advantage? Where does that come from, number one? And then number two, what in the world does this do to management and to layers of middle management?
ANDREW MCAFEE: Yeah. Those are two big questions. And I hear the same thing you do, which is, wait a minute, if the cost of doing difficult things like writing software goes down, doesn’t that mean that the competitive advantage of being good at software goes down? I don’t think that’s the right way to look at it. The cost of writing software has already been declining. I don’t know. Is it 10X? Is it 1,000X? I don’t know. But it’s at least a couple orders of magnitude cheaper to write software today than it was 10 years ago. So those costs have already come down a lot. Does that mean that everybody is equally good at software today? Absolutely not. The era of powerful software has sharpened, has increased competitive differences between companies. I think AI is absolutely not going to be the great competitive leveler. It’s going to make the distinctions between companies much, much bigger than they are today.
We’re starting to get decent evidence on this from looking at the places where AI has been put to use most heavily, which was in writing software. There’s a really interesting pattern emerging. Adi, you and I both have careers long enough that we’ve been talking about the network organization and the flattening of hierarchies for a long time. We’re finally starting to see that. It turns out that as people get access to this incredibly powerful new AI, they tend to do a little bit less of the communicating, collaborating, coordinating work. Now that work doesn’t go down to zero, but it feels like the administrative busy work of running a project and coordinating other teams, we’re handing that up to the technology. And so the human amount of that is going down. While at the same time, the evidence is showing that the very best people, the people who you really want to unleash on the big problems that you’re confronting, those folk have more time freed up for exploratory work, for innovation, for envelope pushing, which is exactly what we want them to do.
I’ve also heard people say that the skill of running a fleet of agents is very much like the skill of being a very good manager, especially in a high-tech field or a product manager. So I don’t think the kind of skills that we learn about running large, complicated efforts are becoming obsolete at all. I think they’re going to be an incredibly rich blend of human beings involved in that work and pieces of technology involved in that work.
ADI IGNATIUS: So let’s talk about that. So I’m sure there are some people who are listening who are thinking, “Yeah, yeah, we have agents and I’m very comfortable managing them, and maybe in some cases they’re managing me.” For people for whom that still sounds slightly sci-fi or just at some point in the future, talk about that. Make us comfortable with the idea of having agents in our work lives.
ANDREW MCAFEE: So a software agent is just a piece of technology that you don’t have to spoonfeed as much as we’re used to. You can tell it to go do something relatively complicated, hopefully give it pretty clear instructions, and it will go off in many cases for hours on its own, and then come back when it’s got that work done and present its results to you. Now, we had a couple of episodes with this wild new technology OpenClaw where people were letting it maybe have a little bit too much autonomy. Giving it your credit card and telling it to go out and do all of your Christmas shopping for you is probably a very, very bad idea. But with appropriate guardrails in place, we can now set up these technologies, let them go do their thing for longer and longer periods of time, let them interact with each other to double-check, audit, make sure the guardrails are in place, coordinate their activities, and then they come back and give you a progress report.
And very quickly over time, there’s more and more of that progress and fewer and fewer of the bugs. Now, that doesn’t mean that we can just turn agents loose today. That doesn’t mean that we’re going to be able to do that next month, but are these pieces of AI exhibiting greater and greater ability over time to do increasingly complicated things and in some cases to interact with each other as they do so? Absolutely yes, in both cases. And nobody that I talk to says that that’s about to level off.
ADI IGNATIUS: My guardrail would be, I would give the bot my credit card. I would just say, “Don’t buy anything tacky.”
ANDREW MCAFEE: Nice.
ADI IGNATIUS: That’s not happening. So all right, so there are a lot of audience questions coming in. There are a couple that I want to combine here. One is from Moumita, who is a customer success strategist, the other is from Elena, who is an executive director, and they’re interested in success metrics and KPIs. So as we become more productive and efficient with AI, how should we be thinking about success metrics is one question. And for knowledge workers, what do real KPIs look like?
ANDREW MCAFEE: Yeah, there are a couple fascinating parts to that question. I break it down into two areas. AI is only going to move the needle on the KPIs that we care about in an organization if people are using it. So the first thing that we recommend is: are you tracking the usage of your AI? As I said earlier, are you in the case where there’s only a really small group of people who have embraced this technology? Do most of them currently work in software engineering? And is the rest of the organization really still on the sidelines? We can measure that now. What you want is to not just see the power users doing their thing and increasing their use; you want to see broad adoption and you want to see broad increases in use. So the first part of the very rich KPI question is make sure you’re correctly measuring your usage, and you want that usage to be going up.
Now, the reason you’re engaging in all of that use and the reason you’re paying your AI suppliers is because you want some performance measures to get better. And as the questioner points out, some of those performance measures are pretty straightforward. In a call center, for example, you would like agent productivity, customer service rep productivity, to go up. You want call times to go down, you want resolution rates to go up. We’ve got good metrics for that category of knowledge work. For other categories of knowledge work, it’s a lot less clear. I was talking with the head of an investment bank who said, our analysts, what they really do is very complicated analysis, but it looks like the product is a slide deck that we show to a customer. If I told my analysts to be more productive, I hope they wouldn’t just start churning out two times as many decks.
So there are parts of knowledge work that are actually pretty complicated to measure, not impossible, but it’s a little bit more subtle. To me, the really good news is that the economist’s toolkit for measuring changes, even if they’re subtle, even if they’re a little bit distant from an intervention like AI, the toolkit for understanding whether this thing that we invested in caused good things or bad things to happen, that toolkit’s really good right now. We put it to work at Workhelix. It’s part of a thing called the credibility revolution, where our ability to say, “Hey, you tried something. Did it cause the KPI to move in the direction that you want?” is much, much better than it was even 20 years ago. So put that to work, put the credibility revolution to work on your KPIs.
ADI IGNATIUS: So one of the issues that leaves people skeptical is workslop. And this is a question from Suzanne that has a lot of upvotes. So a lot of us are seeing workslop, and by that we mean AI-generated content that looks great but is not so great and maybe creates more work for colleagues. When you look at workslop, do you think of it as a training problem, a culture problem, a strategy problem, something else? It’s a problem.
ANDREW MCAFEE: It’s a problem, and it’s a little bit of all of the above. And the solution that I hope we don’t head toward is my agent generating slop and your agent trying to filter it out so that the stuff that gets to you is only the good stuff. We don’t want slop wars going on inside the enterprise or between companies. I think one of the really important distinctions here, Adi, you’ve heard of this distinction between a process-focused organization and an outcome-focused organization. The geek companies that I studied and that I wrote The Geek Way about, these are very much outcome-focused organizations. They understand the need for process, but what they’re really about are the metrics that are going to indicate whether the business is heading in the right direction or not. How are those metrics doing? I think there’s going to be less demand for workslop in an outcome-focused organization than in a process-focused organization.
A process-focused organization asks, did you bring all your deliverables to the next meeting? Did you check off everything that we said we were going to do? Did you get everything on everybody else’s desk? Whether or not that moved a KPI that any customer would care about.
In process-driven organizations, I think the workslop volume’s already pretty high and unfortunately probably heading higher. In outcome-focused organizations, I expect that situation to be pretty different. There, your job is not to generate documents to satisfy the process. There, your job is to get stuff done that makes a customer happy.
ADI IGNATIUS: So here’s a question that goes back to your discussion of agile versus waterfall. And this is from Rich, who is a founder and CEO. And this is really a request for practical advice. What is one thing that a leader of a traditional company can do to move away from waterfall thinking and that culture and toward adopting an agile mindset?
ANDREW MCAFEE: Wow. That’s a great question. It’s a two-part answer, but the parts are related, so forgive me. The one thing that a leader can do is understand the difference between what Amazon calls one-way doors and two-way doors. In other words, is a mistake here recoverable or not? SpaceX has launched a lot of rockets and crashed a lot of rockets because they realize that in many ways, launching a rocket is a two-way door. Launching a rocket with human beings on it is a one-way door: crashing that rocket is an unacceptable failure. So SpaceX tries very hard, very successfully, not to crash rockets that have people at the top of them. However, the way you learn how to build those rockets is by trying, failing, and learning, and being part of an organization that says, “Look, you built that rocket to learn something. We didn’t put any people on it. It crashed. That’s actually good. We learned something from that.”
So part of it is understanding the difference between one-way doors and two-way doors and being very clear about that. The other part is, for the two-way doors, let people try stuff, let your team try stuff. Signal in every way that you can as a leader that failure at two-way doors is actually okay as long as you learn something. What did we learn? What are we going to do differently next time? Great. Let’s get back out there and try things again. That’s how I think a leader can start to build a more agile organization.
ADI IGNATIUS: I want to talk to you about talent. I want to talk to you about hiring and skills. You probably saw the report that IBM is tripling its entry-level positions. It’s sort of counterintuitive because we’ve sort of assumed that AI is going to be wiping out entry-level jobs. I think Microsoft has said something similar. What do you think? How should HR be thinking about hiring, about hiring profiles, about skills? People are glibly saying, “We want more liberal arts majors,” it turns out. And I don’t know if that’s a real thing or a sounds-good thing, but what are the skills? What are the profiles? How should companies be thinking about hiring people for this era?
ANDREW MCAFEE: The reason that a lot of companies pulled back on their entry-level hiring is because they anticipated a whole lot of automation via AI. And in particular, they anticipated automation that could do the more routine stuff that we relied on junior people to do. Now, the problem with that is twofold. Number one, how else are people going to learn to do the job except via on-the-job learning, training, and apprenticeship? The way you learn to do difficult knowledge work is by helping somebody who’s good at it with the routine stuff. And when we put too much automation in too quickly, we lose that apprenticeship ladder. The other mistake, though, is that if you cut off your entry-level hiring, you are cutting off the pipeline of the people who are most likely to be AI enthusiasts and AI power users in your organization.
There is a big demographic fall-off. As people get older, we tend to be more set in our ways and less willing to try crazy new things like AI. So if you’re pulling back on your entry-level hiring, you are probably sacrificing future opportunities to learn and the skilled people of the future. You’re also turning off the spigot of the most enthusiastic power users of AI in your organization. I think it’s for those reasons that I’m not too surprised to hear that organizations like IBM and Microsoft are now trying to get a pipeline of very talented, AI-forward young people.
ALISON BEARD: That was Andy McAfee, research scientist at MIT, giving a masterclass as part of HBR’s recent Strategy Summit 2026.
If you found this episode helpful, please share it with a colleague and be sure to subscribe to and rate IdeaCast on Apple Podcasts, Spotify, or wherever you listen. If you want to help leaders move the world forward, please consider subscribing to Harvard Business Review.
You’ll get access to the HBR mobile app, the weekly exclusive insider newsletter, and unlimited access to HBR online. Just head to hbr.org/subscribe. Thanks to our team, senior producer Mary Dooe, audio product manager, Ian Fox, and senior production specialist, Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a regular episode on Tuesday. I’m Alison Beard.
