Economic Policy Group co-organizers, John Cochrane and Valerie Ramey, hosted a talk on “Artificial Intelligence and Economic Growth.”
Presenter: Chad Jones, the STANCO 25 Professor of Economics at the Stanford Graduate School of Business and Senior Fellow at the Stanford Institute for Economic Policy Research
Moderator: Valerie Ramey, the Thomas Sowell Senior Fellow at the Hoover Institution
SUMMARY
In this presentation, Chad Jones explores key questions regarding the relationship between artificial intelligence and economic growth: How do economists think about A.I. and economic growth? What are the prospects for accelerating growth over the next two decades, or do the bottlenecks sharply limit this possibility? Are we massively underinvesting in mitigating the potential downsides of A.I., including existential risk?
To read the slides, click here.
WATCH THE SEMINAR
Topic: Artificial Intelligence and Economic Growth
Start Time: October 22, 2025, 12:15 PM PT
PARTICIPANTS
Chad Jones, Valerie Ramey, John Cochrane, Lukas Althoff, Annelise Anderson, Jonathan Berk, Michael Blank, Hoyt Bleakley, Valentin Bolotnyy, Michael Bordo, Michael Boskin, Ruxandra Boul, Marcelo Clerici-Arias, Richard Coillot, Christopher Erceg, David Fedor, Robert Fluegge, Jared Franz, Brandon Garcia, Patrick Gaynor, Nick Gebbia, Ben Ginsberg, Anthony Gregory, Joseph Gruber, Siddarth Gundapaneni, George Hall, Peter Hammond, Laurie Hodrick, Robert Hodrick, Bob Joss, Sanjeev Khagram, Evan Koenig, Steven Koonin, Oliver Landmann, Charles Leung, Ross Levine, Mickey Levy, Joseph McCormack, Sean McEwen, Ross McKerracher, David Papell, Elena Pastorino, Stephen Redding, Paola Sapienza, Allison Schrager, Pierre Siklos, Jeffrey Smith, Richard Sousa, Jack Tatom, Araha Uday, Victor Valcarcel, Bebel Vieira, Mike Wu
- We're delighted that Chad Jones has agreed to come give a great talk. I've already seen it; that's why I asked him. Artificial intelligence and economic growth.
- Wonderful. Thanks very much, Valerie. I'll present in the usual economics-seminar style, so I'm not going to stop after 45 minutes; definitely ask questions as we go along. I think that'll work better given the way the talk is laid out. This is a broad talk that I've been giving for the last year, actually. It's a collection of different projects I've worked on in the past, and I'm currently writing it up for the Journal of Economic Perspectives, so this is actually a great chance for me to put it all together and present it. What I want to do is just think broadly: what are the lessons from growth theory and growth empirics for the implications of AI as we move forward? In some of my work I've built models where AI can help us make goods, or AI can help us make ideas. What does that imply about the future of economic growth, or about the share of GDP paid to labor versus capital? And then something I've been thinking about quite recently: what about catastrophic risks from AI, and what kind of lessons can I draw from earlier research about that? So here's an example of the several papers. I'm working right now on the last one, the paper with Chris Tonetti, and there'll be barely anything from it here, but there's one graph we're using in that paper that I like a lot and wanted to pull in. Okay, so the first paper I wrote was with Philippe Aghion and Ben Jones, and in that paper we highlighted two themes that have really stuck with me in a lot of the work I've done. First, AI is just the latest form of an automation process that's been going on for centuries, right? The industrial revolution was all about automating some tasks. We used to make textiles by hand, and then we had capital, we had looms that let us make textiles. And so: steam engines, electric power, computers, and in the present and future driverless cars, paralegals, pathologists, maybe researchers, maybe everyone. So in some sense the lesson is that we can look back at economic history and our experience with automation in the past and use that to help us extrapolate what AI might do in the future. That's the first theme. The second theme, which came out of thinking about the first part in the context of models, is that AI's impact may be limited by bottlenecks, or what economists like to call Baumol's cost disease. I'll come back to this in a couple of different ways. The key insight as applied to AI is that economic growth is constrained not by what we do well, not by what we're really good at, but instead by what's essential and hard to improve, by what we're bad at. In a world of complements, you're constrained by the weak links. So AI can get really good at a lot of things, but as long as we have some weak links, that might hold us back more than you would expect. I'll draw out these points in a bunch of different ways. A paper I like a lot is this 1998 paper by Joseph Zeira, and it was kind of the first model in economics that took seriously this notion that capital substitutes for labor in what we would now call a task-based framework, rather than the usual, say, Cobb-Douglas production function.
And Zeira is the one who sort of told the story that automation has been going on since the industrial revolution; that was his opening for his seminar. So I wanted to show you his model. It's a nice, elegant model that helps us understand how the subsequent literature — Acemoglu and Restrepo, for example — works, so I like to start here. The idea is that production uses n tasks, or n goods; he called them goods, but I think now we would call them tasks. Imagine it's a Cobb-Douglas production function, so you've got these n different things that get multiplied together with some exponents and constant returns to scale. Initially you only know how to make each x with labor: one unit of labor can make one unit of an x. Over time, automation is figuring out how to make the x with capital instead of with labor. So you might automate one thing first, then two things, then three things, replacing the labor on an x with capital. If you substitute that in, it gives rise to our familiar Cobb-Douglas production function, K to the alpha, L to the one minus alpha, where what's new is that you have an economic interpretation, a micro-foundation, for alpha. Now alpha is the fraction of the tasks that have been automated, okay? And it's great to understand something that's your old friend: you've written down this production function a million times, and yet, ah, I never thought of alpha as the fraction of tasks that are automated. If you take that insight and put it in a neoclassical growth model, you get two things. First, for long-run growth, imagine A is just growing exogenously — Solow's exogenous technological change. Then the growth rate of income per person is the growth rate of this exogenous technology, g_A, divided by one minus alpha. And the dividing by one minus alpha is because a better technology leads to more output, which leads to more capital, which leads to more output, which leads to more capital. So there's this one plus alpha plus alpha squared, which gives you the one over one minus alpha. What's interesting when you look at it this way is you realize, okay, the Zeira idea is a great idea, and yet it doesn't fit the data at all, because it implies that alpha should be rising as we automate more and more things — and automation has been going on for hundreds of years. But for the capital share, from the 1900s, say, until 1980, or maybe until 2000, one of the key Kaldor facts was that the capital share was stable. And you can see from this equation that if alpha goes up, the growth rate should go up. So if automation has been going on over the last century, the prediction of the simple Zeira model is that growth rates should be accelerating and the capital share should be rising, and both of those go against what we saw historically. So Zeira's model was great, except for this problem that it didn't fit the data at all. Okay? And my favorite graph, which is in every talk I've ever given, is this one: the straight line of GDP per person. For 150 years, GDP per person on a log scale, or a ratio scale, looks like a straight line with a slope of 2% per year. So this ongoing automation should lead to rising growth, and you just don't see it in the data at all. And I could have shown you the capital share as well.
You wouldn't see it there either — although interestingly, post-1980, or certainly post-2000, you do see a rising capital share. So that's part of the reason why Zeira's idea is coming back; it was just a little bit too early, I think, relative to the data.
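A rough sketch of the Zeira setup as described here; the notation is reconstructed from the discussion rather than taken from the slides:

\[
Y = \prod_{i=1}^{n} x_i^{1/n}, \qquad
x_i = \begin{cases} k_i & \text{if task } i \text{ is automated} \\ \ell_i & \text{otherwise,} \end{cases}
\qquad\Longrightarrow\qquad
Y = A\,K^{\alpha}L^{1-\alpha},
\]

where \(\alpha\) is the fraction of tasks that have been automated. In a neoclassical growth model this delivers

\[
g_y = \frac{g_A}{1-\alpha},
\]

so ongoing automation (a rising \(\alpha\)) would imply both a rising capital share and an accelerating growth rate — the two counterfactual predictions discussed above.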
- Now, it would be consistent with a lot of historical facts about development up to a frontier.
- That's right. This graph — maybe this is one way of saying what Michael just said — this graph is true for the United States, but it's not true for any other country in the world, right? Japan was 25% of the US until after World War II, and then it rose to 75%: a big transition path. So this graph is unique to the United States, and trying to understand it is useful, but all it takes is one key example to disprove something. So I think it's helpful for many reasons, that being one of them. Okay, so Acemoglu and Restrepo did foundational work building on Zeira's model, partly to engage with these counterfactual predictions that the Zeira model made. They have new tasks being introduced that are done by labor, so the fraction of automated tasks can be constant. They thought about an elasticity of substitution different from one — make it CES instead of Cobb-Douglas — and they get, you know, a top-five journal article every year for the last eight years. Just incredibly productive. There's some related work by Hémous and Olsen, and then a paper by Ben Jones and a graduate student, Leo, that I'll come back to in a second and talk about.
- Can I ask you — so what is it that's unique about the US: the stability, the relationship, or the linearity? I'm thinking about the structural transformation, where we know — there's a student here at Stanford working on it — that the transformation peak, the shift from agriculture to manufacturing, is less pronounced if it comes earlier, and countries move to services a little quicker. And so that gives you a burst of productivity growth early on. Is that rich countries generally, or the US in particular?
- Yeah, no, I'm mainly concerned that the Zeira model doesn't match the US at all, so there's something wrong with it. And Acemoglu and Restrepo — in some ways you can read what they're doing as trying to write down a model where automation is ongoing but you still get a graph that looks like this. Your point about the structural transformation is a great one, because it makes clear how surprising this fact is, right? Agriculture declining from 50% of GDP to 3% of GDP, manufacturing rising and then falling, the rise of services — each of those industries having different productivity growth rates — your guess would be that those composition effects should have a first-order effect. And yet this is the fact for the aggregate. So that's what I mean: this is a graph that keeps on giving; the more you think about it, the more puzzling it is. Okay. So what I want to do now is talk about this Baumol cost disease point and automation. This is the model that I wrote down with Philippe and Ben, and you can see, okay, there's a task-based model where now it's a constant elasticity of substitution production function — CES instead of Cobb-Douglas — where the elasticity of substitution is going to be less than one, okay? So tasks are complements rather than substitutes, and it's a weak-link model in some sense rather than a love-of-variety model. That sigma less than one, that weak-link effect, is what's going to give rise to the Baumol kind of bottleneck. Then the task part is exactly what Zeira had: you can produce each task with capital or with labor. On all of them you can use labor, and on the ones from zero to beta_t — those are the ones you've automated — we'll assume you can make assumptions such that you use capital whenever you can, and then you have to use labor on the rest. So that second equation is very much Zeira's equation. And then the rest of the model, if you look at it, is just totally standard: capital accumulates, you have to figure out how to allocate your capital to each of these tasks, allocate your labor, C plus I equals Y, and follow Solow — investment is just a constant share of output. So when I look at this, I'm struck by how simple it is, right? There's just not that much going on. You might think, okay, simple models lead to simple predictions, not a lot of richness. In fact, this model leads to a great deal of richness, and if you ask why, I think what's going on is that this task framework with an elasticity of substitution less than one between tasks gives rise to a rich interaction between complementarity and substitution. We've got complements on tasks — so a weak-link model on tasks — and yet capital and labor are perfect substitutes on the tasks that we've automated. If you think about, in contrast, our old CES production function, F of BK, AL, you've got one elasticity of substitution there. So it's either greater than one or less than one, but it's just one thing, and a relatively small set of things can happen. Whereas here you've got perfect substitutes at the task level and then complements across tasks, and this is what gives rise to a bunch of interesting outcomes. So even a model this simple can really benefit from that interplay. Okay?
If you look at this setup, the tasks are all symmetric, and so wherever you can use capital, you're going to use capital, and you're going to use the same amount of capital. So k_it is going to be K_t, the total amount of capital, divided by the number of tasks that have been automated, and the number of tasks that have been automated is beta. So k_it is K divided by beta, and l_it is L divided by one minus beta — one minus beta being the number of tasks you use labor on. If you substitute that in — let me just show the equation you get — you get this as the reduced-form outcome of what I just said.
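A sketch of the reduced form being described, using a unit continuum of symmetric tasks; this is my reconstruction of the slide's notation, not the slide itself:

\[
Y_t = \left(\int_0^1 y_{it}^{\frac{\sigma-1}{\sigma}}\,di\right)^{\frac{\sigma}{\sigma-1}},\quad \sigma<1,
\qquad
y_{it} = \begin{cases} k_{it} & i \le \beta_t \ (\text{automated}) \\ \ell_{it} & i > \beta_t. \end{cases}
\]

With symmetric tasks, \(k_{it}=K_t/\beta_t\) and \(\ell_{it}=L_t/(1-\beta_t)\), so

\[
Y_t = \left[\beta_t\Big(\tfrac{K_t}{\beta_t}\Big)^{\frac{\sigma-1}{\sigma}} + (1-\beta_t)\Big(\tfrac{L_t}{1-\beta_t}\Big)^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
\]

which is the expression with the "green" betas (the share terms out front) and the "purple" betas (the dilution terms dividing \(K\) and \(L\)) discussed next.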
- Chad, you're defining tasks with some proportionality, so they could just be additive, right? You just add them up. But within a task, it may be that instead of doing it for 2,000 hours a year you do it for 1,000 hours. How does that fit? Are you just adding up, looking at the fraction of tasks rather than the value added from each task?
- Ah, good point. The value added from the task is going to be incredibly important here; it's just that I've assumed everything is symmetric, I've assumed all the tasks are kind of the same. The symmetry makes the math simple. In no other paper does anyone ever make that assumption — normally you put alphas in front of, say, each task, so some tasks are very important, some tasks are less important, and that alpha plays a useful role. I've just left it out here because it's not the first-order thing I wanted to emphasize.
- But then how do you speak to the Baumol issue of what's essential, if you've just defined them all to be equal?
- Yep. Hold that thought — you'll see.
- John, I would've thought when capital's really scarce, you would choose to use labor on some tasks.
- Ah, yeah, that would be the case here. When I say there are some assumptions — whenever you can use capital, you want to use capital — that's saying capital is sufficiently plentiful, right? If labor is the really scarce thing and you've got enough capital, it's basically about what the wage is relative to the interest rate. So that assumption is buried here, but it's not important. Okay. So if you use that symmetry — k_i is K over beta, l_i is L over one minus beta — you get this CES production function in capital and labor, right? But interestingly, now you can see there's a green beta and a purple beta: beta plays two roles, and it plays the same two roles on the labor side. Let me just focus on the capital side. On the one hand, as you automate more things, as you increase beta, you use capital on more tasks, and that's a good thing — you're able to use the plentiful factor on more things. That's the share parameter, the green beta, pushing in one direction. But importantly, the purple beta says: you're taking whatever amount of capital I have today and spreading it over more tasks, so capital per task goes down. That's the purple beta. The same thing is true for labor: on the one hand, labor is getting squished onto fewer and fewer tasks as beta rises — that's the share — and on the other hand, since labor is working on fewer tasks, labor per task goes up. Okay? Alright, so if you take that expression and add up the exponents in the right way on the beta, what you get, interestingly, is our old-fashioned CES, F of BK, AL. You can write this as the old-fashioned CES production function with some B and some A, where in this case the B and the A depend on automation — they depend on beta. And if you look at it, the purple beta dominates, so the dilution effect dominates, and an increase in beta actually decreases the B. One question I always wondered about: for a long time people have been writing down CES in capital and labor. Suppose you invent a better computer — is that a B or an A? If you have a better computer, is that like having two old computers, or is that like me having twice as much time? It was never obvious what the answer to that question is. But the task model answers it much more clearly, because you're only using K or L on a task, so a better computer is something that multiplies the K — which I've left out here. In any case, what you see here: increasing beta — you figure out how to do more things with capital than you could yesterday — is that a higher B or a higher A? Well, it's not either one of those things alone; in fact, it affects both B and A. Okay, that's fine. The interesting thing is that a higher beta actually decreases B and increases A. It's a twist of the production function, neither just affecting B nor just affecting A. A higher beta — you automate more things — means capital is spread more thinly across tasks, capital per task goes down, and with sigma less than one that effect dominates. So you can see a higher beta decreases B. Okay, that's interesting.
Automating is like capital-depleting technological change rather than capital-augmenting technological change. But equally interesting, if you automate more stuff, that actually increases the A — that's labor-augmenting progress. And how could that be? Well, labor per task goes up, right? And so if you've got a fixed amount of labor, labor per task going up is like having labor that's more productive. It's back to Harrod-neutral technical change in a sense — well, it's neither Harrod-neutral nor Hicks-neutral, it's this weird twist, right? Because the B is going down and the A is going up. It is labor-augmenting — that's the Harrod part — but it's capital-depleting, which is not the Harrod part. So it's this weird thing; it's none of the things we used to think about, neither Solow-neutral nor Harrod-neutral exactly.
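Collecting the beta exponents, as described, turns the last expression into a standard CES in BK and AL. The functional forms below are my reconstruction, implied by the symmetric-task sketch above:

\[
Y = \left[(B K)^{\frac{\sigma-1}{\sigma}} + (A L)^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
\qquad
B(\beta)=\beta^{\frac{1}{\sigma-1}},\quad A(\beta)=(1-\beta)^{\frac{1}{\sigma-1}}.
\]

With \(\sigma<1\) the exponent \(1/(\sigma-1)\) is negative, so a higher \(\beta\) lowers \(B\) (capital-depleting) and raises \(A\) (labor-augmenting) — the "twist" being described.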
- Can I ask you — here it's important that sigma is less than one, and I'm thinking about the wage-skill premium. So Krusell—
- Yep.
- Ohanian and—
- Yep, yep.
- And Violante. And Nancy Stokey actually had a very early paper about this. The question is: is it true that capital substitutes equally for different types of labor? No, it tends to augment high-skill labor and—
- Right, right.
- Substitute for low-skill labor. But there is this huge debate that isn't really set in a proper neoclassical framework. You're—
- Yeah, yeah.
- But how do you confront that?
- I think the modern interpretation — so the Krusell, Ohanian, Ríos-Rull, and Violante paper was a great paper; the Katz and Murphy skilled-versus-unskilled-labor work — we've played around, as a profession, with adding more factors here, and you could have K and H and L, for example. I think the consensus view, such as it is, of how to do this is: capital substitutes with an elasticity below one with labor, if you're only going to have one type of labor — that's the way to write it down; skilled and unskilled labor substitute with an elasticity above one. And is there a capital-skill complementarity, or an equipment- or computer-skilled-labor complementarity? For that you've got to have four factors, and here I only have two. Here's another way to say it: there's a long literature trying to estimate the elasticity of substitution between capital and labor, and 98% of those estimates are less than one — very much consistent with what I've got here.
- And then, thinking about this historically, as Larry Katz used to point out, there was capital–low-skill complementarity in the automation that occurred with the assembly lines. So—
- Yeah, yeah. So I don't have high-skilled and low-skilled labor here. I am going to show you that one graph I'm pulling from my paper with Chris, which is exactly on this point — I have some evidence that I find really intriguing on this point that I'll show you in a little bit. Steve?
- I guess we're focused on automation here, but I was also thinking the other thing since the industrial revolution might be an expansion in the range of tasks, right? Which is then also going to matter for the distribution, to the extent that those new tasks are initially performed by labor.
- Yeah, yeah. This is also something that I talk about in my paper with Chris that's not here. Importantly, if you want new tasks, you have to add them in a place where there's substitution; you can't add them in a place where there's complementarity. Because remember, on the production side, tasks combine with an elasticity less than one — it's a weak-link model, not a love-of-variety model — so adding new weak links is bad for output, not good for output. So you wouldn't want to add new tasks there. In the work with Chris, what we're thinking about is adding something we call procedures: you put the new tasks right there in the K-plus-L substitutes equation, right? What you could say is, we definitely have to till the soil if we're doing agriculture. We used to do it with hoes, then we did it with plows pulled by oxen, then plows pulled by tractors, and now we do it with fancy GPS. Whatever you might call those — new tasks or something — I'm going to say tilling the dirt is the task, and we've invented new ways of doing it; that shows up in the substitutes equation. So that's the way we're thinking about it, and we call those new procedures. That's the way we're thinking of what people commonly call new tasks.
- I'm just missing the really basic intuition. So stuff gets automated and then we allocate more labor to the remaining stuff. Normally we think there are declining marginal products, so that's bad. But you're saying somehow it's good — which is what I didn't understand on your earlier slide, where beta going up raises A rather than lowering A.
- Yeah. So raising A is good; lowering B is bad.
- No. Just so the standard intuition is
- Yep.
- As we take workers off and plug them all into doing the same thing, we're driving down the marginal product.
- That's ah, yes, yes, yes.
- What's wrong with that intuition?
- So, that's very true, except for the fact that this is a weak-link model, right? You're limited by the stuff you're bad at. The stuff you're bad at is the stuff that uses the scarce factor, which is labor here. And so putting more labor on the weak links is good. It still runs into diminishing returns — in a weak-link model, another way to say it, well, you can see it here: look at the exponent on L. It's sigma minus one over sigma, and sigma is less than one, so that exponent is negative. So actually the production function is bounded: if you had infinite amounts of labor, you'd still get finite output. There are really severe diminishing returns to capital and labor here. But having more labor on the stuff that you're bad at, the weak links, is still beneficial. So the marginal product is falling, but relieving pressure on the weak links is a good thing.
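A small calculation behind the "bounded output" claim, using the same symmetric-task sketch: with \(\sigma<1\) the task exponent \(\frac{\sigma-1}{\sigma}\) is negative, so

\[
\lim_{L\to\infty} Y = B(\beta)\,K
\qquad\text{and}\qquad
\lim_{K\to\infty} Y = A(\beta)\,L .
\]

Flooding the economy with either factor still leaves output finite, pinned down by the tasks the other, scarce factor must perform. That is the weak-link logic: extra labor on the non-automated tasks raises output, but the bottleneck never disappears.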
- Just to follow up: the evidence in the labor literature, from the debate on the relevant elasticities, is that, if anything, capital has become more and more complementary to high-skilled labor and more and more substitutable for low-skilled labor. But you're saying that if you look at the aggregate, this doesn't really seem to matter much, and this is a good representation?
- Yeah, I am saying the fact is that when people estimate elasticities between capital and labor — just one type of labor — 98% of the time they get less than one. I'll show you a fact that's exactly on that point, and that will make sense. We haven't added skilled versus unskilled labor, so I'm less well equipped to say what that would do in this model. That I just don't know.
- Some are above one. I remember Karabarbounis and Neiman, I forget—
- There are, yeah. Karabarbounis and Neiman find above one — that's like the 2%. Yeah.
- Is the right way to think of it that there's this declining marginal product of labor, but the wage rate goes up so much because we need you to work to feed the machines?
- Exactly, yeah. That's the way to think about it. Yep.
- Exactly. The opposite of the standard AI story. As we all go down to doing one thing, we're gonna be paid $10,000 an hour to do it.
- You're already seeing where this is going. Yeah, okay, good. So an increase in beta is simultaneously capital-depleting and labor-augmenting, which you wouldn't have guessed. That's an example of what I meant by this simple framework giving you rich predictions. Okay. Now let me think about how beta changes. Up till now I've taken beta as exogenous; we want to think about how beta changes. In the new paper with Chris we're measuring how beta changes — hopefully I'll come back and tell you about the new paper — but right now this is just model-based. I think a natural outcome of automation is this equation, and what it says is quite intuitive. Look at all the tasks that use labor: that's one minus beta. Suppose every year we automate a constant fraction of the tasks that have not yet been automated, right? One minus beta is the measure of tasks that have not yet been automated, theta is a constant, and the change in beta is just a constant fraction theta of the tasks that have not yet been automated. Okay? That's what this law of motion says. Notice it implies beta is going to one asymptotically: you automate arbitrarily close to a hundred percent of stuff, but it's like Zeno — you take closer and closer steps toward one, but you never exactly get there except in the limit. Okay? So: automate a constant fraction of the stuff you've not yet automated; it seems like a natural model. What happens to one minus beta, the tasks that use labor? Call one minus beta m. If you ask what the growth rate of m is, you can see it's going to be minus beta-dot divided by one minus beta, which is just minus theta. So with this law of motion, something is changing at a constant exponential rate: the fraction of tasks that use labor is falling to zero at a constant exponential rate theta. That's what this model of automation gives. Okay? So beta goes to one, one minus beta goes to zero, and one minus beta falls toward zero at a constant exponential rate — say 2% per year. So now let me put together everything I've just said.
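The law of motion being described, in symbols (my transcription of what is on the slide):

\[
\dot{\beta}_t = \theta\,(1-\beta_t)
\;\;\Longrightarrow\;\;
1-\beta_t = (1-\beta_0)\,e^{-\theta t},
\qquad
\frac{d}{dt}\ln(1-\beta_t) = -\theta .
\]

So \(\beta_t \to 1\) only asymptotically, while the measure of tasks still done by labor falls toward zero at the constant exponential rate \(\theta\).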
- Can I ask you—
- Oh, yes.
- With AI — since this is a story about AI that you're coming to — you would argue that actually it's not a constant exponential rate; it's accelerating, and it's going to happen closer to all at once rather than like this. So how do you deal with that?
- Yeah, that's coming. This is an interesting question: is AI speeding up the rate of automation, and is it going to be a sudden thing rather than a gradual thing? Let me hold that off for a couple more slides, but yeah, it's definitely the natural question to ask. Okay. So, putting together everything I've said so far, we've got this standard CES production function in BK and AL, where B and A depend on the amount of automation, beta. Beta going to one says that B is going to one, right? So if you wait long enough — we know that a rise in beta is lowering B, but eventually B just goes to one, so the capital-depleting stuff comes to an end. What about A? Remember, I told you one minus beta is falling at a constant exponential rate, so one divided by one minus beta is rising at a constant exponential rate. So in fact you get a model that says this labor-augmenting progress, A-dot over A, happens at a constant rate. And now we're at Michael's point: this model says that if you wait long enough, even though automation changes both B and A, in the long run B is constant and A is growing at a constant exponential rate — which is exactly the Harrod-neutral technical change that Uzawa tells us you need if you want a balanced growth path. The other surprising thing is that you would have thought, in this setup, there's no reason you'd get constant exponential growth, but in fact you do. So in the long run, if we wait long enough, A rises at a constant exponential rate, B becomes a constant, you get a balanced growth path, you get that 2%-per-year straight line. And if you ask what happens to the labor share — this is what John was already anticipating — you might have thought, when people talk about AI and automation, that eventually the AI is going to do everything, so there's nothing left for labor, the labor share goes to zero, and the capital share goes to one. That's not what happens here. In fact, if you do the math, you see that the share of factor income paid to capital, despite the fact that asymptotically a hundred percent of tasks are done by capital, converges to a constant — call it a third. Capital gets a third, labor gets two thirds, and that's true along this balanced growth path even though labor is doing a vanishing set of tasks. I think I talk about this on the next slide: why is this? This is exactly Baumol's bottlenecks, right? Labor is the scarce factor. Capital — you're getting lots and lots of capital; the economy is growing, so you're getting saturated in capital. Capital is doing a lot of tasks, and you're good at those tasks. The stuff you're bad at is the stuff that labor does — you can't yet automate it — and that's the stuff that's essential, right? The stuff computers do — we've got incredibly powerful computers, it's like we have infinite computers on those tasks, that's fine. We're bottlenecked by the stuff that computers cannot do, and the fact that those tasks are bottlenecked means labor gets a high wage. Labor is the scarce factor, and with an elasticity of substitution less than one, the scarce factor gets the share, not the plentiful factor. So the labor share can remain at two thirds even though labor is doing a vanishing fraction of tasks, because labor is doing the tasks that we're bad at and that are essential.
And I don't want to claim that's definitely the way the world works. What I think is kind of what John realized: it's just interesting that if you write down a model that doesn't seem like a crazy way to think of the world, you do get that automation — computers eventually do almost everything — and yet it's still the case that labor is doing the weak links, and that preserves its share and preserves high wages and everything else. Stephen?
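Putting the pieces together, a sketch of the balanced-growth-path claim; the particular numbers below are purely illustrative, not the talk's calibration:

\[
B(\beta_t)=\beta_t^{\frac{1}{\sigma-1}} \to 1
\quad\text{as } \beta_t \to 1,
\qquad
g_A = \frac{d}{dt}\ln (1-\beta_t)^{\frac{1}{\sigma-1}} = \frac{\theta}{1-\sigma} .
\]

So in the limit \(Y \approx F(K, A L)\) with \(A\) growing at a constant rate — Harrod-neutral technical change, hence a balanced growth path with constant factor shares. For instance, \(\theta = 0.5\%\) per year and \(\sigma = 0.75\) would deliver \(g_A = 2\%\) per year; the one-third/two-thirds split of the shares is likewise an illustrative value rather than a prediction of this sketch.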
- Is labor freely mobile across the tasks? I was thinking that in reality — you might imagine it's very bang-bang here — but you might imagine that initially there's some displacement for workers, since it takes time for them to reallocate. So thinking about the distributional consequences of automation could be very interesting, right? You'd need to smooth the model out a little bit. But that could also be very interesting.
- Yeah, absolutely. To the extent that programmers don't make good nurses and the programmers get automated away, the transition dynamics can be painful. That's not here, but I totally agree — it's interesting.
- The complementarity is important. I was thinking, you know, if all we're left to do is play string quartets — because people like to hear people play string quartets — well, that's not essential in the production of other things, so that's not going to work.
- Yep. No, you're thinking about it exactly the right way. So, the string quartets — I think it is the case that, you know, my favorite example is Magnus Carlsen and chess. My iPhone can beat Magnus Carlsen at chess, and yet chess has never been more popular: watching humans play chess on YouTube has never been as successful as it is now. So there may be some tasks that we reserve for humans. Now, the interesting question is whether those are complements or substitutes, because if they're substitutes, that labor gets paid nothing. But if they're somehow viewed as essential — so we choose to value things purely because they're done by humans, and we don't want to watch the virtual-reality AI do the string quartet, we want to watch the human in person — then it works out. But it's an open question. I have a later slide about that, but that's kind of already the gist of it.
- So one thing you alluded to briefly earlier, and I'm trying to follow how it all fits — you drew a distinction between tasks and procedures, et cetera — but think about the context in which much of the improvement in the standard of living comes from new goods that weren't imaginable earlier.
- Right?
- And so if you think of those as maybe defining a set of new tasks — so what's going on with the new tasks? Or think of it in the Hicksian sense of new goods we haven't developed yet. We have the technology to understand the pricing of new goods not yet available, and that sort of stuff. So how does that fit in?
- Yeah, that's another way to include new tasks. I mentioned the new-procedures angle in the paper with Chris. We've also worked out exactly what you're saying: you can imagine putting another layer of substitution above my task model. That's the production function for each good, but there are a bunch of different goods you can invent — you can come up with new goods, and those new goods could be produced with new tasks that we haven't seen before. That model is relatively tractable and you can get results. So you can write it down in such a way that everything I'm saying holds true, but that's not to say you couldn't write it down in a different way. So that's still an open question, I would say, but it's a very good point.
- I'm an outsider to this literature, but something that has struck me so far is that there's no state in this model, and we know the state puts taxes on labor, puts taxes on capital; the state does things that might influence beta. Should I think of this as a model of a world with a minimal state? Or is the state implicit in the assumptions of the model, or what?
- Yeah, that's a great question. I'll say what my tendency is — it's not quite true in this setup, but I think this is at least one way to answer what you're saying. I tend to like to write down what the possibilities are: what could happen, and what's the best we can do? So if we had a state that did all the right things, what's the best we can imagine? And then I'm not going to say whether the state does the right things or not — I'll leave that for other people. So that's the way I'm approaching this. You'll notice I said capital and labor are allocated perfectly; there's no misallocation, there are no taxes screwing things up. That's the setup here. Not to say those things aren't important, it's just that they're off the table for now. If we did things right, what could we hope the world would look like? What's the best outcome we could hope for? — No financial markets? — Yeah, no financial markets; assume they work well and you get the optimal allocation. But those other questions are very interesting, for sure. They're just not here.
- Oh, yes — sorry, quick question, also not in any way, shape, or form an economist. In the discussion, the string quartet came up: how do you define essential? It ebbs and flows, what we consider essential; it's not a constant. The chess example is a great one: the new aspect of a computer being able to beat a human was incredible, and everyone wanted to watch that, but now it's more important to watch a human — that's more essential, in this model's terms. Does that matter? Once we automate something, can it go backwards, and does it matter that it's not constant?
- Yeah, no, I like that question. What I like about it is that it highlights a theme I mentioned at the start, which is that the nature of the interaction between complementarity and substitution is really important — between love of variety, the expanding variety that Michael mentioned, and weak links. That's another way to think about it. In non-task-based models you typically only have one kind of elasticity, so you can't even talk about that; here I have two, and there's this interplay — having two is much better than having one. But your example about chess: things could be changing dynamically, and what used to be essential is no longer essential because we've invented new goods, so we don't consume the old ones. I think all of that is true, and what I would say is that this framework allows you to start to see what might happen in that world. But yeah, if the interplay between complementarity and substitution is even richer than what I've got, we need to think about that and write down an even better model.
- Oh — what's wrong with the Stradivarius being played by the best violinist, because it's just the aesthetics of it? And then you need an orchestra around it. It's not that far from what you have told us.
- Yeah, so — I'm not such a great fan of string quartets; I'm more a fan of Taylor Swift. The way I like to think about it is, in this future AI world where everything's automated: I could go watch Pete play guitar and sing Taylor Swift songs, and I would love to, and that's pretty good. But the substitute is going to be that I can put on my virtual-reality goggles and it's as if I'm standing right in the mosh pit next to Taylor Swift watching her perform. Now, it's a virtual-reality version of it, but if the fidelity is good enough that I can't tell the difference, do I prefer watching Pete perform or watching the virtual Taylor Swift? I don't know. But that's an example of the substitution-versus-complementarity thing — it's an interesting question to think about along these lines.
- You certainly get crowding at her concerts.
- Yeah, but not in the virtual one — and even Pete and I can be there together watching the virtual Taylor Swift. Okay, so this brings us to the Ben Jones paper. Ben built on the framework that I've just elaborated, and he noticed something that was unfortunate about what I've said so far: it only occurs as beta gets close to one. This is a limit as a hundred percent of stuff gets automated, or nearly a hundred percent, and you might say, look, that 2% growth that we've had for 150 years — we don't think we were close to the hundred-percent-automation point. So it still doesn't quite hang together, right? Ben kind of fixed that. What he did is he said, let's add the fact that computers can get better. So it's not just that one unit of K produces one unit of output; maybe your computers get better, so half a unit of K can produce one unit of output, or a tenth of a unit of K can. So let computers get better. Then it turns out the capital share is governed by the ratio of beta — what fraction of tasks are automated — to the productivity of computers: beta divided by Z, where Z is how good our computers are. And what you can see there is: we're automating more and more, beta is going up, and that would tend to raise the capital share; but computers are getting better and better, and that would tend to lower the capital share — because having infinite computers still only gives you finite output. It's the diminishing returns that John was highlighting: computers are getting better and better, but that means they're becoming less and less scarce, more and more plentiful, and so we're more and more constrained by the bottlenecks. So what Ben observed is that maybe the rise in beta and the rise in Z exactly offset, and we get constant capital shares even when we're away from the beta-equals-one point. That was a great insight. You'll notice, if you think about it — I said beta is bounded at one; at most you can automate a hundred percent. But you could imagine computers keep getting better and better, and Ben wanted to rule out that case, so he made it so that it gets harder and harder to make computers better, in just the right way that you get beta over Z constant. That's interesting. But the other side is also interesting, and this is the side that Chris and I are thinking about: as beta goes to one, if computers keep getting better and better, what happens to the capital share? It falls to zero, right? Before, we had the capital share leveling off at a third and the labor share at two thirds, and that was already surprising — I said we don't get a hundred-percent capital share. But now, if computers keep getting better, the capital share could actually fall to zero. And this is exactly the weak link: we're constrained by the stuff we're bad at. We've got infinite computers because they're really, really good, but that means labor gets all the GDP, not just two thirds of the GDP. It's really the sigma less than one doing a lot there. What's happening is computers are everywhere, but the price of computers is falling: the quantity is going up, the price is going down, and with an elasticity less than one the price effect dominates, so the computer share goes to zero. Okay?
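One way to see the two offsetting forces, within the same symmetric sketch — so a stand-in for the formula in the paper being described, not that paper's actual expression: let automated tasks produce \(Z k_i\), so \(Z\) is the quality of computers. Then the share of income paid to capital is

\[
s_K \;=\; \frac{\beta^{\frac{1}{\sigma}}\,(ZK)^{\frac{\sigma-1}{\sigma}}}
{\beta^{\frac{1}{\sigma}}\,(ZK)^{\frac{\sigma-1}{\sigma}} + (1-\beta)^{\frac{1}{\sigma}}\,L^{\frac{\sigma-1}{\sigma}}}.
\]

With \(\sigma<1\), a higher \(\beta\) pushes \(s_K\) up while a higher \(Z\) pushes it down (the \((ZK)^{\frac{\sigma-1}{\sigma}}\) term shrinks as computers improve), so it is the race between \(\beta\) and \(Z\) that matters. If they offset, the share is constant away from \(\beta=1\); if \(Z\) keeps improving while \(\beta\) stays below one, \(s_K \to 0\); only at exactly \(\beta=1\) does capital take everything — the "everything versus almost everything" distinction in the talk.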
Now, an interesting question — this brings us to the paper with Chris, but let me skip the description — an interesting question is: what has happened to the factor share of computers? You know, labor share, capital share, you can look those up: the capital share in the last 20 or 30 years has been going up a little bit, the labor share has been going down a little bit. The computer is one form of capital, and we can just look that number up as well: what's happened to the share of GDP paid as a return to computers? On the one hand, computers are everywhere; on the other hand, the price is going down. Well, here's what you see. During the dot-com boom, the factor share of computers did go up, from 2.6% to just over 3% — computers going everywhere dominated the price declines. But since 2000, sure, computers are everywhere, but the price declines are dominating, and in fact the factor income paid to information technology has fallen by a third, from 3% to 2%. So this is the point I was making to Elena: a world with sigma less than one is how you interpret this fact. Sigma greater than one would say the computer share should be going up, and in the data the computer share seems to be falling a lot. Maybe that supports this view that sigma is less than one — this is the Google Maps world—
- Where it's basically free. But the share going to energy to run the computers seems to be rising.
- Or relatively stable. I think Steve would say it's not rising.
- I think it's going up, it's gonna go up
- Very,
- That's very recent. If we look over 30 years, it's not going up. Until the computers get better, it's just kind of fun how much we're spending on energy.
- So what are you counting as a computer here? There's hardware, there's software, there's,
- This is not software; this is hardware — computer and information technology equipment. The software share goes up, actually, but the hardware share—
- Actually, there's an—
- Exactly. People make the software. So that's—
- But actually, there's an immense amount of software now embedded in the chips.
- Yeah, that's true too. Okay, so I love this graph, because it's surprising to me — we don't all know this fact, even though it's a relatively easy number to look up, and it seems very informative. So this is where we get to the AI future a little bit: this future where AI does everything, or almost everything — whether it's everything or almost everything is completely different, and I'll come back to that point in a second. But if it's almost everything — computers being everywhere, computers writing software, computers doing all the work, except for me watching Pete play guitar in the bar singing Taylor Swift songs, and Pete gets a high salary for that because computers are basically free — yeah: computer-company stock prices are exploding. Is that a problem, or is that consistent? It is... well, another thing you can like about this graph is that in the short run computers are scarce, right? Computers are very scarce right now, and so the factor share did go up for a while. But I do agree that the stock market booming for companies that make computers makes you wonder. Maybe — in one interpretation, to come back to the point made earlier — the automation eventually is a hundred percent. In this world I said there's always something reserved for humans; if AI does everything — and maybe, this is your point, for the substitutes you can have humans still doing stuff, but for the complements maybe AI can do everything — then the capital share goes to one. So whether humans are doing epsilon of the necessary stuff or zero of the necessary stuff is a big deal.
- Well, there's also adjustment costs — the standard—
- Reason for it to be temporary, right. The other thing I'll say about this — the framing I kind of like, that Chris and I are using in our new paper — is: what's happening for the next 10 or 20 years? For the next 10 or 20 years, I suspect there's still going to be stuff that humans need to do, and it's going to look a lot like the model I've described. This limit, whether it's a hundred percent or 99.99% — that's probably too far in the future, but we'll comment on both of those things. Oh, sorry — behind you, I think it was.
- So, is this model really true for the high-end computing, when we talk about AI here? I mean, I understand the washing machines and the cars and all that kind of stuff—
- Right. So these are not washing machines and cars. I think this is computer equipment, so it includes the high-end computing and it includes your laptop. I don't know what it looks like if you break it out into finer categories, and I think this is one of the points that's been coming up: Nvidia computer chips are incredibly scarce right now, so I suspect Nvidia computer chips look like the dot-com boom period, where the share is going up. But that's this artificial scarcity — we just can't make them fast enough. Eventually that's going to relax, and you'd expect the Nvidia-computer-chip share of GDP to go down, I think, based on this.
- Since we're talking about AI, how do you fold in the discussion about DeepSeek, right? Suddenly you get a model, even on a less capable chip, that's highly effective. Is that represented here? In your planning, your foresight, can you build that into the model?
- Yeah, I haven't talked that much about AI yet, but that point isn't going to be one I engage with. I think the AI models are getting better and better, and the fact that you're able to do it even more cheaply only supports this point, right? It sort of says: are you so sure that OpenAI is going to be able to maintain a huge margin? If there's an open-source model that comes two months later that's 95% as good, it seems like the margin you can charge for is — you know, the Bertrand thing — and are those margins large enough to support the market values of these companies? Who knows; it probably works out. But I think that's a fair question to ask, just not one I really engage with in this paper.
- Let me ask very quickly if I understand the graph correctly. What does it take to get a stable one-third capital income share if this is going down so much?
- Well, the capital share is even going up, right? So something else must be going up — non-IT capital? I haven't looked that closely; I don't know. It's a great question.
- I have no reason to doubt the veracity of the figure. But when the series starts — when we were in graduate school — that's mostly mainframes, right, earning that much of the factor income share? In 1985, the reasonably early days of PCs, it's mostly mainframes. I just find the graph itself fascinating: the share is less now than it was when it was really just about mainframes.
- Yep, I totally agree. And computers are everywhere — the number of computers, the number of chips, the number of flops we're doing is orders of magnitude more. That means they get cheaper, and which effect dominates? Evidently the cheaper dominates, and that's very much a sigma-less-than-one view of the world. You did mention quickly—
- Sorry to interrupt, but it's just on this point
- Sure.
- That the factor share of income for software is going up.
- Yep. - So the question that I want to put, when you get to AI—
- Yep.
- Is AI software or hardware?
- Yeah. I think I would say what John sort of alluded to briefly when we mentioned that point, which is that AI is currently very human-intensive — at least until last year or the year before — and five years from now you don't think that'll be the case; you think it'll be hardware-intensive, and then it probably looks like this. So it is true in the data that the software share is going up, but maybe it's like the dot-com boom for computers. But, you know, I agree — this graph is just very thought-provoking, which is why I like to show it.
- Are you going to get back to AI and ideas soon? Is that where you're headed?
- Yep, AI and ideas — you called it. Here we go. Okay. So an interesting question that's not here yet: what if AI automates the idea production process, which I think is what we envision could happen, right? How do you think about that in the model? A simple way to do it is: for the production of goods and services, let me forget about automation — I'm going to put it all on the ideas side instead of in both; the paper with Chris puts it in both, but this keeps it simple. So goods are just produced with labor, but ideas now have this familiar CES production function across tasks, and the tasks work just like what we saw before: a fraction beta of tasks are done with capital, and one minus beta are done with labor. A is the number of recipes in the cookbook; the change in A is the number of new recipes, so A-dot is the number of new ideas. This model also allows old ideas to be useful in producing new ideas — that's the A to the phi; phi tells you how helpful old ideas are in producing new ideas. Okay, all the work we've done carries over, so you get an F of BK comma — I'm tempted to call it A times L, but I've used A for ideas now, so I call it C instead. C is the labor-augmenting term for the idea production function, and S is scientists rather than L for labor — S is people producing ideas. So you get the same F of BK, CS, where B depends on beta the same way as before and C depends on one over one minus beta. It's exactly like what we saw before. Now notice that with sigma less than one, the scarce factor dominates: even if we had infinite computers, this F would be just C times S, and without infinite computers, F is just proportional to C times S with a constant — you can see BK going up faster than CS, so computers and capital are becoming more and more plentiful. So this production function for ideas is proportional to C times S in the long run. With continuous automation, the idea production function eventually looks like A to the phi times C times S times a constant. And what's nice about that is it's the kind of idea production function that a lot of the growth literature works with very frequently — we know how that one works. If you ask what happens to the growth rate of A in that framework: basically effective people, C times S, are producing ideas, and if you want A to grow at a constant rate, then A-dot has to be proportional to A. In the long run, the growth rate of ideas is the growth rate of effective researchers — g_C plus g_S — divided by that one-minus-phi parameter. There are lots of interesting things about that. In my older models, g_C is zero, and so population growth — how fast do scientists grow? they grow at the rate of population growth — plays the key role. You'll notice that even if g_S is zero, even if there is a constant number of scientists, that constant number of scientists gets squished onto a smaller and smaller set of tasks that are essential. So scientists per essential task rise exponentially, and that can sustain growth here.
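A sketch of the idea-production setup being described, with symbols reconstructed from the discussion (the explosion case mentioned next is summarized at the end):

\[
\dot{A}_t = A_t^{\phi}\, F\!\left(B K_t,\; C S_t\right),
\qquad
C = C(\beta_t)\ \text{rising as automation proceeds.}
\]

With \(\sigma<1\) and capital plentiful, \(F(BK, CS)\) is effectively proportional to \(C S\), so along a balanced path

\[
g_A = \frac{g_C + g_S}{1-\phi},
\qquad
g_C = \frac{\theta}{1-\sigma}.
\]

If instead research is fully automated, the sketch becomes \(\dot{A} \approx A^{\phi} K\); with capital accumulating out of output that itself rises with \(A\), the positive feedback between ideas and machines can deliver infinite output in finite time when old ideas help enough (roughly, \(\phi>0\) in this sketch — my gloss, not the talk's condition), and otherwise still gives very fast, AK-style exponential growth.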
So you get this boost from continued automation. And just as we saw before, what's the growth rate g-C? Well, g-C depends on the automation rate: it's theta divided by one minus sigma, where theta, remember, is just as before the fraction of the remaining labor tasks that get automated each year. So now you can see a couple of things. First, the automation of the idea production function has presumably been going on for a hundred years, right? All of us in this room remember writing papers 50 years ago; it is a very different process now. A lot of stuff has been automated, and yet: 2% growth. This model predicts that if you've got a constant automation rate, so g-C is constant and g-S is constant, then you get constant growth. So even though automation is going on, you can get constant growth. Now, one thing you might realize is that maybe AI is going to speed up the automation rate, and in the paper with Chris we see some evidence of that. I think it's plausible that AI could cause the automation rate to speed up, and if theta goes up, that accelerates growth. That's one way you could get accelerating growth in this framework. But it's not enough to say there's automation: if the automation rate remains constant, you have ongoing automation without any speed-up in growth. If AI speeds up the automation rate, that could raise growth. The other way comes back to the point made earlier. Instead of beta-dot equals theta times one minus beta, suppose all research gets automated. Suppose the AI plus robots can do everything: do all the experiments and produce knowledge. Then there's no labor on any task, and you replace that C times S with a K. Now machines are producing ideas rather than people. And if machines produce ideas, then it's different: more ideas give you more machines, more machines give you more ideas, and because ideas are infinitely usable you've got Romer's increasing returns. That's a virtuous circle that explodes. You can show formally that you get infinite income in finite time if there's an infinite number of ideas to be discovered; if not, you discover all the ideas very quickly and then you've got an AK model where growth is still extremely fast. So what's interesting about this framework is, again, what I was saying: whether 100 percent of tasks are automated or 99.8 percent of tasks are automated, you get very different outcomes. That difference really, really matters, and it's very hard to know which of those possibilities we're in. But this view in Silicon Valley, among the AI crowd, which I consider myself partly a part of, that this explosion is totally possible: that's surprising to me, because I'm a straight-line 2% guy, but it is definitely possible. Now, we had a point over here.
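To see the knife-edge between ongoing partial automation and full automation, here is a rough derivation under the assumptions above. It is a sketch of the logic, not the exact equations from the paper with Chris; in particular, treating capital as simply proportional to ideas in the fully automated case is my shorthand.

```latex
% Ongoing (partial) automation of the remaining labor tasks:
\[
\dot{\beta} = \theta(1-\beta)
\;\Rightarrow\;
1-\beta_t = (1-\beta_0)e^{-\theta t}
\;\Rightarrow\;
g_C = \frac{\theta}{1-\sigma},
\qquad
g_A = \frac{\theta/(1-\sigma) + g_S}{1-\phi},
\]
% i.e. steady exponential growth, faster if \theta rises.
%
% Full automation instead: labor drops out and CS is replaced by capital K.
% Taking K roughly proportional to A (capital accumulates out of output, which rises with A):
\[
\dot{A} \approx \delta A^{\phi} K \propto A^{1+\phi}
\;\Rightarrow\;
A_t = \bigl(A_0^{-\phi} - \phi\,\delta\, t\bigr)^{-1/\phi},
\]
% which diverges in finite time (at t = A_0^{-\phi}/(\phi\delta)) whenever \phi > 0:
% the explosive case, versus steady growth with anything short of 100% automation.
```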
- Too. Oh no,
- There's, he's been waiting for, oh, great. Sorry.
- Then finally, three days ago OpenAI made a statement saying that, by the way these models are engineered, they hallucinate; there is no way to eliminate hallucination entirely. So how can this model account for ideas that are wrong? For example, you generate a bad idea and then you take it as an input for generating new ideas, and then you have a chain of wrong statements and wrong ideas. How,
- Yeah, I think one thing I find helpful there is to ask: are humans perfect at generating ideas, or do we hallucinate too? We hallucinate all the time. Science is incredibly flawed; we go down wrong paths all the time. Why does it work? Because we have competition and we have the scientific method, and we'll have competition with AI. The nice thing about AI is that it does hallucinate, but its hallucinations are IID; they're kind of independent, it doesn't hallucinate the same way twice. So if you ask the AI model a hundred times, here's an idea I have, check it, okay, maybe 20 of those times it hallucinates, but the other 80 times it'll say, okay, this works, or it doesn't work, and why. So I think competition among the AIs, because these hallucinations are IID, is a way to get science, the same way it works with imperfect humans. I see the analogy that way: competition among the AIs is going to be the answer to it.
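A toy calculation, not from the talk, of why independent errors are so much less damaging than correlated ones: if each of n independent checks is wrong with some probability, a majority vote over the checks is almost never wrong. The 0.2 error probability below is just an illustrative assumption.

```python
# Toy illustration: majority vote over independent (IID) hallucinations.
# Assumed (hypothetical) numbers: each individual check is wrong with probability 0.2.
from math import comb

def prob_majority_wrong(n: int, p: float) -> float:
    """Probability that more than half of n independent checks come out wrong."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 25, 100):
    print(n, prob_majority_wrong(n, 0.2))
# One check is wrong 20% of the time; a majority of 100 independent checks is wrong
# with probability smaller than one in a billion. If the errors were perfectly
# correlated instead (every model hallucinates the same way), it would stay at 20%.
```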
- So I have a point to make, which is that, to me, the most fundamental thing about AI and the AI discussion is the uncertainty, and the most basic form of uncertainty is that it's just not possible to know what it's possible to know. So your choice about whether this keeps going or not is obviously important, but it's just not possible to know what it's possible to know, by the computers or by us. And so I'm just wondering if you have thought about how to deal with that.
- Yeah, I totally agree with the point. One way I've been trying to make it is that whether literally 100 percent of stuff is done by machines or only 99.8 percent leads to two very different worldviews. So that's an extreme version of something that's very hard to know leading to two very different paths. In the paper with Chris, the way we're dealing with that is to say: let's simulate a model that has one outcome and a model that has the other outcome, and ask whether, for the next decade or two, they look pretty similar. I'm hoping that they do, because that would be a way to say, okay, we don't know what's going to happen in the very long run, but for the next decade it's not that sensitive to which of these worldviews you hold. That seems useful.
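A minimal sketch of that kind of comparison (my own toy version, not the model in the paper with Chris): simulate idea growth with ongoing-but-partial automation versus full automation and see when the two paths separate. The functional forms follow the sketches above, and all parameter values are illustrative assumptions.

```python
# Toy comparison of two worldviews for the idea production function:
# ongoing-but-partial automation vs. full automation (machines replace researchers).
# Functional forms follow the sketch above; all parameter values are illustrative.
import math

phi   = 0.9    # how useful old ideas are for new ones (A^phi)
sigma = 0.5    # elasticity of substitution across idea tasks (< 1: weak links bind)
theta = 0.02   # share of remaining labor tasks automated each year
g_S   = 0.01   # growth rate of the number of scientists
g0    = 0.03   # common initial growth rate of ideas (a pure normalization)
T     = 40     # years to simulate

def simulate(full_automation: bool):
    A, S, beta = 1.0, 1.0, 0.8
    C0 = (1 - beta) ** (-1 / (1 - sigma))
    delta = g0 if full_automation else g0 / C0   # match the initial growth rate of A
    path = []
    for _ in range(T):
        C = (1 - beta) ** (-1 / (1 - sigma))     # labor-augmenting effect of automation
        K = A                                    # rough stand-in: capital scales with ideas
        Adot = delta * A**phi * (K if full_automation else C * S)
        A += Adot
        S *= math.exp(g_S)
        beta += theta * (1 - beta)
        path.append(A)
    return path

partial, full = simulate(False), simulate(True)
for t in (10, 20, 30, 40):
    print(f"year {t}: partial {partial[t - 1]:8.1f}   full {full[t - 1]:8.1f}")
# With these toy numbers the two paths look broadly similar for the first couple of
# decades; the fully automated path then pulls away (in continuous time it diverges
# in finite time), while the partial-automation path keeps growing without exploding.
```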
- The other thing is that a huge amount of the attention in a lot of the books right now is going to LLMs, but there's a view that actually inference will be more important. So I'm wondering if you've thought about that division and whether it makes any sense in your model.
- Yeah, not so much. I mean, here it's really a question of whether AI can invent ideas and what tasks AI helps us with. Which algorithm eventually gets us there is not something I totally engage with, but I do think the algorithms are getting better and better. And, kind of like my answer to the previous question, clever computer scientists keep finding ways around these problems. So I feel like your best prediction is that this is probably going to get better and better over the next five or ten years, and building that into your worldview is probably a useful thing to do.
- In your Taylor Swift example, though, it sounded like consumer preference drives what becomes the weak links, not anything else. Like, we like this, and therefore it's a weak link, and it can be labor by humans as opposed to anything else.
- Yeah, I would put that in the bucket of things that are a long way away. Things like that, that we don't have a good sense of, matter for the outcome; they matter a lot and it's useful for us to think about them, but it's not obvious which way it goes.
- I probably would like that answer.
- Yeah, I dunno. Okay. So let me turn now, on the next two slides, to an optimistic view of AI affecting growth and then a pessimistic view. Again, I don't know which of these views is right, but I find it helpful to lay out what I've learned in thinking about it. Okay, so first the optimistic view. What would it look like if we were living in a world where AI accelerates economic growth? In the near term you'd see a lot of the things we're seeing now, the things the AI labs are saying. They're saying, look, the first thing AI automates is software production. It's already useful: if you talk to computer programmers, it's raising their productivity. Dario Amodei at Anthropic claims that 90% of Anthropic's code is currently written by the AI models. So, yeah, software being heavily, and maybe in the not too distant future fully, automated. If that happens, say within the next decade, although I don't want to get hung up on the timeline: I'm a long-run growth economist, so whether it's two years or 10 years or 20 years, I don't care. What I care about is whether the world is changing profoundly in the next 20 years or not. So, maybe in the next decade, is it plausible that AI agents automate all coding? That's totally on the table; I think that seems likely. Okay, if AI can do that, what happens? Well, you ask the AI to code up better and better algorithms, and it does become a virtuous circle in producing software. These algorithms are ideas, and ideas are infinitely usable, so once you come up with a better one, you use it on a billion computer clusters to create even better algorithms. So you kind of get this explosion of software.
- At the moment, my understanding of AI is that it collects a human-written database and gives you the best that a human has written, but doesn't yet come up with new algorithms to do something that humans haven't come up with.
- They're working on that, and it does sometimes already, right? We've seen it: the AlphaGo moment, where Lee Sedol is playing Go and the AI plays move 37, which no human would ever have played. The way that AI model was trained, it wasn't trained the way LLMs are; it didn't watch a bunch of humans play Go. Instead it played itself over and over again and came up with new knowledge. The labs, to Michael's point, are definitely building this in now; they're trying to say, okay, let's let the AI experiment and have this reinforcement learning.
- So you give it a goal and then it can
- Exactly.
- That's a different thing from it just going out and finding how somebody else has programmed it today.
- Exactly. That's very much the thing they're doing right now, and have been for the last two years or so.
- Hoyt Bleakley? Oh, great. Hi, I'd like to ask a question.
- Hi Chad, great presentation. The one thing I'm curious about is whether you have predictions for a physical quantity, like floating-point operations, which is something that is perhaps testable over the next 10 to 20 years. Is that going to be super-exponential? I worry.
- Yeah, you know, Moore's law has been exponential at something like 35% per year for the last 50 years, right? The number of transistors we can fit on a computer chip has grown by a factor of 50 million since the early seventies, and a factor of 50 million is basically infinite to me. So I'd have thought we're saturated in computers by now; evidently not, because of data centers and everything else. But, yeah, I think that's an example of the kind of thing, and it actually transitions nicely to the next point: once you automate all software,
- To follow up: Moore's law is in logs. Yep. Logged, so that's exponential, but we're also increasing the capital stock, but we're also
- Yeah, constant exponential growth.
- That's right. But we're also going to tasks that presumably get harder on a flops basis as we substitute humans away, right? And we're adding more chips as it grows. So that's why I'm asking whether it's super-exponential in flops, not in the chips.
- That getting harder is kind of built in here, right? Because you always automate, say, 10% of the tasks that haven't been automated yet. Initially that's a lot of tasks, and as the set of tasks that haven't been automated yet shrinks, you're automating fewer and fewer tasks. So it is consistent with that worldview. But whether the growth rate speeds up or not is what I want to talk about here: I showed you a model where it doesn't and a model where it does, and now I want to ask what the story would be where it does. What would it look like? I find it helpful to lay it out. So, after you automate all software, you could imagine automating anything a remote worker can do over Zoom. I Zoom with my RA and say, take your computer and estimate this thing, simulate this model; well, my RA could be a software agent rather than a human being. If you've automated all software and computer activity, and you keep telling it to build better and better algorithms, maybe eventually it could automate all cognitive tasks. Again, this is speculative, but that would be the next step. Cognitive tasks include inventing not just better algorithms but better computer chips, which gets to Hoyt's point: we accelerate the production of computer chips. Better robots, designed for me in virtual-reality land and then built in the real world. Better medical technologies; I think medical ideas are an important class of ideas. And once you have good robots run by AI, you're automating physical tasks as well. You automate all the cognitive tasks and all the physical tasks, maybe except playing Taylor Swift songs. Does that raise growth rates substantially over the next 25 years? Yeah, I would imagine it would. So that, to me, is the scenario the accelerationists, the most optimistic people about AI, see happening. And it's not clear to me where it breaks; it definitely could break, but this is not an insane story given the kind of model I wrote down before. This could happen. Let me give you the pessimistic take as well, though. The pessimistic take comes from lessons from economic history. First, one of the key lessons, from Paul David here at Stanford and from Erik Brynjolfsson, also here at Stanford now, is that everything takes longer than you expect. You get the steam engine, then electricity, then computers everywhere, but in the productivity statistics those changes take decades, maybe half a century, to show up. It's probably true that AI is going to take a while too; there are all these complementary innovations that have to occur before you can incorporate AI into every business in America. That probably doesn't take two years; it's probably a couple of decades. That's one point. The second point is that straight-line graph I showed you, the Moore's law of economics: GDP per person, a straight line on a log scale. Think about all the technologies that were developed and diffusing during that 150-year period. Right at the start, electricity was barely there; it diffuses throughout the economy, radically changing the economy.
Internal combustion engines, transistors, semiconductors, information technology, the internet, smartphones: all these big, major innovations occurred and diffused throughout the economy, and yet, a straight line at 2%, right? So one view is that within any general-purpose technology, ideas get harder to find; the steam engine runs out of steam, and to maintain the 2% growth you need to invent electricity, and then the transistor, and then the semiconductor. Maybe those great ideas are what kept growth from slowing: the natural thing is for growth to slow down, but we kept coming up with great ideas. In that view, maybe AI is the latest great idea that lets us stay on that 2% trend for another 50 years. That's very much consistent with the history, and it's also a plausible worldview. Between these two worldviews I don't know which is right, but it seems to me those are views that intelligent people can hold, and reality is going to be somewhere in between.
- You gave a paper here, I think it was in year two of, what are we going to call it, before AI, where you said we're running out of ideas and growth is going to be terrible. Yeah, absolutely. And there was a discussion: well, maybe some new idea is going to come along. Is this the one? Some fundamental, you know, you were saying the steam engine was really running out of ways to be made better. Yep. And we said, well, maybe diesel engines are coming
- Along. Yeah, yeah. That's right: our ideas-are-getting-harder-to-find papers are exactly what I have in mind when I'm talking about this view of the world. It took new, big general-purpose technologies to maintain the 2% growth. And even those, I will say, even those get harder to find, the way we defined it in that paper.
- The history of technology has a lot of examples where the original use the inventor had in mind was something else. The steam engine, the steam motor, was for taking water out of coal mines. Yep. Marconi was trying to compete with the telegraph. Yeah, yeah. In any event, it would suggest that having a flexible, dynamic economy where entrepreneurs can keep doing this, funds can flow in, there's some creative destruction, all that sort of stuff.
- Absolutely. Again, Aghion and Howitt got the Nobel Prizes for good
- Reasons. I totally agree with that. That's kind of my view of how we've maintained 2% growth, but it's not in the stylized model, but
- Yep. No,
- That's
- Critically important in all this. I think it's definitely underlying the graphs that I'm showing you. Okay, let me jump ahead to catastrophic risks. In particular, in Silicon Valley you hear about p(doom) and other things that seem kind of out there, and I wanted to ask what economic analysis can say about these risks. After all, it's interesting that Sam Altman, Dario Amodei, Elon Musk, Demis Hassabis, Geoff Hinton, Nobel Prize winners, the leading people at these labs, when they started working on this, all said this technology could be more important than electricity or the internet, but it may be more dangerous than nuclear weapons. They take these risks very seriously. And if the world's experts, before they had lots of money on the line, were taking it seriously, maybe we should take it seriously too. If you read what those people, and other people like them, have written, there are roughly two views of the kind of risks that could be out there. One is bad actors, and this one I think is an obvious, real risk. Imagine a terrorist with access to ChatGPT 8, this oracle that has access to all human knowledge and can make combinations of human knowledge. You ask it to design a virus that's more lethal than Ebola and takes three months to display symptoms; if that's possible, some smart model could probably figure out how to do it. The model doesn't have any agency, but a bad actor could use it to do a lot of harm. Nuclear weapons were manageable because only two parties had red buttons in front of them. If 8 billion people have access to red buttons, can we ensure one never gets pushed? That seems like a stretch. So that's one kind of worry. The other kind of worry is the alien intelligence. Imagine we find out this afternoon there's a spaceship on its way to Earth, passing by Pluto. How do we feel? Pretty excited. And then maybe 10% of us say, well, wait a minute. There are probably 10 versions
- On the internet right now.
- Yeah. You know, maybe it doesn't end well for us; there's some chance it doesn't. And this computer scientist at Berkeley, who's the co-author of one of the leading machine learning textbooks, I was on a panel with him and he had this quote that I liked a lot: how do we have power over entities more powerful than us, forever? Is this something we might want to think about? I think probably yes. So, how am I doing on time? I only have one more minute, so I'm going to skip some of these findings and ask the following question. I think these risks are things we should take seriously; we ought to be spending time and energy and dollars to mitigate them. However large or small you think they are, how much, arguably, should we be spending? For a while I thought this was a very speculative question that would be very hard to answer, and then I realized that we faced exactly this question quite recently, in the COVID-19 pandemic. The COVID pandemic led each of us to face a mortality risk of about 0.3%. When faced with that risk, we spent 4% of GDP, by not going out, to mitigate it. I suspect the p(doom), the AI risk, is at least 0.3%, in which case maybe we should be spending 4% of GDP. Now, you might say that 4% maybe wasn't optimal, but if you look at the value of life, if anything it was too low. The value of a statistical life that the EPA or the Department of Transportation posts on its webpage is something like $10 million per person. To avoid a mortality risk of 1%, you'd pay 1% of $10 million, or $100,000; that's more than a year's per capita GDP. So if you thought the existential risk would be realized over 20 years and we could totally avoid it, you'd be willing to pay five percent of GDP a year. Now, we can't totally avoid it, and that's why there's a paper behind this. But should we be investing a third of 1% of GDP? In all the ways I do the calculation, the answer to that question is yes. And we're probably investing $2 billion instead of $100 billion, which is a 50x underinvestment. So I think we should probably do it.
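The arithmetic in that back-of-the-envelope argument, written out. The value-of-life, risk, and spending figures are the ones quoted above; the GDP-per-person and total-GDP numbers are rough assumptions of mine.

```python
# Back-of-envelope on AI-risk spending. VSL and risk numbers come from the talk;
# GDP per person (~$80k) and total US GDP (~$30 trillion) are rough assumptions.
VSL = 10_000_000                      # EPA/DOT value of a statistical life, dollars
wtp_per_person = 0.01 * VSL           # willingness to pay to avoid a 1% mortality risk
print(wtp_per_person)                 # $100,000: more than a year of per capita GDP

gdp_per_person = 80_000
print(wtp_per_person / (20 * gdp_per_person))   # ~0.06: roughly 5-6% of GDP per year,
                                                # if the risk played out over 20 years
                                                # and could be fully eliminated

us_gdp = 30_000_000_000_000
print((0.01 / 3) * us_gdp)            # a third of 1% of GDP: about $100 billion a year
print(100 / 2)                        # vs ~$2 billion actually spent: ~50x underinvestment
```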
- What can you do about it?
- Ah, there I think there are a lot of things we could do about it. Think about DeepMind and AlphaFold and protein folding: do the narrow AI that's likely to save lives and cure cancer and cure heart disease, which is much less likely to kill everyone. Or slow down a little bit and work on the safety technology. I think there are things we can do; there are people working on it, and we can give them a little more time. So my big point, and maybe I'll just end with this: how much did the internet change the world between 1990 and 2020? If you look at that straight-line graph, you might say you don't see it, but I think the answer is that it changed the world profoundly; you just don't see the counterfactual of what would have happened if we hadn't invented the internet. Now ask whether AI is going to change things by more or less over the next 30 years. Probably a lot more. So I like to calibrate it as how many internets AI is, and it's probably going to be a decent number. So I'll end there. Sorry for going a bit over.