What Markets Tell Us About AI Timelines (with Basil Halperin)
Why this matters
This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.
Summary
This conversation examines core safety questions through the lens of What Markets Tell Us About AI Timelines (with Basil Halperin), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 91 full-transcript segments: median 0 · mean 1 · spread −22 to 17 (p10–p90: −7 to 9) · 2% risk-forward, 98% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: high.
- Emphasizes alignment
- Emphasizes policy
- Full transcript scored in 91 sequential slices (median slice 0).
Editor note
Use this to calibrate planning horizons before making strategy or policy commitments.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video AuLhkCWIukc · stored Apr 2, 2026 · 2,475 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/what-markets-tell-us-about-ai-timelines-with-basil-halperin.json when you have a listen-based summary.
Show full transcript
So within macro I think the big question is: will AI lead to a speed-up in economic growth, or will it get bottlenecked by certain sectors or areas? The effect of aligned AI and unaligned AI goes in the same direction on interest rates, unlike equities or other asset prices. It's hard to get away from the idea that there will be skyrocketing inequality in a truly transformed AI scenario. But skyrocketing inequality might still be consistent with everyone being better off. Coordination with other people is not something that AIs as they exist today, as helpful, harmless chatbots, can really help with. It's very plausible that benchmarks are these narrowly defined tasks that don't really capture the breadth of what a worker does every day. Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Basil Halperin. Basil, welcome to the podcast. >> Thanks, Gus, for inviting me on. Excited to be here. >> All right. Could you give a little background on yourself to start with? >> Yeah, so I just joined the University of Virginia as an assistant professor in the economics department after finishing a postdoc at Stanford. My background is that I did a PhD at MIT. In past lives, I worked as a data scientist at Uber, and, maybe relevant to today's conversation, my first job out of college was at a quant hedge fund researching and trading inflation-linked bonds. >> Interesting. Perfect. So the theme for today's conversation is the intersection of economics and AI, and specifically what we can learn about AI risk and AI timelines from economic indicators. You have this fantastic essay on AI timelines and the efficient market hypothesis, on what we can learn from interest rates when we're trying to predict when we might get advanced AI. I'll link that paper in the show notes, but could you sketch out the basic idea here, or the basic conundrum? >> Yeah.
So this paper, which is joint with Trevor Chow and Zach Mazlish: the argument in one sentence is that if markets were expecting transformative AI to be coming in the next, say, 30 years, either aligned AI that was going to rapidly accelerate economic growth, or unaligned AI that was going to lead to existential risk of the kinds that I know your listeners are very familiar with, either of those possibilities would result in high long-term real interest rates. And looking at markets, looking at real interest rates today, we don't see particularly high real interest rates. So let me break that down, maybe starting with: what are real interest rates? >> Yeah, that would be useful, I think. >> Yeah. So when you open up the Wall Street Journal or the Financial Times, interest rates are not usually, if things are going well, the sort of thing on the front page, right? You see stock prices. Interest rates, though, are this very important price in the economy. They affect a lot of other prices. If you do see interest rates in the newspaper, you'll see a nominal interest rate. When the US government borrows money, it typically issues these nominal loans that pay back in dollar terms. So if it issues a 10-year bond at, I think, something like 1.6% last I checked, that means that in 10 years it has to pay back the amount of the loan plus 1.6% interest on that loan in dollar terms. Real interest rates are different from those nominal interest rates in that they adjust for inflation that occurred over that period. So they're sort of the "real" thing, so to speak. >> So why look at real interest rates to think about transformative AI? Well, the way economists think about real interest rates is that they clear the market in the supply and demand for saving and borrowing. So if I want to borrow, then I need to take out a loan.
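The nominal-versus-real distinction above can be sketched numerically with the Fisher relation. This is an illustrative calculation, not from the episode: the 1.6% nominal yield is the figure Halperin mentions, while the 2.5% inflation assumption is invented for the example.

```python
# Minimal sketch of the nominal-vs-real distinction, assuming a 1.6% nominal
# yield (mentioned in the episode) and a hypothetical 2.5% annual inflation.
def real_rate(nominal: float, inflation: float) -> float:
    """Exact Fisher relation: (1 + nominal) = (1 + real) * (1 + inflation)."""
    return (1 + nominal) / (1 + inflation) - 1

r = real_rate(0.016, 0.025)
print(f"{r:.4%}")  # slightly negative: the lender loses purchasing power
```

Under these assumed numbers the lender is repaid more dollars but slightly less purchasing power, which is exactly the sense in which a nominal rate can hide a negative real rate.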
The real cost to me of that loan is the real interest rate. >> Mhm. >> Meanwhile, what does the lender get in return for lending to me? They get the real interest rate. So if I really want to borrow, if everyone in the economy really wants to borrow, that would push up interest rates in order for the markets to clear. How does that come back to AI? Well, if we all expect to be super rich next year, if I'm going to earn a lot more money next year than I am this year, there's much less reason for me to save today. That lower supply of savings, speaking a bit loosely, pushes up interest rates. >> Mhm. >> Similarly, if we all expect to be dead next year, there's no reason to save today, and that would push up interest rates. That's the argument at an abstract level. We can get into more concrete things, like, do we see people on AI Twitter talking about taking out large long-term loans, talking about not investing in their 401(k)? We can get into that, but that's the high-level abstract argument. >> Yeah, and this rests on the market for loans, the market that sets interest rates, being efficient in general. So maybe we could explain the efficient market hypothesis, and what it is we are claiming if we say that the market is wrong. What does that actually mean? >> Yeah. So the efficient market hypothesis, which is sort of what we're leaning on or hinting at in our argument, is this idea that financial markets reflect all available information. >> Mhm. >> So the price of General Motors stock reflects all available information in the world, so to speak, about future profits. That's because stocks reflect expectations about future profits, because those are paid out as dividends to shareholders of equities. Similarly with interest rates: I just explained how interest rates reflect the supply and demand for savings.
Therefore, we would hope that if markets are informationally efficient, they will correctly reflect market participants' beliefs about future consumption-savings decisions, about future economic growth. The idea behind markets being efficient is essentially supply and demand, essentially just no-arbitrage: if you knew with certainty that General Motors was going to have really high profits next year and that was not reflected in stock prices, you'd immediately want to go out, buy a bunch of GM stock, hold on to it, and earn the dividends next year. That's harder if you're unsure. That's harder if you don't have the capital to invest in GM, enough to move prices up to the correct level. But that's the basic idea underlying this. Markets are good information aggregators, particularly forward-looking financial markets. >> Yeah. The basic idea here is something like: if you think you have a piece of information that the market has not incorporated, you then have an incentive to use that information to try to earn money. And because that incentive is quite powerful, and because there are so many people looking to earn money and to price assets correctly, assets tend to reflect all of the information, also because people now have an incentive to seek out new information. So, just a quick point here: why is it that when my uncle hears about ChatGPT and invests in Apple or Nvidia or some kind of tech stock and he beats the market, why isn't that a refutation of the efficient market hypothesis? >> Yeah. So I think even if you're someone who thinks, "I have information about one asset to buy to beat the market," you're still a believer in sort of market efficiency for 99.9% of assets 99.9% of the time.
>> So even if you have some insight, it's a good benchmark to trust markets to get things approximately right, as the outside-view perspective. There are a number of reasons why someone might be able to beat the market if you just look at historical data. Number one, there might be selection bias. No one talks about all the money they lost on bad cryptocurrencies in the late 2010s. They only talk about all the money they made in Bitcoin. Another reason is that you can earn excess returns in financial markets by taking excess risk. So, for example, investing in bonds has historically had a very low return, whereas stocks have a higher average return. Does that mean there's a market failure here, a sort of market inefficiency, in that you can always just invest in stocks and get, historically, something like a 7% annual real return, versus roughly a 2% historical annual real return on bonds? No, that's not a missing arbitrage. It's that stocks are riskier. If you invest in stocks, sometimes stocks go down a lot, and you would like to avoid those big drawdowns ex ante, before investing. Investors need compensation in order to bear that risk. Maybe, for example, you're most likely to be unemployed right in the middle of a recession, and those are also periods when stock markets are down. This correlation between when you want money the most, when your marginal utility of consumption is highest, in econ-speak, and stock market returns means that stocks are risky, and is one way that people can "beat" the market. >> Mhm. >> There also are genuinely people who have information that hasn't been incorporated into financial markets yet, and they are getting compensated for bringing that information to the market. So sometimes I think it might be better to talk about market efficiency in terms of: is the market for information perfectly competitive?
Are people getting compensated for going out and doing the costly work of acquiring information, processing information, and incorporating that information into prices? >> Yeah. I guess one question here is whether markets are actually good at pricing in, so to speak, the possibility of either extreme growth from AI or existential risk from AI. These could be very low-probability events; these events could be far out. What do we know about how markets price in such information or such possibilities? >> Yeah. So I think there are two lenses I would think about this from. One is that a lot of important asset prices incorporate, or require incorporating, expectations about things very far in the future. The average duration of the stock market, I don't have the number off the top of my head unfortunately, but it's certainly greater than 10 years, maybe even more than 20. That is to say that the average cash flow of stocks, roughly speaking, is at least 10 or 20 years out in the future. So market participants have to be doing far-future forecasting, or maybe not far-future by the standards of the FLI podcast, but far-future by the standards of contemporary media discourse. That's one thing. A second thing to say is that, yes, there is a lot of evidence that markets are worse when it comes to things further in the future, and that's because the no-arbitrage condition that I emphasized as important for financial market efficiency is harder to enforce for things that take a long time to pay off. If you've paid attention to prediction markets around elections, this is something you'll have seen, where four years before a US presidential election there'll be a lot of crazy odds on people who are never going to win the election.
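The claim that the stock market's cash-flow duration exceeds 10–20 years can be made concrete with a Macaulay-duration calculation. The dividend stream and discount rate below are assumed round numbers chosen purely for illustration, not estimates from the episode.

```python
def macaulay_duration(cashflows, rate):
    """Value-weighted average time (in years) until cash flows arrive.
    cashflows[0] is paid at t=1, cashflows[1] at t=2, and so on."""
    pv = [cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1)]
    return sum(t * v for t, v in enumerate(pv, start=1)) / sum(pv)

# Assumed: a dividend stream growing 2%/year for 100 years, discounted at 5%.
dividends = [1.02 ** t for t in range(1, 101)]
print(round(macaulay_duration(dividends, 0.05), 1))  # roughly 29 years
```

Even with these modest assumed numbers, the value-weighted average cash flow sits decades out, which is why equity pricing forces market participants into long-horizon forecasting.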
>> And the reason those odds can persist is because you would have to hold on to a short position against some crazy person who's never going to win the presidency for four years, and there's a high opportunity cost of holding on to that trade, because you could be doing other things with your money over that time, >> or because there are random fluctuations in the market over that time, and at some point you could get blown out. In general, "limits to arbitrage" is the technical term for things that can prevent arbitrage from correcting market mispricings. There's a good amount of theoretical and empirical evidence that this is more severe with arbitrages that take a longer time to pay off. >> I mean, you mentioned your background in finance, right? I know some people who have profited tremendously from the COVID pandemic, from Nvidia, from predicting the Trump tariffs, and so on. And you probably know many more of these people than I do. So it seems that there are pockets of people with special knowledge, people who are extra super smart, say, that know something that others don't. Could it be the case that there are insiders that know something about AI progress that others don't, or that the broader market is not incorporating? >> It absolutely could be the case. I am still wary of extrapolating too much from anecdotes, because again, no one hears about the anecdotes where things went badly. My first investment, funnily enough: when I was like 14, for my bar mitzvah, my dad gave me a few hundred in play money to invest in the stock market so I could learn how to invest. Being 14 years old, I had no idea what to invest in. So I just went to him and asked what I should do with the money. He said, "Well, I read this article in the newspaper that said TSMC is a good investment." This was 2008 or 2007. >> Oh. >> And so I put the few hundred in TSMC.
And it went up a little bit at first, and then the 2008 financial crisis happened. The stock market plunged. I held on to it for a few months expecting a recovery. He kept telling me to hold on. I said, "No, I've got to sell out." >> So I looked at this a few years ago: I sold out the week that the market bottomed. So that's the average investor for you, me at age 15 or whatever. And of course, if I'd held on to TSMC, I would probably be doing a lot better today. So I think that really is an important point, that anecdotes are hard to extrapolate from. >> That said, a second point is that, again, I think the right way to think about what we should expect in financial markets is that you should be compensated for doing real work. So if you spend 24 hours a day reading AI Twitter, spending time on LessWrong, reading papers on arXiv, or, maybe most plausibly, really doing the hard work of trying to understand what's going on in the AI industry, then you do deserve compensation for that, and we should expect that you will achieve some alpha reflecting the opportunity cost of your time. >> Mhm. >> That's one important point. And the third point is that there can be instances where someone has to go and collect the information to trade on and have it reflected in financial markets. So there can be instances where you happen to be someone who collects some alpha, some excess returns, because you had the information first. That is possible. >> Yeah. Do you have a good sense of whether ideas about either explosive growth or existential risk from AI have spread into mainstream Wall Street institutions? Is it the case that the highly informed people there, with all of the compute and all of the data and so on, have heard the arguments and rejected them, or have they perhaps not heard the arguments?
>> So the filtering process, the process of information diffusion, is happening pretty quickly but is still ongoing, is my read. I haven't seen a definitive survey. Some things I can say: we wrote the initial version of this essay, published in January 2023, so about six weeks after ChatGPT was released. Since that time, Nvidia has gone up something like 1,000%, Microsoft has doubled or tripled, and, kind of interestingly, real interest rates, at the 30-year horizon at least, have gone up a percentage point. And maybe I should say, when I first started thinking about this issue and debating with very bullish friends in San Francisco whether transformative AI was coming soon, at that point, this was the depths of COVID, 2020 or 2021, real interest rates in the US, the UK, and the developed world were negative. So if the US government issued a 30-year bond for $100, after 30 years it would only have to pay off something like $98 or $99 in real terms; it was like a negative 1% interest rate, negative 1.5%. >> Over the succeeding four years, real interest rates rose by two to three percentage points, which is kind of a large move. That's probably not because of AI. It's probably because central banks around the world have been raising interest rates to fight inflation. That's a separate issue; we can get into short-run macro on inflation if we want, I love that stuff, but it's maybe less relevant for your listeners. But it's definitely kind of interesting that interest rates rose over this time and have continued to rise further since we published this post. So that's some context. >> Yeah. >> An additional reason to bring that up is that I think this market-based perspective was particularly useful a few years ago, prior to ChatGPT, because there was such a smaller fraction of the world thinking about these issues.
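The "$98 or $99" figure above is roughly what a single year at a −1% real rate does to $100; compounded over the bond's full 30-year life, the effect is considerably larger. A quick sketch (the rates are the episode's, the compounding arithmetic is added here):

```python
def real_payoff(principal: float, r: float, years: int) -> float:
    """Real (inflation-adjusted) repayment after compounding at real rate r."""
    return principal * (1 + r) ** years

print(round(real_payoff(100, -0.01, 1), 2))   # one year at -1%: about $99
print(round(real_payoff(100, -0.01, 30), 2))  # 30 years at -1%: about $74
```

In other words, at the negative real yields of 2020–2021, a 30-year lender was locking in a real loss of roughly a quarter of their purchasing power.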
There was so much less information processing by humans trying to work out how long until transformative AI is developed. And so looking at interest rates was particularly useful then. It's still useful today, but even more so then. >> Yeah. >> Anyway, coming back to your question about whether this information is diffusing through markets, through the financial industry. >> Yeah. >> Certainly two or three years ago there were very few people thinking about it. Today there are more. Famously, Leopold Aschenbrenner, and actually a conversation with him, a debate with him, inspired this whole essay, has since launched a hedge fund, Situational Awareness. He had an essay last summer that made a big splash, and just this week the Wall Street Journal reported that his fund, with Carl Shulman, made a ton of money trading on these ideas. Leopold, in his interview with Dwarkesh Patel, directly cited my work with Zach and Trevor on interest rates, saying yes, he expects interest rates to rise eventually as bond market traders wake up. And it's reported he has something like $1.5 billion in his fund. So $1.5 billion in Situational Awareness alone is maybe a niche part of the market; maybe that'll grow, and I guess the reported 47% return is already growing his fund. But then, in the industry more broadly, there are certainly Goldman Sachs or other investment bank reports at bare minimum thinking about how quickly the data center industry is growing, thinking about what the impact of AI could be on long-run growth. The latest numbers I've seen are that if you look across investment banks and consultancies, on average, 10-year growth forecasts are more or less unchanged, >> but there are individual forecasters in the financial industry who are much more bullish and much more aligned with the man on the street, the woman on the street, in San Francisco, so to speak. >> Mhm.
>> I mean, how would we even know? You discussed an increase in interest rates since 2020, but this probably has very little to do with AI. If we saw interest rates increase, how would we know why they were increasing? >> Yeah. So I don't think there's a definitive way, >> but there are some consistency checks we could look at. One thing you can do is build a full model of interest rates, trusting that you're able to forecast the path of interest rates well, that you understand the determinants of interest rates well. I'm skeptical that macro-financial models have that much predictive power in general, hence why I want to look at markets and look for giant changes in interest rates, which is what this transformative AI perspective would predict. >> Mhm. >> That said, you could still look at other prices, other things in the economy, to try to understand whether a change in interest rates is caused by AI expectations. So, for example, the effect of transformative AI on stock prices is plausibly harder to interpret, for various reasons we can get into, but it's very plausible that certain stocks would benefit strongly from expectations of transformative AI, for example Nvidia and TSMC, and indeed we have seen those go up a bunch over the last few years. So that's kind of interesting. I should say that's in the case of aligned transformative AI. Unaligned transformative AI would obviously wipe out not just Taiwan and the United States but the entire world, and send those stock prices to zero at some point. So that only helps us corroborate an increase in interest rates due to aligned transformative AI. >> Yeah. >> You can also look at surveys of financial market participants, of bank analysts, and see if their expectations are changing.
That said, and actually this is a very important point that I think a lot of people misunderstand, financial market prices do not necessarily reflect consensus views, consensus expectations. They're not meant to track the average level of expectations the way a forecasting aggregator like Metaculus more or less tracks the average expectations of participants. Financial market prices reflect the marginal unit of capital, so to speak. They reflect the views of the marginal trader, the marginal trader being the person who is just at the indifference point between buying or selling the asset. So it's their beliefs that matter. If there's someone who has very strong beliefs about AI, they will be the person who's disagreeing with others and trading the asset, put loosely. And there are lots of good reasons to believe that the marginal trader is more informed than the average person, because if you have particularly strong beliefs, then it's plausible that you have better reasons for those beliefs. The people who have spent 10,000 hours reading the Bio Anchors report or whatever are maybe more willing to make bets in a certain direction, >> and maybe they also have access to more capital. I think, unfortunately, there are many amateur traders that have strong beliefs and not a lot to back up those beliefs, but maybe those people don't have a lot of capital, and so they can't really move the market much. >> Totally. Totally. So now I've lost track of where we were on the question... oh yes, I was explaining that we could look at surveys of financial market participants to see if their beliefs have changed. But that said, surveys are not definitive, because the average belief does not determine the market price; the belief of the marginal trader does. >> Yeah. Yeah.
Maybe you can talk a bit about why it's not straightforward to interpret stock prices, equity prices. Why is it that stock prices aren't a perfect indicator of AI timelines? >> Yeah. So there are a couple of reasons, and I'll say that stock prices are not necessarily uninformative; it's just that you might need to make additional assumptions to interpret them. >> Yeah. >> One thing we already discussed is the unaligned-versus-aligned distinction, where aligned advanced AI would plausibly raise companies' profits a lot and push up stock prices, whereas unaligned AI would push profits down by, you know, exterminating humanity. That's one issue. The second issue is that you can only invest in publicly traded companies. For example, OpenAI is not publicly traded. Of course, Microsoft has a 49% share, I believe. So if you want to look at stock prices to interpret the effects of AI, maybe it'd show up in Microsoft, but for other companies, maybe this is less the case. A related issue is that it's not obvious that advanced AI would indeed, in the aligned case, lead to higher profits. OpenAI, at least historically, has had this hundred-billion-dollar profit cap, promising that any profits above $100 billion would be rebated to humanity, or something like that. >> And so advanced AI might not even lead to a higher valuation of OpenAI, although that seems to be in the process of being changed. There has also been talk of a "windfall clause": just like OpenAI's profit cap, if AI really leads to some massive windfall, companies could commit to give that windfall to humanity. Or, coming back to Leopold Aschenbrenner's work perhaps, there has been talk of nationalization of companies, and then these companies wouldn't earn profits.
The final reason why stocks are potentially hard to interpret, and the most economically interesting one, is that higher expected growth rates for the economy do not necessarily lead to higher stock prices. The reason for this is kind of subtle. Stock prices reflect the present discounted value of future dividends, the present discounted value of future profits. That's the most successful framework for thinking about equity prices. >> And when you say "discounted," what do you mean? >> Yeah. So if you have a company that exists today and tomorrow, the stock price is going to reflect the value of any dividends it pays out to you today and the value of any dividends it pays out to you tomorrow. But not just the sum of those two: you discount the value of the profits it pays out to you tomorrow by the interest rate that you could earn in the meantime by putting your money in the bank. >> Yeah. >> So, exactly to your point, stocks reflecting the present discounted value of future profits means that although transformative AI could push up future profits, as the whole thesis of our blog post and paper argues, it will also raise interest rates. And so it depends on whether future profits go up by more than future interest rates go up. That in turn depends on this very important parameter in economics, the elasticity of intertemporal substitution, which reflects how people trade off consumption today versus consumption tomorrow. I can go into more of that. It depends on whether this elasticity is above or below one, and the literature is not settled on that. Famously, or sort of famously, macroeconomists think it's below one; financial economists think it's above one. The estimates in our paper suggest below one, but it's a hard parameter to estimate. >> Mhm.
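The discounting logic described above can be sketched directly. The dividend streams and rates below are invented for illustration; the point is only that a higher discount rate pulls today's price down even when profits are unchanged, so the net effect of transformative AI on prices depends on which force wins.

```python
def pdv(dividends, r):
    """Present discounted value: sum of d_t / (1 + r)^t for t = 1..T."""
    return sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))

flat = [5.0] * 20  # assumed: $5/year of dividends for 20 years

# Same profits, higher interest rate -> lower price today.
assert pdv(flat, 0.05) < pdv(flat, 0.02)

# Transformative AI plausibly raises BOTH future dividends and rates; the
# sign of the price change depends on which rises more (the EIS question).
boom = [5.0 * 1.3 ** t for t in range(1, 21)]  # assumed 30% dividend growth
print(round(pdv(flat, 0.02), 1), round(pdv(boom, 0.15), 1))
```

With these particular assumed numbers the profit boom wins, but flipping the assumptions (a smaller dividend boost, a larger rate rise) flips the comparison, which is the ambiguity the episode attributes to the elasticity of intertemporal substitution.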
This is actually maybe an interesting objection to your thesis. The argument, or the objection, goes something like this: if we have advanced AI, we might expect to have products and services of much higher quality in, say, five years than we have today. And so you would expect people to save money, thereby driving interest rates lower, because, say, you can buy an amazing virtual reality headset in five years, or medicine that can extend your lifespan in five years, and so on. So this might drive saving, and thereby lower interest rates. Is this incorporated into your argument, or how would you think about this? >> Yeah. So basically, I think this is one of the two or three best arguments against the whole thesis, but my best guess is still that it's not powerful enough to outweigh all the other factors. >> Yeah. >> Do you mind restating perhaps the best version of the argument that I just tried to give? >> Yeah, totally. And I think this will give a good opportunity to get more technical on the argument that we're making. The precise reason why higher future growth traditionally leads to higher interest rates today is that if we're going to be rich in the future, the marginal utility of consumption in the future is lower. "Marginal utility of consumption" meaning that a dollar in the future is worth less to me than a dollar today, because I have diminishing marginal utility, where diminishing marginal utility means that going from earning an income of $100 to $1,000 is a bigger gain than going from a million to a million and $900, >> because if I only have $100 a month, going to $1,000 a month means that I can get more basic necessities. If I go from a million a month to a million and $900 a month, that doesn't do that much for me. >> Yeah.
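The $100-to-$1,000 versus $1,000,000-to-$1,000,900 comparison above can be checked with a standard stand-in for diminishing marginal utility. Log utility is an assumption for the sketch, not a functional form the episode commits to; both jumps are the same $900 gain.

```python
import math

# Log utility is a common textbook stand-in for diminishing marginal utility
# (an assumption for this sketch, not a claim from the episode).
def utility_gain(income_before: float, income_after: float) -> float:
    return math.log(income_after) - math.log(income_before)

poor_gain = utility_gain(100, 1_000)             # +$900 on a $100 budget
rich_gain = utility_gain(1_000_000, 1_000_900)   # the same +$900 when rich
print(poor_gain > rich_gain)  # True: an identical dollar gain matters less when rich
```

This is exactly the mechanism behind the interest-rate argument: if future consumption is high, the future marginal value of a saved dollar is low, so the incentive to save falls.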
>> So if money is less valuable in the richer future because of diminishing marginal utility, there's, again, less reason to save for the future; I'd rather have that dollar today. So this counterargument about new goods in the future: I think of Phil Trammell as having prominently argued for this, and it's a really good point. It's a really understudied point in economics in general. He has ongoing work with Chad Jones, I think, to flesh out these thoughts, and I think it's a really interesting conceptual idea. The argument is exactly as you said, Gus: if we're going to have these amazing goods in the future that don't exist today, then potentially a dollar in the future is still worth more than a dollar today, even if I'm going to be rich in the future, because there's not that much I can do with a dollar today. The example that Phil gives is that if you're a member of Genghis Khan's Golden Horde and you have an extra dollar, what are you going to do, buy another horse? Versus today, if you have an extra dollar, there's all this cool stuff you can buy. So there isn't this diminishing marginal utility, because of these new goods. >> And that seems very plausible. The reason why I don't think it overturns the argument comes down to two things. One is that if you look historically at economic growth, and this is what we do in the paper, you do just see this strong positive relationship between higher growth and higher interest rates. We have some nice data, which I think is a contribution to the normal macroeconomic literature apart from AI, on this relationship between r and g, real interest rates and growth, showing that historically, across 60 different countries and a number of decades, higher growth and higher interest rates really are pretty correlated.
So that's one thing: historically, the invention of new goods has not outweighed the traditional diminishing-marginal-utility mechanism. The second is that it's plausible that AI will be different, that AI will lead to all these new goods that all your listeners are very familiar with: life extension, etc., amazing things I'd love to have access to. That's all true, and it does give a motivation to save for the future, depressing interest rates. At the same time, we'll still be super rich, because we'll have transformative AI leading to rapid economic growth, and so I will be rich enough to, hopefully, afford life extension and so on. >> Even if you're not saving, you mean, you will still be rich enough to basically afford all of the goods that are available in this potentially amazing future. >> Yeah, that's what I think is really most plausible. But all that said, I think this is one of the two or three best arguments against the whole thesis; overall, though, I'm not convinced. >> Yeah. Does AI change anything here? You mentioned that this is not a phenomenon we've seen in the past, but maybe AI is different in that you would expect these great products and services sooner, just because the rate of innovation could be higher. So maybe if you don't have to wait 30 years, only 3 years, to enjoy better goods and services, you are more inclined to actually save the money, or it is more rational, perhaps, for you to save the money. >> So what I would say is that it would be great if there was more historical work looking at how much of growth came from new varieties of goods versus more of the same, more horses for the Golden Horde. Someone should just do that decomposition, and to my knowledge there's no definitive work on that, or really much work on that at all. >> Yeah.
Then you could think about how the future could be different through the channels you describe: maybe AI in particular is biased towards new varieties rather than more of the same, and again, I think that's extremely plausible. Is it enough to overcome the fact that in this transformative AI world we're considering, we're having 30% overall income growth? I don't know. 30% annual growth is a lot. >> Yeah, how much is that actually? Maybe you could put that in perspective for our listeners, because the difference between 3% and 30% maybe doesn't sound incredible, but it really is. So maybe you could say something about how extreme a 30% yearly growth rate in the economy might be. >> Yeah, great question. So to expand on that, the transformative AI aligned scenario we're considering is 30% growth. The reason for that is that's roughly a 10x increase in GDP growth compared to what we see today, which is about 3%, as you say. That number comes from the existing literature, Tom Davidson's work; I think you've had Tom on the show. >> Yeah. >> Looking historically, prior to the industrial revolution, something like 0.3% GDP growth was what we saw. So there was an order of magnitude increase in GDP growth from before to after the industrial revolution, and maybe similarly around the agricultural revolution. And so maybe, in the AI world, where we love talking about orders of magnitude, there'll be another order of magnitude increase in growth around transformative AI. So 30% growth, that's a lot more than the 3% we see on average today, and even a lot more than the really fast growth episodes you might think of in history. China had this astounding sustained growth episode from the reform and opening up period to around 2010. Things have slowed down a bit since then, still remarkably fast, but their remarkable growth rate was three sustained decades of 10% annual growth.
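To make these growth rates concrete, here is a back-of-envelope doubling-time sketch. This is my own illustration (the exact version of the "rule of 70"), not a calculation from the episode:

```python
import math

# Doubling time for a constant annual growth rate g is ln(2) / ln(1 + g) years.
def doubling_time(g):
    """Years for income to double at constant annual growth rate g."""
    return math.log(2) / math.log(1 + g)

print(round(doubling_time(0.02), 1))  # 35.0 -> ~once a generation at 2% growth
print(round(doubling_time(0.30), 1))  # 2.6  -> the transformative-AI scenario
print(round(doubling_time(0.42), 1))  # 2.0  -> a Moore's-law-style annual pace
```

The gap between 3% and 30% is the gap between incomes doubling once per generation and incomes doubling every two to three years.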
>> 10% versus 30%, still a large gap. >> To put 30% in perspective, you can use Moore's law as one benchmark, which we think of as this astoundingly fast thing: computing power doubling every year or two historically. So 30% isn't quite as fast as Moore's law, but it's nearly there; Moore's law is something like 40-44% annual growth historically. So 30% would mean the economy as a whole is growing nearly as fast as the incredible progress we've seen in computing systems over the last 60 years. >> Yeah. So life would change rapidly and tremendously under 30% growth rates. >> Yes. So if we have 2 or 3% growth historically in the developed world in the post-war era, that's something like a 36-year doubling time for incomes. So once a generation, your income doubles. 30% growth means every two to two and a half years your income is doubling. Just a totally different world. >> Yeah. Totally. It's actually surprising to me, to go back to something you mentioned earlier, that we don't have more research on the question of whether most growth comes from new inventions or from more production of already existing things. That seems like a massively important, interesting, and deep question that we should know more about. >> I totally agree. I can speculate that one reason why it's hard is that it's hard to think about the introduction of new goods, >> because it breaks a lot of things, both economically and philosophically. So, >> like, would you rather live in the year 1500 without vaccines, or today?
It's much harder to make that comparison than to ask whether you would rather live today versus in 1980 with approximately the same set of goods, because you're comparing preferences over non-overlapping sets of goods, and that just breaks a lot of basic microeconomic theory. So again, Phil Trammell and Chad Jones, I think, are doing some very cool work on this. Hopefully they'll inspire others. >> Yeah. I mean, just thinking about this for the first time, it seems to me like new goods and services are introduced all the time. The set of goods and services that's available to me right now, right in front of me, is very different from the one that, say, my dad had access to 30 years ago. So what do we do with the fact that this is already happening, that new goods and services are continually being introduced? In some sense, economists must be thinking about this problem, just because it's a reality. >> So I think if you talk with Phil, he would say that economists are not thinking hard enough about this issue and sort of sweep it under the rug. One way you can sweep it under the rug is by only making local comparisons: local in the sense of comparing you today to you two years ago, because over two years, in the modern era at least, there's not that much change happening; those are pretty comparable. And if you read the footnotes of your favorite econ textbook, you'll see notes that comparing over different decades is a lot slipperier because of new-good introduction and how that affects price indices. >> So this is a known issue; there's just not a great way around it beyond taking these local changes and extrapolating them, at least as far as I'm aware. >> Yeah. Got it. Okay.
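The "local comparisons" workaround just described can be sketched as chain-linking: compare only adjacent, similar years, then multiply the steps together, the way chained indices avoid directly comparing very different consumption bundles. The numbers below are invented for illustration:

```python
# Hypothetical sketch of chaining local comparisons into a long-run index.
# Each entry is a made-up year-over-year real growth rate; each step only
# compares two adjacent years whose goods bundles mostly overlap.
local_growth = [0.03, 0.02, 0.04, 0.03]

index = 1.0
for g in local_growth:
    index *= 1 + g  # chain-link the local comparison onto the running index

print(round(index, 4))  # 1.1254 -> cumulative change over the four steps
```

The long-run comparison (1500 vs. today) is then only ever reached by extrapolating these local steps, which is exactly where the new-goods problem hides.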
So if we go back to the question of economic indicators for AI timelines, we've talked about interest rates. Maybe summarize again for us: why are interest rates the thing to focus on? Why is that a great indicator? Why in particular is that a number that incorporates a lot of information? >> Yeah. So one way of framing this is that Paul Christiano had a blog post over a decade ago on three implications of advanced AI, and the three implications he lists are: number one, growth will speed up; number two, wages will fall; number three, humans won't control or set the future, thinking about alignment. So plausibly, with advanced AI that is superior to humans at all tasks, wages will be driven down to zero. >> Yeah. >> An issue with looking at wages to forecast AI capabilities is that wages will not get driven down to zero until we actually have those capabilities at hand. >> Interest rates, on the other hand, are forward-looking. Financial market prices in general are forward-looking. The US government issues 30-year bonds regularly; those incorporate expectations about future savings decisions over the next 30 years. The UK government issues 50-year bonds. I think Austria has a hundred-year nominal bond; maybe Argentina does too. So these instruments exist looking far forward, and that's useful for forecasting, instead of just reflecting contemporaneous >> Yeah. >> economic conditions. >> Yeah. >> Additionally, interest rates are useful, going back to the discussion about stocks, because the effect of aligned AI and unaligned AI goes in the same direction on interest rates, unlike equities or other asset prices. So within the class of forward-looking economic indicators, interest rates are nice because they go in the same direction for aligned and unaligned AI.
And I, at least, do really want to take seriously these risks from unaligned AI. >> Yeah. >> And then third, as you say, financial market prices in particular, even more so than other prices in the economy, are useful to look at because they update quickly and are liquid, unlike wages, for example, which are sticky and maybe only update every year if you're lucky. So we have lots of empirical evidence that financial markets are good at behaving in a forward-looking way. There are various amusing historical anecdotes you can point to providing some evidence, I won't say demonstrating, but providing some evidence, that financial markets are forward-looking in a useful way. One that I like, and that I think is relatively robust, involves Armen Alchian, who was a great price-theorist economist working at RAND, the defense think tank, in the 1950s. The hydrogen bomb, the "super bomb," had just been tested for the first time, >> and it was not publicly known what sort of material was used to develop it, the way uranium was used to develop the atomic bomb. >> Mhm. >> And so he went and looked at the stock performance of various metal producers and saw that, I'm going to get the element incorrect, something like the lithium producer had outperformed other ore-producing companies in the period around the hydrogen bomb test. And he writes this report, internal to RAND, saying, "This is evidence, I think, that lithium was a key ingredient in the hydrogen bomb." And famously, his superiors, who knew what went into the development of that technology, forced him to burn the draft of the paper. So that's a very cutesy anecdote. >> We shouldn't read too much into cutesy anecdotes versus systematic analyses. But that's one vivid example of why financial markets are good at incorporating forward-looking information. >> Yeah.
It's actually an interesting point, because there was a bunch of secrecy surrounding the development of nuclear weapons, and you might imagine that there would be secrecy surrounding the development of AGI also. Maybe it's even a nationalized project that's completely locked down. How would that affect whether the market would know anything, or incorporate anything about AI progress into the interest rate or the price of public companies? >> Yeah, great question. So there's definitely a world where it's 10 people in a basement working on AGI. >> None of that information leaves the basement. They don't leak to anyone. No one really knows, or everyone keeps their lips shut; no one secretly trades and makes a bunch of money, and no financial market prices move until the AI in a box is unleashed upon the world. >> That's logically consistent. But if AI arrives more gradually, as I think one should expect, compared to the 2010s-era discussion around Nick Bostrom's Superintelligence, your discussion of AI seems much more plausible today than it did 10 or 15 years ago, then there's just a lot of public information. The information will get incorporated, or the information will get leaked. I don't know how much people in these labs are trading on the side. If you're an ML researcher at Anthropic and you're up to that, I'd love to hear from you. There are other cutesy examples from history. Suresh Naidu and co-authors have a sort of hilarious example: they look at about 20 different CIA-orchestrated coups during the Cold War, one of which involves something like the United Fruit Company in Costa Rica, I believe, I could be getting that wrong. It was the monopolist fruit producer there. There was some sort of revolution, forgive my ignorance of Latin American history here, in which the United Fruit Company was pushed out.
The CIA, for reasons of fighting communism, goes and tries to orchestrate a coup. And in this paper, they show that the stock of the United Fruit Company, and in parallel the companies in the other incidents, earned excess returns prior to the coup attempts, >> and they have anecdotal evidence, again sort of incredible, that there were just a lot of leaks from these government initiatives, and that insiders were trading on those expectations. >> Yeah. So this would mean that we would have to expect AI development to be unrealistically contained and secret for it not to affect public markets, especially if we have stories of leaks like that affecting prices. That's actually surprising to me, that you would see a consistent pattern like that. Would it just be insiders seeing an opportunity to make money and then acting on it? >> Yeah, or government regulators eventually get permission to go into these labs to keep track of what they're doing, and it leaks out via that. Any sort of story like that. Obviously, this is all speculation, and again, it's totally logically consistent that the information remains private, whether through social pressure or legal force, etc. And it would be interesting, I haven't done this, to go and look on Polymarket or Kalshi at something like GPT-5 release dates: how much does it look like there are leaks going on with that? That would be some interesting evidence. >> But I think the overriding, more important point, plausibly, is the fact that takeoff is slower than the AI-in-a-box-in-the-basement story. >> Yeah, that's the slower-takeoff part. There's also a question of how we're developing AI. In the era of scaling, you need massive data centers. That is something that gets out almost immediately.
And perhaps if we're moving to a paradigm of trying to automate AI research and development, >> that's something that can perhaps be done more internally and more in secret. And yeah, of course we're speculating here, but this is something that could push in the direction of less public information, I think. >> That sounds totally right to me. Big picture, what I would say is that there are definitely scenarios you can tell in which interest rates will not move before some sort of transformative AI. My best guess, with high probability, is that they will go up quite a bit beforehand, and maybe they even already have. And then another thing to say is that if you have AI in the basement, developed in secret, with nothing leaking out, perhaps via the story you tell, and that leads to nanobots terraforming the Sahara Desert with solar panels and then colonizing the stars and so on, but it is not leading to richer humans for one reason or another, or at least not for the vast majority of humans, then that would not show up in interest rates, because there wouldn't be this consumption-smoothing mechanism of people expecting to be rich in the future and lowering savings rates today. >> That said, if Sam Altman is getting super rich in the future, then maybe he has enough money to push around markets and move interest rates even if he's the only one. So the mechanism, or the story here, really is focused on this particular transformative AI scenario of either unaligned AI and human extinction, or aligned AI leading to consumption growth. >> Yeah. How would you settle an argument between you and a person like Daniel Kokotajlo, who's been on this podcast? Just to remind listeners: Daniel has very short timelines; he expects us to get to AGI by 2027, perhaps 2028 now. And this is not a story of developing AGI in a basement.
This is more like developing AGI in a desert, with robots quickly building up the facilities you need, and all of it happening extremely quickly. So if you have very short timelines, how would you argue with such a person, or how would you settle differences and find out who's right, beyond just waiting and seeing? Because if there is information that could be incorporated into interest rates, it seems like the mechanism you would use for predicting what's going to happen is not really active. >> Yeah. So I think there's a bunch of points one can make here. One is that, I'll note, personally, I find the market-based perspective most useful for being less worried about short-timelines worlds, >> because, as discussed earlier, you might expect markets to be more efficient at these shorter horizons. >> So this perspective gives me tangibly, substantially more confidence that AI 2027 is less likely. >> Yeah. >> That's one point. Another point is that perhaps markets are wrong. Perhaps AI 2027 is correct. And indeed, perhaps this is a good time for me to say that I personally am substantially more bullish on the prospects for transformative AI than markets are. In fact, I've reallocated my most valuable asset, possibly my most valuable asset, at least for now, my human capital, away from traditional macroeconomic issues that I studied for a decade. I was obsessed with monetary policy for almost a decade before trying GPT-3 for the first time in summer 2020, freaking out a little bit, and now spending at least half of my time working on the economics of AI. So to some extent, I won't fight the AI 2027 argument fully. >> Yeah. >> That's the second point. A third point is, again, that this market-based perspective can only speak to this particular scenario of advanced AI leading to rapid economic growth or human extinction.
It can't speak to a scenario where this is all happening in the desert, not affecting the broad-based economy, where there's sort of a separate AI robot economy. >> Yeah. >> This market perspective won't speak to that. And then finally, we can debate inside views about whether AI 2027 is likely or not, >> setting aside the market-based perspective. I think that would be the most productive thing to do, which is maybe another point to make here: inside views on AI timelines, like Ajeya Cotra's biological anchors report, like AI 2027, like the work done by many others, are an extremely useful complement to this market-based view, which does not, for example, take a stance on the compute-centric worldview. The vast majority of recent AI forecasting work is leaning pretty heavily on, if not entirely relying on, the idea that scaling up compute is sort of all you need, at least eventually, to get transformative AI. The market-based perspective just asks: will we see rapid economic growth or human extinction? That's all you need. >> Yeah, makes sense. You mentioned that perhaps a lot of the profits from AI development will flow to someone like Sam Altman, or, say, a broader group of investors and leading AI figures. What does it mean if wealth is extremely concentrated? Specifically, the question is something like: they might save and invest differently from the average person. So if wealth is much more concentrated, what does that mean for interest rates? >> Yeah. So what I can say is that the idea that AI could lead to rapid economic inequality, or even the immiseration of large swaths of humanity while making others really rich... >> Mhm. >> There are elements of that which I would put in this list I mentioned before of the two or three best critiques of this whole framework.
>> So it could be up there, where if you talk to people in San Francisco about their savings behavior, I think you get one of two responses, but two very polarized responses. One is: yeah, I'm not saving for my kids' college education, which is a real-life example from your former guest and my colleague at UVA, Anton Korinek. He was interviewed by NPR, and he and his wife talked about putting >> less into their kids' college savings accounts. So that's one possibility, consistent with our story. The other possibility is people talking about wanting to save a lot because of uncertainty about the future. >> Yeah. >> There's more that could be said there, but one important point is that that would indeed push down interest rates, and it is a potential counterargument. The way I think about that is... >> Wait, what exactly would push down interest rates in this scenario? >> So, if I'm really worried about not having a job in five years because of AI, >> yeah, >> even if there's 30% GDP growth, if that's all captured by Sam Altman and co. and I'm unemployed, then I don't want to draw down my savings today, so that I still have those savings in 5 years when I'm unemployed. >> Yeah. >> And that higher savings rate will push down interest rates, so to speak. >> Got it. >> Again, interest rates clearing the market for the supply and demand of savings. >> Yeah. >> So I guess, number one, historically, again, we don't see that higher economic growth is associated with the kind of rising income risk that would lead to higher savings and lower interest rates. That's one point; again, AI could be different, and there are good stories and mechanisms to think that maybe it could be. Another thing I think about here is that asset prices in general are sort of a wealth-weighted, risk-tolerance-weighted average of the people in the economy.
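The wealth-weighted-average point can be made concrete with a stylized sketch. All numbers below are hypothetical, chosen only to show how a small, very rich group can dominate what market prices reflect:

```python
# Stylized sketch (invented numbers): market-relevant expectations as a
# wealth-weighted average of individuals' expected consumption growth.
people = [
    {"wealth_share": 0.05, "expected_growth": 0.00},  # the 99%: little wealth
    {"wealth_share": 0.95, "expected_growth": 0.30},  # the top 1%: most wealth
]

# Weight each group's expected consumption growth by its share of wealth.
weighted_growth = sum(p["wealth_share"] * p["expected_growth"] for p in people)
print(round(weighted_growth, 3))  # 0.285 -> dominated by the rich minority
```

Under this logic, even extreme concentration need not mute the interest-rate signal, because the savers who matter most for prices are the ones expecting rapid consumption growth.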
This came up in the discussion earlier about heterogeneous beliefs and the marginal trader. But it's also particularly relevant here: if Sam Altman in the future is capturing all this wealth and he's just saving it all, that's depressing interest rates. >> So say from 2025 to 2030 we have sort of the economy as it exists today; in 2030, 99% of humanity is unemployed, 1% is super rich, and there's 30% consumption growth within that 1% from 2030 to 2035. >> That would mean that from 2030 to 2035, interest rates are reflecting the consumption growth of that 1%. And the 10-year interest rate today reflects roughly the average of the first 5-year period and the second 5-year period. So the 10-year interest rate today would still reflect, at least in half, the rapid consumption growth in the latter half of the period, although the first five years might have depressed interest rates due to precautionary savings, driven by fears of unemployment by me and the masses. >> Yeah. So you think this is plausibly a good counterargument, but how plausible is the scenario itself? Do you think that AI will lead to extreme wealth inequality? >> Yeah. So to a large extent, I think this is a question of political economy, and it depends on what the political response is. Economists over the last decade have spent a lot of time thinking about the China shock in the US, where China entered the World Trade Organization. This led to a lot more trade between the US and China, and arguably this led to losses of manufacturing jobs in the US, >> and the standard econ argument would be that more free trade between China and the US is good for the economy as a whole but might impact some people negatively. Those people can be made better off by, for example, taxing consumers who get cheap goods from China and transferring some of that tax income to those who lose their jobs, or helping them retrain for new work.
There was a limited amount of such policy in the US in response to the China shock. And that might make you pessimistic that in the future there could be similarly limited amounts of redistribution, and that you could see skyrocketing inequality. I think it's hard to get away from the idea that there will be skyrocketing inequality in a truly transformative AI scenario, but skyrocketing inequality might still be consistent with everyone being better off, just because 30% GDP growth, as discussed earlier, is such a massive growth rate. It's hard to have 30% growth in the economy overall and leave the vast majority of people, or even a large chunk of people, worse off. >> Yeah. So something like: perhaps Sam Altman is a multi-trillionaire, but the average person is a millionaire. And so even though we have extreme inequality, what we actually care about is more the welfare of the population at large. >> Yeah. And I think the China experience here is a good example, where, if I'm recalling my numbers correctly, inequality has certainly skyrocketed in China over the last 40 to 45 years, but poverty has been reduced so much, the average person's life has been improved so substantially, that concerns about inequality are less of an issue in China than in the West. >> Yeah. Before we move on to other topics: we've talked about interest rates as a great indicator of what's going to happen with AI. Are there any other indicators that might be interesting to look at? I wrote some of them down that I could think of: maybe capex, capital expenditure by the large corporations; maybe patent filings; perhaps papers published by the AI corporations. What do you think of other indicators?
We've discussed stock prices as somewhat useful, but perhaps not as useful as interest rates. If you were to write another paper on any other indicator, what would you choose? >> So if I had to choose an indicator for forecasting, it would be equities, because stock prices are forward-looking. >> Yeah. >> Capital expenditure is also forward-looking to some extent, but it's less likely to be made on 30-year horizons. Wages, again, are contemporaneous. So I think there's a lot of good work that still could be done with stock prices: understanding why Nvidia's market cap is so high, how much of that is future profits because of high markups versus actual quantities being high, why Nvidia has increased so much more than TSMC has. There's a lot that could be done with stock prices that hasn't been done. I would love for someone to write that paper. >> Yeah. >> More broadly, on economic indicators for interpreting AI, the big ones, I think, are wages, interest rates, the labor share of the economy, so how much of national income is going to workers versus capital owners, and something like unemployment or labor force participation, so how many people can find jobs and want jobs, or how many people are working in the economy as a whole. Those prices and quantities, I think, would really reflect the different scenarios people talk about for the possible implications of AI, and you could also look at them by sector of the economy, in particular to see if AI is affecting the economy in a heterogeneous way: for example, automating all of white-collar work while making blue-collar work relatively more valuable. So you might see skyrocketing employment among factory workers >> Mhm. >> and rapidly declining employment among software engineers. So those four or five big indicators, for the economy as a whole and by sector. >> Yeah.
When you think of the labor share of the economy, how do you define that? Because I'm guessing most individuals are both capital owners and workers: people own some assets, they have retirement funds, they own a house maybe, and they also go to a job. How do you define the labor share versus the capital share of the economy? >> Yeah. This terminology that I and other economists use is, I think, actually really bad: workers versus capitalists is this very Marxian framing, when, exactly as you say, most people have some mix of both, certainly at any point in time, and even more so over the life cycle. >> Yeah. >> I won't belabor that, pun intended. So what is the labor share? The technical definition is total wage income earned by workers divided by GDP. So that reflects, in the world as we know it today at least, how to put this, what share of output is attributable to labor, in a specific sense, under specific assumptions that we think apply pretty well to today's world. >> Yeah. Okay. Interesting. So could you look at papers directly, or patent filings directly, or something like that? I'm just wondering whether there's some intellectual output from the companies that would be worth looking into or measuring. Is it the case that whenever there's a new paper or a new patent filing, that is immediately incorporated into the price of the stock, or how should I think about this? >> Yeah. So I should have said: all your suggestions were also great. Someone should make a dashboard of all these things. On capital expenditures in particular, I'll note that this gets a lot of discussion, but I think it's still underrated that hyperscaler capex, that's Google, Meta, etc., is around 1% of GDP.
That's comparable to the height of the dot-com boom. And it's not clear we're at the height of anything. The numbers I've seen for the railway boom in the UK in the mid-19th century are something like 2% of GDP sustained for 25 years on railway investment, so 1% is getting up there. Of course, I should say, in the railway era that 2% wasn't constant over time; it was higher and lower. >> So there's a lot of capital expenditure. I just moved to Virginia; the ground is basically covered in wires for all the data centers. Obviously that's an exaggeration, but there are a lot of data centers here. >> Yeah. >> You bring up the impact of patents on stock prices. There's very interesting work looking historically at when patents were granted. If you look at the stock price of firms around the day they were granted a patent, you can see a large impact of patents on stock prices, and you can cumulate that, add it up over all companies in a country, or the world, over the course of a year, and get a measure of innovation's value as captured by company profits and reflected in stock prices. And you can see how that varies over time. Leonid Kogan and many co-authors have done this very cool work. So something like that could be done today as well. With AI it's a bit trickier, because less gets patented, and many of these companies aren't directly listed, though they often have publicly listed affiliates, like Microsoft and Google. The moonshot economic indicator that I think of: benchmarks are this huge thing in the AI world. We want to look at how good AI, or new AI models, are at well-defined tasks that are scorable in an automatic way, like the math Olympiad recently. >> Yeah.
>> In economics, there's this database produced by the US government, the O*NET database, which has a list of around 19,000 tasks in the economy performed by American workers. And it would be amazing if we could have some very expensive program, because it would be very expensive to do, measuring what fraction of tasks models can perform themselves, or how much models improve human productivity on each task, and keep track of that over time, task by task. Then you could do things like: what share of tasks has become automatable, and how has that changed over time? You could do the standard thing in AI, take a trend and extrapolate it, and try to see when we will have 100% automation of the economy, of 2025-era tasks at least, since new tasks are constantly being invented. That, I think, would be the moonshot if the government decided it wanted to throw a lot of money at understanding the economic implications of AI. That's what I would suggest. >> Yeah, that would be extremely valuable data to have. Super interesting, but also quite complex, as you mentioned. So how would these tasks change over time? I would imagine that as AI gets better, some tasks become automatable, and those tasks then make up a smaller percentage of what people actually do. So the task list from 2024 is not as relevant in 2028 anymore. I guess what you're tracking is also which tasks people are spending their time doing. >> Yes. The way I would put it is that the second moonshot I wish the government would fund would be keeping track of exactly this. This O*NET database of tasks is static over decades; it gets very occasional updates. So, number one, we could really do with an update in the year 2025. That would be great.
Or potentially just a new way of thinking about this: should we be using technology to keep track of what people are doing minute by minute through the workday, even using AI to classify what people are working on, something more dynamic? Maybe this whole static database structure needs a rethink. That sounds like it would be really hard for one individual researcher to do on their own, but the government might be able to do it. So a one-time big update would be nice. Even better would be real-time updating, every year or whatever: this list of tasks, how is people's time allocation across these tasks changing over time, what new tasks are added versus deleted, which ones humans do versus machines. That would all be great. To my knowledge, at least, we don't have that information. >> Yeah, we would have a much more granular view of what's happening. When you're looking at the unemployment rate or the labor-force participation rate, it's all condensed into one number, and you don't really know what's causing what. Of course, you can see that there's not a massive effect of AI on unemployment yet, but it would certainly be great to see whether some industries are being affected. This could be happening right now, and maybe we don't have great insight just because it's not moving the needle on the one number we might be looking at. >> Yeah. And of course these numbers are, certainly in the US and I'm sure elsewhere, broken down by industry, and there are surveys that are pretty frequent and get you richer demographic information. So we could see things like: are younger coders losing jobs? That would be super interesting.
But those surveys, while in the tens of thousands and providing very nice data for lots of purposes, get pretty noisy, pretty small, when you want to drill down to, say, new graduates working in CS and how their employment is looking. Expanding those, or using private-sector data (there's work that should come out soon on this topic), would be really useful. >> Yeah. Here's a question I've asked a bunch of guests. It's about the difference between the economic effects of AI and what we see on benchmarks. It is surprising to me, and I think to other people also, that we are seeing this very strong performance on a bunch of benchmarks. AIs are now incredible at passing college exams, they can pass the bar exam, they can score highly on medical examinations, and they do especially well on coding and math. How is it the case that I am in daily dialogue with a chatbot that is better than me at math and coding, but it doesn't have massive economic effects yet? >> Yeah. This is just one of the greatest questions of our time, so I can offer some speculations, a couple of different answers. One is that maybe capabilities really are this good and the technology just hasn't diffused through the economy. Firm managers are old and crusty and don't want to adopt AI to replace workers, or workers don't want to adopt AI because they don't know about it. Things like that. That's one possibility. Slow diffusion in general is a major lesson of economic history: technology takes time to diffuse, for various reasons. But I don't think that's the main reason here; it's not just that people in San Francisco want to fantasize that old, fusty people don't want to adopt new technologies.
I don't even think many people believe that, because it's so a priori hard to believe. >> Yeah, it doesn't seem super plausible to me either, just because it is quite easy to incorporate these models into your workflow. It's something that's available to individual workers, something they can use for themselves, and they have incentives to try to use these models to make their lives easier. It's much easier to implement in the economy than, say, a technology that required spreading physical hardware around. >> That's definitely true, but I want to disambiguate two things. One is pure diffusion: have people learned about this technology at all? Another is incorporation into workflows. Let me drill in on that, because workflows involving other people can be hard to change. There's a Microsoft study where they experimentally rolled out Copilot to some workers and not others. They found that workers who started using Copilot reduced the amount of time they spent on email a lot, because Copilot was helping them write emails faster. >> But the time spent in meetings didn't change at all, because coordination with other people is not something that AIs as they exist today, as helpful, harmless chatbots, can really help with. And work by people like Erik Brynjolfsson and others has emphasized that to fully gain the benefits of a new technology, firms typically, historically, have to reorganize their internal processes to take advantage of the new technology.
The most famous historical example is the adoption of electricity in factories: initially just taking existing workflows and plugging electricity in without changing anything else, but then, over the course of actual decades, really changing to the Model T-style setup that completely reorganized internal processes to best harness the new technology. And it's very plausible that those sorts of intangibles, these internal processes, are slow to figure out and not something directly improved by AI. >> The counterpoint there is that maybe we'll have drop-in remote workers in two years or whatever. And those are drop-in remote workers: you don't need to change internal processes at all. >> Yeah. So these drop-in remote workers would be able to understand the context they're working in. They would function like remote workers, basically. They would understand the context, the code base, say; they would read the internal documents, emails, and so on. The first part of that argument is actually quite plausible to me: that we are not taking as much advantage of these models as we could. We're not properly integrating them, not pushing to get as much out of them as we could, because it takes time. It's difficult to integrate AI into all of our processes. >> Yeah, totally. So these two arguments, diffusion and reorganizing internal processes, take the idea that these models are really good; there are just other things that need to change before the effects diffuse through the economy. >> But it's also possible that benchmarks just aren't representative of economic tasks. >> Yeah. >> As hinted at earlier, benchmarks need to be things where the output is verifiable. A math Olympiad problem is something where you can check whether you have the right answer or not.
Maybe it takes some time to verify a proof, but it is checkable. Whereas writing an economics paper: how do you verify whether you did a good job on that? Much harder. >> So benchmarks are a certain kind of task. To my knowledge there's not much work investigating how progress is advancing on tasks by characteristic. There's this famous METR study, METR being Model Evaluation and Threat Research, a sort of think tank that evaluates new LLMs. They have this study showing that the horizon of tasks an LLM can do has been increasing, doubling every seven months or so. >> Yeah. >> So GPT-2 could do something like a one-second task; GPT-5 can do a two-hour-and-fifteen-minute task, something like that, on a narrow set of tasks: they have these HCAST machine-learning and software-engineering type tasks. >> Yeah. >> That's amazing research, a super important data point. And in the paper they look at messy tasks versus non-messy tasks: do we see faster or slower progress on the messy ones? They don't actually have very many very messy tasks, or any maximally messy tasks at all. They don't find differential progress on messy versus non-messy tasks; they do find a lower level for messy tasks, but the slope is the same. I would still be interested in seeing whether that sort of result holds up for O*NET as a whole, for example, because I think it's very plausible that benchmarks are narrowly defined tasks that don't really capture the breadth of what a worker does every day. Work is pretty complicated. >> Yeah. And actual workers tend to carry out plans over weeks or months, perhaps years. But still, what does it even mean for a task to take a week or a month?
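The horizon-doubling trend mentioned here lends itself to a quick back-of-the-envelope extrapolation. A minimal sketch, assuming the seven-month doubling time and the roughly 135-minute current horizon from the conversation; the "one month equals 160 working hours" conversion is my own illustrative assumption:

```python
import math

# Sketch: extrapolating a METR-style task-horizon trend.
# Assumes the horizon doubles every 7 months; numbers are illustrative.

DOUBLING_MONTHS = 7

def horizon_after(months, start_minutes):
    """Task horizon (in minutes) after `months` of trend growth."""
    return start_minutes * 2 ** (months / DOUBLING_MONTHS)

def months_until(target_minutes, start_minutes):
    """Months until the horizon reaches `target_minutes` on the trend."""
    return DOUBLING_MONTHS * math.log2(target_minutes / start_minutes)

# From a 2-hour-and-15-minute horizon (135 min) to a one-month task
# (assuming ~160 working hours = 9600 min):
print(months_until(9600, 135) / 12)  # roughly 3.6 years
```

This is exactly the "take a trend and extrapolate it" move the guest describes earlier, with all the caveats about whether the trend holds for messier tasks.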
That is something I'm interested in, because according to the METR study we are on this curve where in two or three years the models will be able to do a task that would take a human a month, with a 50% success rate. And what is a month-long task? I can't quite conceptualize what it would mean to work on one task for a month. Perhaps I just lack focus. But do you think it's plausible that throughout the economy there are tasks that are very long-term, and what would be examples? >> There definitely are tasks that are very long-term, and they're very painful. I say that as a researcher. >> Yes. >> Having worked in industry before going to grad school, research just has such a long cycle before you put out a paper. It's much more painful; the feedback loops are much slower. So those tasks do exist, but exactly as you said, I think that's not a huge share of the economy. The framing of this that I find convincing, and I'm totally recapitulating an argument from Toby Ord here, is: if a model can do a one-minute task, why can't it do two one-minute tasks in a row, which would mean it can do a two-minute task? >> Yeah. >> And the argument he makes is that if you can do a one-minute task with 50% probability, as you said, which is what the METR study is looking at, then doing a two-minute task means chaining two such tasks together at 50% probability each. If those were independent probabilities, that would be a 25% probability of succeeding on the two-minute task. Hence models are worse at longer-horizon tasks than shorter-horizon tasks. I agree that without decomposing it that way, it's not obvious what distinguishes a short-horizon task from a long-horizon task.
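The chaining argument above can be made concrete in two lines. A minimal sketch, assuming fully independent subtask successes (the strong assumption the argument itself flags):

```python
# Sketch of the task-chaining argument: if each one-minute subtask succeeds
# independently with probability p, then a task made of n chained subtasks
# succeeds with probability p**n.

def chained_success(p, n):
    """Probability of completing n independent subtasks, each with success p."""
    return p ** n

print(chained_success(0.5, 2))   # two chained 50% tasks: 0.25
print(chained_success(0.5, 60))  # an hour of one-minute steps: vanishingly small
```

Under independence, a model at 50% success for some horizon drops to 25% at double that horizon, which is why longer horizons are harder. Real subtasks are presumably correlated (a model good at step one tends to be good at step two), so this decomposition is a heuristic, not a law.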
It does feel like there is something there that maybe we just haven't captured. Otherwise, this Toby Ord framework is what I find most useful for thinking about it. >> Yeah. This is, I think, also what you see in coding: if a model is unable to fix its own mistakes, those mistakes accumulate over time, such that the output is no longer useful beyond a certain task horizon. Perhaps this is getting too philosophical, but even if you've written a paper and you're now trying to incorporate feedback on it, and this is a tedious and slow process that might be necessary for good research, you are not spending all your time on this one task for six months, right? Even book writing, or something very concentrated. Okay, my question is: what do you think is the upper limit for how much time humans can spend on a task, and could the AI be approaching that limit? >> That's a very interesting question. Upper limit. The rule of thumb I have in my head for when I would be satisfied that we have AGI is: any task that takes a month, AI can do it. That's just a completely made-up number. >> Yeah. >> Another thought I can throw out concerns decomposing the world into tasks, this task-based framework that has become very dominant very quickly in this macro-labor area of economics and is also often used in the AI world. >> Yeah. >> Perhaps it's just a conceptual error to decompose the world into individual tasks, as opposed to thinking about how tasks fit together into a broader puzzle. Human civilization as a whole, or groups within it, certainly have initiatives that last many years to achieve a goal. >> Mhm.
And maybe there's some category error in decomposing these into smaller tasks. Maybe they're not separable, because you have to hold context in your head, and if you don't have a long enough context window, as plausibly these LLMs don't, you just can't do it. So you can't decompose it that way, or the self-correction you describe can't be separated across tasks, and you really need to think in terms of grand projects, grand arcs, grand initiatives. But this is me just philosophizing. If someone wants to come up with a replacement for the task model in economics, I think there could be something real there. >> Yeah, interesting. I think we should end by chatting a bit about the most interesting open problems as you see them. We've touched upon some of them in this conversation: the things we would like to know that are at the intersection of AI and economics. Are there others that come to mind, the research paper you would love to see or would love to write? >> The big answer is that there's so much low-hanging fruit. It's super exciting to work in the economics of AI. I have to advertise it to other economists: you should consider transitioning over. Don't get hung up on the sunk cost of all the years you've spent studying monetary policy or whatever. >> Yeah. The question is whether being an economist actually makes you better at avoiding the sunk cost fallacy, or whether economists are just as human as the rest of us. >> That would be an interesting paper. I would read that paper. In terms of most important questions, I think of some of them as the Hamming question: what's the most important question in your field?
>> I'll distinguish between empirical, applied-micro questions; microeconomic theory questions that might have relevance to AI safety; and grand macro theory, or maybe a bit of empirical macro. Macro is where I have the most familiarity, so let me start there, and we can talk about the others if there's time and interest. Within macro, I think the big question is: will AI lead to a speed-up in economic growth, or will it get bottlenecked by certain sectors or areas? Those bottlenecks could be things like energy: we just don't have enough fossil fuels, that's a limited quantity that bottlenecks AI, and the price of fossil fuels is going to spike. Land, potentially. >> Land, very plausibly. Or certain sectors of the economy AI just isn't good at. Clearly we've seen much more progress in the cognitive domain than in the physical domain; robotics is at a lower level of progress. Will we all end up as blue-collar workers in 10 years? So "where will the bottlenecks be," I think, is plausibly the best answer to the Hamming question. >> A second one is this idea of automating AI research that you brought up a bit earlier. To put it in economic terms: how much dynamic complementarity is there between AI today and AI tomorrow? To spell that out, going back to I. J. Good, there's at the very least this idea of recursive self-improvement: if you have better AI, it can do better AI research, which will lead to better AI, and so on. And the speculation that's often laid on top of this is that it would lead to an intelligence explosion, which is Good's terminology. But a recursive loop like that does not necessarily lead to an explosion, does not necessarily lead to a mathematical singularity; it depends on the strength of that feedback loop.
So, I'm trying to think about how to say this without drawing graphs in the air. If the feedback loop is super strong, then you can indeed have a mathematical singularity: infinite growth in finite time. If the feedback loop is only moderately strong, you just have exponential growth, which is what we're used to in the post-war era, or over the last 200 years. If the feedback loop is too weak, things can even level off: you have self-improvement, it leads to faster AI, always a little more AI improvement, but things level off. >> Yeah. >> So what is the strength of that feedback loop? What are the intertemporal diminishing returns to AI progress? Or, to put it another way, are ideas getting harder to find in AI, or are they getting easier to find, in which case you would see a singularity? There's a limited amount of work on that. Ege Erdil and Tamay Besiroglu, I think, have some papers on this, looking at progress in computer chess and maybe one other domain. Anyway, they have the best work on this; more could be done. >> That one is really interesting. It's potentially so consequential for the future we're likely to see. Isn't it the case that in any domain you will pick the low-hanging fruit, find the ideas that are easiest to find first, and then it will be more and more difficult to find good ideas beyond that? Or is that a misunderstanding of the ideas-and-growth literature in economics? >> That's definitely the prior in the literature, and I think it's the right prior just thinking about reality. >> Yeah.
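The three regimes described here (singularity, exponential, weaker-than-exponential) can be illustrated with a standard toy growth model, not one from the conversation: let capability A grow as dA/dt = A^phi. The exponent phi stands in for the feedback-loop strength; all parameters below are illustrative assumptions. Note that in this particular toy model the weak-feedback case grows polynomially rather than literally leveling off; full stagnation requires returns that also decay over time.

```python
# Toy model of recursive self-improvement strength: dA/dt = A**phi.
#   phi > 1  -> finite-time singularity (hyperbolic growth)
#   phi = 1  -> ordinary exponential growth
#   phi < 1  -> subexponential (polynomial) growth
# Simulated with a simple Euler step; all numbers are illustrative.

def simulate(phi, steps=2000, dt=0.01, a0=1.0, cap=1e12):
    """Return the capability path over time, stopping early if it explodes."""
    a, path = a0, [a0]
    for _ in range(steps):
        a += dt * a ** phi
        path.append(a)
        if a > cap:  # treat as a (numerical) singularity
            break
    return path

weak = simulate(0.5)    # polynomial growth: runs all 2000 steps quietly
exp_ = simulate(1.0)    # exponential growth: also runs all 2000 steps
strong = simulate(1.5)  # hyperbolic: hits the cap long before the end

print(len(weak), len(exp_), len(strong))
```

The point of the sketch is the guest's: the same qualitative feedback loop ("better AI makes better AI") is compatible with wildly different outcomes depending on a single parameter, which is why measuring whether ideas in AI are getting harder or easier to find matters so much.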
>> One could imagine, though, that it takes a really long time to pick all that low-hanging fruit, such a long time that for an extended period, maybe even centuries, there is a period of increasing returns to scale, where getting the low-hanging fruit allows you to beef up your muscles and pick fruit even faster, even though eventually you'll have to reach higher up the tree and those strong muscles won't help you reach the apples, to really extend the metaphor. So if we look today, what do we see: increasing returns or decreasing returns? So on the macro side of things, those are the top two questions I think of. On the micro theory side, I'll give a pitch: I think there are a lot of lessons one could take from microeconomic theory for AI safety, for agent foundations work. The von Neumann-Morgenstern axioms that are often discussed in the AI safety world, just as one example, are the foundation of modern economics. One would hope there are further lessons from micro theory, and there has been work done: Eric Chen, Alexis Ghersengorin, and Sami Petersen have very interesting work on the AI alignment problem from a microeconomic-theory perspective, as do a few others. I can imagine that in a few years, if this problem seems increasingly serious, there will be more microeconomic theorists working on this question. >> Yeah. Do you think there are other areas of economic theory, perhaps something conceived way before AI was even a thing, that are relevant to thinking about AI? >> Yeah, great question. I think there are a lot of essays or papers that could be written just applying ideas from economics broadly, in particular perhaps economic theory, to the problem of AI.
For example, after the 2008 financial crisis, regulators developed this idea of stress tests for financial institutions: going in and running a simulated financial-crisis scenario through banks, given their asset holdings, their loans, etc., to see if they would survive that made-up financial crisis. >> Mhm. >> And there's a large economic-theory literature investigating when this is efficient, things like that. That's not so far from the idea of red teaming in AI, where Anthropic has a team, and others have teams, trying to battle-test LLMs in worst-case scenarios. >> Yeah. >> And in fact there's a paper by Joao Guerreiro, Sergio Rebelo, and a third co-author, forgive me, I'm forgetting, taking this exact idea to the AI world. In the first draft of the paper, if I'm not mistaken, they didn't even use the term red teaming, because they were economists who hadn't heard it; in newer drafts it's, oh yes, this is a theory of red teaming, the optimality of red teaming. >> Or there's a literature on insurance against cyberattacks on firms. Firms can take out insurance against getting hacked. And there's been discussion, Gabriel Weil and others have written nicely about this, of whether AI companies should be forced to take out liability insurance as a mechanism for encouraging them to internalize risks from advanced AI. >> Yeah. >> The cyber-insurance literature could be adapted there, or I'm sure other parts of the insurance literature, and in fact Gabriel's work is of course directly economics-related itself. So those are some miscellaneous examples. I think there are a bunch of things that could be done. >> Yeah.
And in general, and this is just my impression as a non-researcher, I think there's something about finding the intersection of an area that's been studied deeply and a completely different area, and applying those ideas across. AI and economics would be an example. Or Ajeya Cotra's report on bio anchors, which studies machine learning by looking at evolution. Finding some unexplored intersection like that is often fruitful, is my impression. >> Yeah, totally. To circle back on the whole conversation: the way this paper on AI and interest rates came about is that for about a decade I was obsessed with monetary policy. How do we prevent recessions? How do we prevent another 2008? In the monetary policy world, central bankers are very worried about predicting future inflation, and so this gave me a background in forecasting. In particular, central bankers often look at financial-market expectations of future inflation over the next 10 or even 30 years, because there are instruments that directly reflect inflation expectations. >> Mhm. >> And so that market-monetarist, market-based perspective is how Trevor, Zach, and I ported this interest-rates perspective to the AI forecasting world. >> Oh, perfect. I think that ties up our conversation nicely. Basil, thanks a lot for chatting with me. It's been great. >> Thanks very much, Gus, for inviting me on. Super fun conversation.