Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
Why this matters
This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.
Summary
This conversation examines core AI safety questions raised in Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 92 full-transcript segments: median 0 · mean -0 · spread -20 to 17 (p10–p90: -7 to 0) · 1% risk-forward, 99% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Technical lens. Evidence mode: interview. Confidence: medium.
- Emphasizes alignment
- Emphasizes safety
- Full transcript scored in 92 sequential slices (median slice 0).
Editor note
A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video JA64Ft62SQE · stored Apr 2, 2026 · 2,370 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/reasoning-robots-and-how-to-prepare-for-agi-with-benjamin-todd.json when you have a listen-based summary.
Welcome to the Future of Life Institute podcast. My name is Gus Docker and I'm here with Benjamin Todd. Benjamin, welcome to the podcast. Hi, thanks. Great to be here. Do you want to start by introducing yourself to our audience? Maybe talk a bit about what you're working on at the moment. Yeah, in the past I co-founded 80,000 Hours with Will MacAskill and then was the CEO for 10 years. In the last year or so, I've been focusing on writing about understanding AGI and how we can respond to it, both individually and as a society. And the main thing I'm working on right now is a guide to careers that tackle AGI, for 80,000 Hours. So, one of your essays is about reasoning models. This is a reasonably new phenomenon where you can have an AI model think for longer on certain questions. Maybe you could tell us, how does that work? What are the advantages? The basis is a very simple innovation called chain of thought. With a large language model, when you ask it a problem, instead of asking it to just generate the solution in one shot, you ask it to generate a chain of reasoning towards that solution. So you say: okay, if you're going to solve this math problem, how would you reason towards that? And then it produces a token of reasoning, reviews that token, produces another one, and builds a long chain towards the solution. You already get a big boost just by using chain of thought. But where it really gets going is when you use reinforcement learning on top of that. If the solution is correct, you adjust the model (that's the reinforcement) to be more likely to do things like that next time, and you can do that loads and loads of times with loads of examples until the model gets better and better at generating chains of reasoning that tend to lead to correct answers. Are there any kind of deep technical reasons that this has only recently started working? Reinforcement learning is not a new technique. Maybe chain of thought wasn't possible to the extent that it is now before. Yeah, chain of thought started working a bit with earlier GPT models, and definitely by GPT-4. The reasoning models paradigm really only started getting going in 2024. Maybe the wider world still has not quite recognized this, because these models are best at things like difficult mathematical and scientific reasoning, which most people aren't doing in their day-to-day life. They're just using it as a chatbot, and they haven't realized how much better it's gotten at these things. But in terms of why it just started working in 2024, I'm not actually sure anyone totally knows the answer to that. At a very high level, though, one way this can happen is: if each step of reasoning only has, say, a 90% chance of being right, a 10% chance of being wrong, then by the time you've tried to reason through 20 steps, you only have about a 12% chance of being correct. So previously, language models kind of couldn't keep it together for long enough to really get to any answers. But what seems to have happened is that around early 2024, the models just about got to the point where they can reason for quite a while: at least minutes, and probably something like the equivalent of a human thinking for an hour about something.
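A quick sketch of the compounding-error arithmetic Todd describes here. The 90% per-step accuracy and the 20-step chain are the episode's illustrative figures, and the independence assumption is a simplification, not a claim about how real models behave:

```python
# Toy model: a chain of reasoning reaches a correct answer only if every
# step is right; each step is assumed independently correct with probability p.
def chain_success(p: float, steps: int) -> float:
    return p ** steps

# The episode's illustrative figures: 90% per-step accuracy over 20 steps.
print(chain_success(0.9, 20))  # ~0.12, i.e. only about a 12% chance the chain is right
```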
And then the next thing that happens is: if you can't even get close to an answer, you can't do reinforcement learning, because there's no reinforcement signal. But once you start getting the right answer some reasonable fraction of the time, then you can get the flywheel going and start using reinforcement learning to make it even better. Yeah. The underlying model has to be of a certain quality, or it has to produce the right answer a reasonably high percentage of the time, for this reasoning to work. Yeah. And I actually think this phenomenon comes up a lot in different parts of AI. I think we might end up with quite a similar thing happening with agents, where right now they kind of don't really work; at each step they just kind of fall apart, so you can't really do reinforcement learning. But we might suddenly get to a point where they start to work pretty well, and then you can use reinforcement learning to make them even better, and you get quite dramatic change. Yeah. And this is really, I think, a common experience looking at how AI is developing. There seem to be these thresholds where AI is bad at something until it's actually pretty good at that thing. Just a couple of years ago, I was discussing with AI experts whether large language models could ever become good at math or programming. And with reasoning models, it now seems that AIs are excellent, maybe the best, at exactly math and programming. So maybe we could see something similar with agents. What is actually the connection between reasoning models and agents? I mean, one very simple connection is that if you have a really good reasoning model, you can use that as the brain of the agent, the planning module. So the better the reasoning models we have that can do good planning and can figure out what the right next step should be, the more likely agents are to work. Now, one advantage of reasoning models is that they might be able to generate data that can then be used to train the next generation of models, or even the same model. How can this possibly work? It seems like an idea that's too good to be true. Yeah, I mean, it works in this case just because the solutions can be easily verified. You can have a large language model solve a bunch of math problems, and then it's quite quick and cheap to check which solutions were actually correct. And so at the end of that process, you actually have a bunch of new correct solutions to these problems, and also a whole chain of reasoning that leads to each solution. And that's super good training data. And there's nothing circular about it, just because it rests on the solutions being easily verifiable. So you would expect domains that are not easily verifiable to be less useful domains for reasoning models. For example, I'm thinking of fiction, writing a novel: there it's difficult to get feedback on whether the novel is good. Is there even something that can be formally verified about the quality of a novel?
So will we see this divergence between domains that can be easily formalized, where we have strong progress, and domains that can't be formalized, where we perhaps don't have as strong progress? Yeah. And that's what we've seen in the last year, where there's been a huge divergence: in the hard scientific domains there's been way more progress than in the others. And I think looking forward, you could almost see this as the key question of forecasting AI progress: how many domains will be amenable to reinforcement learning? Will we just be able to ride the current techniques to superhuman levels of performance across most tasks, or will it be limited to math, science, and programming? Yeah, I mean, there's a few things that go into that. One is that it at least seems true to a small extent that if a model gets really good at maths and science, it does actually get a bit better at everything else. It is learning some type of general logical reasoning that is useful, but it remains to be seen how big that effect will be. And the other thing is how good we can make the reinforcement signals in these more nebulous domains. How that's going is getting a bit out of my expertise, but I understand that with something like writing, you might be able to use some AI models to rate intermediate outputs. So you could have an evaluation model which checks the work, and you could use that as a reinforcement signal. You can also use human feedback, though obviously that type of data is much more expensive to gather. And then there's the kind of final feedback that comes from whether the novel sold a lot of copies, though that's a very long-horizon thing, so you can't get a fast iteration cycle with that type of signal. Yeah, that's an interesting point. Does this mean that when we're looking at a question like, does this piece of code compile correctly, or does this model do what I want it to do, what's the accuracy of this model, those are questions that can be answered rather quickly, in a fast feedback loop. When you're interacting with the world at large, you're interacting with human systems that are moving slowly. And so the question here is whether there will be a wall where reasoning models, and perhaps agents, won't be able to interact with the human world as well, because the feedback is simply too slow; the feedback cycle isn't fast enough. Yeah, it's more that you won't be able to rapidly train models with those feedback signals. But it's possible that you'll be able to break things down again into much smaller tasks where you can get quick feedback and chain them together. So I think it really remains to be seen how well all this will work, and it's a really central question about the next couple of years of AI progress. Yeah, there's another quite central question, which you mentioned before, which is something like: how much progress do we get if reasoning models are only good at programming, mathematics, and the hard sciences? How much progress would you expect from models being good in only those domains? I think in terms of economic growth, it's possible it would be quite small, because not much of the economy is difficult scientific reasoning.
But if I was going to make a more bullish case, it's possible those systems could be very useful for accelerating certain parts of scientific research, and then those scientific discoveries could cause a lot of economic growth. The strongest case for acceleration would be: well, these models are still not very good at social skills, they're not good at business strategy, they're not good at physical manipulation, lots of things you would need for a very general AGI. But if they're super good at programming and maths research, that could be really useful for doing AI research in particular, doing ML research. And that could then unlock the next paradigm or wave of progress after that. So I think that would be the strongest case for rapid progress based on this. And why is it that AIs are particularly well suited to do AI research? Well, the biggest thing is what we've just been saying: ML and programming are domains where you can get this reinforcement signal. So the current models are becoming really good at exactly those types of tasks, which is exactly the type of thing you need to do AI research well. But there are a few other factors. One is that it's fully virtual, so you can just do loads of experiments virtually, without having to wait for lab results or wait for something to happen in the real world, which is the other thing you were just saying. And then there's another big factor, which is that AI research is what the people doing AI research understand best. So it's very natural for them to try and use the things they're developing to help with their own work. Yeah. Isn't there a big barrier here in terms of training runs being incredibly expensive? There's probably some kinds of information about machine learning research, or results in that field, that you can only get by running experiments that are very expensive. Totally. The extent to which that's true is, in a way, the key way of seeing whether there's going to be something like an algorithmic software feedback loop, and an intelligence explosion based on that, or not. If we get virtual AI researchers, you can think of that as really expanding the labor pool of people doing AI research. But there are two main inputs into AI research. There's the labor, the researcher time, and then there's compute, which you need to run all the experiments. And compute will stay the same in the short term, because that's just determined by how many chips we have in the world. So even if you increase the labor pool a lot, because compute stays the same, there might not be that big an acceleration of AI research. But the large training runs still only take about three months historically. So in theory, in a year you could still do three whole generations if you were maxing it out, which is about 10 times faster than we've had in the past. And then I think the bigger thing on that is that in this reinforcement learning paradigm, you don't actually need to run these massive training runs necessarily.
They're using much less compute to do this reinforcement learning on top of the large pre-training run, and so you can get much faster iteration cycles. And apparently a big trend in the AI labs recently is that they've been preferring to distill the models into smaller and cheaper models, which are a bit less powerful, but then you can iterate with them way faster. So you could have, say, 10 generations in the time when previously you only had one generation, and then you can actually end up ahead even if your starting position is a bit worse. Explain that: why? Is it just because the model is cheaper to run? Yeah. You can just do way more experiments with the same amount of compute. All of the AI companies are still going for very expensive, very large training runs. Do you think anything fundamental has changed with reasoning models? And if so, why are we still scaling compute in this very ambitious way? I want to distinguish between the total amount of compute spent on all forms of training, including post-training, and a large pre-training run. I think the large pre-training runs, like training GPT-5 and GPT-6, have been delayed compared to what we would have guessed a year or two ago. Instead, that compute is now being used for reinforcement learning, or just increasing inference, so more test-time compute, and soon it will also be used a lot on getting agent experiments going and getting agents to generate data as well. So you think there's actually been a move away from large, traditional foundation-model training runs, to spending that same compute at inference time and using it for experiments instead? Yeah, definitely in the last year. Previously the Metaculus forecast was for GPT-5 to be released around now, in March, but when I last checked they now think it's going to be the summer, so July or something like that. And instead, all the recent models that have been released have been reasoning models. So there's been a clear shift recently. Exactly what happens going forward is not clear, but my guess is the returns from improving the reasoning models, or from working on agency, will be bigger going forward than just doing another 10x or 100x to the pre-training run. Oh, that's interesting. So this actually might mean that we have crossed some level of quality for the foundation model, where it's now more efficient, or there's more low-hanging fruit, in running that model in the mode of a reasoning model. Well, my thinking was just that the reasoning model paradigm is still right at the start, so you're on a relatively sharp curve still, whereas most people think GPT-4 to GPT-4.5 was not a game-changing amount of change. Though I think it's been slightly overstated how bad it was, because GPT-4.5 caught up with o1 on a bunch of reasoning things without having to do the reasoning part, which actually seems quite good. One useful thing we should touch upon is how likely we are to get a positive feedback loop in AI research. Can you lay out the different kinds of feedback loops we might experience?
The one that is most concerning, and has also had the most attention, is a purely algorithmic feedback loop, where you get to the point where you have an AI that can substitute for people doing AI research. You can do back-of-the-envelope estimates of how many of those we would be able to run in 2027, say, or the end of this decade, if we used all our compute to run them, and those estimates tend to be between about 1 million and 100 million human-equivalents, in terms of how many tokens of output they can produce compared to humans. So if the quality is also similarly good, it's kind of like expanding the AI research workforce at least 100-fold. But there's the factor we just mentioned, which is that the amount of compute wouldn't increase at that time. So then you have this question: if there were 100 times more AI researchers, how much faster would algorithmic progress actually be? That's quite a difficult question to model. You can try to estimate, historically, as inputs into AI research have increased, how much algorithmic efficiency has increased. And one key factor is: each time you double inputs, do you get more than a doubling of algorithmic efficiency, or, more generally, of algorithmic quality overall? The past record is a little bit ambiguous about that, but Epoch has a paper where they look at some estimates, and they conclude it's around the threshold; it could be below, it could be above. So as a very high-level estimate, I would say it seems roughly 50/50 whether it would actually become a feedback loop or not. And then once the feedback loop starts, you could also have increasing diminishing returns, which can dampen out the feedback loop quickly. Why would that happen? The idea is that as you make more discoveries, it becomes harder and harder to make more discoveries, because the easiest ones have been taken. To some degree, that's taken into account in the past estimates, because that's also been happening in the past: as we've doubled each time, it's become harder to do the next doubling. But as you get closer to fundamental limits, you might expect the diminishing returns to increase even more than they have in the past. So weighing all of these different factors and figuring out what will happen is difficult. But Tom Davidson has a new paper where he looks through the dynamics of all these, and I think his bottom line is we would see something like three to maybe ten years of AI progress in one year. So probably not more than ten years in one year, and he thinks that's relatively aggressive. A couple of years of progress in one year seems like a reasonable place to be at, which is a wild thought, because AI progress, say in 2024, is already pretty fast. Yeah. And you also have to picture this happening at a point when AI can already basically do AI research. So it's already very good, and then it suddenly goes, say, three more years of progress in one year. So yeah, that could be pretty crazy actually.
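A minimal sketch of the threshold dynamic discussed here. The model and the parameter values are illustrative assumptions, not figures from the Epoch or Davidson papers: it just assumes effective research labor L grows at a rate set by algorithmic efficiency, which in turn scales like L to the power r, where r is the "doublings of efficiency per doubling of inputs" parameter:

```python
# Toy continuous model of the software-only feedback loop: dL/dt = L**r.
# r > 1: finite-time blow-up (an "explosion"); r = 1: ordinary exponential
# growth; r < 1: growth that keeps decelerating in proportional terms.
def grow(r: float, t_end: float = 6.0, dt: float = 1e-4) -> float:
    L, t = 1.0, 0.0
    while t < t_end:
        L += (L ** r) * dt   # forward-Euler step of dL/dt = L**r
        t += dt
        if L > 1e12:         # treat this as "explosion reached"
            return float("inf")
    return L

for r in (0.8, 1.0, 1.2):
    print(r, grow(r))        # ~51, ~400, inf: the threshold sits at r = 1
```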
Tell us how it could be. Maybe paint us a picture of the impact of a feedback loop like that. So if you just look at the past, algorithmic efficiency has been going up about 3x per year. That means with the same number of chips, you can basically run three times as many of the same model. So if you get three years of progress in one year, that's a 27x increase in algorithmic efficiency in one year. It means if you have, say, 10 million automated AI researchers at the start of the year, by the end of the year you can run roughly 300 million, so 30x more, on the same chips, with nothing else changed. And that's an underestimate, because that's just algorithmic efficiency. In reality, you'd also have three years of improvements in post-training techniques, so reinforcement learning type stuff, or whatever they're doing at that point. And you could also do almost a whole extra generation of pre-training; well, it would be roughly half a generation, a 30x, so you'd also go from, say, GPT-6 to GPT-6.5 in one year. And all of those would happen at the same time.
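The arithmetic behind the 27x and "300 million, 30x" figures above, using the episode's assumptions; the 10-million researcher fleet is the illustrative number quoted in the conversation:

```python
# Episode's assumption: algorithmic efficiency has been improving ~3x per year.
years_compressed = 3                       # "three years of progress in one year"
efficiency_gain = 3 ** years_compressed    # 27x within a single calendar year

researchers_start = 10_000_000             # illustrative automated-researcher fleet
researchers_end = researchers_start * efficiency_gain
print(efficiency_gain, researchers_end)    # 27, 270,000,000 (the episode rounds to "300 million, 30x")
```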
Yeah. What about chip design? There's algorithmic improvements, there's the improvements to the AI researchers doing AI research themselves, but there's also improvements to the hardware. How would chip design fit into this picture? This is a slightly underappreciated aspect of the situation, which is that even if you don't get this algorithmic feedback loop, it seems much more likely that we get a feedback loop in chip design, and there are two levels to that. One is that AIs could help with doing chip design itself. Nvidia is already using AI a lot to help with its chip designs. So maybe you again get a similar type of thing, where you get several generations of chip design progress in one year; you'd need to do the maths on exactly how fast it would be. But then there's the second level, which is simply producing more chips. Historically, the key parameter is: if you doubled all the inputs into the semiconductor industry, how much more compute would you get out? And historically, it's been much more than a doubling. So the empirical case for this feedback loop working out is much stronger than for the algorithmic one. On the other hand, it's a bit less risky, a bit easier to deal with, because it will be slower: each generation you have to produce all the chips and ship them, and that takes significant time. It's not that you can just have three generations in one year; it would probably take three years or something, but it would still be super fast compared to normal economic growth. Yeah. You describe the impact of a feedback loop in AI as an industrial explosion. If you think about AGI-level AI plus robotics, what does that look like in your mind? Well, in a way that's the third level of feedback loop: you have the algorithmic feedback loop, chip design and production, and then the third level is when you can automate industry in general, a complete loop of production, which would require robotics. And that one is almost the one with the strongest empirical support, because if you double the number of workers and factories, you'll roughly double the amount of output. In fact, it's more than that, because as things scale up, they get more efficient. So you actually get more than a doubling, and that would mean you get faster-than-exponential growth for a while, until you hit some type of diminishing returns. Again, Epoch have just released a new economic model trying to look at this, and they actually see growth accelerating over a 10 or 20 year period. So it's not a one-off, where we get a big leap and then it's kind of flat; things could keep accelerating to maybe very high rates. The end question is just: what's the complete doubling time you could achieve if everything was fully optimized, how quickly could things double? And it seems at least possible that could be more than 100% per year. What I fear isn't really fully coming through when I have conversations like this is how crazy the world would become if something like this happened. Why isn't this front-page news, do you think? Even the possibility of this, and we can discuss how likely it is, should receive a lot of attention, but it isn't receiving as much attention as I think is warranted. You know, if robots and AIs could produce the solar panels and chip factories to make enough chips to double the number of AIs and robots within a year... So on the earth, we're only using about one ten-thousandth of the solar energy that's coming in. If you get that to 1% of solar energy, which is still maybe not that high, then 100x more energy use would be possible. And so this doubling thing could quite quickly go to, say, 100x the output of now. And that would just be getting started, because with the sun there's, I forget the exact figure, but I think maybe four or five orders of magnitude more energy. How many doublings do you need to get 100x? I don't know if you know your powers of two; eight or something. So within, say, eight years you're at 100x, but then after that we're suddenly in space constructing solar panels around the sun, which is, on the scale of things, not that technically hard. And so we could literally go from current society to Dyson spheres being created in a span of, say, 10 to 20 years, which I think is a kind of radical change of the economy that people are not really at all taking seriously, even people who are pretty into AI. Humans are really bad at extrapolating forward things that haven't happened before. And COVID was a great example of this, I think, where you could pretty clearly see an exponential curve of cases in, say, January or February, and basically very few people took any action about it until it was completely hitting them in the face and hospitals were overwhelmed and everything just had to shut down. And this is, in a way, a much more abstract and weird thing to think about than people getting a disease.
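A sketch of the doubling arithmetic from this exchange, assuming output starts at roughly 1/10,000th of incident solar energy and doubles about once a year (the episode's rough framing):

```python
import math

# Doublings needed for a 100x increase in energy use
# (from ~1/10,000th of incident solar up to ~1%):
doublings = math.ceil(math.log2(100))
print(doublings)   # 7, since 2**7 = 128; close to the "eight or something" guessed above
```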
I mean, in some sense the conversation around AI and AGI and superintelligence and all of these terms has become much more mainstream since the ChatGPT moment in 2022, but it still seems like we as a society are not grappling with some very important questions around this. Is this fundamentally a social problem, or is it just that people perceive this to be wild speculation, and, you know, I'll believe it when I see it? I'm looking out of my window; I can't really see anything that's changed, and yet you're predicting all of these radical things. There have been a lot of people throughout history who have predicted radical changes. So do you think there's a concern about seeming weird if you actually believe, and crucially act, on beliefs like this? Well, just as a quick caveat, I'm not predicting this is definitely what will happen. With all of these feedback loops, there's a chance they don't work, or AI doesn't advance to that level in time, that type of thing. But I think it's interesting, because even with myself, while I believe some of these things intellectually, it still takes me a long time to actually internalize on a more gut level that this could really be happening. Mhm. I feel the same way. Yeah. I still don't internalize a lot of it fully, for sure. I've internalized more over time; I think feeling the AGI is actually a big spectrum, and I feel it more and more over time, but I still don't fully feel it. And I think a lot of it is just to do with this: until something is completely hitting you in the face, it's pretty hard for humans to get motivated to do anything about it. There's also a question around whether we can actually fully internalize these beliefs and feel the AGI in our guts, so to speak, where we are just not evolved to handle questions like this properly. We're not used to dealing with things that are moving this quickly, on time scales that are this short. So the question is whether we will learn to internalize beliefs that are accurate about our situation before we are severely overwhelmed by it. Yeah, I think that really remains to be seen. It could well be that most people wake up after it's already quite a bit too late. Though I do think there will be some, whether you want to call them warning shots or just very powerful demonstrations; as we've seen already, many more people are taking it seriously than in the past, as better capabilities arrived, and I think that will keep happening, and there'll be more and more waves of people realizing this is a big deal. Just as an aside, for someone thinking about career planning, I actually think in many ways it's still quite early. It is a very weird situation, because it does feel like everyone is talking about AI a lot, but the number of people really working full-time on tackling this, and especially a lot of the risks, is still probably under 10,000. But if we're on the timeline where the techniques do just keep working and AI keeps improving to a transformative level before the end of the decade, then between now and five years from now, AI is going to go from what it is now to being the number one economic, political, and social issue. The front page every day will be to do with AI.
And that's a very long way from where we are now, where, when the o3 results were released, which showed that this new reasoning model paradigm was yielding really impressive results, that wasn't reported in any of the newspapers. In fact, the Wall Street Journal was running an article that day about how GPT-5 was behind schedule and disappointing, which is really missing the point, because even if GPT-5 is a bit disappointing and behind schedule, it doesn't matter: we've got this even better thing now that's completely taking off. That's totally missing-the-mark reporting, to be focusing on the old paradigm. Yeah. And you mentioned this before, but there's also a phenomenon where, if I ask one of the reasoning models an incredibly difficult problem in programming, mathematics, or physics, I'm not really in a position to accurately evaluate how well it's doing, simply because I don't know the domain well enough. And very few people are; I think it's true to say that very few people are good enough at physics, programming, and mathematics to accurately evaluate whether an output is genius. It's difficult to distinguish between outputs at a high level if you don't have a deep understanding of the domain. Totally. Though actually the even bigger thing is that people are still using only the free version of ChatGPT, which doesn't even include o1. So they're actually still using a one or two year old model and concluding it hasn't got better. Yeah. You always have to account for problems such as that. All right. I think one of the things that could serve as a form of warning shot, or something that could make people much more interested in AI, is if they see robots moving around physically in their environment. Where are we with robotics? Do you think we are mostly limited on the hardware side or mostly limited on the software side? Yeah, I haven't heard a clear answer to that. My super rough read is that algorithms are the bigger bottleneck. Making really good robotics is a much harder challenge in some ways than language models, because for one thing, we don't have the data set, and it's quite expensive to build a really large data set. I have heard other people saying that there are still some hardware limits around really precise motors. If you think about how complex a hand is, it's not just the extremely subtle manipulation it can do, but also all the sensors we have in a hand: if you want to hold an egg but not crush it, you need to be able to feel the exact pressure in your hand. And having all of these cheaply in a package, I think, is also a bit of a bottleneck. But my main sense is that if we just had a big leap in algorithmic progress for robots, a lot more stuff would start working. How quickly do you think we can scale up our production of robots? One of the things with production that you mentioned is that as you mass-manufacture something, it decreases in price, sometimes quite radically. So there's a question of how quickly we can ramp up production to get those decreases in cost. It really depends on how good robot capabilities are at that time.
So in my post I imagined that we just had a sudden transition where humanoid robots start working, and then the question is, from there, how quickly could you scale it up? But that's not exactly what will happen in the real world. In reality it'll be a more gradual thing, at least for a while, as things get gradually better. Yeah. One thing that I looked at to try and answer that question was imagining that car manufacturing capacity was converted to robotics. You can do a very rough back-of-the-envelope based on this, because a car is about a ton and a half of industrial material all put together, and a robot is about a tenth of that; actually, a little bit less. You could say a humanoid robot is 80 kg, and maybe we'll actually have a bunch of smaller robots that are specialized for particular things, so they'll be more like 40 kg or something. You could say robots will be more complex to make, so we shouldn't convert one-to-one. But even if you convert, say, half or a third, current car manufacturing capacity could produce something like a billion robots a year. That's a lot. And we should also remind ourselves that modern cars are in a sense robots. They're much more complex than they were, say, 50 years ago. They contain a bunch of chips, a bunch of sensors; think of a modern electric car that has all kinds of cameras on it and so on. So of course robots are probably even more complex than that, but modern cars are complex. Yeah. Though I also had someone saying that cars are hard to manufacture because you're dealing with big heavy parts, whereas with robots you'd be dealing with much smaller and lighter parts, so that's one respect in which it's easier. I agree the sheer complexity of, say, making a robot hand would probably be higher. So overall I think it's not a crazy comparison. You have some estimates of how cheap robots could become in production costs, how cheap it could be to run them, and so on, and those are quite interesting. Maybe you could tell us what world we might be in there. Yeah, the main estimate is just based on what you mentioned earlier: on a typical industrial scaling curve, roughly every time you double production it becomes 20% cheaper, and that's what we saw with solar panels. It varies a bit by industry; it could be 40%, it could be 10%. But if you assume a similar cost curve for robotics, and say they cost roughly $100,000 now (some of the most recent ones are actually a bit cheaper than that), then if you imagine a scale-up to a billion robots a year, it should cost at least 10x less. So that would be $10,000 per robot. And I think it could even go beyond that. Another way of estimating it is to again do a comparison with a car, where if a car costs about $10,000, but a robot is only a tenth as much material, you might think in the long term it would be more like a tenth the cost of a car, maybe a little bit more because of the complexity. So that would be a couple of thousand dollars per robot.
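A sketch of the two back-of-envelope estimates above. The 20% learning rate and the ~$100,000 current price are the episode's figures; the current-production baseline and the global car-output figure are assumptions added for illustration:

```python
import math

# Experience-curve sketch: each doubling of cumulative production cuts unit cost ~20%.
def cost_after_scaleup(cost_now: float, scale_factor: float, learning: float = 0.20) -> float:
    doublings = math.log2(scale_factor)
    return cost_now * (1 - learning) ** doublings

# ~$100,000 per humanoid robot today (episode figure); a hypothetical 10,000x
# scale-up (e.g. ~100k robots/year now to ~1 billion/year, an assumed baseline):
print(round(cost_after_scaleup(100_000, 10_000)))  # ~5100 dollars, consistent with "at least 10x less"

# Mass-based capacity check: ~1.5 t per car vs ~80 kg per robot, converting a
# third of rough global car output (~90 million/year, an assumed figure):
robots_per_year = (90_000_000 / 3) * (1_500 / 80)
print(f"{robots_per_year:.1e}")  # ~5.6e8, i.e. on the order of a billion robots a year
```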
And then, if you imagine those can work 24/7 for a couple of years, then at $2,000 a robot it's roughly 10 cents per hour for the hardware. The maintenance could be about the same again. Then maybe electricity; electricity prices could go up a lot if we're making all these robots and all these AI chips, but at current electricity prices that would be something like 3 cents, given current power consumption. So you end up with a total cost per hour of maybe 20 cents at full scale.
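The running-cost arithmetic, using the figures quoted above ($2,000 hardware, a couple of years of 24/7 operation, maintenance "about the same again", and roughly 3 cents per hour of electricity at current prices):

```python
HOURS = 24 * 365  # hours per year of continuous operation

hardware = 2_000 / (2 * HOURS)  # $2,000 amortized over two years of 24/7 use: ~$0.11/hour
maintenance = hardware          # "about the same again", per the episode
electricity = 0.03              # episode's rough figure at current prices

print(f"${hardware + maintenance + electricity:.2f}/hour")  # ~$0.26, the ~20-cents ballpark
```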
Yeah. And again, this is a wild conclusion that's difficult to fully absorb. If we imagine having a robot that's able to solve a bunch of tasks in the physical world, able to work 24/7 for 20 cents an hour or so in running costs, or say even $1 an hour, that would be revolutionary, right? There would be an incredible amount of demand for that. I could use 10 of those robots just to do things around the house or to help me with things. So maybe this question is dumb in some sense, but would there be demand for such robots? Do you think people would resist buying them out of nostalgia for human labor? Do you think maybe they become illegal, maybe they're resisted by unions, and so on? In some sense the demand would be there, but do you think that in actual practice that demand would be allowed to be expressed in the market? It does seem like once we get to the point where there is a lot of automation and people are actually losing jobs on a big scale, both from AI and from robotics, there's going to be some type of huge backlash at some point against that, and it seems hard to predict exactly what the result of that is. On the other side, there will be these huge economic forces in favor: if it costs, say, $20 an hour to have a cleaner clean your house now, but a robot could do it for 50 cents an hour, people are going to really prefer the robot. Also, with the robot, you don't have to worry about privacy, and it can be available 24/7, and there are many other potential advantages. There are also some disadvantages: it does seem like cyber attacks become much more dangerous when there are robots everywhere, because if someone can actually take over your robots, they could kidnap you in your own house while you sleep. That sounds absolutely horrifying. Of course they would be useful to have around the house, but if that's a real worry, that you might be murdered by your own household robot, this seems like a scene from Black Mirror or something. Do you think worries like that, maybe legitimate worries from people, could hold back adoption of robots? I mean, definitely, a lot of people probably would be creeped out or worried about that. But again, I think it is very hard to say how it will go, because it might just mean that people take cybersecurity way more seriously, and maybe there are very few instances of this happening, or people just get used to the robots being around and take them for granted. And you do also have to consider the other side, because humans are not perfectly safe either: there's a chance that a human cleaner steals from you, and all kinds of other stuff. So eventually people just have to make an overall trade-off. It seems like with self-driving cars, people get used to them pretty fast. It's a bit weird at the start, and of course they can malfunction and accidentally kill you, but statistically they're already about 10x safer than human drivers, and that will just increase over time. So it seems like, at least in that case, it's a pretty clear win in favor of the self-driving, I think. And that's perhaps a case study in adoption of technology, where people prefer to adopt the technology because it's just so convenient to ride in a self-driving cab, and if it's also safer, that's a win-win. I've become more interested in human factors limiting adoption of technology. Just think of something like augmented reality glasses, the kind of early-stage smart glasses that you wear around. One thing that I think has prevented adoption of such glasses is just that it perhaps looks weird to wear them; it's not fashionable. Perhaps people are concerned about being recorded. Very down-to-earth human factors that are not predicted by a model of the pure economics of the thing. So I am becoming more interested in whether adoption of technology will be limited by human factors, but I think, as you mentioned, the economic incentive to adopt robots would be so enormous that these kinds of concerns would be swept aside, especially in manufacturing, especially for robots for manufacturing goods. Yeah, or think about an industry like mining, or oil wells, something like that, that's quite dangerous. But just to step back a bit, I do agree with the general point that in so many jobs and industries, deployment of AI and robotics will be very slowed down by a lot of these types of concerns. And this is actually why I think we might be headed for, again, quite a weird world, where AI actually advances to very capable levels before most of the economy has actually changed at all. Especially if you can get this algorithmic feedback loop, you could have several years where most of the AI is being used to do AI research, and so in a couple of years you've gone to kind of superintelligent levels of AI, but most jobs are just continuing as they were before. And this is one reason why people might wake up quite late, to go back to our earlier point: your daily life might seem exactly the same, but over in OpenAI's lab, maybe a lot of coding is automated, and that's generating enough revenue to pay for the training runs. But then suddenly OpenAI basically has superintelligent levels of AI; it just hasn't been deployed yet.
And then deployment could be very fast, because you now have extremely capable AIs that can help do the deployment. That seems like a scary world, right? It seems like we would want to have public information about the quality of the best available AI models, so that we have at least some time to react. But if everything is becoming more internal to the AI companies, maybe that's not happening. I wasn't even imagining that they're not being transparent. It's just that it's so hard to internalize until you see things hitting you in the face. In this world, when I walk around the street, everyone's still doing their jobs just like before, but superintelligence exists somewhere. I might know that intellectually, but many people won't take it seriously until they actually see the real-world impacts. Yeah, there are also many jobs, and I'm thinking of something like lawyers, doctors, and so on, where it might be the case that I can diagnose myself using an AI model quite well, but I still need a doctor to prescribe me medicine, or I still need a lawyer to go through the formal steps of having a document delivered to court, which can only be done by a human, and only judged by a human judge, and so on. How much do you think factors like that will play into adoption of AI? I think a lot. I think there'll be some significant transition period where people are using AI advisers, but regulation, and just people not wanting to have AIs making decisions, will mean there are still humans in the loop for a lot of things. Over time, though, the main pressure on that is: you could imagine a world where every company still has to have a human board of directors who can officially veto things that the AIs do, but that means those human decision-makers become the key bottleneck in the economy, because that's the one bit that can't be sped up by AI. And so you end up with huge economic pressure to take them out of the loop of more and more things, so that you can unblock the whole cycle of production. Competitors will be thinking about replacing their board, and so maybe you now need to think about whether you need to replace your board, and so on. So standard competitive pressures weigh in on less and less things staying human. Yeah. And it's the same if you think about the lawyer case you mentioned: paying that human will not only slow things down, it's also an extra expense, whereas the AI lawyer will be basically free, but free without actual power in the legal system, and that might be a key issue. I'm not expecting this to be the case over the long term, and here long term might be 20 years, right?
But, as you talked about, I could imagine the AI economy, so to speak, moving at incredible speed, and the human economy being limited by the way we've been doing things, by law or by convention, for quite some time. In many countries it is simply not legal to have an AI hand in documents to a court, let alone have an AI judge a case in court, and a change to something like that would have to go through parliament, and that takes years. And this is just one example; in many industries there are many examples such as this. So if you agree with that picture, what does a world look like where we still have the legacy human systems, but AI is moving very fast? Well, I think it really remains to be seen how long that type of situation would actually persist, because, as I was saying, there would be these huge economic incentives to take humans out of the loop of more and more things. If one country is able to do that better than another, that country could quickly get ahead economically. So I don't know whether that would actually be stable for a 20-year period. It might be more like a couple of years. Is that the way you expect things to go: that a system like that is unstable and collapses under competitive pressures rather quickly? I do think it is really hard to say, because I guess the point on the other side would be that there does seem to be quite a homogeneous global elite culture in some ways, and so the idea that pretty much all countries would just not want to go down this path of letting the AIs make all the decisions shouldn't be totally off the table. So even though it's not an equilibrium from a strictly game-theoretic point of view, the world does sometimes manage to coordinate into situations like that. Agreed. Okay. I want to talk about how an ordinary person can prepare for AGI. You have an excellent essay about this on your Substack. So first of all, let's get some issues with this question out of the way, because people in my audience will think: does it even make sense to ask that question? Preparing for a world of AGI is like preparing for the industrial revolution, but an industrial revolution that happens in three years instead of however long it took. The worry is that the world is going to be transformed to such an extent that your actions simply don't matter. Why isn't that the right frame for thinking about this question? Yeah, the main thing I would say is that that might be correct. It might be that we're just completely powerless in the face of this. And just to clarify, I'm talking here from a personal perspective, not about what we should do socially to tackle this; there's a lot that society could do to better prepare. But from an individual point of view, one way this sometimes gets summed up is death or abundance: either there's an x-risk and we all die, and there's nothing I can do to not die in that x-risk, or it's a massive abundance utopia where everyone has more than everything they need, so nothing I do really makes any difference to that.
But my main pushback against that is: yes, there might be nothing we can do, but from a personal preparation point of view, what you should focus on is the scenarios where what you do now can make a difference. Unless you think they're 99% of the probability mass, you can kind of ignore the scenarios where you just can't affect the outcomes; all your chips should be put into preparing for the scenarios where what you do now can have some effect. Also, by personally preparing, you might be able to put yourself in a better situation to help the world. So this is not an exclusively egotistical idea, right? This is also about creating people who are able to adapt to the changes ahead and might be able to help the world adapt too. So let's dig into your advice here. Your first piece of advice is to find the people that are in the know, to seek out the people who have some clue what's going on. How do you do that? And where are those people? The first problem, of course, is that there's disagreement about who's in the know here. Setting aside who exactly those people are, how do you go about finding them? Yeah. In some ways there's a very deep question there about who you should trust, but I do think there are a lot of people who are tracking AI very closely, people who've been prescient in the past, and it makes sense to at least read what those people are saying. But it's even better if you actually have some people you know personally who are more in the loop about this type of thing. Many of, say, the past guests on your podcast would be qualified here. Some of the things I read: obviously I listen to Dwarkesh; I read Astral Codex Ten, Zvi's newsletter, the 80,000 Hours podcast. There are actually a lot of great substacks now that are tracking AI. And then knowing some people in the industry, especially people who take the more transformative scenarios seriously, because I think that's still a big thing lacking from the broader AI discourse. Even if you can look at these trends and see big changes coming, it might be difficult to act on that information. I wrote to you in preparation for this conversation about the fact that I learned about COVID somewhat earlier than society at large, but that I felt like I couldn't really act on the information. Maybe that's just a failure on my part to act with conviction when I have some information. But it seems to me that there are a lot of people, at least I get emails from people like this, who think big changes are about to arrive, but feel like there's probably nothing to act on here. Yeah, it could be that with COVID you just got unlucky, where you had the information but it didn't turn out to be useful. But I think we should have a very strong prior that in general more information is better, even if in a particular case it doesn't work out.
And I think there was some stuff that people could do in COVID. I know I didn't manage to do this myself, but a lot of people managed to hedge their investments and save a lot of money before the downturn. I did manage to move to the countryside, which made the next year much more pleasant for me than if I'd stayed in London, and I got that done in time. And I actually think if I had been able to act on COVID even a week earlier, it would have been valuable. I was running 80,000 Hours at that point, and we prepared a lot of material about what was going on with COVID and how you can personally help with it, but we didn't quite get it out early enough to get as much attention and be as useful as it could have been. If we could have done that a week earlier, I think it would have been much more useful to people. So I almost wish I'd acted a bit sooner in the COVID case, though that's from a social impact perspective rather than a personal prep one. Yeah. Another piece of advice is to save as much money as you can. Why is that useful? Sorry to break in, but you often hear something like the opposite advice: if we get AGI, perhaps even superintelligence, money will become irrelevant, right? We'll live in such abundance that money isn't the problem anymore. Maybe talk about why that is not exactly the case. There's a few things to say about this. One is just that if you put some more uncertainty into the equation, that pushes you back in favor of saving. If you imagine, say, that it's definitely death or abundance, then yes, obviously spend all your money now. But we're not sure that AGI will arrive soon with 100% certainty, and if you spend all your savings but AGI doesn't happen soon, then you're significantly worse off, because now you don't have a pension and so on. Whereas if I'd just spent 20% more money per year over the next 5 years, that's not going to make that much difference to my well-being; maybe I go on an extra holiday or something. If you consider that asymmetry, and you can model this formally, there's this thing called the Merton share, from Merton's portfolio problem, which is about how much to save given your discount rates and so on. You can model this out, and even if you put in quite a big discount rate because of the chance of money not being useful anymore, it doesn't say you should spend all your money; it says you should spend just a little bit more, like 1% or 2% more, compared to what you would have done normally.
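For reference, Merton's portfolio problem has a standard closed-form solution; a minimal statement, assuming CRRA utility with relative risk aversion $\gamma$, is

$$\omega^* = \frac{\mu - r}{\gamma\,\sigma^2}$$

where $\omega^*$ is the optimal share of wealth in the risky asset, $\mu$ the expected risky return, $r$ the risk-free rate, and $\sigma^2$ the return variance. The chance that money stops mattering enters through the subjective discount rate in the companion consumption rule, which is why, as Todd notes above, it shifts optimal spending by only a percent or two rather than recommending you spend everything.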
But then I think the even bigger issue is that there could be a third scenario, where money is still useful in the future. Firstly, AI will probably make returns on investment go up hugely, because there's going to be the mother of all investment booms as we build out all the infrastructure to run AI and robotics. Capital will be really scarce for a while, which means the returns on capital will be really high. So money you save now could turn into something like 100 times more money post-intelligence-explosion, maybe a lot more. So on the one hand you'd have way more money, and on the other hand you'd then be able to buy things you just can't buy now. However much money I have today, I can't buy life extension technology, but maybe that will be possible to buy in 10 or 20 years. A key way of seeing the situation is: for all the goals you might have in life, do they completely flatten off at a certain amount of money, or can you keep buying more of what you value with additional resources? Right now there is quite a big difference between being a millionaire and being a billionaire, in terms of your lifestyle and your ability to achieve your values more broadly. It's not just about your comfort; you might have other preferences, for things like social goods, or, well, life extension is maybe the best example: if you can buy more years of healthy life, many people would just want to buy as many of those as they could. It's interesting that in today's world there are some goods where I can basically have the same thing as the richest people in the world: the same smartphone, the same books, the same TV shows. Of course, they can have much more influence on the world than I can. But it is true that something like life extension is a technology that might require more money in the future, and there might be a separation between the rich and the not-rich in our ability to afford it, unless it becomes much cheaper over time as it becomes more widely available. The hope on the other side would be that things will just be getting exponentially cheaper, so even if you're poor, you just wait a few extra years. But yeah, some things are just scarce. There's a fixed amount of land on Earth, so however much money you have determines how much land you'd be able to have in the future. And land could become much more expensive, because land can be used for robot factories. Yeah, I agree. I've heard this line of reasoning before; it's a quite simple economic argument: land is scarce, limited in a way that many other things you can buy aren't, and so it will become much more expensive. But is it the case that something like farmland would become much more valuable exactly because you can use it to build robot factories? In today's world we have certain cities where land is incredibly expensive because of various social factors, because of regulations, limits on how much you can build, and so on. Would you expect real estate in San Francisco to skyrocket, or is it more something like land in the middle of Arizona, where you can get a lot of sun and build factories? I've been thinking about the buying-versus-renting question recently, in light of AI but also in general, and I would treat these as two separate markets. The rural land market would ultimately be driven by how much solar energy falls on that land, and also by whether the land could be converted to some much more productive use in the future.
Land in the center of a city is essentially a luxury consumption good, so the question there is whether people with future AI wealth will want a house in the center of the cities we have today, and that seems quite likely to me. I do think there could be a move out to the countryside, to be more spread out, when it becomes much cheaper to get around and people can build really fancy new houses very cheaply out in the countryside, and there's no economic reason to be in the city anymore because your work isn't there. But there would still be social reasons. If you think of land along the river Seine in Paris, say, you can see why people would still want to visit and spend time in that type of environment even if we were way, way wealthier, maybe even more so than today, because as you get wealthier you probably value leisure time and social time more than we do now. That makes sense. So land in very desirable cities would become a luxury good, like luxury clothing or handbags or expensive cars. It's also finite, though. Yeah, exactly; it's finite in a way those aren't. Okay. Which skills would you expect to have the most value going forward? Of course, this is almost an impossible question, and it's bound to change radically over time, but do you have a guess as to which skills will be valuable? Yeah, this is, as you say, a very big question. If we really are heading towards this world of intelligence explosion, then eventually pretty much everything could get automated. From a personal planning point of view, the question is more about how you stay one step ahead of the current wave of automation and earn a bunch of money while that's happening, which you then save and can live off even if all the other skills get automated. So the question is what will be most valuable over the transition period, and a nice way to sum it up is that you either want to get as close to AI as possible or as far away from AI as possible. Close to AI means you're working on improving AI or deploying it; you can see those skills are already extremely well compensated, and they're pretty difficult skills to have, but clearly they're going to be very valuable, basically because they're a complement to AI automation. On the other side are the things AI is going to be worse at. Those things will become the bottleneck in production, so their value will also go up. This is something people often don't appreciate: all the stuff that AI is bad at will increase in value over time as AI gets better, because those will be the things still needed from humans. Figuring out what those are is harder, but we have touched on this already in the conversation. Any task that's amenable to reinforcement learning, we're going to see AI get a lot better at in the next couple of years. Also any task where you can take a big data set of examples and use it to pre-train a model. Those will be the things that are best covered, and the things that will be hardest for AI will be the cases least like that.
So basically the much vaguer, long-time-horizon, ill-defined types of task. And what would be examples there? I think a lot of entrepreneurship and management: high-level planning, coordinating lots of things, figuring out what to build in the first place and then setting up lots of AI systems to do all the well-defined chunks; basically, breaking things into the well-defined chunks in the first place. But a lot of social and relationship work could also be like this. That's an area where we might have a strong preference to do it with a person for a while, so any job where relationships are really a key part of it. Examples I often hear about include jobs that are in the physical world, deal with people, and involve a lot of variety in tasks: something like a physiotherapist or a nurse or a tour guide, where it's a mix of people skills and moving around physically, and you don't know which moves you're going to make when you come into work that day. Yeah, unpredictable environments. Partly you're also pointing out there that robotics is lagging behind cognitive, knowledge-work-type capabilities, so anything involving physical manipulation would also be good. The trouble with that one is, as we were discussing, it could change quite fast. But there could be a transition period. Carl Shulman talks about this, I think in his episode with Dwarkesh: there could be a transition period where loads of people are employed with an AI telling them what to do while they build a factory, say, so their physical manipulation skills become the most valuable thing they offer for a while. But that only lasts until really good robotics is developed. And if you had something like that, you would probably also be able to record their movements in specific ways and use that as training data. So again, it doesn't seem like a situation that would hold for a long time. Yeah, I think that seems right. Whereas someone who runs a luxury travel experience, where they take you to a private kitchen and you taste lots of food with them in a Moroccan tent in the desert or something: people might really value that type of experience, and the human touch would be a big part of it. Yeah. And this might also be the case for something like being a famous person, though that's not really a career path available to many people. Famous people can often get paid because they are themselves, and that can't really be outsourced to AI. You are beginning to see automated influencers, who lend their physical appearance and voice to be recreated so that fans can interact with a model of them. But I still think there's probably tremendous value in being a person who's known; people want to meet the actually famous person. Yeah. And there's a really interesting general phenomenon there, which is that as AI makes labor, and eventually robotics makes physical manipulation, less valuable,
it then makes all the other kinds of resources you could have more valuable, because those remain important resources that aren't being cheapened by AI. We've talked about capital as one, because we'll still need capital to build all the robots and factories and chips, but then there are other resources, like relationships or fame, which potentially become a bigger part of the economy over time. Yeah, and remain valuable. Yeah. And another one you mention is citizenship: you recommend that people get citizenship of a country that has, or will have, a lot of AI wealth. My first question there is, isn't the process of becoming a US citizen, say, extremely slow? I often hear about people who have been living there for, you know, 15 years and contributed to the US economy but aren't actually US citizens yet. So is this something that matters on the timescales we're talking about? I think it's quite a bit faster than that. I forget the exact time horizon, but I think if you enter now on a work visa, it's more like a five-to-seven-year period before you can apply for citizenship. Yeah, you're probably right about that; the 15 years is an extreme example. I mean, immigration is terrible, so I'm sure there are things that can knock someone off that timeline. That's if everything goes well. And then it comes down to what you think your timelines are, but it would only be around 2030 by the time you could start applying, so I think there could still be time. I also think that if an intelligence explosion is happening and you already have work permission in the US, the intelligence explosion itself will take several years: you first have to get to AGI, or AI that can do AI research, and then you have to have the whole intelligence explosion. And would you be thrown out of the country? Hopefully not, after you've been there that long. So I think there could still be time. I'm not doing this personally, but that's partly because it feels like too much of a personal sacrifice to move to the US now. Maybe I will regret that. The question of citizenship is also interesting because your citizenship determines your piece of the cake in a national economy, and countries that do well during a run-up to AI or AGI will be able to redistribute more in absolute terms, just because they will be so much more wealthy. Again, this is of course speculative, but do you expect welfare programs to hold during a transition into AGI, or do you expect these programs won't be able to honor the obligations they have to their citizens? In this world the economy is growing very fast, so I think it actually becomes easier to honor your obligations. Yeah, my best guess is there would still be significant welfare. One factor is simple inertia: the US, I forget the exact figure, but I think it taxes something like 30% of GDP, and a lot of that ends up in welfare programs; that's the biggest federal expense.
So if that just carries on as it is, you actually end up with a lot of redistribution. But the even more important point is that there would be enormous political pressure for this: if everyone is having their wages pushed down by AI while a small number of tech elites become trillionaires, people are really going to want to tax that AI wealth and not just let everyone starve. I think you'd only really get the bad scenarios, where no one gets any redistribution, if there were some very locked-in, authoritarian type of government that was able to just ignore the will of its population. With a country like the US as it currently is, it would be politically untenable to do that. Though I suppose if the change was fast enough, maybe it could be. Yeah, or if power was concentrated enough in, say, one or two, perhaps even one, company reaching superintelligence first and becoming basically masters of the universe before the government is able to respond. Yeah, then we have a lot of problems. You also advise that we should make ourselves more resilient to crazy times, which is more easily said than done. We've now lived through the COVID times, which were somewhat crazy, but nowhere near as crazy as I would expect an intelligence explosion to be. What lessons have you taken from trying to be resilient during COVID? Yeah, someone once described the intelligence explosion to me like this: imagine knowing that in two years, something like the first few weeks of COVID is going to start, and then it just never stops. It's not a one-or-two-year thing; it actually gets faster and faster, maybe until everything is totally unrecognizable. As a frame for how to spend the next couple of years, that can be quite useful. I don't have anything super innovative to say about how to be more resilient; I'd just say do the normal, basic things. Have healthy routines that help you be less stressed. Make sure you get lots of time with friends. Exercise. I think finding a good therapist is helpful, or some kind of coach you can talk to about things. Find things that help relax you, whatever they are. Have a nice environment; personally I quite like the idea of being based in the countryside through a lot of this, because I feel I'd be less stressed: there would be nature, and I'd be able to tell myself that if there were a bio threat or a nuclear threat, I'm a bit safer. Those types of things would be the main ones on my mind. There's maybe a tradeoff between how good you feel in your everyday life, how relaxed you're able to be, and your level of engagement with the world. One way of relaxing is to disengage. Right now, I want to walk around in my garden, talk to my friends in real life, take walks. And it's too stressful to follow along with what's happening in AI; it's too stressful even to follow the news.
Is there a strategy for engaging with the world strategically: getting the information you want, all of the actionable information, and then having periods of disengagement, so that you're not in this loop of scrolling social media, feeling like you're productively getting new information when really you're just stressing yourself out? Yeah, how to do that practically will differ for each person, but thinking about exactly the things you're describing seems very useful. How do you get information efficiently? Rather than just generally scrolling Twitter, can you find five sources that you think cover the basics and read those once a week, or once at the end of each day? I think batching is really big. It's hard to do in practice because this stuff is so addictive, but the more you can have periods of true rest, where you're actually unplugged, and then periods where you engage, the better. And how to do that will vary a lot by person: do you want a Sabbath-type day, where you take one day fully off the phone each week, or do you prefer something like what I sometimes do, going on meditation retreats and taking a whole week totally unplugged? What type of routine works will vary a lot by person. Yeah. I guess one issue here is that, as I expect things to go, many things will feel like the one exception, the one emergency you absolutely need to follow, and there will always be three of those happening at the same time. So there's a question of how you stick to your systems and keep a sense of proportion about how big a deal each issue is. Maybe I should be more concrete here. What I'm imagining is something like this: OpenAI announces a new breakthrough; you try the model, and it's exceptional. Two weeks later, China decides to invade Taiwan. Then there's a new open-source model that's perhaps better than the model from OpenAI, and you're just not able to sit down and understand what's going on before the next thing happens. It seems we're just not equipped to think productively about the amount of information we're getting, at the speed we're getting it. So do you have to limit the information you take in to an extreme degree in order to be productive? I think even saying all this out loud is already helpful for people: just imagine this is the world we're going into, think about how you might respond to it at the time, and think about what you could do now to make yourself better prepared to navigate it. And in the scenario you just described, I think having some type of good information network would be very helpful. Ideally you want to be able to just ask someone, okay, how good is the OpenAI model really, and have them basically tell you. I think that's one big piece of navigating that type of thing. The other one is, well, people do this now: they follow random crises in the news that they can't do anything about, and then they feel bad, and it's not actually useful. I guess this will just become a much bigger issue.
But yeah, it's always worth asking yourself: what am I personally able to do, both for my own goals and from a social impact point of view? And really try to focus on figuring out those questions rather than just generally following things. I think that's very good advice. I had an experience like you just described with Russia's invasion of Ukraine, where I was following along, unable to do anything, but feeling like I needed to follow along. That's quite unpleasant, and it's also just not productive for the world; you're not actually helping by following along. And there's so much information out there about everything now that it's easy to follow along by the minute in these kinds of situations. One particular thing on that: I find Metaculus very useful for these types of things, because often there's some key parameter that matters. During the Ukraine invasion, I was trying to figure out the chance that London gets nuked, and there were forecasts looking at this. I could see that if that number spiked up, then maybe I should leave town; and in the meantime I could ignore the news apart from tracking that forecast. And there's a really cool group called Sentinel, run by Nuño Sempere, who track a bunch of different potential catastrophes and do a roughly weekly update on them. That also seems useful to follow. So you get everything you need to know without reading headlines: you just look at this one number that, at least in theory, has condensed all the available information into one actionable figure for how big a deal something is. That one was actionable to me, but it would depend on your goals: if I were Trump, I'd be tracking very different metrics, because I'd have different goals. Yeah, of course.
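As a concrete illustration of tracking one number instead of the news, here is a minimal monitoring sketch. The Metaculus API path, the question ID, and the response field names are all assumptions made for illustration; check the current API documentation before relying on any of them.

```python
# Minimal sketch: poll one forecast and alert when it crosses a
# threshold, instead of following the news minute by minute.
# Endpoint path, question ID, and JSON field names are assumptions.
import requests

QUESTION_ID = 12345  # hypothetical Metaculus question ID
THRESHOLD = 0.05     # act on your prepared plan above 5%

def community_forecast(question_id: int) -> float:
    url = f"https://www.metaculus.com/api2/questions/{question_id}/"
    data = requests.get(url, timeout=10).json()
    # Field layout below is a guess; inspect the real response first.
    return data["community_prediction"]["full"]["q2"]

if community_forecast(QUESTION_ID) > THRESHOLD:
    print("Forecast above threshold: time to act on your plan.")
else:
    print("Below threshold: safe to stay unplugged.")
```

The design point is that the decision rule (threshold and planned action) is fixed in advance, so checking the number replaces open-ended news-following.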
You also advise us to prioritize things we want to have in place before we get to AGI. As we're speaking, I'm actually still having trouble understanding exactly what you mean there. Is it that you want to have certain experiences that are only available before AGI, or what types of things would you advise us to have in place? Yeah, I'm pointing at a very high-level heuristic, which is: if there's something AI will be able to do much better than you in five years' time, then you should delay doing that thing until those five years have passed. That's a big part of it. An example from my own career planning: I was wondering whether to write more about AI or more about effective altruism, and I thought, well, clearly I should write about AI now, because if we're on the brink of an intelligence explosion, that will be super valuable, and if we're not, I can always write about effective altruism later, in the more normal timeline. So that was a case where it was better to delay the effective altruism work. Maybe another personal life example, and this one's a little controversial: suppose that in a normal world you'd be roughly indifferent between starting a family now and starting a family in five years. Many people aren't in that situation; for them, waiting five years would actually be a big cost. But supposing you are relatively neutral about it, it does seem quite tempting to me to delay, because if we're in the AI-soon world, there could be all these very urgent things you want to do to prepare, like earning more money, or maybe you want to work on AI safety and help; it could be the most impactful time in history, so it really makes sense to focus on social impact over the next five years. And you might also want to see what's going to happen before having a family, to get a better sense of whether we're in a good or a bad scenario. So that was one where I thought this type of thinking applies: what is urgent and will put me in a better position before AGI, versus what could in principle be done later? I think that's an interesting thing to reflect on. And in that vein there are also projects that it might make sense to abandon. For example, I think you mentioned to me in preparation for this conversation the question of whether you should spend years writing a book. Maybe some of the same reasoning applies to whether it makes sense to set out right now to become a mathematician. I don't actually know whether the situation there is that extreme, but I could imagine a world in which AI in a couple of years is just fundamentally better than humans at mathematics. So this is also about abandoning projects, correct? Yes, exactly. Though there could also be a role for the thing you just said, the bucket-list framing: if you think there's a chance it all goes badly and these are the last five years, maybe there are also some things you want to do before then. There's a bunch of different framings here that are all useful to think about. Last question. You write that the intelligence explosion is likely to begin in the next seven years, that if it doesn't, it will take much longer, and that we will have much more information about which world we're in within the next three years. Why can we make statements like that with such precision? Which curves or trends are you looking at? The key thing is that, most fundamentally, AI progress is being driven by there being more compute, because more compute means you can run more AIs, you can do bigger training runs, and you can do more experiments to improve the algorithms; and secondly by more labor going into AI research, more human AI researchers. Both of these things are increasing very fast now, and we're getting very fast AI progress. But if you project these trends forward, basically around 2030, the exact time depends on the bottleneck, but say between 2028 and 2032, it becomes very hard to maintain the current pace of increase in both of those inputs. So the amount of compute and algorithmic progress will start to flatten off around that point.
It could be quite a gradual slowdown, in which case scaling could last well into the 2030s at a slower rate, or it could diminish relatively quickly: if profits on the AI models aren't large enough, people might decide not to buy the next round of chips and stop there. That could also happen. It's a bit weird that all of these bottlenecks seem to roughly line up around 2030. I think current rates can be sustained for the next four years relatively confidently, and then the four years after that, 2028 to 2032, are less clear, probably slowing. So there's some precision around that. There's also the possibility that we just get another paradigmatic breakthrough, like deep learning itself; that's maybe better thought of as something that could happen at any time, so if we got that around 2030, maybe everything carries on for another while, but in a new paradigm. So basically, either the current paradigm stagnates, and we can see that it's not sustainable to keep growing the inputs to scaling at today's pace for many more years, or we get something like an intelligence explosion rather soon. The chance of finding a new paradigm depends on how many people are doing AI research, so to some extent it fits into the same model: if we have an exponentially increasing AI research workforce, the chance of finding a new paradigm is roughly constant per year, but if the workforce stops increasing, the chance of finding a new paradigm decreases a lot too. And to make the point about compute more concrete: GPT-6 will probably cost maybe 10 to 30 billion dollars to train, around 2028. We're pretty close to having chip clusters able to do that training run, just given what's already in the pipeline. But going to GPT-7 would then cost another 10x more, so we're talking over a hundred billion dollars, which is still affordable but getting much harder; that's something like a whole year of Google's profits to fund one training run. On the scale of the future of human civilization, that's not that much money. Yeah, though interestingly it would be bigger than the Apollo program; as a percentage of GDP it's getting up to Apollo and Manhattan Project levels. And there are a few other things that could stop you. By that point, pretty much all of TSMC's (Taiwan Semiconductor's) leading nodes will be used for AI chips, and that means we can't get more AI chips unless they actually build new fabs, which isn't necessary now; right now they're just reallocating capacity from mobile phone chips to AI chips, which can be done very easily. You'd also be looking at something like 4% of US electricity going to data centers in, say, 2028, but if you want another 10x, you have to go to 40% of US electricity. So you have to build a lot of power stations, which is totally doable, you can build gas power stations in two or three years, and there will be huge economic incentives to do it if we're on this trajectory.
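A back-of-the-envelope sketch of that arithmetic; the starting figures are the rough estimates given above, while the 10x-per-generation multiplier and the assumption that power needs scale in proportion to training compute are simplifications for illustration, not claims from the episode:

```python
# Naive projection of the scaling inputs described above.
# Starting points: ~$20B (midpoint of the $10-30B estimate) for a
# ~2028 frontier training run, and ~4% of US electricity going to
# data centers. Each generation is assumed to need ~10x more compute,
# and power is assumed to scale in proportion (both simplifications).
train_cost_usd = 20e9
power_share = 0.04
for generation in ["GPT-6 (~2028)", "GPT-7", "GPT-8"]:
    print(f"{generation}: ~${train_cost_usd / 1e9:,.0f}B run, "
          f"~{power_share:.0%} of US electricity")
    train_cost_usd *= 10
    power_share *= 10
# On these assumptions a GPT-8-scale run would need roughly $2T and
# several times current US generating capacity -- which is why the
# bottlenecks cluster around 2030.
```

The projection is deliberately crude, but it shows why each additional order of magnitude gets qualitatively harder rather than just linearly more expensive.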
But it's definitely becoming a lot harder than it is now with each order of magnitude of scaling. It's exciting, and scary, to see what's going to happen here. Do you want to refer listeners to your Substack? How can they find out more about what you're thinking about? Yeah, following my Substack or my Twitter is the best place to stay up to date. The guide I'm writing on what you can do about AI will be published there, and I'll also be writing about a lot of the other topics we've talked about. Fantastic. Thanks for chatting with me; it's been a lot of fun. Great, yeah, thanks for having me.