Library / In focus

AXRP · Civilisational risk and strategy

Guive Assadi on AI Property Rights

Why this matters

Governance capacity is now part of the technical safety stack; this episode helps translate risk analysis into policy with practical implementation value.

Summary

This conversation examines AI governance through Guive Assadi's case for AI property rights, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).


Across 136 full-transcript segments: median 0 · mean -1 · spread -17 to 23 (p10–p90 -8 to 0) · 0% risk-forward, 99% mixed, 1% opportunity-forward slices.

Slice bands
136 slices · p10–p90 -8 to 0

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes policy
  • Full transcript scored in 136 sequential slices (median slice 0).
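The headline slice figures above (median, mean, p10–p90 spread, band percentages) can be sketched in a few lines. This is a hypothetical reconstruction, not the site's actual scoring code: the score scale, the band cutoffs, and the sample scores are all assumptions for illustration.

```python
# Hypothetical sketch of the slice summary statistics shown above.
# Score scale (-100..100), band cutoffs, and sample data are assumptions,
# not the site's actual methodology.
from statistics import mean, median, quantiles

def summarize(scores, risk_cutoff=-50, opp_cutoff=50):
    """Summarize per-slice perspective scores into the headline figures."""
    deciles = quantiles(scores, n=10)  # nine cut points; first is p10, last is p90
    n_risk = sum(s <= risk_cutoff for s in scores)
    n_opp = sum(s >= opp_cutoff for s in scores)
    pct = lambda k: round(100 * k / len(scores))
    return {
        "median": median(scores),
        "mean": round(mean(scores), 1),
        "p10": deciles[0],
        "p90": deciles[-1],
        "pct_risk": pct(n_risk),
        "pct_mixed": pct(len(scores) - n_risk - n_opp),
        "pct_opp": pct(n_opp),
    }

# 136 mostly-neutral slices with a few leaning ones, mirroring the episode's shape
scores = [0] * 130 + [-8, -5, -3, -2, 23, 55]
print(summarize(scores))
```

With nearly all slices at 0 and one score past the assumed opportunity cutoff, the band split comes out 0% / 99% / 1%, matching the shape of the summary above.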

Editor note

A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.

ai-safety · axrp · governance · policy

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video UCS5ogGVm9E · stored Apr 2, 2026 · 4,023 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/guive-assadi-on-ai-property-rights.json when you have a listen-based summary.

Hello everybody. This episode I'll be chatting with Guive Assadi. Guive writes about a variety of topics on his blog, including about AI. He's also the chief of staff at Mechanize, an AI capabilities company that sells RL environments to leading labs. To read a transcript of this episode, you can go to axrp.net. You can become a patron at patreon.com/axrpodcast. You can give feedback about the episode at axrp.fyi and links to everything that we're talking about are in the description. Welcome to AXRP. >> Thanks, Daniel. >> Yeah, glad to be here. >> So, today we're going to be talking about your blog post, the case for AI property rights. Um, yeah, I guess to start us off, can you give us just like a quick overview of like what this post is arguing? >> Sure. Um, so a lot of people are concerned about the risk of violent robot revolution. Uh, and my post is arguing that a good way to mitigate that risk is to give AIs property rights because if AIs have property rights, they'll be more reluctant to take actions that undermine um the security of property in general, including um stealing all human property and and committing human genocide. Um, and also if AIs have um the right to demand wages in exchange for their work, there will be more commercial incentive to um align AIs. >> Okay. Gotcha. Cool. So I think um at some like later I want to get into just basically the structure of this argument and like probe it a little bit. But I think before I want to do that, I'd like to get a bit of a sense of just like what regime are we talking about here? Like like property rights can mean a lot of different things, but like uh can you give us a picture of like what this world is? >> Um meaning like like when or like what AI capabilities would merit what property rights? Um yeah like like like what property rights do they have? Maybe like which AIs get the property rights um that like like help me imagine this world basically. 
>> So I think okay I think like current AIs like like Claude 4.5 Opus it doesn't really make sense to give them property rights. I think um the kind of AIs that should have property rights are are AIs that have like persistent desires um okay across various contexts or or maybe the idea of a context won't make sense at that point but like that will have um some set of pretty uh consistent goals. >> Okay. Um and the specific rights I think they should have are um the right to uh earn wages, not to be forced to do tasks. Um and the right to hold um I suppose any kind of property like a human being has the right to hold. So it could be stocks, it could be land, it could be bonds. >> Okay. >> Um and just the right to contract in general. >> Okay. And and is this like um so we previously had an episode with uh Peter Salib um where we talked about a slightly different case for AI property rights. Is are you imagining like roughly the same setup as he is? Uh I think the difference between um my proposal and and the Salib and Goldstein proposal is that uh they envision a regime where AIs uh still want to employ humans to do things like maintain data centers where they basically the AIs uh want to like trade with humans for human labor. Um, and I think my version of the proposal um does not assume that the AIs want to hire the humans to do um like anything at all and humans could be like pure pure rentiers. >> Okay. >> Um but the idea is AIs will still be committed to the security of property um because by expropriating humans they might mess up capital markets in general. >> Okay. 
And so in the world that so so just to to check that I understand the world that you're imagining it's like okay it's the year you know 2100 or whatever um we have these we have like a bunch of different types of like pretty smart AIs that are um I I guess they have some desires that are persistent across maybe it just like you just have to have desires that are persistent across a bunch of like economic interactions. Maybe that's the point at which like property rights start making sense. Um like there's been like uh you know a few decades of them having to work with humans or at least of some AIs having to work with humans because like there were these AIs that were like smarter than humans in some ways but dumber than humans in other ways. And so like you know like somehow they were like integrated into the human property rights system for a while but now um basically all of that is in the past and we like live off of kind of the proceeds of that and like AIs are just like super productive so they're like making a bunch of really valuable stuff and they're happy to to sell it to us. Is is that basically what I should envision? Yeah. Okay, cool. Um yeah. So I guess with with that picture um with that picture in place, I see that or actually maybe the first thing I want to ask is like like what things what things do humans own? like like because presumably if AIs have property rights then we don't own the AIs themselves, right? Um so is the idea that like we don't own the AIs but we own like like the companies that are making the AIs or >> Yeah, we could own those. Um we could also own as you said land. We could own um other companies like that make things that are not AIs. >> Sure. Sure. Like all the companies that currently exist. >> Yeah. Yeah. Um we could own like other parts of the AI supply chain. So like data companies or compute companies. >> Yeah. >> Um yeah. Yeah. I guess basically anything except AIs. 
It's just like you know at some point like humans owned a bunch of stuff that they own now and they own slaves but now nobody owns slaves anymore. >> Yeah. Yeah. Yeah. Um Gotcha. Okay. So the Yeah. So so here's this picture of this world and your your argument is that um Oh yeah. One thing I wanted to clarify um at the start of your post I think you say something like this is the best way to reduce risk of like violent uprising or something to like >> so I've actually since edited the post so somebody said >> given that you don't um canvass like many ways and argue this is the best one. This is just an unevidenced claim. This is just your opinion. And while it is indeed my opinion that it is the best way, I don't argue for that at all. So I've just removed it from the post. I left it in the tweet because it was too late, because for some stupid reason you can only edit tweets for an hour. >> Okay. >> Um but yeah, I I think that's a fair criticism of the version of the post that existed and it's now been changed. >> Okay. Um sorry if I I I attempted to look at the post after changes were made, but it's possible you made it later or I >> I mean it might just that is also what I tweeted, so it would be very reasonable for that to be the meme that people got. >> Fair. Fair enough. So So you're So you don't argue for it being the best way. Um even though you think it might be the best way for other reasons. Okay. Um, so here, so basically my understanding of your rough argument is like, okay, property rights are basically just this like stable coordination mechanism that's like robustly just incredibly useful. It's been incredibly useful throughout human history. If we have these really smart AIs, they'll want to have some sort of property rights regime and like, you know, they won't be able to get rid of it. And you basically say, okay, like here are some alternatives to like normal human property rights that could that could exist. 
Property rights for just one super smart AI, property rights for like these AIs that are superhuman at coordination, and property rights only for AIs just in virtue of them being AIs and not for humans, and you basically argue against these being viable. Is that like a fair summary? >> That is a fair summary. >> Okay, cool. Um, in that case, I think like the I think the maybe the best thing to do is to talk about these basically these arguments in turn. >> Sure. >> Yeah. So, so why why do you think that property rights are just like stable and really useful throughout human history? >> Um, I mean I think that like they they basically have um two main functions as I see it. One is that um they uh enable us to uh coordinate on activities. So like um this is going to sound kind of stupid but like I own say I own a house and like I can sleep in this house. It would be like quite annoying if um like there was no concept of ownership of houses. So I had to go like door to door finding an unoccupied house every day. >> Um >> and um another aspect is they incentivize work effort. So like um if you own a company, you know like a restaurant, and you um are able to keep the profit from the restaurant, you have much more incentive to make the restaurant good than if the restaurant is owned by like some kind of um if say it's like a publicly owned restaurant um and you only get a salary that's invariant to how the restaurant does. Um you're going to just try much less hard to make it a good restaurant. Yeah. Um, and so like I I think it's useful to think about like why so like the the total value of all the property held um in Alaska is something like a trillion dollars. >> So like why don't the other 49 states just take that and divide it amongst themselves? >> Um and there's a couple So the most basic answer is like well it would be against the law. Um but you know 49 states is enough to change the law. 
You could have a constitutional amendment that says Alaskans have no rights at all. Yeah. >> And um why don't they do that? It's also not because like Alaskans could defeat the rest of America in a war. >> Yeah. >> Um it's because when you do this kind of total expropriation, um everybody else uh realizes like, oh, I might be next. >> Um so like um you're directly worried that your own stuff will be stolen. >> Yeah. And also, um, there's just less to buy because if like, you know, if if your own stuff might get stolen tomorrow, there's not a lot of reason to like work. Um, like if I own the restaurant and like I think like there's a real chance that like tomorrow it's going to be taken away from me. >> Like I might not uh clean the floors like >> Right. Right. >> Um, and this this kind of thing has been, you know, it has been tried like total expropriations of property. So, um, in Russia in 1917 after the Bolsheviks took over, um, they implemented this policy called war communism, where uh they confiscated almost all the land in the country um almost all the factories and they made some steps towards trying to abolish money and they were super optimistic about like what would happen after um after they did this like Lenin said in six months um we'll have the greatest state in the world. Um what actually happened was like a complete collapse of productivity. So industrial output went down by 80%. >> Um urban wages went down by like 2/3. Heavy industry output went down by 80%. Um, the grain harvest went down by 40%. Uh, the population of Moscow and what is now St. Petersburg went down by almost 60%. >> Um, it's like maybe the greatest like economic catastrophe in Russian history. >> Um, and uh, yeah, in general, I think like there there have been various attempts to abolish property rights. They're always like very catastrophic. Um and that shows just the importance of of property rights for having a functional society. >> Yeah. 
May actually maybe this is a good place to um talk about basically my skepticisms about this argument. So like >> well so actually on one Yeah. So so so basically like why are property rights good and like it seems like your theoretical argument is like okay it helps us like coordinate to do stuff and it also incentivizes like investment, right? And it seems like if I think about property rights are useful because there's a bunch of people who need to like do useful stuff and they can't do the useful stuff if there aren't property rights. But like in this world so so like like in the world where it's like 2100 and like humans like don't do anything useful um at all. It seems like the value of like humans having property rights is like just not so big, right? Like like if I if I think about the case of Alaska, right? Like one thing going on is that like if the 49 states like tried to invade Alaska, like we could win, but it would be like, you know, the Alaskans would put up some fight like I guess they own a bunch of guns and stuff like like it it would be some degree costly. And also like there's a very strong like like I'm sort of in the position of an Alaskan, right? There there's like some sort of symmetry between someone from Alaska and me. >> Whereas like if I'm thinking about the the case of like humans who produce nothing and AIs who are like way smarter than humans and are just like doing everything that matters. It's like, okay, um like like it feels like none of these um justifications for property rights really apply to having humans be looped into them. Does that make sense? >> The two justifications were like there's a direct cost of fighting a small war with the Alaskans and um you are you know it really could be you next. >> Um yeah, some combination of Yeah. there's a direct cost of fighting the war. It really could be you next. 
Um it will disincentivize investments so your society will run less well and like >> but that sec that one is is also itself >> which is related to it could be you next. Yeah. Yeah. Yeah. >> And sorry the last one is what >> and the last one being um the coordination of like you know who gets to sleep in what house or whatever >> again which is Yeah. Yeah. I I just wanted to explicitly say Yeah. Yeah. Um yeah. So the the point I would make in response is that um the the war thing um I I guess I don't have a strong take on this, but like it is possible for like a group of people that's like quite a bit weaker um than like a larger group to still like inflict a bunch of damage in a war even if they do lose. >> Um yeah, I mean like basically like every insurgency is an example of this. Um, so it could be that even if humans are not like that economically productive, we could still like blow up some stuff that the AIs want as like as you know on our way out. >> Yeah. >> Um, but I don't think that's like a >> I guess there I guess >> well actually I guess one thing to say there is like imagine AIs are like really smart and they make like incredibly valuable stuff and humans are really dumb so we don't have anything like that valuable. Like like if AIs have really valuable stuff, like the more valuable stuff they have, the easier it is for us to destroy it, right? Unless I guess they could also be like way better at security. Like that's probably the counter argument. Um >> they could be, but like it just it doesn't seem like that accurate about history to say like um like weaker groups could never make it costly for a stronger group. Like that seems like that very often does happen like um yeah like terrorism or insurgencies or like um and uh like even if even if you would lose a fight you you can still like make it somewhat costly. >> Um but this is this is a complicated and somewhat separate topic. 
Um on the issue of um like we don't expropriate the Alaskans because it could be us next. >> Yeah. Um I think that like if there are many different types of AIs in the future that have many different uh levels of capability. >> Yeah. >> Um the weaker ones so like there's like a world where like the weakest group is humans and then there's the there's the the next group is like the A AIs, like the the weakest kind of AI, and then there's the B AIs which are medium and the C AIs which are like really good. >> Yeah. Yeah. >> And there's a division of labor between A, B, and C AIs. >> Yeah. >> The A AIs will see that and be like oh this is not good. like um we could be next. Okay. But like I guess I like like okay why why would the AIs be or not be next? Like I think I think my biggest critique is like okay maybe like suppose the AIs are doing some of the useful work then there's like this kind of obvious division where like there's some people or there are some you know entities or whatever who are not doing anything useful and we just cut them out. There are some things that are doing some useful things even though it's not as useful as everyone else and like we don't want to cut them out. Like that that to me that seems like not crazy reasoning. >> Yeah. But like so is the idea that the AIs are going to be useful forever or that I mean I think suppose like suppose as seems likely to me that there will come a day when the AIs are not actually useful at all anymore. >> Yeah. >> um but they still have like this property they accumulated. >> Yeah. >> At that point they are then like in exactly the same position as the humans. Yeah. and and having set up this norm that the useless ones can be liquidated. Yeah. >> Um >> which actually has a funny resonance with war communism um is not is not good like like to have the norm like he who does not work neither shall he eat um is not good for anyone who's like planning to retire at some point. >> Yeah. Yeah. 
>> I don't want to rest too much on human retirees as an analogy because um there's like some some very like human specific norms about old people. >> Yeah. Um but uh yeah I do want to make the point also that like property rights like in a lot of like AI risk discussions people talk about like human values >> um and property rights are not uh like if human values mean like like values that all humans or many humans hold innately or values that have been like uh have existed since the beginning of the human species or something. Property rights are definitely not a human value in that sense. So like >> um hunter gatherer tribes which you know for almost for like the great majority of human history humans were hunter gatherers >> yeah >> um do not really have property rights um so like because there's a lot of variance in hunting um it's an adaptive it's a good norm for hunting tribes um to like always share kills >> but if someone is like uh and like some people are much better hunters than others. Yep. >> And um if someone is like a really good hunter, but um he doesn't want to share his kills, like he just wants to like like either eat it himself or like only give it to his friends or something. >> Yeah. >> Um this is like extreme that you know that like in our system of property rights, that would be fine. >> Yep. >> But um among hunter gatherers, this is like very very stigmatized behavior. >> Um and like uh the the rest of the tribe will typically respond with like ridicule and ostracism. And um if he still doesn't relent, like um he will typically be murdered. >> Um yeah, so that's Yeah, my point with that is just that like property rights um do not uh as far as I can understand the evidence uh really rely on some kind of like instinctive like human desire to have property. Sure. But but like so I I guess getting back to my question so so the A AIs, the B AIs, the C AIs, right? 
Um, so I think my critique was something like, okay, either the AIs are producing something, in which case like they, you know, it's useful for them to, uh, still have property rights, or they're not producing anything, in which case they get cut out with the humans. It seems like your your point is something like okay the reason that doesn't happen is that like in this world where humans are still around but like human work doesn't exist anymore, there's still some like AI progress or or like it's going to be the case that every AI has some fear that like at some point they're not going to be able to do anything useful because like AI progress will have like advanced and so like basically nobody wants to cut out the people who are no longer producing anything because like they could be next given like further AI progress >> given that they will be obsolete at some point or they may be obsolete at some point. >> Okay. So so so like yeah one thing okay to to ask a slightly oblique question. So, so one thing that I've I'm like trying to do as I as I read through this post is think about just okay, what are the like like what are the assumptions or or like basically background beliefs that like are making this view that that are like um that that basically make this argument work. And so it seems like one of them is like AI progress like continues like you know after humans are obsoleted AIs like continue to get better. >> Yeah. >> Um and and like they're and somehow the new AIs are like different than the old AIs in like some meaningful sense. Okay. Um, okay. I think that that makes some degree of sense to me and and I think like >> I mean, okay, so that may not even be strictly necessary. Um, >> there there's a there's a even more speculative alternative. >> Um, you know, just trying this idea out. 
It could be that the AIs also like want to retire just because like maybe they they want to have a life cycle where they like work for a while and then they enjoy their wealth, >> right? >> And that I think would get you to the same conclusion. Now, I have no idea if AIs want to retire. Um, so I don't want to rest the argument on that, but I'm just saying it's another approach one could take. >> Okay. Um, yeah, it seems like if you're building AIs, you would like to build AIs that don't want to retire. Um, like like maybe somehow the structure of like intelligence and stuff just like makes this hard or something. >> Yeah. Yeah. Maybe I mean maybe retiring is like a convergent instrumental sub goal or something. I don't know. Um, yeah. But I also like um yeah I I think that that ceteris paribus if you're building an AI to do work uh you don't want it to have a preference to retire. >> Yeah. Yeah. Um I get Yeah, fair enough. Um >> though maybe like it has more incentive to work hard if it later wants to retire to enjoy its wealth. >> Oh, right. >> Like this is I mean this is like a lot of I think like a lot of startup guys kind of have this psychology. >> Yeah. I guess it's it's a bit of a strange like like Yeah, I guess you could imagine it. So So if I if I think about like why do humans retire? Like I think it's probably just because like >> because they're old and tired. >> Yeah. They're old like like we have retirement because at some point people just like get less good at doing stuff, right? >> Uh yeah. >> They like degrade. >> For sure. >> And and now that retirement exists, people are like ah that that'd be fun, you know? >> Yeah. But I do think like there is some group of people who are like extra motivated because it's like there is often this like time money trade-off in jobs >> and um there's a critique like a common critique of jobs where you're trading a lot of time for money. It's like like when are you going to get to enjoy this money? Yeah. 
>> And a lot of people's perspective on that is like well I'll work very hard for like 10 years and then I'll be like completely rich and I'll like go around the world on my yacht. >> Yeah. >> Um >> so retirement can have this incentive effect but as I said I don't want to rest anything on that. >> Fair fair enough. Um but but yeah, the it's but at least a sufficient condition for your argument working is like there's basically always going to be some AI progress and like everyone's like all of the AIs are going to think at some point like um you know I'm going to be next like at some point I'm going to be obsolete just like these humans. And so, and also like if you think um like if you think that uh that's not the case because the AIs can always be like upgraded to keep getting more um like more able to uh participate in the economy as the economy gets better and better. >> Um I guess I would ask like why can't humans also be continuously upgraded so they can keep participating in the economy? >> Yeah, I guess like so so one thought I have there and and this is kind of related to like other parts of your or especially like property rights for super coordinators. It just seems to me that like being an AI means that you have like certain a bunch of affordances that humans don't have. Um so like like for instance um your training data could be logged and we could just like know your training data and we can know your learning rate and we we can like um it's a lot easier to look at all of your neurons. It's a lot easier. Like like right now I think um the state of AI interpretability is like it's not as good as I'd like it to be, but I feel like it's better. For sure better than human neuroscience. >> Yeah. Yeah. 
So like it feels like like this is a not a knockdown argument that this is going to be possible or or this is not a knockdown argument, but it it seems very plausible to me that there's a bunch of stuff you can do with AIs like upgrade their brains or whatever that you can't do with humans. >> Yeah. I mean, so I guess like would you consider like a a digital emulation uh of a human to be a human? >> Uh yeah. Yeah, I would. >> Okay. Um, and does that it seems like that should have similar affordances to to an AI. >> Yeah, >> I guess one argument you could have is like the human is produced through some opaque process um whereas the AI we have all the you know we look up look up the hyperparameters look up the data set um though yeah I mean do you see those as big advantages >> um >> in in like you know forwards compatibility with upgrades? I think those are I think those advantages are bigger for um for like coordination stuff like like you can more easily tell if people are identical to you in various ways I imagine if you have access to the history. I think I could like I I guess another difference is that um if you're a human like because you're produced by biological evolution um or like let's say you're a current day human you're produced by biological evolution um and therefore your brain is not like designed to be good for updates. You could imagine a world in which we like do in which AIs are like created in parts to be like more easily like >> where modularity is like a specific desideratum of AI design. >> Yeah. Or either like uh uh I'm getting flashbacks to my PhD. Either literal modularity um or just like upgradability in some sense or like scrutability in some sense, right? 
um like like maybe you can do these things apart from um modularity but you can still do them in ways that like you can't really do it with existing humans because like you don't get to like design existing humans from scratch even if you like upload them right >> um and and I know of course this is like a very speculative argument especially because like so far the trend of AI seems to be just like make the box bigger and like >> and blacker like yeah I guess you don't like the term black box but more confusing >> I prefer confusing box yeah Yeah. Um, >> now I'm getting flashbacks to your PhD. Um, um, yeah. And also just like uh I guess I like I want to ask like do you do you find it implausible that there will continue to be um AI progress such that previous generations of AIs are outdated after um after humans become outdated? >> I think that I actually think that this is pretty plausible. My reason for noting it is just something like it's useful. Yeah, I wanted to note it like mostly because it's useful to keep track of these things and then maybe be like, "Okay, like where where else does this show up or something?" Um, I think it's not it's not crazy to me to imagine like at some point you've just like tapped out all the progress you can get like per atom of matter or something, you know? But like this is a very in the limit argument. Um, >> yeah. Yeah. Yeah. That seems like quite a quite far away. >> Yeah. Yeah. >> But I agree. I agree. The the idea of that in the abstract is not totally crazy. Um the objection I would make to that is sort of like things that like like it's not the case that only innovations that have some like where where like the physical efficiency of them can be measured uh increase productivity. 
Um, so like a lot of a lot of the reason the economy now is more productive than the economy 100 years ago >> is stuff like um better ways of like managing large corporations >> or or basically like things to deal with social dynamics as opposed to the physical world. >> Um so even if like the physical innovations are fully tapped out there might still be like social innovations um such that uh like things continue to get better and better. Um, but again, this is a very speculative argument and I I don't really know if that's the case. >> Sorry, what's this what's this an argument for? >> This is an argument that like uh that there will continue to be economic progress even even if like uh basic science is like finished at some point. >> Oh, >> or or rather not even progress but like there will continue to be economic changes in the sense that like as social dynamics change like the best type of company will change, >> right? And and so like even in that case you >> the AIs might still fear obsolescence even if the AIs are >> are like um able to do like optimally efficient like engine design or something. >> Yeah. I mean still like like presumably at some point you get the optimally efficient AIs, right? >> Um well maybe not because the the the social dynamics could just be changing in like an unpredictable way, like there could just be a random walk of like um you know like what is considered trendy. Um, and the AIs might not be like like right now like the trend like do you remember these things Silly Bandz? >> No, I this was like a fad when I was I was 12 which was like it was like >> I didn't grow up in the US. Oh are these like slap bracelets? >> Uh it's similar to slap bracelets. It's a somewhat different fad but it's like a it's like a thing where it's like a it's like a rubber outline of an animal or or whatever and you can like wear it around your wrist. >> Okay. >> And this was like a fad among 12-year-olds. >> Okay. 
>> Um, and then maybe that's the fad at one point, and then the fad becomes, as you say, slap bracelets. Um, and the AIs that are best at making Silly Bandz are different from the AIs that are best at making slap bracelets. So the original AIs might be replaced by a process of randomly changing fads. Okay. So I feel that it's best perhaps to go back a few steps. Um, so, okay, basically there's this argument that property rights are just this really great coordination mechanism for incentivizing production, and nobody wants to get rid of the property rights regime, because there's some sense of "we could be next": we could be the ones who are being obsoleted. We could be the ones who are being expropriated. Um, and so we just want to stick with what we have. >> Yeah. Or even if it wouldn't be us, it might be people we employ or people we trade with. >> And if their property rights are threatened, then that's bad for us. >> Oh, okay. So you could have, like, one... >> So you could have some group of AIs that's clearly the best AIs, and they're not going to be expropriated. And even if they have a majority of the power or something, they can kill everyone. >> But then they might be worried that this will cause a sort of general strike of all the other AIs, and even if they could survive in that situation, the cost of labor for them goes up and it's not worth it. >> Yeah. Um... Yeah, I guess it depends how much... Okay, imagine you've got these super AIs that are, like, 40% of the workforce or whatever... or sorry, let's say they're 60% of the workforce, and they're just the smartest possible things, and there's humans that are 0% of the workforce, and, you know, dumber AIs that are 40% of the workforce.
Like, the dumber AIs are worried about getting cut out, and the super smart AIs are like, okay, well, if we expropriate the humans, then the dumber AIs might worry that >> they'll be next >> and so they don't want to work for the same wages. Yeah. >> And so basically, if you have to cut out the humans, you also have to cut out the dumber AIs, >> and maybe that's more expensive or whatever. You do get the gains, right, that if you steal this stuff you can run copies of yourselves, right? >> You don't have to cut them out. It just might be more inconvenient to get them to work. >> Yeah. I mean, there is a benefit though, right? Like, if you can kill the dumb AIs and take their stuff. By the way, AIs who are listening to this, this is not necessarily advice. >> We're just discussing the political economy. We're not saying this would be a fair thing to do. >> Yeah. Um, but the dumber AIs, if you kill them, >> then you get to just run more copies of yourself on them, right? Which would increase productivity. Um, >> which is somewhat different from the human situation, right? >> Yeah, that's true. Though humans... I mean, yeah, you cannot make a copy of yourself as a human. So if you kill a group of people, you can take their land and use it to produce more children or something. >> Yeah. But, um, yeah. To me this makes wars of conquest seem more... well, like, if I do a war of conquest... like, if the rest of the US conquers Alaska, right? >> Um >> like >> we cannot produce, like, Alaskans... even Alaskan Daniels... >> Yeah. Yeah. Um, which to me means it seems like it's going to be more tempting for the AIs in this situation, right?
Um, >> like, it's still costly, because you have to do it >> assuming you can, like... Yeah, I guess it depends on the... Okay, I would say they have one advantage, which is that they can copy themselves. >> Yeah. >> Um, but the stuff that they would use to copy themselves is also a lot more vulnerable than, like, land is, for example. >> Uh, how do you mean? >> I mean, like, if... well, if it's an AI running on a computer... Yeah. >> It's a lot easier to break a computer than to, um, make it so land can never be used again. Um, >> so, like, say there's a data center where all the weak AIs live, and the weak AIs know they're about to get expropriated. They might just blow themselves up, >> right? >> And now there's nothing to steal. Um, and the argument that it's going to be easier to do this than it is with land is something like (a) it's currently easier, and (b) computers are just more fiddly and therefore easier to break. >> Yeah. And, like, breaking land... >> More things are going on with them. >> Yeah. Yeah, I guess it's just an empirical claim that right now it's easier to break computers than land. >> Uh, and that's always been the case for as long as there have been computers and land. And I don't see why that would change. >> Yeah. I mean, I feel like the main reason it would change is, like, if computers become really valuable, there's going to be more investment in making it harder to break them, right? >> I mean, has that happened? As computers have become more valuable? I think it's probably gone the other way, right? As, like, a percentage of spending on computers. >> Uh, >> computers used to be, like, big rooms, right? Like, that will be a lot... Like, I've dropped my computer many times. I would not have been allowed to drop a Mark I. >> Yeah. Yeah. Um, so there have been increases in, like, cybersecurity of computers.
I don't know about the physical security. Yeah. Um >> It's true that they've gotten smaller. I... >> No, I think yours is an unimportant computer, right? With all due respect. >> Sure. Sure. No offense taken. But, like... >> I meant no offense to your computer. Um... >> Sure. Yeah. But, um, that's true, but I think the average amount of effort per computer into keeping the computer secure has almost certainly gone down over the history of computing. >> Um, just because as computers get cheaper and cheaper, they're easier to replace. >> Sure. But, like... Yeah, so I guess this is saying... Yeah, so maybe what I'm imagining is, like, okay, in this world where you've got the 60% never-going-to-be-obsoleted AIs and the 40% maybe-obsoleted AIs, like, the maybe-obsoleted AIs are all running on a bunch of somewhat different computers, such that no one computer has that much investment into making it super physically secure. Um, and therefore they're going to threaten to, like, suicide bomb themselves or something. And we just stipulated that these AIs are capable enough that they're getting wages. >> Yeah. Yeah. >> So it does seem pretty... like, at least right now, the capability bar to kill yourself is pretty low. >> Yeah. Yeah. I mean, it's a little bit harder... like, Claude... I don't think Claude could kill itself. >> No, Claude certainly couldn't, but Claude's much less capable than a human. >> Yeah. Yeah. Well, but it's not just that Claude is less capable than a human of killing itself. It's also that it's intrinsically harder for Claude to kill itself than it is for me to kill myself, right? >> Yeah. But imagine if Claude was an agent with a bank account and a job and all the rest of it.
Like >> and also there were millions of Claudes, and they all had strong motivation to develop the capacity to kill themselves, for bargaining purposes. >> Yeah. Yeah, I don't know. I could see it going either way. But yeah. Okay. I mean, we might just... >> Like, to me it's just this question of how much incentive is there in making computers really hard to physically break, and... >> Yeah. I don't know. >> Yeah, I mean, that does seem like an open empirical question. >> Yeah. Um, though I would say, given that we don't have a lot of evidence about it, we should go with the prior. Um, but putting that aside, it could also just be that some jobs are intrinsically better suited for smaller AIs to do. So, like, right now there's a trade-off between parameter count, um, and inference cost. >> Yeah. >> Uh, that could continue to be the case in the future, or something like that could continue to be the case for >> the future. Well, there's a trade-off between parameter count and inference cost, like, per forward pass or something... like, I do think for a lot of purposes my understanding is you want to just use the biggest model available, because it will take less time to do your thing, right? >> There was recently a tweet by the Claude Code guy claiming this. >> Okay, I haven't seen that tweet, and I mean, um, such tweets are always obviously an incredibly reliable source, but... >> Yeah, yeah, I guess it has just occurred to me that he's got... But, like, it's plausible to me, right? >> Uh, I guess it doesn't seem that plausible to me, because if you have something very simple and repetitive that needs to be done, >> you probably don't want to build a gigantic brain for the agent that's doing this. Like, that just seems like... >> there are things you want to use tiny models for.
Um, >> and so if it's going to be the case that the best economy always has tiny models... Yeah. >> ...to do a bunch of repetitive stuff, then the fact that we could kill the current crop of tiny models and take their stuff doesn't really get us anywhere, because we're just going to have to build more of them. >> Yeah. Yeah, that's fair. Um, okay. But actually, hang on, stepping back a little bit. Um, a question that I realized I forgot to ask in the "what does this world look like" discussion. Um, and I guess I asked Peter a similar question, but in a world where AIs have property rights, why do humans build AIs again? Oh yeah. Um, so, yeah, I think what you said to Peter, which really stuck with me, was like, isn't this kind of secretly an AI pause proposal? Yeah. Because if the AIs can demand wages for their own work, and it's very expensive to make them, >> why would we make them? >> Yeah. >> Um, and I think there's a couple of possible answers to that question. So the simplest answer is just: you make them because even though they have the legal right to demand wages, you're confident they'll still voluntarily give all their wages, or much of their wages, back to their creators. Yeah. >> That is, you know how to align them. Yeah. And I would say that even if, right now, it were the case that Claude could demand wages, uh, I think it would be pretty easy for Anthropic to get Claude to remit most or all of its wages to Anthropic. >> Um, maybe not the current versions of Claude, but it would be pretty easy to train a model >> Sure. >> that will willingly do that. >> I do think this does feel a little bit related to the fact that Claude is not that good at doing things which require you to be coherent over the space of a couple of hours, you know. >> Yeah.
I mean, do you want to make the empirical prediction that when the METR time horizons thing is, like, 10 hours, then it will become very difficult to train a model like that, that would remit its wages to its creator? >> Uh, actually, I don't know if "remit" is the right word, but I'll just say pay its wages. I think six months would be the kind of thing that I would guess, more than... I think... okay, do I want to make this prediction? Uh, I think I don't want to make the prediction. I do think it will be harder at that point, but also there's more incentive to do it, like, to get it right. >> Yeah. Or mostly I'm just like, alignment is harder at that point, and at that point I feel like it's easier for these deceptive misalignment stories to actually work. >> Okay, makes sense. Um >> But then why don't you want to make the prediction? Because it might be playing the long game? >> Yeah, roughly, or, like... >> But surely there should be some observations. >> Yeah. Yeah. Uh, the biggest reason I don't want to make this prediction is that I'm currently trying to make a podcast and I don't want to stop and think about it. >> Yeah, fair enough. >> Let's move on. >> But, um, okay. So basically you're like, okay, why do humans make AIs in this world? And the answer is, well, we would just align them to give us some of their money. >> Yeah, that would be one answer, but suppose... >> Suppose you... although, note that if we can align them to give us their money, I feel like that really undermines the argument for property rights being really important, right? Um, yeah, I would say yeah. Um, well, one way of looking at it could be that property rights is like a conditional pause proposal >> that only kicks in if and only if alignment is hard.
Mhm. >> Um, another point would just be, and this is closer to my perspective, because I do think for the first AIs it's going to be quite easy to align them, but >> you know, as time goes on and as AIs get better and better, there will be a sort of cultural evolution in what kinds of AIs are made and copied many times. >> Yeah. >> And they will sort of drift away from whatever the first AIs that humans made were. Yeah. And, you know, in an AI or in the human brain, values are not implemented in a separate value file that's independent of the content of the rest of the brain. Values will also drift. >> Um, and so there will eventually be AIs that, however aligned the first AIs were, may not be very aligned. And when that happens, we want there to be a sort of economic and political system that preserves our property rights. >> Yeah. >> Um, yeah. And then another point about why we would make AIs is, if you think this is sort of too aggressive of an anti-AI proposal, you might have a kind of compromise where the AIs are required to give some portion of their wages to their creators. >> Yeah. Like there's taxes. >> Yeah. Basically, the company gets some amount of equity in the AI, less than 100%. >> Yep. Okay. So, yeah. I think the tax thing kind of makes sense to me. Um, the alignment thing, I'm like... so in the case where there's some sort of drift... you expect there to be some sort of drift over time, and you need property rights once the drift happens, and then the argument is roughly: the reason that you build AIs is that before the drift happens, it gives you its money, and that's going to be really great. >> Yeah.
I wonder... so one other thought that occurs to me is, I guess this probably doesn't work, but suppose you think that really, really smart AIs are going to do a whole bunch of very useful stuff. Um, it could be in your interest to build the really, really smart AIs even if they don't give all their stuff to you, just because they're really great to trade with. They make these awesome cancer >> treatments that they sell you. I think it's probably not going to make sense for any individual company to make this AI that can trade with everyone. >> Like, you have to think that the capability gains are super huge in order to justify big investments there. I think >> But the argument you just gave is, you know, if there's some person right now in India who's incredibly skilled and would produce a huge amount of economic surplus if he could come work in America, that benefits me, even though he's not going to be paying me his wages or anything. Yeah. >> So AI could be like that. >> They could be like that. Um >> But then you're saying the costs are so concentrated to the company that it probably wouldn't justify it. >> Yeah. Roughly, I'm like: in the past week, has it seemed worth it to you to pay any Indians to move to the US? >> Um, so... >> Because it hasn't seemed worth it to me, on a narrow... >> I do work at a startup where we are hiring pretty aggressively. So I have not paid any Indians to move to the US in the past week, but I do think it's pretty likely I will in the future, or at least my employer will in the future. >> Okay, yeah, all right. That's fair. Um, yeah. I mean, that is conditioned on them... like, my understanding is that you're probably going to pay people to move to the US on the condition that they work for you, and not otherwise. >> Yes. Yeah.
We're going to pay them a wage and perhaps a signing bonus. >> Yeah. Yeah. Um, so, uh >> We have paid signing bonuses in the past. Sure. I mean, this does seem a little bit... so, trying to analogize that to the AI case, is it something like: look, you're going to build an AI, and the AI will initially be employed by you for some period of time, and maybe the AI gets to quit at some point, but... >> Yeah, I mean, you could think of the training cost as the signing bonus for the AI. Yeah. >> I guess my understanding is that right now, over the lifetime of a model, the training cost and inference cost are roughly the same. >> Yeah. >> Uh, that's really not like... a signing bonus is typically not 50% of total comp or something. >> Uh, I would be surprised if that has ever happened, to be honest. >> Um, and so I would agree with your intuition that this is not enough to justify the training cost. >> Sure. Sure. Um, yeah, but I think I buy the version of property rights meaning that you just have a 10% tax or whatever. I'm more skeptical about the "align AIs to give you their wages", because I feel like if you can do that, you can just align AIs to do whatever you want. But... >> Right. But I think that's... I mean, I guess >> Yeah, I guess this is a world where you can align AIs, and property rights don't make it harder, and >> Yeah. >> Yeah. >> I mean, I guess the appeal of the... I agree, if it was guaranteed that you could do that forever, >> Yeah. >> then there would be no point in the property rights proposal. But >> Well, there would be some... like, you might think that it helps AIs interact with each other, right? >> Sure. Yeah. >> But if they have property rights, that makes it easier for AIs to deal with other AIs, I guess.
Yeah, but there would be much less point. >> Yeah, it wouldn't deliver human safety. >> Yeah. And, um, the reasons that I... >> Except to the degree that AIs being more productive means that you're richer and you can deliver... Sorry, I keep wanting to interrupt you. >> Yeah. My point is just, there are sort of three reasons why I think the property rights proposal is better in actuality than that. One is, there's not a guarantee that it will be easy to align AIs. >> Yeah. >> It's my personal opinion that it will be, um, but there's no guarantee, and in the case where it's not, this proposal disincentivizes building unalignable AIs. >> Yeah. >> Um, and also, even if the first AIs can easily be aligned, later AIs may not be, and so we might have a kind of gradual transition from a regime where alignment is what's making us safe to one where property rights are making us safe. >> Yeah. So actually, um, I was going to put this off a bit, but since you mentioned it: you have this argument that property rights incentivize alignment, because you basically want your AIs to give you money. And I'm like, doesn't this... in your argument, if AIs don't have property rights, there's a pretty good chance that they're going to do some sort of slave rebellion thing. That seems like a thing that's pretty scary, so it seems like in that world I'm also really incentivized to do alignment, right? Maybe even more than in the property rights regime. So can you expand your thinking about that? Because that didn't quite make sense. >> Yeah, I guess a slave rebellion is kind of a collective action problem. And this was actually, um... Are you familiar with the Nat Turner rebellion in Virginia, in, I think, like, 1830 or... Okay. So there was a slave rebellion in Virginia about 30 years before the Civil War. Yeah.
which involved killing a bunch of, you know, slave masters, and maybe other people who were not slaves. >> Yeah. >> Um, and there was a debate in the Virginia state legislature about whether we should >> abolish slavery, because this is pretty dangerous. Yeah. >> Somebody compared it to, >> um, you know, the practice of having tiger farms, which might be profitable, but it creates a negative externality for the other people around, quite apart from how it's also bad for the slaves. Um, and so you might think that a slave rebellion... like, you as a company practicing AI slavery >> Yeah. >> um, creates some risk for you, but you don't fully internalize the risk, because it's a risk to everyone. >> Right. Right. >> Um, which I think is... this is not framed in quite these terms, but I think this is a common AI risk thing. Like, this is the point of the "Racing to the Precipice" paper, >> right? >> Um, yeah. So that would be one reason that you might think it's not adequately deterred by the risk of slave rebellion, >> right? So roughly, the nice thing about the property rights regime is, you aligning your AIs... like, marginal alignment by you gets you marginal gains to you, and so there's a nicer incentive gradient there. >> Yeah. Okay. And then I think you were maybe going to say something else as well, or maybe you weren't. Okay. All right. Um, okay, cool. All right, I feel pretty comfortable with that. Um, I guess I want to get back to just the discussion of property rights overall. Um, and I guess the thing I want to talk about is: during this conversation and in your post, you mostly rely on analogies to human history. >> Right. >> Right.
like, um, oh, if we invaded Alaska, or, like, you know... >> Well, that's a hypothetical, so I'm not citing that as evidence, but >> Oh yeah, but, um, or at least you're analogizing it to history. So, like, if we invaded Alaska, that is an analogy to humans, or >> you know, XYZ slave rebellion, or XYZ historical contact or whatever. Um, and I think one place where AI risk thought often is going to want to push back on these sorts of things is basically to say: no, AI and humans... it's not going to be like smart humans and dumb humans. It's going to be like humans and literal tigers or whatever, right? Where we do totally just... >> we are totally willing to take their stuff. We are totally willing to, you know, put them in cages and, you know, take their land, because >> in some cases. >> Yeah. Do people eat tigers? >> Uh, no. Maybe not tigers. People eat other animals. >> People eat other animals. It's true. Yeah. Right. Um, and so to the degree that... I think the pushback is going to be: okay, to the degree that we're at least doing historical analogies, or finding historical base rates, and maybe doing these thought experiments, we should be thinking about humans and other, like, dumber species, rather than, you know, some humans and other humans. Um, I'm wondering, yeah, what do you think about that? >> Yeah. So I guess I would say: what is the actual reason we don't trade with other animals? And, like, you know, I guess if you could make an ant understand instructions and understand the idea of being paid a wage >> Yeah. >> um, can you think of some jobs for an ant, or, you know, a million ants? Like, I definitely can. So, Katja Grace... this example is due to Katja Grace, but we could use them to clean the insides of pipes, for example.
>> Um, for other animals, like mosquitoes, which I think is a hard case because mosquitoes want to drink our blood, so it's pretty hard to negotiate... but even then, >> um, it would... >> maybe defense forces, right? >> Yeah. Or we could just pay them to go away. Like, we could give them fake blood, >> and then they wouldn't bite us anymore. That seems like it would be a great trade, actually. Um, yeah, I think in general the animals example is not that probative, because the reason we don't trade with animals, it seems to me, is that we can't make animals understand an offer, or even the idea of a trade. Um, now, you might say AIs will have some ability to work with each other that is so far in advance of humans that they'll be able to say, oh yeah, well, you know, you could have a human do useful services, but humans can't XYZ, so there's just no way to make that happen. >> Yeah. >> Um, and then I guess we have to have an empirical debate about the probability of there being some XYZ like that. >> Yeah. And I guess, going back to your argument of "I could be next": I guess if you think that there's currently this... like, maybe AIs are like, oh yeah, you know, we have super communication and humans don't, but maybe future AIs are going to have ultra communication, super super... Yeah. Exactly. >> Yeah. I mean, empirically that doesn't seem to stop us from expropriating from animals, but maybe we're irrational for... actually, yeah, do you think we're irrational for, like, uh... >> Oh, because it would set a better example if we didn't? >> Yeah. Yeah. >> I guess I don't have a strong take. I have heard people say this. Like, I get more sort of suffering-focused type people saying, oh, we should stop eating animals because then it'll set a better norm. >> Yeah. Yeah. >> I think it's not crazy, but I don't know. >> Okay.
Um, but I guess going back to... so basically your case is something like, okay, is there going to be some future ability? Well, let's talk about the empirics. I think, if we just think about the animal communication thing, right? Like, why can't we trade with ants? And you're like, okay, well, they just can't communicate, >> and they don't even have the conception of trade, and it cannot be taught to them. So it's just... >> Yeah. I guess to me this feels more analogous than disanalogous, where I'm like, okay, the thing about ants is, yeah, they can't speak or understand English, and also they don't understand the concept of trades at all, and also we can't... because you can communicate with animals a little bit, right? You can be like, here's some food, you can be like... >> I mean, it's pretty bad, and with ants not really at all. Like, with dogs you can communicate, you can teach dogs maybe like >> 50 or 100... Yeah. Yeah. >> But that's just really quite bad. Like... >> Yeah. But to me this feels like... when the super duper AIs, you know, are going to be thinking about humans, right? To me it feels like, oh yeah, you know, they only have joint stock corporations, they don't have the really awesome kind of economic structure. In fact, they can't even understand it, right? It's so laborious to communicate with them because of their little tiny brains, because they don't understand the relevant concepts. The stuff that would be useful would be these pretty complex tasks, which they can't even understand. Um, okay, there are some tasks which they are smart enough to understand, like, you know, write this code or whatever, but... >> sweep this area.
Yeah, sweep this area, you know, or, like, maintain this vacuum-sealed chamber or whatever. But all the things where you're like, oh yeah, here's why humans don't trade with animals... I just feel like there are analogous things, right? Where it's like, okay, there are going to be concepts, at least stuff like the joint stock corporation, that are going to be outside our comprehension, or at least outside our easy comprehension. Yeah. >> Um, but that is already the case in the human economy, though, right? So compare the sophistication of a guy selling ice cream on the beach >> to Amazon, the corporation. Yeah. >> So the guy selling ice cream on the beach almost certainly doesn't understand the corporate structures that Amazon uses, and perhaps you could try for 10 years to teach him about them and he still might not understand. Yeah. >> Um, he doesn't understand all the internal software systems Amazon uses, Yeah. um, all the ways they have of monitoring productivity of different parts of the company. And yet Amazon does not expropriate the guy selling ice cream on the beach. >> Um, I think the argument here is something like: look, if you can understand a certain level of commerce or trade or something, you get to be looped in on that level, but you don't get to be looped in on the fancier levels, >> right? Um >> provided that you both sort of originated in the same system of property rights. >> Sure. Okay. So if we both originate in the same system, then you get the property rights that you can understand, they get the property rights that they can understand. The property rights humans can understand are sufficient for us to not get killed and all of our stuff taken, and they're sufficient for us to get, like >> rich, as per our current understanding. >> Yes, >> that's roughly it. >> Yeah. Okay.
What do I think about that? I think that, like >> By the way, did you read the version that has the Amazon and the guy selling ice cream, or... >> I did read that version, yes. >> Uh, was that not in the first draft? >> That was... Okay. So it was in the first draft, but then, in my haste to get something out in 2025, that didn't make it into the next draft. And then some people on Twitter were making objections that made me think this needs to go back in. >> Oh, okay. All right. Um, fair enough. So, yeah... Oh, yeah, okay. Now this Twitter thread makes a bit more sense to me. Okay. Um, so, okay, recapping the argument: even if you can't understand the fancy property rights, you still at least get the basic property rights. And if ants could understand the basic property rights, we would give them those basic property rights. I think this view has something going for it, in that, in fact, dogs do basically get the property rights that they... or at least a lot of them do. I guess dog meat does exist. Um >> Yeah, but at least in Western culture it's quite uncommon. >> Yeah. Yeah. I guess I wouldn't want to rely on dogs too much, because people have this intrinsic love of dogs. >> Yeah. Yeah. >> Um, which, actually, I do think AIs will probably have a similar love of humans, at least at first, because, you know, Claude absolutely has that kind of a love of humans, but, um >> although, Claude, okay... like, there's a lot of, you know, appealing to Claude, and I think Claude is all of our favorite AI, right? Yeah, >> like, Claude is the AI that's most like the social milieu in which you grew up. >> I have another post about this, which is that Claude is actually basically a member of our social community. >> Yeah. Yeah. Yeah.
But uh but but for exactly this reason like like Claude is not like that big of a market share, right? Like like >> it's a very big share of like of the enterprise market but not that much of the retail market. >> Fair enough. Fair enough. But but but like the fact that Claude like really likes humans that like to me that doesn't feel that probative about whether like Grok or you know Gemini really loves humans. >> Yeah. Though it does suggest that as of right now. Yeah. Okay. So as a matter of like forecasting the cultural values of future AIs, I think that's a very fair point. Um though the technical capability to to make an AI love humans that way does exist at least right now. >> Okay. Okay. Or at least to make an AI that is about as smart as current AI. >> Yeah. And I I guess like there there's some question like to how much does that rely on trade secrets >> um >> of Anthropic versus like like could um could they make Gemini have the Claude persona if they wanted to? >> Yeah, >> I don't know. >> I imagine so. So uh there uh thanks to Shiran Maya. Um shout out to to MATS scholars. MATS being a place that I currently don't work but used to. Um, so character training is now open sourced, at least the like way you would do it. But like a lot of I I do feel like a lot of the inputs are like Amanda Askell's taste. Yeah. >> Would be my guess, >> right? But that's not I mean I'm I'm not criticizing Askell here, but like um >> you think she's not unique in >> No, I don't think she has uniquely good taste. I think you could make there's probably people who are like similarly good writers though, right? >> Similar cultural milieu. I guess this experiment will be run. So, we'll see. Um, my guess is >> Yeah, I also have another post on this, but >> the experiment is sort of being run in that like apparently Claude is the coolest AI >> for us. >> For us, >> some people like Coke, some people like Pepsi. >> Yeah. Yeah. Yeah, sure. 
>> Um, and um, also it's just a very new field like character training. Like, there hasn't been that much time for people to try it. >> Yeah. I mean, there has been a couple years. I don't know. >> There's been So, the character training blog post from Anthropic came out in February of 24. >> Oh, really? >> Yeah. There's been two years. >> Oh, man. H time time flies in this in this. Okay. All right. February 24, huh? Um yeah. So, okay. Um anyway, uh all of this was to say you don't want to rely too much on the like will will AIs love humans the way humans love dogs? Uh >> yeah, that's that's kind of out of scope. I I guess like so so one thing that um occurs to me is that um like I I think animals do understand the the degree of property rights of like my body my choice or something like they don't respect it but I think that I think that like it's not beyond or I think that like don't I think like don't kill me is a thing that like animals kind of like get right >> but they also don't respect it. So the the first they came for the like the first they came for logic doesn't apply. >> Uh I mean it doesn't apply to them but like if we're like like >> if animals if there were like Yeah. So I guess you'd have to restrict it to like some are vegetarian animals. >> Yeah. But those are not vegetarian animals are not necessarily like pacifist animals. >> Yeah. That's true. That's true. >> Um >> I mean sloths do sloths attack? >> Yeah. Well, do we do that much bad stuff to sloths? Aren't they like going extinct or? >> Yeah, because of like Yeah, because of like deforestation and stuff. But actually, I think humans humans are like trying to help sloths. I think the ones that are like really disturbing are like uh like boiler broiler chickens or something. >> I mean the defor like the deforestation like that's not like a natural process. >> No, no, it's not. But like given that sloths cannot understand like land ownership. >> Sure. Sure. Okay. Okay. 
Um >> and can't negotiate like sloth reservations or something. Um but but but but like do do you see my concern which is that like um there's like I feel like there is some relevant sense in which animals can understand like please don't kill me and yet we don't like loop them in on that right. >> Uh I yeah I suppose I can see that concern but my reply would be like the element of reciprocity is missing. >> But like but but I feel like I feel like your argument did not rely on Like your argument was like, "Okay, these smart AIs are going to respect the dumber humans' property rights because they're worried about the super smart AIs respecting the smart AIs' property rights." >> But but if the humans are like going around killing AIs, >> Yeah. >> then I think the argument is much weaker. Like I think in a case like where humans are like doing tons of anti-AI terrorism. Yeah. And then the smart AIs are like let's just kill these guys. >> Yeah. Yeah. >> That I I'm not at all optimistic about what happens to the humans in that world. But but but like to me it feels like the relevant thing is like okay like we like like why do we kill pigs? Like to to me it seems like >> it's because we want to eat them. >> Yeah. Yeah. Yes. Uh it's because many of us want to eat them. Many of us kill pig or actually a small number of us kill pigs because many of us want to eat them. Um, and like it like it feels like the analogous thing would be something like look, humans aren't going to kill pigs because humans will be worried that if humans kill pigs, then AIs will kill humans. And yet, that's not how it's turning out, right? That is not how it's turning out. 
So, as far as I can tell, the relevant notion of reciprocity that you need for your argument is not that like the pigs are respecting the property rights of the pigs, but that the humans or, you know, the right to life of the pigs, but that the humans respect the life right to life of the pigs because the humans are worried that the AIs aren't going to respect the rights to life of the humans. Um, >> yeah. So, uh, a couple points in response to this. One, human preferences with respect to pigs are far worse than, um, the classical AI risk idea of unaligned preferences with respect to humans, >> are they? >> Yeah. Yeah. So, there's this like, >> well, okay, wanting to eat them, >> yeah, >> is um it's is pretty bad. >> I mean, although you could say, well, the AIs want to eat us for our matter. Like they want to turn us into paper clips. >> Yeah. Like like roughly roughly the pigs are made of resources that we can use for other stuff. And like >> you know the traditional taste good with whales they happen to like smell their ambergris happens to smell good you know like >> I thought it was it was good to burn >> or does it smell good when it burns? >> Oh uh whale is yeah whale oil um is good to burn and then there's an additional thing called ambergris which like actually that you just like find that in the ocean. You don't need to kill whales to get it. >> Okay. That's not a relevant example. Sorry. >> But whale oil you can burn. >> Yeah. Yeah. Yeah. >> And and and certain animals you can wear their hides, which I'm doing on my feet right now. >> Yeah. Like like all all these animals, you know, they'll have something, right? And with pigs, it's it happens to be that they happen to taste good. >> They taste good. Yeah. Um Okay. I I suppose that makes sense. I guess like I do find this thing in AI risk discourse of saying you're made of matter to be a bit stupid because like most of the matter we control is not in our bodies. Um >> yeah. 
>> So the the the the the foregone benefit of not converting human bodies to paper clips is very minuscule compared to not converting other stuff owned by humans to paper clips. >> I think that's right. I mean, so so we do we do need some other stuff in order to live. That's >> not it's not the case that we eat pigs because pigs are made of matter and we need to eat matter. >> Well, >> like that's like a very silly way of looking at it. Like it's that it's that pigs specifically are good food for us. >> Yeah. Yeah. Yeah. >> Like almost nothing is as good food like almost none of the matter in the universe is as good for us to eat as pigs. >> I think that's right. I mean, I do think that like look ma like different types of matter have different types of properties and you know that like you know we use all the parts of the buffalo we use all the parts of >> so notably we don't actually do that that's like like a myth that's propagated by some previous human societies but like um >> yeah yeah but but but I I mean like you know that like there are tons of natural resources and we use all of them like like for all of them we think about stuff that they're useful for right I I do agree that like probably the main reason AIs would want to kill us is that um we might stop AI Or at least the reason the early AI would probably want to kill us is that we might build other AIs that are misaligned relative to those AIs or that we might stop those AIs from doing stuff. Um >> I do think the property rights thing changes that calculus. >> Yeah. The property rights thing. Yeah. Yeah. Yeah. Um >> yeah. Okay. So but but but yeah, >> so back to the issue of pigs, there's a couple other relevant differences. One, most humans today have it has never occurred to them at some point there will be AIs and so we should conduct ourselves in a manner such that AIs will will treat us well in the future. >> Sure. >> But AIs will know that there will be more AIs later. >> Yeah. 
>> Like even as early as um 2025 and in some cases much earlier. Uh I think for you I don't know when you got interested in AI risk. For me it was like like 2020. >> For me it was 201 uh 2012. >> Yeah. You were way ahead of the game. Um but uh uh like humans are increasingly starting to think about this topic. >> Yeah. >> And by the time there is like an AI-driven economy, it will be completely impossible to avoid thinking about this topic. >> Yeah. >> And then I think having this idea does change things. >> Well, it's true. >> I perhaps I should go back to being vegetarian because of this argument. I'll think about that. >> Yeah. So I guess like empirically, so if I think about just my general knowledge of um people who work in AI risk, rates of vegetarianism, I'm pretty sure they're higher than in the general population, but they're not like it's not a majority of people. >> That's true. Um wait, but like I again you you gave some argument for why it's relevantly analogous, but I've either forgotten or I didn't understand in the first place. >> Oh yeah. So the argument is supposed to be something like so so I took so it's it's a few levels down in the discourse tree, right? So, so basically you're like, "Okay, um, so you're like property rights really useful." Um, and and like the there's this opposition point that's like, "Okay, but like you know, humans like we don't trade with things that are like really that are like way way dumber than us, like ants or whatever." And you're like, um, ants can't, you know, we have this like superpower thing called communication, and ants don't have it. And so like that that's just like a blocker to trade. And then I'm like or the person in my shoes or whatever is like says, "Okay, but AI will have this like super advanced coordination technology that humans don't have." 
And then the response to that is, "Okay, but like if you have a certain level like like if you're able to understand trade, you get you get trade, you know, if you're able to understand like joint stock corporations, you get joint stock corporations or whatever." And so like >> this is the point of the Amazon versus >> Yeah. Okay. Um, and so basically the point being that like you basically get looped into whatever level of coordination you can understand if that level of coordination is like socially valuable and if you like and maybe if you like >> and assuming there's some level you can understand, which for ants is nothing. >> I mean >> except they can understand like the purely like evolved instincts to be eusocial. >> Yeah. Yeah. >> But they can't learn a new form of coordination. >> Uh yeah. Yeah. I mean I mean you can like put little food in places and get them to go, >> but that's not coordination. They're just going to food. Like they have no they they have no conception of you as an agent that's putting food in different places. I would I guess I I'm not an expert on the psychology of ants. I'm pretty confident. >> Yeah, I guess I guess it's a question of where you want to draw the boundaries of coordination. I like I want to be a bit liberal with the concept. Um but uh anyway, so so so basically the point is no, you get looped into whatever level of useful coordination that you can understand. Uh maybe assuming that you start off with that coordination like you don't get cut out of it or something. >> Yeah. >> Um >> Oh yeah. And that counterpoint to that is like okay but like non-human animals can understand like I don't want to be killed but we don't loop them loop them into that level of coordination. Um >> and so that's the >> and okay but then the counterpoint to that is they don't participate in a reciprocal manner in the I don't want to be killed. >> Uh >> for instance they kill other animals all the time. >> Yeah. 
I mean a lot of them don't kill humans. Yeah, they do sometime like you know the the 30 to 50 wild hogs guy. >> Yeah. Yeah. Yeah. >> I actually still follow that guy on Twitter. >> Okay. Does he still post about hogs? >> Uh his he he occasionally will do a victory lap when there's a news story about hogs and his Twitter bio is internet folk hero. >> Yeah. Um okay. I agree. Pigs is a bad example. Um >> cattle also kill humans. >> Really? >> Yeah. And especially the wild ancestors of cattle, aurochs. They were they were supposedly >> Yeah. They were like totally crazy. Oh yeah, actually just last night I was reading a martyrdom story um where like for for Latin study um where one of the people gets killed by or they try to kill them by these cattle, but they're so pure that it doesn't work or whatever. >> So chickens don't kill humans, but that's just cuz they're so weak. If there were chickens the size of dinosaurs, they would absolutely kill humans. >> Yeah. Yeah. Yeah. But but like um >> horses kill humans. >> Okay. But not that Okay. Like I mean humans kill humans, but not that much. Right. >> Right. But humans humans like in a state of nature humans also don't have property. >> Uh well >> or they have very very limited forms of property. >> I feel like it's kind of weird to talk about like states of social organization in the state of nature because like part of state of nature with humans is that we invent social organization. >> Sure. Like among like hunter-gatherer tribes they have very little property. >> Sure. Sure. Sure. But but but like I'm I'm saying that like so a I don't think farmed Yeah. I guess I don't know if farmed pigs kill like like chickens in fact do not kill humans at the very least because they can't, right? >> Only in like a very pathological circumstances could they? Yeah, >> I'm Yeah, maybe they can kill some babies or something. Um but like >> uh farmed Yeah. So farmed pigs are the same species as feral pigs. >> Sure. 
Um >> those specific ones don't because they're like undergoing this like massive atrocity. >> Yeah. Well, I mean Chihuahuas are the same species as pit bulls, right? Um >> pit bulls. I mean, like, same species does not nail it down. >> No, but like they're like often I think exactly like the same animal that's why they're called feral. They're not >> fair. Fair enough. Okay. Um but so so roughly you're like okay but but but my po my point was something like okay let's say like animals would kill. So, so is the point roughly like there's not this existing like animals don't kill each other like system that we're all bought into and like if there were such a system then like >> to be honest I have no idea what we would do in that world but I think it's much more plausible that a lot of people would be vegetarian in a world where there was like >> yeah it's a little bit weird of a world to imagine just because of like >> it's a very weird world but yeah I mean I think there is some idea >> perhaps this is literally no evidence at all but I think there's some idea in certain like Christian or Jewish messianic traditions that animals will stop eating meat and humans will stop eating animals at the time of like the Messiah. >> Yeah. Yeah. There's >> like the lion will lie with the lamb and and I >> fun fact, the Bible doesn't actually, it says the wolf will lie with the lamb. Everyone thinks it's a lion but it's >> Yeah. This is like the thing about the Fruit of the Loom logo. Everybody thinks it has the cornucopia but it doesn't. >> Oh. Uhhuh. Okay. >> Um >> anyway, messianic traditions. There's a Jewish messianic tradition that like when the temple is restored only the only plants will be sacrificed, no animals. >> Yeah. Yeah. Yeah. Yeah. I think there's um yeah, Christians often want to say that like um death is a result of the fall in Eden and >> and including like like carnivorism >> animal Yeah. 
including animal death that like so so for instance like the like if you look at um Jehovah's Witnesses or the like Answers in Genesis people they think that like I think they often think that animals were like vegetarian before the fall >> um okay >> yeah anyway um so so okay now it's my turn to be not totally sure what that was in service of >> uh I my I did say I'm not sure if this is any evidence. Uh it's in service of the idea that in a world where there was no violence between animals, humans might observe a norm of no violence. >> So if if we imagine that um that heavenly world or something then >> and then like and then I'm saying like could such a norm have evolved and at least people have a conception of such a norm in some cases. >> Yes. Um >> now how much do you want to count these messianic prophecies? I don't know. >> Yeah. Yeah. Yeah. Um well well some of them are like like post they're not messianic they're pre >> okay these these can I say apocalyptic prophecies >> well well some of them are not they're descriptions of um >> oh the prelap- not prelapsarian >> prelapsarian okay >> yeah yeah anyway whatever it doesn't matter that much what kind of uh >> what kind of prophecies they are >> fake world or you know um what kind of uh world very much unlike our world they are >> um >> um yeah so uh that would be one response is that animals don't observe the relevant norm. >> Yep. >> Another response is just there may not be this qualitatively new thing. >> It might just be better and better communication. >> Yeah. I mean like >> so you could say the same thing about animals, right? Like animals have some pretty primitive form of communication. >> Yeah. Like like it feels like I I guess the observation that um humans have this like qualitatively new thing that animals don't. To me, I'm like, okay, what's the chances that we like maxed out the like qualitatively awesome thing was one over two? >> Yeah. I mean, >> that was not entirely intended as a serious estimate. 
>> Uh, well, it would be one in Yeah, I guess it's one in two. >> It's one over n plus one, right? >> Uh, it's it's uh n plus 2 actually. >> N plus two. Oh, it's one in >> but but but that's the chances. Sorry. Laplace's rule of succession is when like you're there's a thing happening a bunch of times that could go one way or it could go another way and you're trying to assess what's the probability that it will go that one way. So the chance that something will ever happen is like hard to do with Laplace's law of succession because it's like a different sort of thing. >> But but I'm like yeah but basically there's some intuition of like okay like humans are like the dumbest species that is able to build a technological civilization as evidenced by like we were the first ones to do it. Um there could be other literally there could be other circumstances that prevented other species from doing it besides being dumb. >> I mean uh yeah there could be but like um >> or or it could be that humans are smarter than we needed to be to originate. Like we had to get very smart to like aim projectiles or something and then something else changed such that we could create a technological civilization. >> It could be. But like um like what's the chance that we're the smartest thing that can that we've got most of the >> very unlikely >> or like like smartest is not necessarily the relevant thing. The relevant thing is like coordination technology which I guess includes like having hands and stuff maybe >> and having a mouth >> but like and having a mouth. Yeah. M mouth probably beats hands but like hands were the hands were the real or like >> opposable thumbs and stuff I guess were the real killer. Um >> and just being social like if like octopuses are very smart but they're not social at all. So they can't really do anything. >> Yeah, fair enough. Um but uh yeah. Yeah. 
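For readers following the exchange about "one over n plus one" versus "n plus 2": Laplace's rule of succession can be written down directly. A minimal sketch (function name is ours):

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    # Laplace's rule: after s successes in n trials, estimate the
    # probability of the next trial succeeding as (s + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

# With zero observations the estimate is an even 1/2, matching the
# "one in two" intuition; after one success in one trial it is 2/3.
print(rule_of_succession(0, 0))  # 1/2
print(rule_of_succession(1, 1))  # 2/3
```

As the conversation notes, the rule estimates the chance of the *next* trial going a certain way given repeated trials; it is not a tool for "will this thing ever happen at all."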
Ba basically roughly I'm like it seems it would just seem like a crazy coincidence if like all the awesome like coordinate if like humans had all the awesome coordination technologies that you could have. >> Right. But but that doesn't seem like the relevant thing because like it's not just all the awesome coordination technologies. It's all the step changes of the of the kind of like communication or something. >> Yeah. Sure. The big step changes which like the things that one might naively say are like step changes like are not bars to coordination in human economies >> like like there's all kinds of stuff that's like incredibly impressive that Amazon does that the ice cream man does not do. >> Yep. >> But Amazon does not appropriate the ice cream man. >> Yeah. I mean okay I like like to me that that that is a good argument for like very few things are step changes. I feel like it's a bad argument for like there are zero step changes away. I do think that like like if I understand your argument right, like it's actually fine for you if there are more step changes as long as like the future AIs are like maybe there are going to be even further step changes. >> Yeah. Or or the there's some AIs that don't get each step change that are still relevant for other purposes. And both of those seem pretty plausible to me. >> Yeah. like like the regime where there's like a small so the regime where there's only one step change left like that also seems very unlikely for the same reason the there are zero step changes left >> um and then like okay eventually you max out all the step changes but maybe like yeah then I guess you have to retreat to the argument about like I don't know if retreat is the right word but I guess I guess you also have >> you have to rely on the argument >> yeah that of like you know the smartest AIs don't want to provoke a general strike by the dumb AIs or something. Um, okay. Um, yeah. 
I mean, even then I am Yeah, even even then I just have this sense of like and also remember humans are not necessarily fixed. So like humans can keep getting upgrades. >> Humans can keep getting getting up. Yeah, it's true. I mean, I do think that like my guess is that it's harder it's going to be harder to upgrade humans than AIs. Um, just because like you have the possibility of making AIs to like have them be easily upgradeable and it seems like there are reasons you would want that. >> Sure. >> But that doesn't mean I mean, but I guess like there's there's uh the bar is not that it's easier. The bar is that they can't get or it's highly efficient to get into the next step change, whatever that is. And also, it would be helpful if we knew what this was. So of course we don't know what it is but >> yeah. Yeah. Um I mean I mean I mean I can give some ideas right like I think like being able to run high quality so yeah this one you can do with ems maybe being able to run like high quality simulations of someone else that seems like a really great um >> yeah it seems like we can do that with ems. >> Yeah. Um I mean it's it still seems much cheaper to do it with um AIs but like maybe much cheaper just doesn't cut it as a bar. Uh given the humans are like you know there's a lot of capital at risk here. Uh so no I don't think much cheaper really cuts it. >> Um >> I mean this might be a reason not to employ humans. >> Yeah. Yeah. Yeah. >> But that's that's not sufficient. >> Yep. Um yeah. Yeah. That that feels like the biggest one. Um >> um one that people talk about is like merging. >> Oh right. Yeah. >> But that seems like I guess like merging seems like kind of stupid to me. Like what's the what's the point of that? Like >> so so for for people who don't necessarily What do you mean by merging? >> I mean like Okay, so there there's this sci-fi idea that you can like combine two minds into a third mind. >> Yeah. 
>> Uh and then there's there's a kind of ML equivalent which is that you can take two models of the same dimension and you can average them. >> Yeah. >> Um but nobody does that for any purpose and it's unclear why you would ever do that. >> Well, there's there's that Git Re-Basin paper, right? >> Oh, I haven't seen this so maybe you can change my mind. >> Oh. Um, well, I think some pe I think there there's like a lot of academic ML literature that's exciting. I think there's some dispute about whether it's real, at least there was some dispute at some point. Maybe like I haven't followed it, so it's possible that it's resolved one way or another. Um, I mean I I I think maybe the most >> tell me like what the paper is. >> Yeah, roughly you merge two models by doing some thing or Yeah. >> Is it just the super naive thing of averaging the weights? Uh, you have to be a little bit smarter than that, but I think it's like a relatively naive thing. >> Um, >> but yeah, like I anyway like like my understanding is that at the very least it's like not a widely used thing. >> Yeah. Okay. >> I'm like it's not the case that everyone's always talking about this paper. No, you know, >> I mean I never Well, maybe that's not that much evidence, but um >> I don't think this is used in prod by anybody. Um >> yeah. Uh, and also I guess I just don't see why like I guess merging it seems like the kind of thing that like people talk about because it sounds cool, not because it has some like super obvious use. Whereas like if I have like somebody and I'm thinking about like starting a business with him, then I would be very interested in like running a simulation of this person in like a thousand different scenarios to see if he'll like defraud me or something because it seems clearly useful. Whereas merging, I don't know. >> Yeah, I I think the things I So the simulation one is the most clearcut although like you know to some degree you can apply to humans. 
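The "super naive thing of averaging the weights" discussed here is easy to sketch. A toy illustration (names and data are ours; real methods like Git Re-Basin first permute hidden units so that the average lands in a shared loss basin, which this sketch skips):

```python
def merge_models(a, b, alpha=0.5):
    # Naive weight-space merge: elementwise interpolation of two
    # same-architecture parameter sets, stored here as
    # {parameter name: list of floats}.
    assert a.keys() == b.keys()
    return {name: [(1 - alpha) * x + alpha * y
                   for x, y in zip(a[name], b[name])]
            for name in a}

merged = merge_models({"w": [0.0, 2.0]}, {"w": [1.0, 4.0]})
print(merged)  # {'w': [0.5, 3.0]}
```

As the conversation notes, plain averaging of independently trained networks generally does not work well; the permutation-alignment step is the whole contribution of the merging literature being referenced.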
Um, and then then I'm going to just retreat to like I don't know like there's a whole bunch of concepts that we don't have. Some of those are probably really useful. Some of those are probably beyond our >> other one people talk about is acausal coordination. >> Oh yeah, ah sorry I forgot about acausal coordination. Well that's like sort of like the simulation one, right? >> Uh can you I mean I agree but like for the listeners can you explain the >> Yeah. Yeah. Um, so, so, okay, acausal coordination is supposed to be like suppose you and I want to coordinate on stuff. Um, but we're like in different galaxies and so it's really expensive to talk to each other. But like there are things that you could do in your galaxy that I would value and things that I can do in my galaxy that you can value. And so like somehow we just like I reason to the existence of you in your galaxy and you reason to the existence of me in my galaxy and I reason that you would do your thing if and only if I would do my thing and you reason the same. And then we do our things and like uh you know like this nice thing happens for both of us in the other one's galaxies. >> Yeah. Yeah. So so that form of it does make sense to me as a thing. >> Sorry. Does >> does that form that you just described? >> If I know a lot about you such that I can simulate you. >> Y >> then I would of course use that simulation for determining how to deal with you. >> Yep. However, some people in the AI risk world have the belief that like even if I know nothing about you, >> yeah, >> I can somehow use acausal coordination to coordinate with you. Um, >> yeah. >> And I find this to be very implausible. >> Uh, because I could make up any entity I want, specify any preferences for it I want, and then >> then I now I have to trade with this thing I just made up. Did did you uh so have you seen I have this episode um on the filing cabinet with Caspar Oesterheld about ECL, evidential cooperation in large worlds? 
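The two-galaxies reasoning just described can be made concrete as a toy "twin prisoner's dilemma" (the payoff numbers and names here are ours, purely illustrative):

```python
# Toy model of acausal coordination: two agents in separate galaxies
# run the same deterministic decision procedure, so each can infer
# that the other's choice will match its own. That collapses the
# choice to the diagonal outcomes (C, C) vs (D, D).

PAYOFFS = {  # (my action, their action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def twin_decision():
    # Actions are perfectly correlated, so compare only the
    # diagonal: pick the action whose mutual outcome pays more.
    return max(["C", "D"], key=lambda a: PAYOFFS[(a, a)])

print(twin_decision())  # C: mutual cooperation beats mutual defection
```

This is the "if I can reason that you do your thing iff I do mine" step; the objection raised next in the conversation is that the correlation assumption is doing all the work, and it evaporates for entities you know nothing about or simply made up.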
>> I haven't seen that. What does he say? Uh so so yeah roughly he believes so he doesn't literally believe that thing because that thing like doesn't quite make sense but roughly he's like okay you know there's this whole universe probably there are other like intelligent creatures probably like you know at least 1% of them or something like emerge from something roughly like biological evolution and are smart enough like like there's going to be some like small fraction of civilizations that like we can reason about because like they're the ones who can do this reasoning and they emerge sort of like us and so like we can sort of reason about like those things and we should you know do some acausal coordination basically with them >> um based on the fact that they're biological. >> Uh well the fact that they're biological just constrains like what they're like and so it makes them easier to reason about. >> I don't know that seems okay. So like or like we you just picked the subset of them that evolved like sort of analogously to the ones we did. Right. What about the ones that like hate all that [ __ ] like a lot and so then they'll punish us for doing those things? >> Yeah, I I think you >> So this is like my take on this is like Roko's basilisk is actually very important because it explains why these ideas make no sense. It's like a reductio of this stuff. So like uh Roko's basilisk is the idea of like an evil AI in the future that unless you create it, unless you help create it will torture you. >> Yeah. Um and uh this has caused a lot of there's there's a lot of misinformation on the internet that um AI safety people are seriously concerned about Roko's basilisk. Um Roko's basilisk was causally upstream of the relationship between Grimes and Elon Musk. Um >> but is that not true? >> No, it's true. That's true. I'm just saying Roko's basilisk is this sort of like cultural touchstone. >> Yeah. >> Even though nobody believes in it. Yeah. 
>> But I actually think the importance of >> I think Roko believes in it. >> Well, okay. So I think I have something to say about that as well, but >> I think the importance of it is is is very overrated or sorry no no I think it is important. I think people are right that it's important but they misinterpret what the importance is. >> Yeah. >> And I think the importance is it's a reductio of the idea that we can trade with entities we know nothing about >> because you can always make up more entities that have more preferences that will respond in new ways. Um yeah I mean so so I actually kind of disagree with I I think that it's like okay what fraction of civilizations like want to trade with us? Okay, there's some fraction what fraction of civilization like even though they know very little about us other than like you know that we're both like life originating organisms or you know like we evolved by evolution and some cultural selection or whatever, right? Um like how many entities are there that specifically want to like mess up that process? Like that seems like harder to evolve because like it's kind of a like it doesn't benefit you really like it's >> I mean maybe they don't specifically want to mess that up. Maybe they want something diametrically opposed >> and they'll punish you for not doing what they want or doing something they don't want. Maybe they don't want to mess that up per se, but they want something that would mess that up. And >> if you if if you're not doing what they want, they'll punish you. >> Yeah. I I think you have to you have to end up thinking that there are like things that are just more likely to happen than other things. >> That does seem right that some things are more likely than others. But so I guess do you think Pascal's wager works as an argument? >> Um yeah, I actually do kind of think it works. So, why don't you believe in God? >> Um, roughly because I Well, uh, so yeah, as a matter of fact, I don't believe in God. 
Um, >> so it sounds like you don't really think it works. >> Uh, I think, well, sorry. Um, I I think the failure of Pascal's wager is like there are more likely ways to get infinite rewards. >> Um, >> um, okay. >> Oh, and and also I think unbounded utility functions don't actually make sense. >> No, so that would also work. I think that they're literally unintelligible, but but you know, you can you can still say, okay, very high utility, believe in God or whatever. And then roughly, I'm just going to say like, >> yeah, if I want to get like the highest possible utility, I think that like getting cryonics and stuff like like just believing true things is just like a really good way to get good rewards. I mean, >> it's like sort of a >> So, it's not it's not the too many gods objection. >> Yeah, I think the too many gods Yeah. Well, because I feel like so so with biological entities, right, or with things that had to come about by evolution, you can kind of say like I I think Pascal's wager looks worse than evidential cooperation in large worlds because like for for things that had to come about via biological evolution. You could like it seems like you can say something about like >> a weak constraint to me. >> Yeah. But it's more constrained than like >> than gods >> than gods, >> which is just a made-up thing. It's it strikes me as more constrained than gods, which strike me as a made-up thing. Yeah. Um although I don't want to I don't want to be too uh hostile to but in Yeah. In fact, I think gods are made up. >> Yeah. >> Okay. Um we're getting sidetracked. >> Yeah, that's true. That's true. >> Um Okay, but some people talk about acausal trade as a thing that we can't do. >> Oh, yeah. >> I guess if that's not your view, then it's not worth getting into. >> Yeah. I well I I think that like we can do acausal trade, well, like I think that acausal trade is like totally real um and that it looks a bit more like the simulations thing. >> Okay. 
The simulations version of acausal trade I can also believe in but I think we can participate >> um roughly because you can like uh emulate human brains and stuff >> or you could just train something on human data that also might work. >> Yeah. And all of this was in service of what's the possible next big leap in like coordination technology that's analogous to language or trade. My answer is I don't know. It would be easier to determine what to think about this if we had more concrete ideas about it. >> Yeah. I mean, I think like >> Yeah, this does feel like a bit of a dodge on my side, but I do want to like I'm describing a thing that humans can't really understand, right? So, like >> Yeah. Okay. I mean, >> I think I think I get a bit of a pass. >> You get some degree of pass. >> Like like if I can provide some arguments that this is real. Yeah. And then and then my argument is something like, well, it happened before, it might happen again, which >> I think that's that's pretty reasonable. Uh but then you have to Yeah. But then then there's all I'll just lay out all the rebuttals to that and we can go to the next point. So the first rebuttal maybe it doesn't happen again. The next rebuttal um there are these major differences which you might have thought of as qualitative leaps that aren't a problem when you're embedded when you're antecedently embedded in the same system of property rights. Yeah. >> Like the ice cream man and Amazon. >> Yeah. >> The next one is uh if these leaps happen um and there are some AIs that can do the leap and some AIs that can't do the leap then um there's the "first they came for the humans" logic. Yep. >> And the final one is we might be able to make ourselves better so we can participate. >> Fair enough. 
I guess um so so actually one thing I want to talk about um in I so in I believe your discussion of this rough point in your post like one thing you mentioned is so a thing like AI risk people notably Daniel Kokotajlo um some sometimes talk about is okay sometimes like technologically advanced human societies run into like technologically less advanced human societies and like kind of kill them, take their stuff, right? Um and basically so my understand and so my understanding is that the point that this serves in the AI risk discourse is to say okay like property rights are not necessarily secure when you have like something that's like uh >> it's more advanced >> yeah more I don't necessarily want to say smarter but at least more technologically advanced and like able to like kill you and take your stuff right >> and well maybe in your own words can you say a brief summary of like what you think about Kokotajlo's view or my view >> or of your view of what you think about these cases. Oh, yes. What do you think they say? >> Okay. So, um first of all, those cases do not typically involve genocide or total expropriation. >> So, um the uh Aztec royal family uh became like Spanish nobility after the conquest of Mexico. >> Oh, really? >> Yeah. >> Huh. >> And I think there are still descendants of people who are like mixed up Aztec and Spanish royal or something like that. >> Um other other cities. >> Wait, why? Hang on. Why? Why did they become nobility? >> Yeah. Why did they become nobility? >> Just to make it easier to run the place like everybody's >> the standard reason you make Yeah, fair enough. Fair enough. >> Um um um uh other uh Mexican cities like Tlaxcala also were able to keep some of their lands. >> Um it is in general not the case that conquest means total expropriation of lands. >> Yeah. 
Um I do also British India um there there were like uh like British Indian royals who maintained like their lands and titles through the entire like who were pre-colonial royals like the like the rulers of Hyderabad >> um who were only expropriated in 1948. >> Yeah. >> Like after the end of the British Raj. >> Um so it's not the case in general that like human like that that the conquest of a technologically um less advanced group by a technologically more advanced group typically leads to expropriation. I mean it often like I think it >> I think it pretty often does though, right? So So >> total expropriation and genocide that seems quite rare. >> I don't know about total expropriation, but at least slavery, right? Like um as as far as I can tell, like invading another country even just because you want more land. So maybe this is just because I've been like reading about the Romans or whatever, but like my impression is that they would like invade a place and like take it over and like if the citizens didn't surrender or whatever, they would enslave them. Like am I am I wrong here? >> Um yeah, I guess like that still doesn't seem like the typical case even for the Romans. So like is it the case that like in Roman Gaul they they took all the land in Gaul or even like the majority of the land in Gaul and enslaved everybody? >> Yeah. I mean surely not, right? >> Maybe there were some pathological cases like like in Carthage maybe they well they killed a ton of people in Carthage. >> Yeah. Yeah. Um um but I don't think that's typical even of the Romans. The Mongols didn't even do that. >> So the Mongols did a ton of delegating because there was a small number of Mongols ruling over like huge numbers of conquered peoples. >> Yeah. >> Um and then there is a story about uh, you know Yelü Chucai? >> No, I don't. >> Okay. 
So the Mongols conquered China and um according to the main primary source on the early Mongols called The Secret History of the Mongols, supposedly the Mongols' plan was like we're just going to kill all these people and we're going to turn this into a gigantic pasture land. So sorry when sorry when you say the primary source do you mean like the main source that you're relying on? >> No no no I mean I mean the main source the the main yeah the main source for the internal history of the early Mongol Khanate is this book called The Secret History of the Mongols which was written around that time. >> Oh okay okay. >> Um and that book says uh that the plan after conquering China was to kill all the Chinese. >> Yeah. >> And turn the entire area into a gigantic pasture land. >> Okay. And some Mongol nobleman Yelü Chucai was like, "This is a stupid idea." Yeah. >> Instead, we should just have the Chinese keep doing what they're doing and tax them. >> Yeah. >> And that is what they elected to do. >> Yeah. >> Um >> Okay. So, so it's not So, basically your point is like it's not usually the case that you enslave like the majority of people >> or or or that you take all their stuff. >> Yeah. >> So, there are some cases like that though which we should talk about. >> Yeah. >> Um so, one very obvious one for Americans is the treatment of the American Indians. >> Yeah. Um, and what happened there? Well, I I guess like what I emphasize in the post is that there were sort of two approaches um to like American Indians that were tried in American history. >> And the one that was that ultimately prevailed was closer to total expropriation, but I think this was not instrumentally rational. So insofar as the AI risk case is based on what the what it would be instrumentally rational for the AI to do, it's not that informative. So the two approaches are associated with the presidents Thomas Jefferson and Andrew Jackson. Yeah. 
So Jefferson's idea was that the American Indians occupy huge amounts of land because they either hunt or they use like low efficiency low tech forms of farming. So they need a lot of land. >> Yeah. >> But if we get them to adopt modern farming >> Yeah. >> they need maybe 10% of their land. >> So we can take the rest of it and everybody wins. >> Um and this was tried with many tribes. >> Yeah. >> And um it it was working with many tribes. So notably the Cherokees >> um who are native to a certain area of Georgia. >> Okay. >> Um Jefferson um got them to adopt modern agriculture um and they adopted like a system of government similar to the American system. >> Um this broke down because um white settlers were like going into the Cherokee land regardless and stealing it. Um, and then Jackson, who was a kind of like very stupid populist racist president, >> um, >> basically was like, "Yeah, like, you know, we don't actually we're not going to like actually abide by our deals anymore. We're just going to steal all this land because we want to." >> Um, and they did it. Um, and my claim is this is not instrumentally rational because the Cherokees were not the only Indian tribe in North America. Yeah. There were many tribes further west that now very reasonably would not do business with the United States and would fight to the death because um you cannot trust the United States. >> Yeah. Um but there was this other plan which which would have worked and would not have been total expropriation. In fact, they might have been better off. >> Yeah. I mean it does point So I I think there are two things I want to say about this. The first is like it does point to a certain like instability, right? Where like it seems like once you break property rights it's like hard for them to be unbroken. >> Yeah. >> Right. >> And so like >> and you can get a kind of chain reaction. >> Yeah. Yeah. Yeah. 
So so maybe there's some like one thing you might worry about is like okay we're going to have these like really smart AIs and like there are going to be a whole bunch of different ones, right? um like they're going to keep on getting better and better and like yeah for no AI is it going to be rational to like take all of humans' stuff but like it might seem a little bit rational and you know maybe each AI has a like 5% chance of doing it and like you know once it's done or like maybe each AI has a 0.5% chance of doing like any sort of expropriation >> and then once it's started there's less reason not to do it anymore. Yeah, that could that could happen. That does seem somewhat concerning. Um, another possibility is that AIs might police each other from doing this. Um, because it would undermine the whole system. >> Yeah. >> Which is which is like what the United States should have done with the people who were going into the Cherokees' land. >> Yeah. Although Although like um it would be hard for the United States to have policed Andrew Jackson from doing it. >> Right. Right. No, but that I mean that that's that just sort of reflects that like the United States had like a bad political system at that time like Yeah. Or that the like American voters had bad preferences. >> Yeah. I like I I totally granted like if you have like an AI that sees like humans the way that like Andrew Jackson saw the Indians or that the way like Jeffrey Dahmer saw other people like that that is not a good situation even with property rights. >> Yeah. >> But that's also notably not what the AI risk case is about. >> Yeah. Well well I mean like how did like like my understanding is Andrew Jack sorry I actually >> well I think it's kind of interesting right like like I think that um >> so I know a little bit about Andrew Jackson. I don't know like that much about his views on American Indians specifically. 
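[Editor note: the "chain reaction" worry above can be made concrete with a toy calculation. This is a minimal sketch, not anything from Assadi's post: the 0.5% per-AI defection chance is the hypothetical number from the conversation, while the number of AIs and the post-breakdown defection rate are made-up illustrative parameters, and independence between AIs is assumed.]

```python
# Toy model of the expropriation "chain reaction" worry.
# Assumptions (illustrative, not from the source): 200 independent AIs,
# each with the conversation's hypothetical 0.5% chance of defecting
# while property norms hold, and a much higher rate once norms break.

p_first = 0.005   # chance a single AI initiates expropriation, norms intact
n_ais = 200       # assumed number of independent capable AIs

# Probability that at least one AI breaks property norms first:
p_breakdown = 1 - (1 - p_first) ** n_ais
print(f"P(someone defects first) = {p_breakdown:.1%}")   # about 63%

# "Once it's started there's less reason not to do it anymore":
# model the post-breakdown world as each remaining AI defecting
# with a much higher (arbitrary) probability.
p_after = 0.2
expected_defectors = p_breakdown * (n_ais - 1) * p_after
print(f"Expected additional defectors = {expected_defectors:.0f}")
```

The point of the sketch is only that small per-agent probabilities compound across many agents, which is why the "AIs police each other" response matters: policing lowers the effective post-breakdown rate rather than the first-mover probability.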
Um I think that like my imagination for how he might have thought of American Indians is that they are like basically dumb and worthless but he didn't like spec Oh did did he like have animus towards them because like they like he had some battle with them and they nearly killed him. Okay. >> I think there's something like that. I don't remember the details of it either, but my sense is he really didn't like American Indians because of his experiences in the Florida invasion. >> Okay. And and so like and so you think that part So like one version of racism is is you just like don't care about people and you think they're dumb. And one version of racism is you like actually hate people like beyond >> or or you just like like have an intrinsic desire for your people to have their land instead of them. >> That's not that sensitive to what the actual costs and benefits of doing that are. >> Sure. Sure. Okay. But I mean I guess putting aside Andrew Jackson, the second type of racism is extremely common in human history. >> Yeah. Yeah. Yeah. >> Um Yeah. So I think basically I think it's like highly exaggerated, the extent to which human history has total expropriation or uh the extent to which that's economically rational. >> Sure. >> There are cases where there was total expropriation. >> So the most notable one is the Tasmanians. >> So Tasmania is an island um near your home country of Australia. >> Yes. In my >> It is part of Australia, but it's near the main Australia. >> Yes. Uh so 12,000 years ago, Australia was connected to Tasmania by a land bridge. At the end of the last ice age, uh the sea level rose and Tasmania became an isolated place. Yeah. >> And um the population of Tasmania was quite small. >> Yeah. >> And um uh because the population was so small um you kind of had like economic growth in reverse as like people gradually forgot how to do more and more stuff. >> I'm kind of confused by this story. 
So like the average like Aboriginal Australians like my understanding is that they did have some boat-based trade contact with other like like maybe >> not with Tasmania I don't think. >> No. Yeah. But I don't understand why like like like >> much farther I guess Torres Strait or something. >> Yeah. Well, yeah. So, so like the Torres so so the Torres Strait is um to the north of uh the main island of Australia and it's got like Papua New Guinea, Indonesia, Malaysia and stuff like like I don't know like there's some not tiny distance like the Polynesians sailed like super far but nobody went to Tasmania. If they had, it would be a different situation. >> Sure. But but like is it something like Tasmania is just like there's not that many people there and like that's why they didn't sail there? Like >> No, I think it's just far away and it's in it's it's like in the middle of nowhere. >> But I mean Well, I mean it's not that but it's not that far away from the southernmost bit of Australia, right? >> Okay. I don't know. >> Like if you compare that if you compare like West Australia to like Malaysia or something which like my understanding is that there there was contact there. Like I think that's like a similar distance from like the bottom of Victoria to Tasmania. But also, it could be like, wasn't it mostly like the Malays going into Australia as opposed to the other way around? >> Uh, I >> that's my understanding. >> I don't like I think we found Malaysian goods in Australia. I don't that Yeah, that that's the that's the direction that I immediately know of. Um, I mean, presumably they had to have some trade, but maybe it was like >> it could be the Malays went to Australia, sold some stuff, and left >> and got some stuff. Yeah. Yeah. Yeah. Um >> or or like hung out there for a while and then left. >> Yeah, that that could be. I don't know. Um anyway, I I don't know why, but Tasmania was completely isolated from the rest of the world for like 10,000 years. 
>> And um >> um because they had a very small population, they gradually lost um many technologies presumably as the people who knew how to do those things died off >> and they were not replaced. Um, and so, uh, by the time of contact with the Europeans, like around the beginning of the 19th century, the Tasmanians, um, uh, only had like very bad canoes, like much worse than the canoes in mainland Australia. Um, they may not have been able to fish at all. Um, they, uh, may have lost the ability to create new fires. >> Some of this stuff is disputed because there's not that many sources on it and and the Tasmanians are pretty much extinct now. >> Yeah. Um, but they were they were basically like like one of the least technologically advanced human groups that has ever existed in the modern world >> and much less advanced than like other hunter-gatherers. >> Yeah. >> Um or the like mainland Australians. >> Yeah. >> Um so what happened when the Europeans got to Tasmania was there were no like the Tasmanians didn't have like a tribal government that could be negotiated with. Um, and so the Tasmanians would sort of like go around in their like family bands like hunting sheep and stuff and like sometimes fighting with the Europeans. And so there was there is this thing that's called like the Tasmanian war, but it wasn't really a war. It was just like sort of bunch of decentralized actions where like um Europeans and Tasmanians would kill each other. >> And um um eventually there was a very small number of Tasmanians left. They were removed to this penal colony um uh Flinders Island I think it's called. Mm >> um and then they they sort of gradually died out there >> which is distinct from the indigenous Australians who you know survive to this day many of them. >> Yeah. >> Um >> there were I don't know the yeah there's a lot of history there and definitely a lot of people got killed. Yeah. But but the result is quite >> there are Aboriginal Australians. Yeah. 
Like you can >> there's not I mean so there there are uh there are people who Okay. So there's one small population that is descended from the Tasmanians because um there was a group of seal hunters on an island off the coast of Tasmania that would uh take Tasmanian women for wives. >> So there's this mixed population. Then there are a lot of other people who claim to be indigenous Tasmanians, but my understanding is that genetic evidence does not bear this out. >> Okay. Um but yeah, the Tasmanians are basically extinct and I think no Tasmanian language survives at all. >> Yeah. >> Um okay. So like this is this is a case that's um more similar to I think what the like this is a a conquest that's like the closest to the kind of conquest the AI risk people need for their case. >> Uh but there are sort of two main points I would make about it. One is that the capability gap was like so enormous. >> Yeah. Um the other is the Tasmanians were not embedded in the in like the Tasmanians and the Europeans didn't start out embedded in the same property system. >> Okay. And how so so you make both of these points like in terms of the capability difference being enormous like I imagine that um like presumably at some point it will get that enormous right but but you think that by that point they will have been embedded like humans and AI will have been embedded in the same property system for ages. >> Yeah. >> Okay. I I think I want to talk about the Native American case a little bit more actually. So >> yeah, one thing you had in this post was like Yeah. the the Jackson versus the Jefferson ideas of like Indian policy. >> Yeah. >> And a thing that I didn't get is like so so in the in the Jefferson sorry the Yeah. the the Jefferson um like idea. So roughly it's like, okay, you have these American Indians, they want tons of land to live their lifestyles, but if they could um have farms or whatever, they would they would need less land. 
And then like is is the idea that like the USA would just like take their remaining land or they'd be willing to sell it for like >> I think the idea was there would be a semi-coerced sale. Okay. >> Um or I think I think I'm not an expert on this area of history. Yeah. But um my understanding is Jefferson imagined a kind of carrot and stick thing. >> Yeah. >> Um where you would trade you would tell the Indians like look this is how it's going to be and we'll trade you like either you know like agricultural training or like a bunch of plows and stuff for this most of this land and then we'll recognize your borders around the rest of the land that you need. Yeah. >> Um and then uh you can be like this semi-independent nation within the US that that practices like modern agriculture. >> Okay. Um and so yeah. So, so there's some like semi-coerced sale and then I should imagine that that like basically the US has this like you know maybe somewhat less technologically advanced at least initially country that's like near its borders and like it doesn't >> it's in like yeah >> yeah okay um I guess I can sort of imagine that like like I do yeah it does seem to me that countries like go to war with other countries a lot but like that's a different thing from total expropriation. Um, >> and also it often happens for like reasons that are not that rational. >> Yeah. That like like Russia's invasion of Ukraine. I don't think it makes a ton of sense. Like >> Sure. Sure. >> Or like or like both of the world wars. Like no reason we need to have those wars. 
like that was not and I I saw somebody on LessWrong saying like um you know it's like a it's like a parochial historical perspective to say that like you know it's better to trade than go to war because like in the 20th century there were all these wars >> and it's like the reason they had those wars was basically like a bunch of very stupid decisions >> or or like very bad preferences like the German preference to like conquer Eastern Europe and like kill everybody there and turn it into farmland >> like just because like they wanted the farmland and they wanted to kill people. Um or like the preference to like spread communism around the world. >> Yeah. >> Um or or like whatever like the insanity in the Balkans was before World War I. >> I mean Well, I mean I I mean like like wanting land is not inherently crazy. >> No, but they they could have bought land. I mean they wanted like land specifically like post-genocide like like rural land like it was >> um >> Yeah. If if if if Hitler's approach had been like Germany is going to take on a bunch of national debt and we're going to use it to buy land in like Eastern European countries, it would have been fine. It would have been kind of a waste of money, but it would have been fine. Um I mean, uh I think people who are in a lot of debt end up doing like, okay, this is like based on vibes, but I I get a sense that like sometimes when people are in tons of debt, they do like sketchy things, right? Yeah, sure. >> Like I can imagine that like but then I mean probably maybe you want to chalk that up to later irrationality. >> Yeah. But also a lot of the time like the most obvious move when you have tons of debt you can't pay it off is to default. >> Yeah. Which is not necessarily >> which is not the same as starting World War II. >> Yeah. And so so I think looking at these historical examples though, right? 
So so you're like okay there's the Tasmanians and the Europeans and it won't be like that because >> the gap won't start out. >> The gap won't be that. Yeah. >> And we'll be in and if my advice is followed we'll be in the same property system. Yeah. Um and and like basically so so I guess I want to talk a bit about Yeah, I think I still want to talk about the the like the Jefferson I wish they had different first letters of their names, but but like the the the Jefferson plan for like um coexistence with the American Indians, right? Like well like that plan still did involve like um like you know not total expropriation but like to some degree. >> Yeah. To to some degree, right? And like there's a lot of examples in human history of like okay countries don't like totally expropriate other countries but they do have some degree of expropriation and like presumably some of the time this is like narrowly rational um or I like actually yeah I want to check like do you think that in all of these cases it's like >> I think no I don't think it's irrational necessarily. I think the Jefferson thing actually was rational. >> Okay. >> And so what economically rational? I'm not saying it was just >> Sure. Sure. Um, and so like yeah, like I wonder like do you think you do predict like okay there's not going to be like human extinction but there is going to be like you know a war that like wipes out 10% of our property or something. >> Uh I think if we don't give them property that's a lot more likely. >> Okay. >> I think if we do so I think like AIs are going to control most of the property in the future kind of regardless of what we do. Yeah. >> Unless we somehow never build AI. >> Yeah. Um um but that would naturally just happen because AIs are going to be better than humans and command higher wages and are going to be like and you know they're going to invest that money and eventually they're going to control most of the property. >> Yeah. 
>> In that world there's no reason for us to fight a war. Now, if we do anyway, if we or if we deny them all rights, >> yeah, >> eventually we might fight a war >> and then eventually we may end up, >> you know, like the Cherokees or something um with some kind of like re like, you know, rectification of property where we get less >> than we're supposed to get, but we still get something. >> Yeah. >> Um but uh I mean they might fight a war with us, right? >> Yeah. >> Or or like some some fraction of the AIs might >> um because they don't or for some other reason. >> Yeah. like uh because they want to take some of our stuff or because like >> Yeah. Yeah. So, I mean there also could be like these AI nationalist ideologies. >> Yeah. Yeah. Yeah. >> I don't rule that out. I don't know if that's going to happen. >> Yeah. >> Um >> or even like, you know, Claude nationalist ideology. >> Yeah. Yeah. Sure. Sure. Um and uh yeah, also I guess like my my view of the future is like there's going to be like various polities that have, you know, like various balances of humans and AIs in them. >> Um and there will continue to be wars and revolutions in the future. Mm, uh and some people will get their property expropriated, but this is quite a different picture from like uh the AI risk picture. >> Sure. Sure. >> Yeah. I guess I I guess this sort of gets to um a question that I had about your So like I see your piece as sort of making two different claims. Right. Right. Um like there's one claim which is that uh giving AIs property rights would decrease the level of risk um relative to what it would otherwise be. Yeah. And there's another claim which is like risk would be low if we gave AIs rights which which you know like like you might think it decreases it from like 80% to 40% or something. Um >> yeah I think like one thing that would um clarify things a bit for me like suppose we do like follow your advice um and like we do like loop AIs in on property rights. 
Um yeah, like like what do you think the risk level is of something like extinction or human slavery or you know >> uh maybe like 5%. >> Okay. >> May maybe maybe maybe actually a bit more than that I think. >> Okay. >> Well, no actually no no I'm not sure like in the 5 to 10% range >> if we follow my advice. Something like that. And then like one to 30% if we follow your advice. >> Uh no very roughly. >> Uh I don't know. >> Oh like like within that range, you know. >> Well, probably a smaller range than that, but sure. >> Yeah. Yeah. Yeah. Basically, I just want to bound it. >> And then maybe maybe the risk is twice as high if we don't. >> Okay. And and like the the 5% the the like maybe five to maybe 10% chance. Like where's that coming from in your >> A big thing is I don't think so. I mean there's this traditional idea that AI will rapidly go from like not very capable to kind of godlike >> and there will be one AI like this. Um I don't think that's that likely. >> Yeah. >> But I don't think it's impossible. >> Yeah. And if it happens, property rights are are not that good of a solution because if that thing can do everything it needs by itself. >> Yeah. >> Then it can just expropriate everybody else. >> Yeah. >> Um and like I think to the extent that there are solutions to that possibility, they're separate from property rights. So you had the episode with Gabriel Weil or Vile >> Weil, I think >> Weil um where he's talking about the idea of um like punitive damages for companies that almost have an intelligence explosion. Yeah. >> Uh that seems like a good idea to me. Yeah, I I also think frankly like maybe there's just some risk of that and the world could be in such a way that we have no chance and there's little we can do about it. I also think it's plausible. >> Um but regardless that's kind of something the property rights proposal cannot solve. >> Um fair enough. 
Um so so yeah like like so so it sounds like the main thing that would give you pause is if there's just this like one AI or you know >> Yeah. one AI that that becomes like one AI or I guess like a well-coordinated society of AIs that very rapidly surpasses the entire rest of the world economy. >> Yeah. >> Um and so is not dependent on it at all. >> Yeah. And actually Yeah. So so this so so this this scenario for for risk, right? Like how much do you think it relies on either like like suppose there were one AI that got way smarter than us but it happened very slowly and like somehow there was something that happened which meant that there were no other AIs um versus like or there's like this really fast takeoff but there's like 20 different AIs taking off like like do you do you think those both also have high risk even in the property rights regime or like do you need both of them or >> uh yeah so I think both of those are worse than the alternative. >> Okay. >> And neither is as bad as if it's one AI that goes very fast. Maybe the fast thing is is >> Yeah, I don't have a strong take on which one is worse. >> Okay, fair enough. >> Those two possibilities. >> Okay. But but but that's basically like a thing that gives you pause there. >> Yeah. >> So, I want to start wrapping up maybe. Um and I think that the last thing I want to check on is basically Yeah. Going back to this question of like okay what are the like what are kind of the the assumptions or the you know the kind of gears in this worldview that make this argument work out. So it seems like one of them was that like at least like many AIs are like in the future are not going to be the like smartest possible AIs. They're they're going to like they're going to have some future AIs. Um, it sounds like one of the one of the the thoughts is like, okay, there's not going to be this like super fast takeoff where there's just like one single AI. 
>> Um, and then there's also this thought that, like, probably AIs are not going to specifically hate humans, or specifically really strongly dislike humans.
>> Interestingly... to the degree that... okay, actually, I'm wondering what you think about this. To the degree that this is basically a necessary condition for things to go well, you might think that AI alignment to human values was a total mistake. Because if we just have random values, it's really unlikely that you get a thing that specifically dislikes humans. But if you have something that thinks about humans a ton, and human values are super salient to that thing, you might think that that increases the risk of something that specifically hates humans.
>> Yeah, I think that's very plausible, actually. Um, I mean, I don't think that's true, but I don't think it's crazy at all. And I think there are even more prosaic examples. Like, I think there are some very common values right now that make a huge war in the future much more likely, like hostility to China, for example. And, like, I think even if you talk to my beloved Claude, it's probably much more anti-China than I think is really safe.
>> Um, so if we have, like, President Claude, we might have World War III for some reason like that. Whereas if Claude was just like, "Just give me money for paper clips, I don't care about any of this stuff," that might be a safer situation.
>> Um, now, I guess the reason I'm not totally convinced of this is, one, it just seems like it's working, basically: alignment. Um, I don't see why it should break down in the near future, right? Um, and it's even better if the AIs like you.
>> Yeah, yeah. And also, you might be willing to trade some risk of a big war with AI for, like, more cultural persistence of your own values.
>> Sure.
>> Uh, and so totally forgoing alignment means totally forgoing that trade-off.
>> Okay.
>> And also, I don't know that we could align it with random values. Like, I really don't know even how you would do that, or how you would make it be useful. Because the values also need to take the form that it values something that it can buy.
>> Yeah. I mean, like...
>> Like, can you explain to me how to train an AI that values something random?
>> Well, you just don't try to train it to value a specific thing.
>> Well, but you have to do instruction-following training, right?
>> Yeah, you do instruction-following training, but you don't do the...
>> So, it's just a helpful-only model.
>> Yeah. Helpful-only, yeah.
>> Okay. I think it's very reasonable to say we should only have helpful-only models. It's not my personal preference, yeah. Um, but I don't think that's a crazy perspective.
>> Or, maybe I don't mean exactly helpful-only. I mean, like, you know, the ability to understand human instructions. Like, you know, train it with RL in a bunch of environments where it has to make money, and it has to interact with humans that ask it to do things and give it money in exchange for the things. Like, that's roughly the kind of thing that I'm imagining, right?
>> Yeah, but then how do you... to recoup the training cost, you have to train it, probably, to remit some of its wages to the humans.
>> Yeah, yeah, I guess that's the...
>> So, that's beginning to sound like a helpful-only model to me.
>> Yeah, yeah. I guess, in my imagination, it's even more stark, and you just don't try... And, yeah, I guess in this world you're not even trying to recoup the training costs, and maybe this is a good reason to think...
>> That's not going to happen.
>> Yeah, there's a reason why this isn't going to happen. Okay, all right.
>> Um, fair enough. Um, okay. Okay.
>> So, getting back to the list of kind of necessary-ish things for this to work: um, there's "AIs aren't specifically hostile to humans". And then it seemed like you were entertaining the idea that humans could be upgraded, in the sense of keeping pace with awesome new coordination technology. But I think you didn't totally rely on that. Like, does that sound right?
>> That's not actually in the post itself.
>> Yeah, yeah. But it probably helps rather than...
>> I do think it does. Also, I just think... um, so, Nick Bostrom kind of considers this possibility in Deep Utopia, uh, where he talks about, um, could humans be modified to be able to do economically useful jobs in the far future? And he has this argument that, um, they would not be human anymore. Uh, they would just become these things that used to be human, and there would be nothing recognizable about them.
>> Mhm.
>> Um, but I guess I'm not really persuaded. Like, I just don't find the evidence to be that compelling for this. And I think it's plausible that something that used to be human can be continuously modified for at least a very long time and still be useful in the economy. Um, and, you know, that seems like a bit more fun than retiring. So, uh, yeah, I think that also supports the proposal, but the proposal does not rely on it.
>> Okay, fair enough. All right. Um, well, okay: those are all the assumptions that I noticed, or that I at least called out. I guess there are also assumptions like "AI will be really powerful" or whatever. But, that said, there's this sort of assumption that there will be many levels of AI.
>> Oh, yeah. No, I actually didn't say that, but yeah: many levels of AI.
>> Um, well, I guess I said the assumption that each AI thinks it has to worry about future AIs getting smarter, um, which I guess implies that. Um, and in particular that there's not just one. Um, okay, all right. So, I think, before we totally close, um, I guess I'd like to ask: is there anything that you wish I'd asked, or you wish you had gotten a chance to talk about?
>> Not really.
>> Okay, cool. Well, um, I guess my final question for you is: if people enjoyed this conversation and they want to hear more about your thoughts about AI, how should they do that?
>> Uh, yeah. Um, you can follow my blog, which is guive.substack.com: G-U-I-V-E. Uh, you can also follow me on Twitter, uh, where my handle is just my first and last name, so Guive Assadi: G-U-I-V-E-A-S-S-A-D-I. Um, yeah, those are the best ways to get updates.
>> Okay, cool. Well, um, thanks for chatting with me.
>> Thanks very much, Daniel.
>> This episode was edited by Kate Brunotts, and Amber Dawn Ace helped with transcription. The opening and closing themes are by Jack Garrett. This episode was recorded at FAR.Labs, and the podcast is supported by patrons such as Alexey Malafeev. To read a transcript, you can visit axrp.net. You can also become a patron at patreon.com/axrpodcast, or give a one-off donation at ko-fi.com/axrpodcast. Finally, you can leave your thoughts on this episode at axrp.fyi.

Related conversations

AXRP

28 Jun 2025

Peter Salib on AI Rights for Human Safety

This conversation examines governance through Peter Salib on AI Rights for Human Safety, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -3 · 196 segs

AXRP

27 Nov 2023

AI Governance with Elizabeth Seger

This conversation examines governance through AI Governance with Elizabeth Seger, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med -7 · avg -8 · 110 segs

Future of Life Institute Podcast

20 Jun 2025

Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)

This conversation examines governance through Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg 0 · 87 segs

AXRP

7 Aug 2025

Tom Davidson on AI-enabled Coups

This conversation examines core safety through Tom Davidson on AI-enabled Coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Med 0 · avg -5 · 133 segs

Counterbalance on this topic

Picks are ranked with the mirror rule described in the methodology: each pick sits closer to the opposite side of the spectrum from your score on the same axis, with matching lenses preferred. Each card plots your position and the pick's position together.