Future of Life Institute Podcast · Civilisational risk and strategy

How to Rebuild the Social Contract After AGI (with Deric Cheng)

Why this matters

Safety is not only about model behavior; this episode highlights second-order effects on people, institutions, and labor markets.

Summary

This conversation examines society and jobs through How to Rebuild the Social Contract After AGI (with Deric Cheng), surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Society · High confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
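The two-segment tinting described above (amber at the risk-forward end, cyan at the midpoint, white at the opportunity-forward end) can be sketched as a piecewise linear colour interpolation. This is only an illustration: the site's actual palette values and score bounds are not published, so the RGB triples and the ±100 range below are assumptions.

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB triples at position t in [0, 1]."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

def tint(score, lo=-100, hi=100):
    """Map a slice score onto the amber -> cyan -> white strip.

    Assumed palette: amber (255, 191, 0), cyan (0, 183, 235),
    white (255, 255, 255). The real site's palette and score
    bounds may differ.
    """
    # Normalise the score to [0, 1], clamping out-of-range values.
    t = max(0.0, min(1.0, (score - lo) / (hi - lo)))
    amber, cyan, white = (255, 191, 0), (0, 183, 235), (255, 255, 255)
    if t < 0.5:
        return lerp(amber, cyan, t * 2)        # risk-forward half of the strip
    return lerp(cyan, white, (t - 0.5) * 2)    # opportunity-forward half

print(tint(-100))  # fully risk-forward -> (255, 191, 0), i.e. amber
```

Splitting the strip at the midpoint keeps the cyan anchor exactly at a neutral score, which matches the description of mixed slices sitting between the two extremes.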

Start → End

Across 60 full-transcript segments: median 0 · mean −3 · spread −23 to 17 (p10–p90 −14 to 5) · 2% risk-forward, 98% mixed, 0% opportunity-forward slices.
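The headline statistics above (median, mean, p10–p90 band) can be reproduced from per-slice scores with a short helper. The site's scoring pipeline is not published, so the nearest-rank percentile method and the sample scores below are assumptions for illustration only.

```python
def slice_stats(scores):
    """Summary statistics for a list of per-slice perspective scores."""
    s = sorted(scores)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    mean = sum(s) / n
    # Nearest-rank percentiles; a real pipeline might interpolate instead.
    p10, p90 = s[int(0.10 * (n - 1))], s[int(0.90 * (n - 1))]
    return {"median": median, "mean": mean, "p10": p10, "p90": p90}

# Hypothetical scores for ten slices, not the episode's real data.
print(slice_stats([-5, -2, 0, 0, 1, 3, 4, 6, -1, -4]))
```

With the episode's actual 60 slice scores, the same function would yield the median, mean, and p10–p90 spread quoted in the summary line.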

Slice bands
60 slices · p10–p90 −14 to 5

Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.

  • Emphasizes safety
  • Emphasizes labor market
  • Full transcript scored in 60 sequential slices (median slice 0).

Editor note

A high-leverage addition to the AI Safety Map that clarifies one important safety bottleneck.

ai-safety · fli · society-and-jobs · society · intro · public-understanding


Episode transcript

YouTube captions (auto or uploaded) · video aOh2cqTUlKk · stored Apr 2, 2026 · 1,634 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/how-to-rebuild-the-social-contract-after-agi-with-deric-cheng.json when you have a listen-based summary.

It is very clear that the major AI companies have all expressed that their focus is to move towards full automation: that they are building tools, and that they have the express interest in developing these tools to the degree that they can fully replace human workers. What would be really concerning is the development of, say, superstar firms, as one way to call them, in which those firms have maybe a hundred people or 500 people but are augmented and supported by thousands of AI agents that allow them to function as much larger and much more scalable corporations, and eventually start to capture, say, a majority of the economic wealth. What does it look like if the richest person in the world has wealth the same as the GDP of all of Africa, for example? We have seen significant periods of instability, unrest, even revolution when inequality, when the gains from the economy, are not well distributed enough. And we might see similar outcomes if we see that the wealth gets overly concentrated within a small set of people. >> Deric, welcome to the Future of Life Institute Podcast. >> Thanks for having me. >> All right. Do you want to introduce yourself? >> Sure. I'm Deric. I'm the director of the Windfall Trust, which is a new nonprofit focused on responding to AI economic disruption from transformative AI scenarios. And I also run a consortium of experts looking into what a new social contract might look like after AGI enters society, and we're calling that the AGI Social Contract. We've been publishing essays on a regular basis about these topics, trying to explore how to reshape the social contract, how to reshape governments and society in a way that benefits society as a whole. >> Yeah, and this is actually what I'm planning for us to talk about in this conversation: what does it mean to build a social contract after we have AGI, or after AGI is widespread in society? So let's start there: what is the actual problem here?
How do you see the world evolving over the next decade, say? >> Yeah, I see that we are on a trajectory where corporations are going to be gaining more and more power overall. I think that, as we've all seen, there are maybe four to six tech giants that will be leading the wave of developing frontier AI, and it takes a significant amount of capital and investment in order to build these frontier systems. I believe that as the value of these frontier AI systems starts to transition and diffuse into the rest of the economy, we're probably going to see a lot of big players in specific industries. Say, for example, Waymo with transportation, or consolidation of healthcare companies. And what that really looks like is that you might see maybe three or four major companies, or maybe even just one or two, in each industry dominating industries that used to employ millions of people, right? So if you have Waymo and maybe two or three other competitors, these might eventually replace millions of human drivers, for example, over the course of a decade or so. And so the real concern that we have at the AGI Social Contract is really about disempowerment of human labor. We're really worried that if we lose labor's ability to have leverage in the marketplace, if we lose their ability to advocate for stronger wages, to have a say in the direction of our economy, then we lose a lot of beneficial aspects of how our society is currently set up in terms of maintaining agency, maintaining bargaining power for humans, and then, in the grand scheme of things, really trying to improve human economic outcomes, right? So we're hoping that the strategies that we take in the next decade or so work towards empowering humans, and empowering particularly human labor, to still be relevant when a lot of value starts shifting from labor to capital. >> Yeah.
And this shift in power from labor to capital, this is something you might expect if it turns out that AI actually replaces jobs or automates certain industries. Why should we expect that, as opposed to AI being tools for workers which make them more productive, such that they might have even more bargaining power, which is what we've seen at least in some industries in the past? >> Yeah, it's a great question, and frankly I don't think that there is any way to know. There's a lot of debate among economists and people working in this space about what will happen down the line. But I think it is very clear that the major AI companies have all expressed that their focus is to move towards full automation: that they are building tools, and that they have the express interest in developing these tools to the degree that they can fully replace human workers. It might not be explicit in the mantras of all of these companies in terms of what they're saying, but it is underlying in their quest for AGI, and I personally don't see concrete or solid reasons that they wouldn't be able to get there in the next decade. I think it's very, very possible, and as a result, at the very least, we should be trying to prepare for that outcome in the case that it does happen. >> Mhm. And so what does this world of extreme power concentration among a handful of corporations look like if, say, we stay on the default trajectory and do nothing to intervene here? >> Yeah, I think the best analogy would be to compare the major corporations of decades ago to the major corporations now, and to maybe what we envision major corporations will look like in the future. Decades ago, the largest corporations had hundreds of thousands, millions of people working for them. It took a lot of manual labor. It took a lot of people up and down the entire stack of industry in order to keep these corporations working.
Say, GM would be a good example, or the railroad companies, or healthcare. With modern corporations, we're looking at 10,000 to 100,000 people running the largest tech firms, right? And now OpenAI and Anthropic are commanding massive valuations with only a few thousand people. What would be really concerning is the development of, say, superstar firms, as one way to call them, in which those firms have maybe a hundred people or 500 people but are augmented and supported by thousands of AI agents that allow them to function as much larger and much more scalable corporations. And those superstar firms might eventually start to capture, say, a majority of the economic wealth, or a majority of the economic gains or productivity increases, and over time they might capture more and more of these existing industries while funneling their resources practically to only a very, very small subset of people. >> Yeah. And why would worrying about inequality driven by AI, why is that different from worrying about inequality as, say, the left wing in politics might regularly worry about inequality? Why is there something to worry about for both sides of the political spectrum here? >> Yeah, I think that the trends implicated by AI are probably exacerbations or accelerations of the existing trends. And so I think many of the positions that we might have thinking about the AI economic space are really just taking the current trends and speeding them up, exponentially increasing them, right? Like, what does it look like if the richest person in the world has wealth the same as the GDP of all of Africa, for example, or all of an entire other continent? I think from the political perspective there hasn't yet been really a strong leaning from one end or the other on this. Certainly it could still be captured by both the conservative or the liberal factions.
I think the one thing both parties and all people can agree on is that we do want good outcomes for human workers. For example, in the US, we want good outcomes for the workers and the economy, and this might be something that could transcend political parties when and if we see significant labor disempowerment leading to weaker wages, lower consumption, a lower share of labor capturing the gains, and therefore just a destabilization of our economy in ways that don't necessarily align with the prosperity that we would hope AI would give us. >> Yeah. So in the past we've seen increasing wealth combined with increasing inequality, at least between countries, and in certain periods within countries also. My question here is basically: does inequality matter if everyone is rich? So we imagine a post-AGI society in which there's so much wealth that even though there's extreme inequality, the worst-off person or the worst-off group is still better off than they are today. Is that the world you're envisioning? And if so, does inequality matter in that situation? >> Yeah, I think it is entirely possible. I wouldn't say I'm necessarily a doomsayer when it comes to AI economics. I think that there will be massive increases in prosperity and growth, and particularly for developing countries a lot of tools will become accessible to them at a much more rapid rate than they otherwise would, right? They might have access to programmers or to doctors or tools of that nature much more rapidly than they would have in a paradigm without AI. I think that the main concern around developing countries in particular is the risk of them being cut out of the AI boom. They don't necessarily have a lot of strong tools, across most developing countries, to be directly involved in the systems that are developing, creating, and capturing gains from AI.
The data centers, the systems of power, the researchers: all of that is really concentrated in Western countries and China. And so we're really looking for ways to try to make sure that the trickle-down effects of AI do get widely spread. And the real concern, if they don't get widely spread, has to do with societal unrest. I think that we have seen significant periods of instability, unrest, even revolution, when inequality, when the gains from the economy, are not well distributed enough. And we might see similar outcomes when it comes to destabilizing political power, destabilizing democracies, political instability, if we see that the wealth gets overly concentrated within a small set of people. >> Mhm. Yeah. Makes sense. One question I think will pop into many listeners' minds as we have this conversation is: do we have enough time to do something about this problem? You're talking about trends that are accelerated due to AI, and increasing inequality. Given the pace of AI, do we have time to intervene here? >> Definitely there are a lot of challenges that will come with AI capabilities taking off very, very rapidly. The one that I think I'm most focused on is really how quickly society can respond, and how quickly we can make the right changes to our political systems, to our societal systems, to our economy, in order to adapt. And really my belief is that these are problems of our own volition, not necessarily problems that come from AI. We could decide today, if we all agreed on a certain path, to reshape taxation, to reshape welfare systems, in order to protect people from the negative outcomes of AI. It's simply that we don't have the political power or the direction or energy yet behind those types of things, or the awareness that this needs to happen.
And so my belief is really that in the case that we see sudden dramatic changes from AI, say something on the equivalent scope of a COVID pandemic, or one of the major paradigm-shifting moments from the past few hundred years, say the advent of the Bretton Woods system after World War II: when you see significant changes in society, you also see a very rapid shift in how governments, societies, and political systems respond. And so there is the possibility, if a lot of things change very rapidly, that we might collectively simply decide to move in the right direction. And it is really important for us to be prepared, for us to push that movement in the direction that most benefits society. >> Yeah. And so that's also where your work fits in, so that we have some plans or some scenario planning ready if the political will to implement some of these ideas suddenly arises. >> Yeah, that's the hope, and we're doing the best we can to try to model different scenarios that might help move people towards understanding that this is a possibility. >> Yeah. Why isn't this conversation more mainstream already? At least from within the bubble that the both of us are part of, and perhaps many listeners to this podcast are part of, it seems that AI is a massive deal and it seems like it's moving very fast. Why isn't the conversation about what AI will do to our economies that mainstream yet? >> Yeah, it's pretty fascinating, because on one hand you have the media talking about this sort of idea non-stop. It's a great media story and people are engaging with it. On the other hand, there are very, very few people working concretely on trying to respond to the space. I think it's probably on the order of hundreds of people, compared to, of course, almost millions at this point working on AI capabilities.
And I think part of the reason is that there is a big gap between where AI futurists, the people working in Silicon Valley, are right now and where the establishment thinkers are. I think that on the AI futurist side they are very convinced that AI capabilities will take off very rapidly, but there isn't a lot of, say, economics knowledge, or policy knowledge, or background in the inner workings of how societal change happens. And then on the establishment side, you have deep credibility in economics, public policy, and organizing change, but they haven't necessarily bought fully into the idea that capabilities could take off at this rate. And so these groups, I think, are almost a little bit talking past each other. On one end, you're talking about 10 or 15% GDP growth year-over-year, and on the other end, a lot of traditional economists are more in the bucket of, oh, this might increase growth by 0.1% or 2%. And so I think our goal in many ways is really to just try to take these two camps and show the ways in which they both have visibility into key parts of what is happening, and to allow for the conclusions to trickle in together, right? I think when you ask economists to hypothetically take, for example, what if AI capabilities could lead to a total automation of certain types of cognitive jobs, their conclusions are very different, and creating a space by which they can engage with their peers, collaborate, and think about these things is probably the next step, just to start moving us in the right direction. >> Yeah. And you of course talk to both groups, and I've interviewed people from both groups too. And I find that the optimists about AI capabilities could learn a lot from the corpus of economics, and that the economists could likewise learn a lot from thinking more seriously about the possibility of really advanced AI.
What do you think these groups could learn from each other? >> Yeah, I think on the AI futurist side, there is a lot to learn about how economies trickle down: how slow diffusion happens, the systematic structural impediments to simply deploying a very, very powerful AI system into many structures and systems. A lot of these things have been built with humans in mind over the hundreds of years, the entire time, that they've existed. And a lot of processes exist to slow down and to make sure that humans are in the loop, even if they haven't explicitly been designed in that way. And so I think that starts to really pull us back a lot from this idea of 100% automation, towards the recognition that there will be a world in which powerful AI agents and humans are working together and overlapping and collaborating for, honestly, a very long period of time. >> And when you say very long period of time, what do you mean by that? >> Yeah, everyone has different opinions on trajectories, but my opinion is that we still have, you know, decades before humans are possibly fully pushed out of major decision-making positions, right? I think the conversation in the AI space is that this could happen very, very rapidly. I see many structural impediments to that happening rapidly, though I wouldn't disagree that there is the possibility of overall human disempowerment in the long term. >> Yeah. And we should say, of course, decades sounds like a long time. But for a societal transition of this magnitude, it's not a lot. It's not a long time for basically everything to change, for the way our economies work to be upended. >> Definitely. Yeah.
Yeah, one of my beliefs is that in the moment, revolutionary changes are happening very, very rapidly, but on a day-to-day basis in society it might not feel like that, right? Like, one example would be the internet, which of course has revolutionized how we work in society and how we engage with each other and how we think about technology, but its impact on GDP actually was relatively pretty small for almost a decade to two decades, and we have only really started to see in the last decade the impact of the internet on overall GDP levels starting to creep beyond a few percent. And so even though it has completely changed how we think about society and how we think about work, from a structural economic perspective these things happen to be quite delayed in terms of how quickly they change. >> Although AI could move even faster than the internet. We've seen the adoption of language models, the number of ChatGPT users, the revenue growth of the frontier companies, and so on, and it's faster, I think, than how the internet companies were growing in the '90s. Do you think it could move faster than the internet? >> Yeah, absolutely. I think when you think about these revolutions happening over the course of human history, each one progressively happens an exponential order of magnitude faster, right? The agricultural revolution took, what, 5,000 to 10,000 years; the industrial revolution took at least many decades; the internet revolution has occurred in the course of perhaps a couple of decades. And the advent of AI will happen much more quickly. But also the diffusion of AI technologies throughout all of society will still happen much more slowly than perhaps we can imagine in San Francisco or Silicon Valley, right?
Like, I think many developing countries are only starting to see the internet diffuse into their societies, and similarly it could be a significant period of time before AI does so for much of the developing world. >> Yeah. What should economists learn from people who are very in the know about AI, and people who are creating this technology? >> Yeah, I can definitely empathize with the economists and existing policymakers, because when you step into the bubble of people really thinking about AI capabilities, some of the things that they're talking about do seem so extreme or hard to believe, right? But I think that there is the potential right now for a shift from the traditional sort of automation that we have seen over millennia, in which individual jobs get automated and people move from those jobs, say from typists to knowledge workers, or say from horse-cart drivers to taxi drivers. All of those shifts typically led to more jobs in new industries, because there were always higher-level or more complicated or more advanced roles for people to move into. I think there is the possibility that AI does theoretically have the ability to automate, nearly fully, large amounts of cognitive labor, and that as it gets better and better it can continue to expand across many, many knowledge-work industries and also compete directly with humans on even the new jobs that are created in the knowledge-work industry, right? They might be adapted to become AI coordinators themselves. And so I do think that there is the risk of a decrease, maybe not in the total number of jobs, but in the total number of desirable jobs that we want humans to have. I think a lot of the goals of the economy have been to move people towards knowledge work, towards intellectual work, towards leveraging the value of education.
And when we take the implications of AI capabilities into account, that starts to move the needle in the opposite direction, almost pushing humans towards more manual labor. And so when you start to take some of these assumptions into account, the shape of how you do economic modeling changes very dramatically. And there is a lot of work, I'm sure, that still needs to be done in terms of understanding what that might look like and what the implications of that would be for our society and our economy. >> If we imagine we have AIs in the future that can do all knowledge work, and humans are pushed into more manual-labor-type jobs, might those jobs become extremely well compensated, just because economic growth might be very high, and there's a big difference between near-full automation and full automation? So we can imagine a situation where humans are, say, fetching training data that's hard to get by moving around in the world. This is perhaps not the most interesting job, but it might be very, very well compensated, because it's a bottleneck in the economy. Is that how you would model it? >> Yeah, I think that could be possible. There are certainly worlds in which the productivity of all of our work just increases significantly because of the advent of AI. I think the main concern is that if you have a lot of competition for a given set of jobs, that inevitably pushes wages down, right? And so the total amount of desirable or valuable work starts to come into play in how the economy will respond. And the big question, of course, is whether or not there will be more valuable or desirable jobs, or fewer. And certainly that is completely open, and I think our position, thinking about the AGI Social Contract, is simply that there is a sizable chance that it is fewer.
I don't think it's possible to rule that out, given the implications of what AI capabilities might look like, and in those cases we should at the very least be ready to respond and to protect human outcomes. >> Yeah. Could we move into a situation where the jobs that exist in the future relate to sort of insane luxuries, like having a dog therapist, or being a professional party-goer that creates a good atmosphere at parties, or something like that? Do you think there could be room for jobs that we can't even imagine, and that these jobs might not even feel like jobs, just as perhaps the job that I'm doing right now, or the job that you're doing right now, wouldn't feel like a job to a person 200 years ago? >> I would love that, actually. I think that would be really fun, to be a [laughter] a dog therapist or a professional party person. [laughter] Yeah, that sounds really excellent. I think that the jobs that I see as deeply, deeply resistant to automation really have to do with human-to-human connection, right? I think that even with very, very powerful AIs, we still have demand and desire to engage socially with other humans. It's programmed into our nature, and by leaning into that we will still find meaning, purpose, and value in our work, in our lives, but it might happen on a more local, community-oriented basis, right? Instead of doing tasks online for people thousands of miles away, we might be building our communities locally. I think that my general concern, perhaps, is that there just aren't that many economically desirable outcomes for many of these very desirable jobs. For instance, being a party planner does not pay very well. I actually worked in events, and I ran a festival for six years, and I can speak very deeply to how the better the event you run, the less financially profitable it is.
And similarly with artists, musicians, writers, painters: people who are really pursuing the paths that I think bring them a lot of meaning, and that they have found to create direction in their lives, oftentimes make significant sacrifices when it comes to finances, because the economic incentives are not there. And it's not necessarily clear that AI itself will change those economic incentives. I don't think we'll have 10x or 100x demand for music. I think we're all listening to music approximately as much as we would like to listen to music at this time, right? And so it gets complicated and hard to imagine exactly how all of these desirable jobs will be created from this AI boom, unless we restructure how we think about society and how we think about government policies to protect human labor. >> Mhm. Yeah. You actually have a good write-up on categories of jobs that might be resistant to automation, so it might be interesting for us to go through some of that list. For example, intent communicators: that's an interesting category of role. Maybe you can explain what an intent communicator is. >> Of course. I think that as long as corporations are run by people, there will be people at the top, and for those people at the top to function very well, they will have to have support to understand, or to engage deeply with, powerful AI agents, even in these superstar firms in which AI agents are performing a majority of the tasks in these futuristic societies. For example, in order to build a good software product, you still have to have a very deep understanding of the architecture of the software product, of what needs to be done when it breaks down. You have to be able to integrate feedback from human consumers and customers, and turn that into a product that is well structured and can respond appropriately to their needs.
And then you have to communicate all of that up to the CEO, who is trying to build a good product and thinking about the high-level strategy, right? And so in those ways, even if you can see a lot of the grunt work of software development automated, with tests or even with writing code, it's still very clear that there is a role for humans, as long as corporations are human-driven, in order to engage with external parties, engage with customers, engage with people up the stack, and to basically provide that conduit between very, very smart AI systems and the people who work within the firm. >> Yeah, a similar role is interpersonal specialists, which is sort of greasing the wheels of making things work between people. What might an interpersonal specialist do in the future? >> Yeah, I think I see two categories of jobs very resistant to transformative AI related to interpersonal specialists. For instance, there are many that need to be in person. And frankly, I just don't see that people are going to strongly desire AI nannies or primary school teachers or social workers or sports coaches for a long period of time. There's a lot of in-person movement. There's a lot of eye contact. There's a lot of social experience that is valuable, and frankly I would expect those wages to rise, and the number of those individuals to go up, as we move into an AI economy. And then there are a number of positions that are more virtualizable. For instance, therapists have become very virtualizable. A lot of us now take therapy online, or life coaches, or travel agents. And for those I see, for example, that you will definitely still have human therapists, but they will sit as a luxury good or as a premium, right? They'll have to be strictly better than talking to a ChatGPT agent, but there is still value to be taken from the life experience of a real-world therapist.
People will still value maybe the eye contact, or the knowledge that somebody is genuinely out there supporting their work or their goals, and that might just become a luxury good. It might just become more expensive than the ChatGPT therapist, for example. >> There's also the category of people who are legally allowed to make a decision. So, decision arbiters might be lawyers, might be doctors. How do you think about that group? It seems like perhaps they have a legal protection for their job category, and that seems like it could be very resistant to automation. Also perhaps because lawyers are exactly the group of people that are engaged in making laws about what can and can't be automated. >> Yeah. I think as a society we are just going to have a lot of resistance to handing over significant decision-making power to AI systems. I think culturally it just isn't in our nature. We have a lot of negative resistance to AI in many forms right now, even with just commercials and audiovisual content, and so I do believe that we are going to have a lot of resistance when it comes to judges, to legislators, to lawyers, definitely to politicians. Even if you can show that these AI lawyers, for example, or judges are more impartial or more fair, it just really is unlikely to be in human nature to fully trust even these types of agents. And so I anticipate that will just create, frankly, years, maybe even decades, of resistance, in which the political systems and the social systems that, for instance, certify lawyers will just not update to allow for AI lawyers, though AI will certainly uplevel and upskill the people working in those roles significantly, and the best lawyers and the best judges and politicians will certainly be leveraging AI to support their workflows, honestly, starting probably right now. >> Yeah. Yeah. Okay.
So you mentioned that people are reluctant to trust AI, and at least a number of people instinctively dislike AI products when they see something AI-generated. You know, the comments are: I don't want to see this, this is AI slop; they don't like it. When it comes to AI code, people don't have that preference, at least not to the same extent. I see complaints from programmers that the code that's produced is sloppy, that it's not working correctly, that it's messy, and so on. But the end consumer probably just cares about whether the software they're using is working or not. And for decision makers, it seems that if a lawyer can give you the right answer to a question, or if a judge can be impartial, people just want the product, and they perhaps don't care so much about how it's produced. So what I'm asking is: do you think decision makers fall more into the category of responses that we've seen to coding, or more into the category of responses we've seen to AI-generated media? >> Yeah, that's a great question. I think it has to do with how human, or how personal, the roles that AI is taking feel to us, right? And perhaps there is something there around coding being almost a mathematical or physical process. It doesn't necessarily need to be human; we can envision AI taking it over. But art and creativity feel very human. They feel like something that we have always wanted and considered to be a human creation, you know, that other species in the world maybe can't engage with. And so maybe that's where the fear or the resistance comes from. Judges and lawyers are a good example, right? Like, I would certainly just like the right answer when it comes to interpreting a legal text and saying, this is what you should do, right?
I think I would be very okay myself handing that over to an AI lawyer. But I don't know if I would be okay handing over to an AI judge the decision of whether or not I end up in prison, or how my money ends up being distributed. That feels almost like a societal, a human process that I still want another person to engage with for me, even if I don't know that person or I don't have any particular reason to trust that person more than an AI system. Certainly everybody is going to land at different places on this, but that's perhaps the distinction in my head. >> Mhm. Yeah, that makes a lot of sense. Actually, we also have the category of artists in general. You write about low-volume artisans and authentic creatives, which are sort of two ways that artists might be able to avoid automation. What is this about in general? Is this about something being human-made giving it value, so that, you know, consumers today might go for organic products and in the future they might go for human-made products? >> Yeah, I think we're already starting to see those trends in society. For instance, there is some implicit value placed on a human artisan on Etsy as compared to a cheap product made in a massive factory in China, right? We assign value onto art when we know who painted it, or when we know the story behind it. And so there is certainly still a gap for that human connection in the creation of the product given to the people. And the second thing involved in this is that certain things are just frankly too low-volume, especially things that require a lot of manual precision in their creation, for AI systems to develop. I don't think AI systems are going to be developing violins for the next few decades, because it's low volume.
It's very artisan, and it requires a lot of nuance that frankly has never been encoded onto the internet. And so it just doesn't seem worth it for the corporations that are developing these very complicated and expensive AI systems to be putting effort towards these until well down the line. >> Do you think the product categories related to luxury and status could increase as a fraction of the economy? So, for example, one of the things that makes luxury goods valuable is that they are limited in production. Not everyone can have them. Maybe they're handmade by people. Do you see that playing a bigger role in the economy? >> Yeah, I do. Especially if we have the risks of increasing inequality that we have been discussing, luxury goods will become a larger and larger portion of the economy, and those status symbols could be a big driver of many portions of the economy, right? Like having more people supporting your lifestyle would be a possible direction, or having more expensive signaling goods such as yachts. I do think that that isn't necessarily the case, and that we still have a lot of slack in the global economy to be bringing a lot of people out from poverty, out from financial insecurity, out from scarcity. The focus of what we should be doing as a society is to move as many of these resources as possible in the direction of lifting as much of the world as possible out of these very difficult conditions, so that we have almost a guarantee of a good lifestyle, of good housing, of education, of health care, of financial security. That should be the priority for society. >> Yeah. What do you think happens to the price of land once we have a post-AGI economy? >> Yeah.
I think a really good tool in the economist's toolbox is called the Baumol price effect, which is essentially the recognition that things that are limited in availability will go up in price, whereas things that are progressively cheaper and cheaper to produce will go down in price. So TVs, consumer goods, phones: our technologies have gotten significantly better, and as a result we can buy frankly almost anything we want on Amazon or other platforms for a tenth of the price that you might have imagined 20 or 30 years ago, right? At the same time, our housing costs have gone up. Our education costs have gone up. Our healthcare costs have gone up. All of these for different reasons, but a large underlying problem is that the availability of them is bottlenecked. And so when more wealth is created and less of it goes towards consumer goods, inevitably the cost of some of these limited goods must go up. And land is perhaps the most fundamental limited good. There's only so much beachfront. There's only so much desirable land within a city, within five minutes of the job, in the right location. And so I'd imagine that land holds value very well, and that you continue to see an acceleration of challenges such as housing becoming unaffordable. Finally, I think there are really interesting solutions for how you might respond to all of these changes through taxation. For instance, some people I've worked with as fellows in my Convergence Analysis fellowship have proposed a land value tax as something that is resilient and well structured to respond to an AI economy: if you tax land, it will inevitably capture more and more resources progressively from the owners of land, from landlords, and that might help to restabilize tax revenues when you start to see significant restructurings of the economy.
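The price dynamic described here (the Baumol effect) can be illustrated with a toy two-sector model; all growth rates and the 30-year horizon below are invented for illustration, not taken from the conversation. Productivity grows in one sector only, wages equalize across the shared labor market, so the stagnant sector's relative price rises.

```python
# Toy illustration of the Baumol price effect (all numbers hypothetical).
# Two sectors share one labor market, so wages equalize. Productivity grows
# only in the "progressive" sector; unit cost = wage / productivity.

def relative_price(years, wage_growth=0.03, productivity_growth=0.03):
    """Price of the stagnant sector relative to the progressive sector."""
    wage = 1.0
    prog_productivity = 1.0   # grows every year
    stag_productivity = 1.0   # flat: a string quartet still needs 4 players
    for _ in range(years):
        wage *= 1 + wage_growth
        prog_productivity *= 1 + productivity_growth
    progressive_price = wage / prog_productivity   # stays at 1.0
    stagnant_price = wage / stag_productivity      # tracks the rising wage
    return stagnant_price / progressive_price

# After 30 years of 3% productivity growth, the stagnant sector costs
# (1.03)**30, roughly 2.4 times as much relative to consumer goods,
# even though nothing about how it is produced has changed.
```

This is the same mechanism the speaker applies to housing, education, and healthcare: the bottleneck sits on the supply side, so rising wealth elsewhere shows up as rising relative prices there.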
>> You have this road map for how we can respond to these challenges at three levels: near-term, medium-term, and long-term. Maybe we can talk about how we can respond at different time scales. >> Yeah, I think a lot of the debate between the establishment thinkers and the AI futurists that I've been talking about is really just about what time scale we're looking at and how soon we need to be responding to these things. Establishment economists and policy makers are really thinking about near-term volatility, right? Like, what is going to happen if we quickly lose a lot of jobs in the short term? What is going to happen if a lot of investment happens in certain areas, and how do we best capture that? >> Just to jump in here: do you think the establishment thinkers see this as a break in an economic trend, and are they assuming that we'll go back to business as usual afterwards? So are they assuming that this is more like a financial crisis, and then we'll return to a more normal economy afterwards? >> Broadly, I think they see it as an acceleration of existing trends. And so I would say even the traditional economists are very concerned about inequality, about the role of developing countries, about how to make the AI boom inclusive, but certainly their expectations about the radicalness or the size of these trends seem very different from those on the AI futurist or Silicon Valley side. And then on the AI futurist and Silicon Valley side, I think people are largely just focusing on the end result. It really depends on your timelines, but they're really focused on: what do you do when the entire economy is entirely captured by AI? What do you do when you need totally revolutionary structures like UBI for all? And I think that the answer maybe is that all of these are different parts of the same road map.
In fact, in the very near term, in the next few years, we might need very strong responses to volatility, such as wage insurance, unemployment benefits, or reskilling tools. And then in the long term, I think it is almost inevitable that we start to need very transformative structures for society, things in the direction of UBI or a global dividend, or things that revamp the role of corporations and governments and how they respond. And so my pitch would be that instead of having this debate about which one of these things is necessary, we recognize that these are all different portions of the same path, that we are simply talking about different steps along a timeline towards a successful and flourishing society, and that we should navigate it collectively by understanding which policies are most necessary at this time. >> Yeah. Do you think what we'll end up doing will be radically different from what we are imagining now? Now we're talking about taxation and redistribution, perhaps a UBI later on. Do you think what will actually happen is something we can't really imagine, because AI is transformative in a way where suddenly new possibilities emerge? [laughter] I can't really name what the solution might be, because what I'm asking is whether it will be something we don't expect, but to give an example of what it might be, we could imagine that we end up taxing robots that are working for us, or something like that. Do you think the solution here is something outside of the space of possibilities that we are currently thinking in? >> Yeah, I believe that what we can conceive of right now are the same problems and solutions that we will conceive of in decades, in that we humans will always want similar things. We will want stability in terms of our housing. We will want a good education. We will want strong community bonds.
We all want financial independence and security, and so the solutions will tend to be in the same overall direction; what will change is really the political palatability of certain types of ideas. For instance, just as you're saying, the idea of a robot tax has been explored briefly in South Korea and has been touched on by a few different economists and thinkers in the space, but it's just certainly not very politically palatable, and frankly quite infeasible to implement as AI diffuses quite naturally into almost every layer of our society. It's really just a matter of where we are as a society in being able to talk about some of these solutions. And I see that there is a significant amount of work by very qualified and thoughtful people, say GiveDirectly on the ideas of UBI, or the Economic Security Project on the idea of a universal guarantee, and that the best thing that can be done is to continue to add fuel to the fire for those sorts of projects that are really starting to think about how society should look in a decade or two if we see a significant amount of inequality and power concentration among a small set of capital actors. >> So what you're saying here, if I'm understanding you correctly, is that we sort of have a map of the space: unless something radically changes about human nature, we understand what the tools we have on hand are. We understand we need some form of taxation and redistribution to handle this problem. Is that correct? >> Yeah, I think that we have the tools we need to conceptualize, and to understand to some degree, the solutions of the future. Certainly, being able to test them is a totally different story. Testing economic beliefs or concepts is traditionally among the hardest things you could possibly imagine, and that makes economics a relatively softer science, because you can't easily run randomized controlled trials.
I would say the main aspect that I have not put significant thought towards, but that other people in the AI policy space have, is the idea of loss of control to AI-driven economies and political systems owned or controlled by AI systems. And frankly, that is for me very hard to conceptualize, and I'm not quite there yet myself. So I think there are good resources out there from people who are starting to think in that direction. >> Yeah, listeners who are interested in that can scroll back in the feed to hear my episodes where I interview people on that topic. But you mentioned the difficulty of finding out what works, of testing, say, UBI or other solutions to these problems. Why is that so difficult, and what does the difficulty of testing out potential solutions mean? >> Yeah, structurally we just have a lot of impediments to running fully randomized controlled trials on many of the policy ideas we want to test. The largest one is probably political support and capital. Economic trials cost millions and millions of dollars. They require buy-in from governments. They require significant changes to the lifestyles of many, many people. And they are confounded, because people do not operate or live within a vacuum: they operate in direct contact with, and in the context of, many people who are perhaps not involved with the trials. And so actually one of the most impactful ones that I've read recently is a randomized controlled trial by GiveDirectly.
I think this was from a few years ago, in, I believe, Kenya or another African country, where certain villages were randomly selected to be given a universal basic income for a period of time and certain villages were not, and then they used frankly very complicated methods to try to identify the differences in outcomes, accounting for the interplay between villages that are working within a single economy. And so there are tools that allow people to test some of these ideas, but they are challenging and require a lot of capital and a lot of buy-in. >> Yeah. We can think about it like this: say you're part of a UBI trial, but you know that there's an end date for that trial. Then you behave differently than you might have behaved had you known that the UBI would be for life. Or you know that there are people outside of the trial who are not receiving money. Maybe you tend to spend more money on, say, family members who are not part of the trial than you would have if the UBI were actually universal and indefinite. The economy is too complex to control all of these variables, I think, would be one way to summarize it. But then, if we can't run great tests, how do we test all of the great ideas that you and people in the space might have? Can we do something in simulation, or can we do something where we have strong theoretical reasons to believe it might work? >> Yeah, I think it's similar to medical testing: there is a ladder of trials that might be useful, and I know that the UBI space has been doing this for one or two decades at this point, progressively running larger and larger trials and trying to collect enough evidence to justify the next level up.
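The spillover problem just described (control villages partially benefiting from treated neighbors) can be sketched in a toy simulation. Every parameter below is invented for illustration; the point is only that the naive treated-versus-control contrast underestimates the true transfer effect when spillovers exist.

```python
import random

# Toy cluster-randomized UBI trial with spillovers (all parameters invented).
# Treated villages receive a transfer; untreated villages capture a fraction
# of it through trade with neighbors, shrinking the naive contrast.

def simulate_trial(n_villages=1000, transfer=100.0, spillover=0.3, seed=0):
    rng = random.Random(seed)
    base = 500.0  # hypothetical baseline village income
    treated, control = [], []
    for _ in range(n_villages):
        noise = rng.gauss(0, 10)
        if rng.random() < 0.5:
            treated.append(base + transfer + noise)
        else:
            # Control villages indirectly receive part of the transfer.
            control.append(base + spillover * transfer + noise)
    naive_effect = sum(treated) / len(treated) - sum(control) / len(control)
    return naive_effect  # biased toward transfer * (1 - spillover)
```

With `spillover=0.3` the naive estimate clusters near 70 rather than the true 100, which is why the GiveDirectly-style analyses the speaker mentions need explicit models of cross-village interplay rather than a simple difference in means.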
Sometimes, opportunistically, a significant change in society will lead to a policy being passed for a period of time. A great example of this is the child tax credit, which organizations in the US have advocated for a decade or two. During COVID, for a very short period of about 1 to 2 years, the child tax credit was passed universally across the US, and from that they saw significant decreases in poverty for many households with children, and many other beneficial impacts. Of course, that was repealed a couple of years later, but from it you get very significant evidence that the policy could be valuable in a certain way. And so the strategy, I think, is almost waiting for the right conditions to arise, as well as supporting the creation of those conditions by building political support and investing in researching the right types of ideas, so that when the moment comes, the policy makers or decision makers can be presented with enough evidence to push them in the right direction. >> Yeah. You're concerned about both domestic inequality and global inequality, and you have some proposed solutions to deal with both. Maybe you can sketch out what you're thinking about when it comes to domestic and global inequality. >> Of course. One of the things I'm doing the most research on is taxation in an AI-driven economy and how it might need to be restructured. I think it seems relatively uncontroversial to recognize that if the economy significantly shifts the way it captures value from labor to capital, we should think about, at the very least, how to redesign our taxation mechanisms to account for that. And the project I'm most working on right now is the idea of a progressive corporate income tax.
Essentially, what that is intended to respond to is, one, the idea that there might be an unprecedented shift from labor to capital income, which could drastically shift the overall tax base, and two, that there could be a massive increase in the concentration of profits and economic growth. And so a progressive corporate profit tax would help to respond to that by capturing more value from the largest corporations as opposed to smaller mom-and-pop stores, and also capturing relatively more profit from capital as opposed to labor. Right now our labor taxes are on the order of 20 to 40%, right? They are already progressive, and they are significantly higher than capital taxation, which broadly sits at roughly 20% in the US. And so if a lot of value shifts from labor to capital, how do you make up that 20% gap? A progressive corporate income tax could be one step in that direction, as an example. >> And I guess the main worry here would be that if you tax your corporations, say you're the US and you implement this progressive corporate income tax, well, then perhaps these corporations now have an incentive to operate somewhere else, to incorporate somewhere else. So how do you think about the balance of keeping taxes low enough that your corporations still function within your country, but also getting enough tax revenue to operate the government? >> Yeah, this is speaking to a really broad and persistent problem with global taxation: essentially the issues with tax havens, with the race to the bottom in corporate taxation that has been going on for the past 20 years. When you've heard stories such as Apple paying zero in corporate profit taxes, probably about a decade ago (it's changed since then), it's related to all of that.
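A progressive corporate profit tax like the one described here could work the same way marginal income-tax brackets do, applied to profit instead of wages. The thresholds and rates below are purely hypothetical, chosen only to show the mechanism of sparing small firms while taxing the largest profits at a higher marginal rate.

```python
# Marginal-bracket tax on corporate profit (thresholds and rates hypothetical).
# Each rate applies only to the slice of profit above that bracket's floor,
# so small firms pay the low rate while the largest profits face the top rate.

BRACKETS = [              # (profit floor in $, marginal rate)
    (0,             0.10),
    (10_000_000,    0.21),
    (1_000_000_000, 0.35),
]

def progressive_profit_tax(profit: float) -> float:
    tax = 0.0
    for i, (floor, rate) in enumerate(BRACKETS):
        ceiling = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if profit > floor:
            # Tax only the slice of profit that falls inside this bracket.
            tax += (min(profit, ceiling) - floor) * rate
    return tax
```

Under these invented numbers, a $5M mom-and-pop profit is taxed entirely at 10%, while a $2B profit pays 10% on the first $10M, 21% up to $1B, and 35% on the remainder, which is the concentration-capturing behavior the speaker is after.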
And I would say the most promising structure and solution has to do with the OECD, which has proposed a framework called BEPS 2.0, base erosion and profit shifting, which essentially allows for global tax coordination, across almost every country involved, in the taxation of multinational enterprises such as Google and Apple. It is currently somewhat stalled, but I do believe it seems to be the most reasonable and practical solution to help avoid this race to the bottom that we've seen over the past two decades. And this is just frankly a fundamental problem that has arisen as a result of our changing economic conditions due to technology, right? Twenty years ago, corporations that delivered products also operated within the country, right? If you sold products, you had a base in that country to sell the products from, and as a result, you could tax that base. These days, digital corporations can deliver ads in Belgium but not have a base in Belgium. And so there must be some way for Belgium to capture the profits being created by its citizens spending money. That's essentially what OECD BEPS is designed to respond to: that kind of inequality and the existing gaps in the global taxation mechanisms for corporations. >> And you mentioned before that we could also tax land. That seems to have a built-in advantage: you can't just move your solar farm from Nevada to somewhere else. And so if you tax land, is that perhaps more efficient, or does that avoid this problem with corporate income taxes? >> Yeah, I think land taxation would be much more resilient, as wealthy people are very unwilling to shift their land consumption elsewhere. Certainly they can, but it's much more difficult.
And then similarly, consumption taxes are quite resilient to this sort of international tax haven, tax loophole work, in that where you consume goods tends to be relatively stable in many industries, not all, but many. >> Yeah. As I understand it, corporate taxes in general and income taxes are sort of the worst for economic growth (this is oversimplifying, perhaps), whereas land taxes and consumption taxes are better, or the best, for keeping economic growth high. Do we have to factor that in, and do you even agree with that assessment? >> Yeah, they have benefits when it comes to economic efficiency from a macro perspective, and certainly from a theoretical perspective. I think maybe one of the arguments against that is simply that, despite being relatively less efficient, we do see significant, nonzero amounts of corporate taxation and income taxation, and for many reasons it is efficient for societies to have some level of corporate taxation, right? It allows for certain levels of stability, or capture of the activities of large corporations. And I think most economists would not advocate for zero corporate or capital taxation, even though it is less efficient. And so it really just becomes a balancing of the trade-offs, right? If you see, say, a 100x increase in the gains from capital investment or corporate profits, how much are you willing to increase corporate taxation or capital gains taxation in order to counteract that, right? To stabilize the economy. These are very complicated questions. I certainly don't think I'm qualified to decide or to advocate for specific numbers, but a lot of factors need to be considered at the governmental and international level as well.
>> Also, regarding your concern about a small number of companies growing very large and basically monopolizing the economy: corporate taxes would be well suited to counteract that problem, and I guess if you only had land and consumption taxes, this wouldn't really address the key issue of concentration of economic activity within a small number of companies. >> That's right, and there are significant externalities, I guess, to these large corporations capturing all of the value. For instance, if they have too much economic power, they could rival states in terms of political or social power, right? And you can imagine that governments might not want that. They might want to maintain their hold over the corporations that exist within their borders. So even beyond economic efficiency, there are many positive and negative trade-offs to every form of taxation, and these need to be deeply thought about by the right people in government. >> Yeah. We've been talking a lot about future developments that we're scared of, or that we can foresee going wrong, and so on. I think we should spend a little time here, as a final topic, on how the future could be good, right? If we handle this correctly, what is the type of future that we could build? Can you sketch a positive vision for what you would like the economic future to look like? >> Of course. When I think about the very long term, and I'm not talking about the next few years or even the next decade or two, what I can really see the advent of AI creating, with all of this economic wealth replacing a lot of the toil of economic labor, is something along the lines of the decoupling of economic security and purpose.
I think that for all of human history, your purpose was often interlaced directly with economic security. And in situations in which they were divergent, say with creating music or creating art, it becomes a significant hardship for people to have to prioritize one or the other, right? And so my dream is that we could raise the floor on something like Maslow's hierarchy of needs, where the average person no longer has to worry about their physiological needs in terms of housing, food, shelter, or economic safety, and no longer has to worry about their safety, but can move more towards the things they have independently decided for themselves, given the space and the lack of scarcity, to discover what they find most compelling in terms of meaning and purpose. And if that does look like something akin to our existing jobs today, I fully support that; it is for each of us to decide. Right. >> You have this nice phrase where you say we can have the freedom to cultivate our own garden. I like that phrase. It's a beautiful vision of what might be, and I can understand what you mean by it. >> Yeah. And I believe that losing maybe our role in driving forward the core parts of the economy is a possibility here, and that in many ways it could be a privilege. Because as people, we will still have struggles in many different ways. We will still struggle to find our purpose, to find our love and belonging, to find our self-esteem, to raise a family, to determine the right direction for society. But maybe those are fundamental to the human condition, whereas financial security does not have to be, right? Maybe our physical safety and security, our physiological needs, do not have to be things that we have to worry about.
And so, in my view, if we can move towards a world in which we can create that sort of underlying level of safety, that could create maybe an epoch of rest, or an epoch of clear direction for our own personal journeys. >> You write that we might free up time to build cathedrals, and that that should be understood expansively. What do you mean by cathedrals? >> Yeah, I don't mean cathedrals in a literal sense, but rather investing in projects that are meaningful and purposeful for communities to engage in, right? In the past we have built beautiful churches. We have built beautiful art. I would say that there are people who are creating experiences and events for others, such as festivals, that they find deep meaning and purpose in, but perhaps not financial wealth. I think that developing schools and higher places of education is something that feels deeply meaningful for many people. Coming together and being able to put resources towards the things that feel beneficial for society and, more importantly, for our local communities, the people who we live with and engage with, maybe is a direction we could put more energy and resources towards if we free ourselves from this rat race of needing to secure economic security for each of us. >> Yeah. For listeners who are interested in the ideas we've been talking about here, where should they look? What should they search for? Who should they connect with? >> Of course. We have a platform called the AGI Social Contract, at agisocialcontract.org, where we talk about and explore a lot of these ideas. We'll be continuing to publish over the next year about what some of these structures might look like, and there are many good thinkers out there who have already started to write about these sorts of futures. Daniel Susskind is one of them. Aaron Bastani writes about these ideas in his book Fully Automated Luxury Communism.
There are people sitting at all different points of how far you take these sorts of ideas, ranging from what we need to do in the next 5 years to almost what society could look like in 100 or 200 years. And I do believe that understanding how all of them fit together, and pulling ideas from each of them to try to create a road map towards the right direction, is perhaps one of the most impactful things that we could be working on today. >> Great. Deric, thanks for chatting with me. It's been great. >> Thank you for having me.

Related conversations

- Owain Evans on LLM Psychology · AXRP, 6 Jun 2025
- How AI Hacks Your Brain's Attachment System (with Zak Stein) · Future of Life Institute Podcast, 5 Mar 2026
- Can Machines Be Truly Creative? (with Maya Ackerman) · Future of Life Institute Podcast, 24 Oct 2025
- Tom Davidson on AI-enabled Coups · AXRP, 7 Aug 2025