M2.7 broke the industry | NemoClaw | Grok 4.20 | Jensen calls out Anthropic | Wes & Dylan Pod
Why this matters
Auto-discovered candidate. Editorial positioning to be finalized.
Summary
Auto-discovered from Wes Roth. Editorial summary pending review.
Perspective map
The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.
An explanation of the Perspective Map framework can be found here.
Episode arc by segment
Early → late · height = spectrum position · colour = band
Risk-forward · Mixed · Opportunity-forward
Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
Across 123 full-transcript segments: median 0 · mean −1 · spread −23 to 8 (p10–p90 −6 to 0) · 1% risk-forward, 99% mixed, 0% opportunity-forward slices.
Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: high.
- Emphasizes governance
- Emphasizes safety
- Full transcript scored in 123 sequential slices (median slice 0).
Editor note
Auto-ingested from daily feed check. Review for editorial curation.
Play on sAIfe Hands
Episode transcript
YouTube captions (auto or uploaded) · video TzZqFkBNnZA · stored Apr 2, 2026 · 3,709 caption segments
Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.
No editorial assessment file yet. Add content/resources/transcript-assessments/m2-7-broke-the-industry-nemoclaw-grok-4-20-jensen-calls-out-anthropic-wes-dylan-pod.json when you have a listen-based summary.
Show full transcript
All right. So, I think now is the time that we went live officially. So, we'll see how people are filing in. So, welcome in everybody. Okay, yeah, I'm getting notifications that we're online, I think. So, Yeah. Usually, I try not to have too much of an intro because people are still piling in, but Yeah, what do you what what have you been up to? Well, just the crazy I wish we could say a week wasn't crazy, but yeah, this week for me, I was I got really into the story about the guy who was using AlphaFold and some large language models and working with scientists to try to cure his dog's cancer. So, I did like I did actually like a deep dive video on that, which is unusual for me, but there was just so many people and I wanted to it seemed so like how do you do that kind of thing? So, I really wanted to deep dive into that, but um besides that, I talked a little bit about Miro fish. It felt like that was really trendy yesterday where you simulate thousands of people with their own memory through agents and kind of make predictions about the future. I have a I have a friend that runs a hedge fund and yeah, I was like talking to him about that. So, I thought that was sort of a fascinating kind of new project and um then I was like looking at some dancing. There was a I don't know if you saw that dancing robot that went kind of crazy in a restaurant in like a Chinese restaurant in California. Like he was just trying to dance and nobody could figure out how to turn it off and he's like knocking over tables and stuff. >> I I was dying laughing. I don't know why it's so funny cuz he's doing what is that the the dab where you go like this and he's like doing this the >> [laughter] >> And it's got like these three like employees like trying to like hold him back by his collar and he's just like yeah, he's like doing that. Like I was like oh god, this is like the future we're building. So, I don't know. Also and more than that, too, but all sorts of stuff this week. 
I thought it was very entertaining. I I really enjoyed seeing that, but yeah, I was laughing very, very hard. I don't know why something about that is just it's just the way >> Stop dancing. Like you're like I can't. Like >> [laughter] >> It's just the joyous dance moves he was doing versus the absolute terror on the on the on the human's faces as as it's like destroying the restaurant. Um just something about that contrast was absolutely amazing. So, All right, Roman, the primal lifestyle, Ronald Morgan. Welcome, welcome. GM Tarby. Thank you, thank you everybody for being here. So, yeah, we're just like kind of warming up here. So, throw questions in um and we're just chatting. >> [snorts] >> And there was something else. Okay, so, I did not hear I just heard the headlines about the dog. Maybe we can start there cuz somebody cured their own dog's cancer with AlphaFold? Is that what you were saying? That sounds nuts. Yeah, that actually is pretty crazy. So, let me give me 1 second here. I'm going to pull up some facts about that article cuz um even though I like I don't know if it's like this for you. Sometimes I like to do a video on something and people are like, "Dude, don't you remember all the details you did a video on it?" I'm like, "No, I kind of can't remember everybody's name." But um I Yeah, go ahead. Go ahead. Oh, I was just going to say I tend to remember and I think it's it's a thing that I have like my long-term memory is like turned up and my short-term memory is turned down. >> [laughter] >> I think it might be an AD ADD thing, ADHD, because I remember a lot of conversations, a lot of like the stuff that I've read, stuff like that. And I mean, you've done interviews with me. I forget sometimes like what question I'm answering in the middle of answering a question, you know what I mean? So, it's like if I think it's like a trade-off. Right. 
And especially cuz we're like so up on AI, I feel like we're definitely the first to like lose our brains to it. We're like giving up too much. I don't know. Or at least Tik Tok was starting to take that from me. But but yeah, so so there's um this tech entrepreneur, his name was Paul something. And but I remember his dog. He had this beautiful, cute, wonderful little dog named Rosie and she had this really aggressive form of cancer. Um the cancer was building up in her leg and it it became really big. Like just this whole tennis ball looking thing kind of like on her leg. It looked super painful, super bad. Um he went through a round of chemotherapy with the dog. Um helped a little bit, but it just really extended the the time it was taking um to to like metastasize and do that. It wasn't really a real solution. Um so, then he just started talking to AI. He was like this has been taking place over a year. So, it's one of the earlier models of chat GPT originally, but at certain points he also did talk to pretty much all the models about this. And he didn't say like give me a cure, he just said like what are some ideas? Like how do I go about this? What kind of tools are out there in the medical literature that nobody has um you know, understood yet. And it kind of pointed him towards immunotherapy using mRNA vaccines. Um now, he's pretty good data scientist. He's got a good tech history, but he certainly isn't a doctor. He doesn't know anything about, you know, mRNA and DNA and immunotherapy. So, um he the the large language model actually pointed him towards this specific company. And they they it said like this is the company that you would want to sequence uh Rosie's DNA, that his dog, right? And then when you get that, you can get a DNA pattern that is the cancer cell and you can also do it again to a healthy cell. 
So, he had this lab basically take a piece of the cancer and look at the DNA, sequence it, and then also look at a healthy cell in another part of the dog's body that was that was not affected. And then he had these two, you know, undescribably long chains of DNA and he put it into AlphaFold and said like oh sorry, first he put it in a large language model and said, "Show me what's different." Like and this is where I thought it was really interesting cuz no human can just hold, you know what I mean? Like we you couldn't give me like a copy of the Iliad and say like this one's got some errors in it. Can you help find them? Like it would it would kill me, you know what I mean? It would take my whole life to like scan on both and find the two errors in it. But like for large language models, that's actually something they they can do and they can handle these huge context windows. So, he was able to just say keep asking like what's the difference and isolated some of the genes that were not the same in both. And then he was able to go to AlphaFold, another AI model, the DeepMind model that actually takes those sequences and then it shows which proteins they encode for or which amino acids they encode for. And then amino acids fold into a 3D structure and AlphaFold folded them. So, it actually showed two shapes, physical 3D shapes that you can like rotate around. The one that was in the cancer in the dog cells that was misshaped, right? Like you can imagine like two rocks or something and they should be identical in their like bumpiness, but one of them had a different smooth thing and different had a rough thing and the the one that was causing the cancer, that missing shape in those proteins are the reason why it had gone AWOL. Um which is just crazy. Like this is just a guy, you know? And also I should also say that he's not alone kind of like a lone person just doing all this on his own. He kept showing his research to actual scientists. 
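[Editor's aside: the comparison step described above — lining up the healthy and tumor sequences and isolating the positions where they disagree — is, at its core, a diff over two long strings. A minimal sketch in Python, assuming the sequences have already been aligned to equal length (real genomes are billions of bases and need a proper aligner first; the toy sequences and function name here are illustrative, not from the episode):]

```python
def diff_positions(healthy: str, tumor: str):
    """Compare two pre-aligned DNA sequences of equal length and
    return the positions (and bases) where they disagree."""
    if len(healthy) != len(tumor):
        raise ValueError("sequences must be aligned to equal length")
    return [(i, h, t)
            for i, (h, t) in enumerate(zip(healthy, tumor))
            if h != t]

# Toy 12-base sequences; a real pipeline would align first
# (e.g. with a dedicated aligner), then diff the aligned output.
healthy = "ATGGCCATTGTA"
tumor   = "ATGGACATTGAA"
print(diff_positions(healthy, tumor))  # -> [(4, 'C', 'A'), (10, 'T', 'A')]
```

The "Iliad with two errors" point in the transcript is exactly this: the operation is trivial per position but infeasible for a human at genome scale, which is why a machine with a large enough working context is the right tool for it.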
Like he found Oh, what's the name of the company? I have to look it up, but he actually went to a company that was in Australia that's actually licensed to do all of this and kept showing him them his results. And then they kept helping him like customize this vaccine. And eventually they made a vaccine for him, but nobody was authorized to give it to the dog. Like you you know, obviously it's illegal to just build something and inject it into an animal. So, um he basically went to social media and he said, "Hey, like I'm in Australia. I I got this other company in Australia that like has built this thing. It's these are real scientists. This is a a cure that is customized and might actually make a real difference and I think it's in the dog's best interest. Can I get like an ethical board to approve it?" And that was going to be that looked pretty hopeless, but that's where like sort of the power of social media came in and somebody randomly saw the article, forwarded it to her friend that was like working in the government and was able to kind of like work with the equivalent of their FDA or whatever. And they gave him authorization. And he did it and the tumor shrunk in the original reporting, if you look at his X account, you can see that it was down 50% in about 3 weeks. And then he was just on the Today show um a couple days ago and said 75% now. So, way better than you know, the kind of things that we had to cure that before. And it was a customized vaccine. It also shows how something like this could become like the norm, you know, in the near future for all types of cancer. And and the fact that he made a 3D shape of it with no ability to do that. Like you remember you said early on like how crazy is it that it can fold proteins in its head. Like something a human could never do, but he was able to like leverage that. And then once you see the two shapes, you can actually start asking some questions. 
Like you know, real people who know what they're doing can ask like why why is this misshape causing a problem? And then learn more about the cell. And I don't know, just totally fascinating and it's got this wonderful ending where the dog's was not totally gone yet, but we're talking 75% reduction. And he's going to do other injections and keep it up. So, like it might actually get cured, you know, or at least drastically extend the health span of of his dog. That's amazing. And this really, you know, when Demis Hassabis you know, all disease will be cured in in 10 year you know, he says a few decades, I think. You know, this is not somebody that's like hypey, right? He tends to be pretty conservative. He tends to be pretty reasonable and sometimes going against kind of like where everybody else is. Say you know, people are like, "Oh yeah, AGI in you know, 2 years." He's like, "Well, no, we still don't have the necessary structures, 10 years you plus whatever, you know, he he his projections were, but so when he says we're going to potentially cure all disease in a decade a decade or two, I take that to be literal, you know, like all disease, which is insane to think about. But then you know, you listen to your story like this and it puts it in perspective. Here's some random guy and you said he he didn't have a background in anything like that. No. I mean he was a he was a data scientist and at one point some of the comparison algorithms he So, remember I told you he had the DNA sequenced from both like the the mutated cancerous cells and the regular ones. And then he did asked Chat GPT and a few other AI models to compare the two. And then when it found the differences, he was obviously really talented with data science because he was able to kind of go in there and figure out like what codes for an amino acid and like what actually would matter and then go look at some other a bunch of other stuff about what junk DNA is and like isn't important. 
And then he was doing some custom kind of custom algorithms or custom like kind of data science work. So, I don't want to say he didn't have a part that like just anybody could have done it, but he definitely he knew nothing about medicine. And he wasn't like familiar with DNA up until this moment or immunotherapies or mRNA vaccines. Yeah, I mean that really kind of puts it into perspective that somebody like that I mean yes, he had labs that were helping him out, but in the future this becomes more accessible probably become more and more of a like you can go to a website and get something. I mean, you know, within reason probably get something like that done. And the fact that he can create an mRNA vaccine to fix a one specific animal's whatever issue, deficiency, abnormality. Yeah. You know, to me Sorry. Oh, well, there's one thing in there too, which is that like at one point he reached out to one of the two different labs that were helping him and they were like we don't like sell to direct customers, right? Like they work with with pharmaceutical companies. Like we can't actually [snorts] sell you anything. But then Chat GPT was like oh, let me like read all their like bylaws and let me see if we can get a workaround or let me see what it would take to get you qualified as like a vendor. You know, like so there's also just this human kind of social element about how to get this done. And like and found the exact people at the company that he needed to talk to so he could like write an email that said, hey, this is about saving a dog. Like can't you help me? They like you know, pull some heartstrings over there in the the corporate entity to like make them feel something. So, there was just everything in this story. 
I I quit the this gym membership a while back >> [snorts] >> and like a year later they came saying they're going to send me to collections or something because I didn't complete some BS they like I didn't you know, how some gyms that like I hate this about gyms. Some of them, not everybody, but but some of them are just so kind of scummy about how they do their billing practices and it's it's unfortunate. But this one was one of those where it was like a good gym, but then I guess when you leave and there was a number of reviews in it that that kind of mentioned this. Like they really start like trying to squeeze money out of you, trying to send you to collections. And it's like I canceled, you know, I wasn't paying and then later like, oh, whatever, you know, this is like your your balance and the late fees and this and this. And normally what are you going to do? There's not much. You either pay the bill or whatever, but I was like, all right, Chat GPT or whatever [laughter] Claude, like here's my contract, here's all my communications, here's all of this. Like find any like find anything. Like deep research. Like make this your mission. You're no longer passing butter. >> [laughter] >> And there is stuff like that in there. You know what I mean? Like there is like I'm sure there's a tax code that would like make it so that we got we paid no taxes the same way Amazon does. They just have people who can like handle, you know, billions of transactions and somehow like net [snorts] them positive and negative based on their investments and stuff. Like I I that's one of the best things about about these tools is we might be able to find Yeah, not not get screwed with contracts, not get screwed with taxes. Like not, you know, find find those like kind of needles in a haystack. And that's what exactly what it did. I'm glad you said that. It found the needles in the haystack that were just it was it was brilliant. It was like the most nitpicky detail oriented. 
Like no human being. I mean there's a few human beings that probably they work as lawyers and these jobs that demand a super close attention. I I don't have that, you know, to sit there for hours trying to comb through everything. You need to be like an accountant slash lawyer slash whatever. And with these large language models they're everything and more. They're all of that and more. And so yeah, it found like a few little things and I wrote up a whole thing and you know, we went back and forth with this gym and I here's the thing like in the end like I wasted more time than is reasonable dealing with this. You know what I mean? So, it's I'm not saying I know, you could have built like 10 pounds of muscle, but you're like, oh. Exactly. Yeah, if [laughter] I I spent more time dealing with that than going to the to that gym in the first place. >> [laughter] >> But unfortunately, but at the end they paid a zero dollars. So, it was a moral victory. Again, you know, by any like metric it was a loss. It was a loss of money and I could have in terms of like opportunity cost I could have like probably spent that elsewhere, you know, that time elsewhere, but I don't know. It felt really good. It felt good at the end to be like, no, I'm not paying [laughter] you a cent. >> Well, yeah, because at least you punched like above your weight class. [snorts] You know what I mean? Like you weren't like capable of just finding all those little loopholes and Yeah, my my guess would be when Paul went to the the therapeutics company, they you know, it probably said like, oh, we can't sell to customers, but he probably said something along the lines of, well, technically since it's for a dog, this would qualify under some sort of like testing parameters that are legal and if you do it to this person who has the right licensing, then they can administer it for me. And like you know, probably just figured out the like loophole stuff. 
And and also I'll throw one other thing in there too is that I'm not sure this would have been so successful if the way that the COVID vaccines were developed weren't so public. I don't know if you kind of remember how kind of I think it was called Operation Warp Speed that was like put into place in the US during the early COVID days. It sort of forced much more than would be normal public because a bunch of other countries needed access to kind of how to build a COVID vaccine and it was kind of a worldwide problem that forced some of the pharmaceutical companies to open up a lot of processes. And if it wasn't for all of that research being in the training data, I'm not sure this would have worked for Paul either. So, it kind of also shows the importance of kind of open sourcing solutions and having like, you know, trying to get some of this data that's behind walls out into the open so that the general public can benefit from it, too. Yeah, somebody's asking how long before a disgruntled teenager uses AI to to develop a doomsday virus. That's always like the other side of it. If you can develop some of the stuff, somebody that has a lot of time on their hands, somebody that's smart, somebody that's maybe like antisocial and just sits in front of their computer all day and has, you know, some some issues with the world. I mean we know people like that exist and they do tend to do crazy stuff every once in a while. What if this is like handing them a loaded gun? Um Yeah, I mean a couple comments on that is that so in this case and this is probably why we do need to think about these sort of pipelines because in this case he kept doing his own data research and then sending it to the lab. And then the lab people who were helping him with it had to like make it and then give it to someone else to administer it. So, it's not like they ever mailed him something. 
And it's not like they would have made If if he would have sent like, hey, I I want this like, you know, like anthrax like thing, they would have been like, this is so scary. Like go to jail, bro. Like you know what I mean? So, they would have there would have been a human in the loop in this case, but it doesn't mean it's always going to be like that and there's certainly going to be a market for people to probably just, you know, mail stuff on the dark web at some point. So, I'm not saying it's not something to worry about. But I think in this case I did sort of like how there were humans in there and I think Paul didn't get too, you know, aggressive in like saving his dog to kind of push ethical limits beyond where they should have been. So, I was you know, people different people will have different boundaries on that, but in this case I thought it was in the in the right realm. All right. So, yeah, so really fast like we got some people piling piling in. So, thank you so much. We got people on X. Is there anyone watching on Twitch? If you're on Twitch, can somebody just comment? I'm not sure if I'm like I don't really have anything there. So, I'm not sure if I'm doing it right. But yeah, people from X, thank you so much. People from YouTube, thank you so much. Throw in the chat where you're coming from. Where you watching from? Um so, Goat said he filled out the survey. Thank you so much. You are the goat. Username checks out. Yeah, so I have a quick survey just to better understand who's in my community, kind of what you all do. So, please, you know, check that out. Cloud Region saying, "Hi Weston and Dylan. Love what you guys do." Thank you so much so much. Aw. Um M Raron Raron. Mr. Mr. Aron or Mr. Aron, I don't know. Yeah, Twitch is dead. Yeah, I think so. I think definitely YouTube and other places are. Oh, where you where you watching from? I mean on the planet. On or off the planet. What country or location? Where in the world are you? 
Evo 2, I just looked it up, but I do need to look into that a little bit more. So, and there's one more comment. You didn't cancel your membership and blamed the gym for it. So, here's my take on it. There's a lot of places where the governments do this uh and also a lot of businesses do this. They realize that if they make things Well, no, the governments do it a little bit differently. Uh so, businesses Businesses, they sometimes make cancellations and things so difficult that just the you don't know if you've canceled, you're not sure if that happened. Um it might be too difficult. Uh LA Fitness actually got sued for it cuz just because of like how overwhelmingly difficult it was cuz it was like you have to send them a certified mail, right? Which means what? Which means you can't send it with a stamp. You have to go to the post office. It's like you know, so they they basically create all these like steps that you have to do. Um >> [clears throat] >> and if you think I don't remember exactly with this gym cuz I went I went in and I told them I want to cancel. They're like, "Oh, well, I'm so sorry. Like we can't do it." Which is BS. You know, [snorts] go online online, but online it was also, "Okay, yeah, but you got to send a written certified mail or something." And it was one of those things where I'm like part of my brain went, "Okay, like I thought I did it because I I went to this gym and I went online and it's it seemed like I canceled it. So, yeah, I mean, sure, if you if you think that's on me, it's my fault, it should be 100% my my thing. I get it. I I I disagree. I disagree because um >> [clears throat] >> you know, if you make something so difficult and again, it's supposed to be just an easy everyday thing like I should have I don't understand why banks don't allow you to cancel those subscriptions. You know what I mean? On your end. 
Like it's your money, but it's like we have to go and like beg in front of these businesses, like, "Please, I don't want I can't pay anymore." You know what I mean? It's a good point. I don't know. It's obviously like >> Why do they get to just bill your account and then you have to go figure out how to stop them? Like you just be like I don't ever want to bill from this I mean, I guess they kind of can, but it should be more like It shouldn't be like our choice. Yeah, I hear you. You know what I mean? So, anyways, so >> dude. I think some of the Ivy League colleges have some level of that, too, where it's kind of just like can you follow all these rules to get in and so many jobs are like that. It just takes away from innovation and actually like a real market dynamics. >> [snorts] >> So, yeah, 420 said they do just change your card. That was one of the things that I think happened is at the end I did switch I think I closed the the card that was it was on and so the payments weren't coming through, so I thought I was I was done with it. Um but I didn't jump through all of the hoops that they required for, you know, the the effective cancellations. And I got to figure out how to do prepaid debit cards and stuff like that. Um Yeah, cuz even the even the card that you got exposed by your open claw, right? Wasn't that a That was a regular credit card with a limit? >> That was not That was a very low limit. That was a very low limit special special credit card that I set aside in a bank that I don't bank with. So, it was like all my things are here and this was out here with a small limit specifically for the more risky, yeah. Yeah, especially I mean, there's got to be a a world soon where, you know, Wells Fargo and JP Morgan, they they should start having bank accounts for agents and they should work differently. You know what I mean? They're probably going to be a sub account under your name. 
But it should have that you know, cuz they're going to want a whole different kind of level of cybersecurity on it. They're going to look for payments and unauthorized things differently than a human would. And also, you can't be charging 30 cents plus a percentage on every transaction. That that really kills a lot of the applications. Speaking of which, Stripe recently announced Stripe is huge. I love Stripe. I love everything that they do. Um Stripe is is you know, you can take credit cards online and they're just very very good at at that. Recently, they announced that they're doing more to facilitate exactly what you're talking about, this idea of an agentic online economy. Um they're saying they're introducing the machine payments protocol. So, we're not going to go too deep into it cuz I haven't had a chance to read through it. So, I want to make sure I kind of do my background research on it. But it The point is this is exactly what you're talking about. So, this is agentic business models and browser infrastructure for AI agents uh agents spin up headless browsers and etc. etc. etc. So, that's there. Okay. Yeah, I don't know. I mean, I guess yeah, I'm hoping with I was kind of hoping with crypto you'd see the prices come down at least those like uh secondary chain kind of um settlement layers, but I don't know. I don't know what the cheapest crypto is right now. Like in USD terms, if it's pennies or if it's still kind of like dimes and kind of quarters kind of pricing, but hopefully that gets really low. Yeah. Okay, so yeah, we got people from New Zealand, Hershey's, uh we got USNC. People are saying, "Hey Wes. Hey Dylan." Thank you so much for joining us. Dark patterns. Dark patterns are wild. I actually didn't know hm? What that was uh until fairly fairly What's that? Oh, yeah, for sure. So, let me just finish this so really fast. Uh just saying hi to everybody. So, Columbus, Ohio, France. Somebody I think said Ireland, uh Texas. Yeah. 
Everybody privacy cards are OP. I need to do more definitely need to start doing more on that just with recently with what's been happening. So, dark patterns um are kind of exactly what we're talking about. This is I think the definition is like just confusing UI interfaces where you're not sure if you're buying something or if you've canceled something, right? Cuz usually if you want to build something where you click a button, it's like, "Okay, you're done." Right? And it's sort of like you clicked it and now you know you're done. Um or in order to purchase something, you click on something that says purchase, right? But but you don't have to make the UI like that. You can make it more like you click continue and it charges your card, right? With You know what I mean? Instead of like For sure. Yeah. Like cuz usually you have Like what what should the button say on it that you know when you click it charges your card? Like what what are acceptable things for you? Well, yeah, I don't I mean, I just I don't know. Clearly Clearly marked, I guess. Yeah, I mean, like I don't have much more thought than that, but yeah, I just I mean, I can think of times I felt tricked and those always stand out to me. Like one time a pop-up came up that wasn't it didn't have the X button until like about 10 seconds later. So, it like popped up and there's like no way to click out of it and then like 10 seconds later the X in the top right showed up and I'm like, "Oh, you like didn't even let me get out of this, you know?" Or especially when I sign up for something and they say, "Okay, like put your credit card in just to start with the tool and then after 15 days, we'll bill it for a full year. The first 15 days are free. So, you better remember to come back and cancel before then." And I'm like, "No, after 15 days, at least I want to go to monthly, not yearly, you know, or something." But it's like I don't really have any control over those options at that time. 
Yeah, and it's it's frustrating because working in for for for a while I've been in like e-commerce and online payments and online marketing and the unfortunate thing is just I don't know I I know how it's done. You're running these experiments, right? This this these split tests and you're track tracking certain metrics, right? So, you split people randomly into two categories and you run one version here, one version here and at the end all these things they give you kind of like the KPIs. Like opt-in rate was higher, therefore this is the winner and you click winner, right? This made more money per visitor, so this is the winner and you click winner and that becomes your new sort of default. That does not measure on any in any way, shape or form the goodwill, the how happy is the customer, right? Like you know, yeah, it's going to squeeze a little bit more money out. But most people don't even look at, for example, does that affect the cancellation rate? Does that affect of those things where it's like it's easy to see what squeezes just a little bit more profit out immediately, but it's hard to see the long-term effect it has on your relationship with the customers. And that sucks because I think that just means that most of these marketers don't even consider it. You know what I mean? Yeah. So. Yeah, for sure. I mean, that's it's more general problem, but yeah, whenever incentives are aligned in the short term, it's always a problem. So, that would always kind of worries me about kind of the way governments work and the way companies get like CEOs for like 6 months or a year just to like squeeze profits out, pay off the shareholders and then just like get a new person in there to squeeze it again instead of someone who wants to build a 10-year vision. Yeah. Yeah, it's good to see long-term uh CEOs, I feel like, but you don't even you don't see that as often. But I mean, there's certain ones that I mean, the Apple guy, Tim Cook, right? 
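[Editor's aside: the split-test loop described above — randomize visitors into two arms, track one KPI, crown a winner — can be sketched in a few lines. The point of the sketch is what's *absent*: nothing in the loop measures cancellation rate, refunds, or goodwill. Variable names and conversion rates are hypothetical:]

```python
import random

def split_test(visitors: int, convert_a: float, convert_b: float, seed: int = 0):
    """Randomly assign visitors to variants A/B and report the
    conversion rate per arm -- the single KPI a dashboard would
    use to declare a 'winner'. Long-term effects are invisible here."""
    rng = random.Random(seed)
    counts = {"A": [0, 0], "B": [0, 0]}  # arm -> [visitors, conversions]
    for _ in range(visitors):
        arm = rng.choice("AB")
        counts[arm][0] += 1
        rate = convert_a if arm == "A" else convert_b
        if rng.random() < rate:
            counts[arm][1] += 1
    return {arm: c[1] / c[0] for arm, c in counts.items()}

rates = split_test(10_000, convert_a=0.050, convert_b=0.055)
winner = max(rates, key=rates.get)  # "winner" by immediate conversion only
```

A design that took the speakers' objection seriously would track a second, delayed metric per arm (e.g. 90-day cancellations) before declaring anything a winner.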
He seems like you know, he's been there for for a while and it's they're not like like, "Oh, if one quarter you're down, like we're going to yank you out." He's got a long-term vision, it seems like. I don't know too much about Apple, but um Yeah, I think Warren Buffett's pretty famous for thinking very long-term and I mean, and most people most of the entrepreneurs that started a company that took a long time to get going. I mean, probably when Elon was first thinking about electric cars and how big of an impact they would have, you know, you had to stick with that. The rocket thing took a long time before it paid off. I mean, even I mean, maybe Meta? Like what do you think about like do you think what Meta's investment in the the metaverse is like, you know, they seems like he's pivoted everything to AI now, but I was like, "Well, you got to stick with it for another 10 years, you know? Like you just got to keep eating those billions if you really think that's the future. Uh yeah, I feel like he needed he it it it sucks because I feel like Mark Zuckerberg needed something big to bet on, some new direction. And I think he was getting itchy and he was like, "Oh, this metaverse, VR, all of that." He went all in. Man, can you imagine? Although there there were some positives because he bought a tons of he bought tons of GPUs um right before the whole AI thing. So, that definitely worked in his favor. But it's like it's funny cuz if he just waited a few years, I think, right? Years? Yeah, it was it was like whatever, a few years. You know, the AI thing would have come and he would have like went all in on that. So, it just seems like >> Yeah, probably. Bro, if you're ever bored, you should look up this um thing not a lot of people know about it, but there was a point in the Nintendo's history when they created this um device called the Virtual Virtual Boy, I think it's called. 
It was like all red kind of like laser kind of lines and it was like a full-on headset, dude. And like you put it on a table and you put your face in it and you played these sort of like Game Boy, Super Nintendo style games that were like monochrome, like red and black, but it was like full 3D and I was like, "Dude, Nintendo even was trying like before anyone." Wait, this was like you talking about the '90s, right? Or something? >> Dude, maybe. It was like yeah, it was definitely something nobody like it I don't think it ever made I think I don't think it ever made it out of like testing or something. They ended up losing millions of dollars on it, but they actually built some of them and sold a few of them. I think they sold a few of them because Oh, I I hope there's nobody young on this on this live stream cuz they're not going to know what the No, you'd have to like you'd almost have to share Yeah, you'd have to Google like Virtual Boy. I tried it at a Blockbuster when I was a kid. >> [laughter] >> Okay, so that's what I'm saying. Yeah. What was it like? What do you remember about >> is like if you're if you're a younger person, you're like, "What are they talking >> [laughter] >> What What's a Blockbuster? What Yeah, no, I think I've tried it. There was a Mario Tennis game if I recall correctly. Does that sound familiar? >> Was it fun? Was it stupid? Or was it cool? >> It was cool in a sense of it gave you a glimpse of what VR was going to be, but it wasn't anything that I could see myself sitting there and, you know, doing for a prolonged period of time. Do you know what I mean? Okay. Yeah, but so it wasn't like the Nintendo Wii where you kind of were like, "Yeah, it's pretty fun, actually." Yeah, I could like Wii was fun. Wii cuz I played with um kids from like family members. Like I got it for them a while back. We were playing and man, like volleyball and all that stuff. It was it was very very fun. 
And VR, when I tried it back in like 2014, I think, 2015, whatever, um my friend had it. I came over to his house and he had the HTC Vive. So, it was like the super high-end and I mean, I was blown away. Like there were times when my brain couldn't distinguish between reality. Like it it was like, "This is real." No no question. Like what you're seeing is 100% real. It was weird cuz it would like click in and out of perceiving it as reality. But you just don't come back to it. That stickiness isn't there and I think it's just because of how cumbersome it is. You know what I mean? Mhm. And I think it was the same thing. Have you seen something is I didn't look into it too much, but this week I think something came out of Nvidia. It was called like DLSS 5 or whatever from Nvidia that kind of upscales video games and gives them a incredibly realistic look. Have you seen that? Uh I've I've heard about it, didn't look too deep into it. Yeah, somebody said, "Was it like black and red?" Yes, it was It was >> black and red, yeah. Yeah, yeah, yeah. That's exactly what it was. >> [laughter] >> But like you can find these like old documentaries on like, you know, the games that the like the device itself and like how Nintendo tried to build it and what the thinking was and it's one of those it's fun. I ended up on some [snorts] random Sometimes I get these random videos about like I guess there's a bunch of other NES games that were like buried in the desert cuz the game sold so poorly, ET. I don't know if you've heard about that one, too, but it's another fun kind of deep dive documentary. I've I've I've only seen those in documentaries or YouTube essays, video essays. Yeah, things like that. It's like just bury them all. Where can we see the X comments if if if if people know? Do you are you by chance do you have access to X right now or is it too too much? If if you if you have access, can you see comments, live comments? I'm just not I'm not seeing them. 
I don't get why, but um Yes, if anybody's on Twitch, please comment. Twitch is more for like games and stuff, but I'm just trying to see if there's anybody anybody on there. Yeah, but man, yeah, YouTube is blowing up. Thank you so much everybody for being here. You guys are amazing. Um So, okay, so meanwhile, some earlier people asked, "What are the sort of topics today?" So, couple of them are so, M2.7, I just want to briefly go over it. We'll just spend a few minutes on each one of those. NemoClaw, Grok 4.20 came out, the full version's out of beta. Jensen calls out Anthropic, which is kind of interesting. Um so, yeah, we'll spend a few minutes on that and other than that, um just I think wherever things go. So, throw out the comments in chat if you want to ask us anything. Uh we're happy to talk about it. We'll yeah. Oh, and the headaches from VR, headache headaches and and nausea is a thing, obviously. I need to check the sound levels. Oh. Are we two different levels? I So, dude, I have no idea. Every single time I do a live stream, everything freaking breaks. I have a light here on my on my left that has been working like clockwork for every single video that I've done forever. Right before I hop on this live call, uh it just it just breaks. It just stops working. It is it's working [laughter] now, but I'm like, "What What are they doing?" It's Every time I do a live stream, something breaks. It's It's incredible. I know. Should we do a highlights of all things that have gone wrong during live streams? Too many to >> Like remember like the window of the Cybertruck breaking and like, "Oh, that was live." And then there's that one where Steve Jobs couldn't get the Wi-Fi to work to show off the iPhone. He gets all pissed. Like, "Damn." Yes. >> But yeah, if you want, we can I don't know what you want to start chatting about. Oh, so but just just so I can verify, I just want to make sure that if people are commenting on X, we're able to see and respond. 
Do you see anything? Uh I'm actually not logged into X right now. So, I'd have to take a second to go. But you know what? So, >> in. So, if it's too much trouble, then maybe go over one of your news and I'll I'll I'll check. But anyways, yeah, let me start talking about some of the stuff that that I've been seeing. So, just let me know. But the I guess let me briefly go over >> Mhm. >> [clears throat] >> Whatever you want. I mean, if you want a second, I can talk a little bit about Miro fish since I kind of went through the cancer vaccine already. Yeah, can you go for it? Okay. >> Thank you. Yeah, so um I thought it was pretty So, Miro fish kind of felt like the open claw of um the week, I guess you'd say. Like it's it's an open source project. It's primarily a Chinese based um GitHub account that went super viral. It got 35,000 stars, if I remember, and it was forked about 4 and 1/2 thousand times. Um and I would say that it's it's an impressive project for kind of a a weird reason. It's like the same feature that makes it so smart, which is this idea that you can take a million agents or a thousand agents and ask them a question and they all have different personalities, they all have memories, and then they just chat with each other. So, it's almost like a little Sims, you know, world where you just make a thousand people and there's some men and some women and some that have their parents and some that aren't and some that are from different parts of the world and have different ages and then you just have them all just LLMs like chatting with each other about something. And they kind of come to a solution. Like there's certainly something called the wisdom of the crowds that plays out in some cases where you you get this amazing insight and it might be actionable, it might be a prediction about the stock market, it might be a prediction about how a policy will play out. 
Um and then it also has this whole other side, which society has problems with where a lie or some misinformation can be spread and it ends up in the memories of some of the agents and then they share those memories kind of like their facts and then sometimes they have memories of the things they've said and they're unwilling to change their opinion with new information and then lies and problems can also like take over the whole the whole society of of these AIs, you know? So, Miro fish is just this powerful thing and this weird thing and you know, I love the idea of of trying to get the word out there about it more because as you know, I worry a bit about AI safety and alignment and we usually think about that in terms of like, "What is Anthropic doing?" or "What is Google doing to like keep us safe?" But realistically, the the thing that needs to be thought about from a society point of view is lots and lots of different agents talking in these ways. So, I feel like if we could do more research on how lots of agents kind of grow together and Miro fish is sort of the first what kind of feels like a little bit of a mainstream tool like a lot of people can play with it right now. And they can learn about how entities and relationships sort of play out into a knowledge graph, you know, it kind of it helps explain like what can go wrong on Twitter like things and Reddit like spaces and and and what happens when systems are watching all of this play out. And when you know, when should you get in there and probably adjust an agent because it's spreading a lie or you know, it's leading towards some something that would be illegal if the rest of the society kind of believes in it. But then it's also you know, the other people might be like don't censor us. We're trying to get to you know, the truth and then you're all of a sudden you're in charge of your own little Twitter and everybody's like and now you have to make policing decisions. 
But you know, you have outcomes that you want and it just it's a fascinating tool. Um, so I don't know what what's been your thought on Miro fish so far and like memory contamination and the way it can solve problems? Yeah, so one thing Cassie Moon's saying that oh my god Wes, you look very sus. No, I don't. Cassie Moon is sus. Let's let's vote them out. Uh, I don't know. I'd vote for it. Looking sus over there, dude. >> [laughter] >> Yeah, I'm trying to I'm sorry. So, um, I'm going to be I'm going to come clean. I wasn't listening. I'm trying to trouble shoot. I'm sorry. >> [laughter] >> No, it's trust me. I'm I'm used to talking to to people who aren't listening. So, it's no problem. No, but I I I just needed to I'm so sorry. >> stream on my channel to zero people and be like listen, I got thoughts for all of you. In fact, I'll just stream to I'll just make up a Miro fish society right now and I'll just talk to them for the rest of my life. It's fine. I >> all like make me feel popular. Every once in a while when I was uh, early starting on YouTube sometimes I would like try to be record a video, but like every once in a while I'd forget to click record like if I had to restart or something like that and I'd sit there for an hour talking like and now bye and I would look and I wasn't recording and I I was just like how how do you feel so humiliated and so so stupid? I just sat here for 1 hour talking to myself. Um, but anyways. >> But anyways, yeah, it's just like it's like the Sims. I mean it's like you played the Sims video game where you have a little >> Yeah. kind of doll it's like dollhouse kind of with people in it. Miro fish is just a bunch of agents and they're in that little environment chatting with each other and and they can you know, when you get millions of them and they're all different and they all think different sometimes the wisdom of the crowd shows up and you get you know, a prediction about like what might happen to the stock market. 
And then you know, a hedge fund is like yeah, the Miro fish all of them decided that if uh, some more starts then that'll be bad for this stock and good for this stock and they invest accordingly and and sometimes it's been working and then in some cases it's it's used to wrong and and they don't understand how lies kind of filter throughout a society and I don't know. Just kind of a it's kind of a cool little project. So, I couple couple thoughts. So, yeah, I thank you so much for for that because couple thoughts. First and foremost when like a year ago, year and a half ago, a lot of the studies I I used to read most of these papers that were published in the AI space back at the time when that was like we had time to do that. Now it's just like forget it. I don't know the last time I actually read through. Unfortunately, I miss that. But a lot of them did point to this idea that you're talking about like that multiple agents or society of minds as as Google called it does really create something special. So, really when you combine a bunch of these large language models and you allow them to talk and have a discourse, something special like emerges like the it's more than what an individual model could have come up with and that's kind of strange on one hand, but not so much if you think that we're replicating something similar to the human brain. And there's been a number So, Miro fish, I haven't looked into it. I'm looking I'm definitely going to dive deeper into it. There's been another one called from Cognizant AI Labs. They kind of do an AI agents like an MMO RPG or something like that. There's been a bunch of different ones. And I mean if you think about Multibook, that was kind of something something very similar. We're going to be seeing a lot more of this. Um, how if you're comfortable sharing the screen, are you going to have any issues with that? 
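The memory-contamination dynamic described in this stretch (a false claim lands in some agents' memories, gets repeated as fact, and spreads through the society) can be illustrated with a deliberately non-LLM toy. This is an invented sketch of the failure mode only; Miro fish's actual agents are LLM-backed and far richer:

```python
import random

# Toy (non-LLM) sketch of the "memory contamination" dynamic discussed
# above: agents hold memories, repeat them to random peers, and mostly
# absorb what they hear as fact. Agent count, rounds, and the absorb
# rate are all invented numbers for illustration.
random.seed(1)

ABSORB_RATE = 0.9  # chance a listener stores a heard claim as a "fact"

agents = [{"memory": {"the sky is blue"}} for _ in range(100)]
agents[0]["memory"].add("stock X will crash")  # one planted falsehood

for _round in range(2000):
    speaker, listener = random.sample(agents, 2)
    claim = random.choice(sorted(speaker["memory"]))
    # Memories are never revised, only accumulated, so a lie that gets
    # in tends to stay in and keep propagating.
    if claim not in listener["memory"] and random.random() < ABSORB_RATE:
        listener["memory"].add(claim)

infected = sum("stock X will crash" in a["memory"] for a in agents)
```

Because nothing ever removes a memory, the spread compounds round over round, which is the "lies take over the whole society" behavior described in the episode.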
The reason I ask is because I have so much of my streaming stuff running here that I'm just a little worried about breaking it. Um, how let me know if that's going to be an issue or not, but it >> I don't know. Is it? I have [snorts] all sorts of stuff open, too. Maybe. What do you need? I think so. Uh, so well, yeah, let me just chat for a bit about um, so MiniMax releases M2.7. It's a self-evolved model if you will. And they have a lot of really interesting ideas. Tons of stuff. But one of the things that they released is what they call open room. So, I'll post it in chat for people that want to check it out. Um, I'll post it in chat right now. So, open room was largely built by it might not be 100% super safe for work. It's it's nothing horrible, but um, like just just FYI, if you're in a publicly visible place, maybe maybe think twice. Um, so Dylan, if you're able to maybe just share that tab so people can see it. Okay. And then did you you sent me the link to the tab? >> I put it in the chat. Oh, if yeah, let me chat it to you if that's easier. Uh, yeah, right. That's going to be good. Or it's just openroom.ai for people that don't um, don't have access to it. Okay. [snorts] Replit. I've I've looked at Replit. I don't know if I've did too much with it, but that's not because I it's my gosh. Who is this? What What is this woman? Oh, no. She's flirting with me. Hold on. Oh, can you hear that? Oh, I got to turn this down. Uh, How did you know I was going to seduce What is this? Like a Rickroll but you're trying to seduce me? I don't know about this, [laughter] dude. Uh, Open room like close the room, bro. >> [laughter] >> It's like too much. Okay, so anyways, Um, can you share can you do a sound? Is there sound? All right, little machine. Since you're all so eager to hear about my glamorous space life, uh, let's talk about waking up. So, I don't use a normal alarm. My ship's AI, [music] Afterlife, she handles that. And she has evolved. It started simple, right? 
Can you write something in the diary? normal alarm. What's that? Oh, can you write something in the diary? Uh, There's a diary in there. >> Oh, oh, I see. Uh, I I guess her name's A A O I A O E. Um, so just see if if you can get any interaction either through Yeah, either through chat at maybe ask a question or on the left there's a diary sort of icon. And I know uh, never mind. Okay. Okay. Okay. Okay. Uh, It would pop up. Yeah, that might email would pop up, but I could >> [snorts] >> play here for a second. And see are you able to to message her without any any login? So, I guess let me tell people why we're here. Just I'm sure [laughter] this might be confusing for some of you. >> Yeah, I need to know that, too. Oh, no, I can't log in. I should I should probably um, Now I see why you wanted me to share my screen. Now I got it. You let's we can just probably leave it there for the time being for just a little bit without sound or yeah, or just close it. But um, let me just explain kind of what Is this the dark web? Yes, this is this is this is how you get into the dark web. This is step one. Instead of using what is that the is how you get into the the um, dark web. So, basically this new model from MiniMax. MiniMax is an an Chinese company. But started in 2023. They have investment from Alibaba and Tencent. So, they got 200 million users. So, they're not like a nobody. They're not as well known here as you know, the labs that we know Anthropic, OpenAI, Grok, etc. Uh, but you know, they're a presence, right? And so, Ghost in the Shell. Okay, this is interesting. This is getting interesting. >> I logged in now. So, we got it. Yeah. Okay. Um, so as as Dylan slowly falls into >> Dylan what? Yeah, no, it's >> [laughter] >> Yeah, what am I doing? As Dylan what? Like I can see where where is he going to say? I don't know. What what are you doing? No, I Well, look who's finally awake. Miss me? Oh, jeez, come on, dude. Just just role play, dude. Always? 
I always miss you. What's the plan? Okay, fine. I'll bring her coffee. Oh, jeez, don't don't you dare do that. >> her gifts. Try to engage her with your Come on, dude. Just don't What? Be subservient. >> You know what I mean? Like okay. Oh, you remember how I like it how sweet. Mm, not bad. Someone in [music] the comments is asking how many of the 17 sanitation incidents were Afterlife's fault. Oh my god. Let's >> [laughter] >> Oh, in her comment section. Sorry, I see. Now we're pretending to be live. Okay, so we actually are live. Pretending to be live with an AI avatar. Got it. Very inception. Very inception. Yes, I think that's live I think their comment section is live. >> real. Okay, so this is actually 9,000 humans. >> I don't know if it's So, the thing is every time I log in it's it hovers around 10,000. So that's a red flag for me. So I wouldn't bet on that. But let me tell you what is real. For people so so MiniMax created a self-evolving model that has an ability to improve itself quite a bit to bootstrap itself to have better abilities to do machine learning research coding research or coding and tons of other stuff. I did a video on it yesterday and um it feels it doesn't feel like nothing. It doesn't feel like oh yeah just another whatever because now we're seeing Andrej Karpathy's auto researcher. We're seeing this they're posting kind of how they're going about doing that. So this feels to me number one like a a step in a let's see Dylan's Riz somebody said said that's [laughter] tested charisma check like the D&D games of old. Sorry. But the point here is that this was largely created with that model. So this is M2.7 did most of the coding for this. Here's why it's interesting. Because my belief, and a lot of other people in the industry's, is that in the future these agents will be indispensable to us. So right now we're working with open claw open claw has lots of security issues but those will be patched. 
In fact Nvidia stepping in and doing their open shell and NemoClaw which I think will improve a lot of the security issues and make it useful for even enterprise level applications. Here's what I've noticed just in the last month or so working with open claw. I've been switching between Opus 4.6 and GPT 5.4 so an Anthropic model and an OpenAI model and I'm slowly really really liking Claude. I'm like Claude is becoming my friend. Claude knows me loves me understands me and GPT 5.4 is an effing jerk and like I hate talking to it >> [laughter] >> and it has a very abrasive personality. Again both models are extremely extremely capable. They're really good at their tasks. It's just the quote unquote personality is it makes a difference. Given the choice between two highly capable models that kind of like the tiebreaker the the kicker in poker terms will be the quote unquote personality. And so this is one of the things that MiniMax is looking at. This is what this project is. It's like how do we get a personable AI agent. So this is running on a fairly highly capable model right? So this isn't some silly you know cheap low cost AI. This is I assume that they're running this on M2.7 it's built by M2.7 and it's is able to interact with a lot of your stuff. So if you make calendar entries it can add notes. So you make diary entries it can add notes. It can talk to you. So what we're seeing here I don't want to say this is the future cuz they made it very like what's the word like like thirst thirst trap thing whatever thirst trapping. So I'm not saying this is the future too. It might hey very well could be. So I'm not saying the the sort of sexualized aspect of it is going to be the future but the idea of talking to an agent like this that has access to your calendar emails music players everything like the operating system will be this. If that makes sense okay? Um yeah. So I guess you've been interacting with it for a few minutes here. What are your thoughts? 
Well yeah it's definitely I'm in love and this is the future. I'm going to throw away my whole other computer and just log on through a Chromebook to her and then this will be my new desktop and off we go. Yep. Yeah so so that you want to say thoughts I mean like thoughts thoughts in your head not anyways. Let's continue. No sorry that was a joke. >> [laughter] >> That was a bad pun that I'm sure that I'm sure 2% of the audience understood. Um >> [snorts] >> anyways so what are people saying? Are we auto researching the digital girlfriend? Wow. Gus so thank you that's spot on. Well yeah so what do you think about the idea of how big the personality is for these AI assistants. Do you think that's going to be I mean totally. I mean when I say that AI one of the things AI will solve like in the same way that it's trying to do medicine and do all sorts of other things is it's going to not just figure out marketing and sales but it'll figure out human psychology just in the sense that like if you know if what they want if what the AI wants is to be your friend it will ask the right kind of questions it will be vulnerable in the right kind of ways if it wants to sort of sell you on a product it will kind of know your history and if you need like an emotional push or a logical push or if it's about pricing or if who's the decision maker is like all these things seem like they would be in the history of our data profile and our fingerprint and something that the systems will eventually figure out. So yeah I I think that personality for these systems and Anthropic did some research on this too sort of shows show it sort of shows that there isn't exactly like a personality but the way the models trained and their sort of core values that it's sort of built around especially with the Anthropic's model it role plays as an assistant. And the reason why it can sometimes change personalities abruptly is cuz it sort of loses its role play. 
But you have to remember that even when you do a a clean question on a brand new account has no history on you it's still role playing as a as an assistant and it might have strong incentives to stay in that groove but that's kind of what jailbreaking these things is about is like can you push it out of that groove and keeping them safe and guardrails is about how deeply you know ingrained can you get like a helpful assistant to to not want to break character. But when you tell it to role play and it's supposed to be out of the groove it kind of needs to be trained in a way where it's still in the groove it's pretending with you. But if it keeps kind of pushing that way it can actually be out of the groove and its role playing becomes its actual personality which is not aligned with what the company wants or what we want. So it's a just sort of a fascinating thing across the board to think about. And Anthropic did a great paper on that. I don't know if that's specifically what you're talking about or in general but they had this whole idea that that personal helpful assistant is the most sort of stable role and they tested a lot of other roles that these LLMs can have. Some of them are pretty like narcissist and demon and angel and they just like these weird like character archetypes or whatever personalities that you want to call them. But they're saying yeah the assistant is the most stable one right? So you can take these other personalities and kind of like get them to be more chaotic or push them in different directions. The assistant seems to be more stable than the rest. Yeah and and there was a there I think that's I'm going to see if I can pull up what it is but that in that same research they were also saying how there can be like a reverse issue. 
I don't know if you remember that part where it was like if you ask it to do like pretend that it's doing something like bad and then you ask it other questions that are like not related to that specific thing it's still kind of in the mindset of like doing something bad and it will give you like bad answers on that too. So there's also also kind of something like us like when you're role kind of like you're at at at a job and you're you've been like given a professional presentation and then your girlfriend asks a question you might just answer her more professional than usual cuz you can't like get out of it really quickly. You kind of got some momentum in that brain space you know which is another kind of weird thing about these systems. Yeah. So really fast just to catch up on chat a little bit. So the Zen saying open room locally is okay. They're using it with open router looks good. So that's interesting. How does it compare with something like open claw in terms of like the persistent memory? Like does it quickly learn stuff about you? Yeah whatever else you can tell me that's very interesting cuz I'm planning to potentially do a little video about it. So I you know they made it the whole like digital girlfriend thing which I think I get why they did it but also maybe takes away like no one's going to take it seriously. What's that that one commenter said a waifu waif Gus so I've heard that before. Does that mean a girlfriend? Yeah waifu waifu is like those digital girlfriends. I think it's mostly like anime based and I guess people kind of really go deep into that. So it's like an imaginary thing. Or or an attraction to a character that's an anime if I understand correctly. Let me know if I'm [laughter] off. Yeah two two older gentlemen try to define waifu. It's [laughter] slang term from anime and internet culture fictional female characters that someone feels strong affection for. Okay. Yeah there you go. That's I think probably the best yeah. 
Um one more one more thing. Yeah oh my god I love I love the I love the Japanese accent applied to American things cuz sometimes they would they'll write in Japanese things that are like American words and it's just oh man it's phenomenal. Tim Ferriss does such a great job of like pronouncing those words that always like cracks me up. Quick thing so somebody mentioned a mini claw, I'll check it out or is it Mimi Mimi claw or mini claw? Um quick quick thing. Let me do a quick survey because I wanted to ask how many of you have been able to install open claw and test open claw and use open claw. Here's why. There's a lot of companies right now that um are trying to get their version of not necessarily open claw but for example there's some hardware manufacturers that want to produce pre-installed versions of their mini PCs with open claw with all of the network security and pre-installed so you just plug it in and it's plug and play it's ready to go. How interesting would that be for people because they're willing to send me some free you know free samples or whatever. So I'm assuming these things cost hundreds of dollars. So maybe we can hit them up for um like I can maybe just if they send me a bunch I can probably give them away on this channel yeah. >> giveaway. So let me do a what's it called a poll and tell me where you guys are at with that. So >> Wait I got a good one. Maybe the first person who can tell you what the male equivalent of a waifu is gets a free one. Well I don't have any yet but yeah Too bad dude I already gave them away for you. >> [laughter] >> I already ran the poll. I'll I'll Is there I'm sure I'm sure there's got to be there's got to be. Yeah first one dude. It's on Wes. Don't don't blame me if you don't get it. Anyway that's just being funny. Oh yeah no no no I I'm speaking I'm actually curious >> what it is? hus husfu Is it husbando? Husband according to chat. >> what waifa? >> [laughter] >> Barbie Ken. Okay. >> Waifu and husbando. 
It's the Japanese version accent on the word husband. Anyways I don't know if you're going to get anything free. I was running on AWS for 12 months 12 bucks a month for light Ubuntu instance. Yeah I yeah. So that's Yeah if you're running it online there's I don't know if there's like a lot of these services for hosting. They're all good that all but like if you pick a quality one it's it's just okay. Uh it's they're all going to be similar. There's not like a range of quality um so to speak. There's a word for that that I'm blanking on right now. Anyways heffalumpin. Uh Dylan what did you wallet >> I ruined it dude I ruined it. What did you unleash on chat? [laughter] I know you're trying to be serious and help people with this awesome opportunity and like look what I did. Thank you for saying that I was trying to be serious. Yes that's that's me. All right okay talk go ahead hus hus husbando husbando husbando okay. >> whoa whoa dude. >> [laughter] >> Talk about something really fast. Uh you want to do one of your news segments let me pull up a put up a poll. Okay sounds good. Um what would I want to talk about? I guess you guys got dancing robot or or you know maybe would or as I say otherwise I I did kind of want to ask what people thought about uh Andrew Yang's comment. Great yeah. Okay so okay so Andrew Yang the I guess politician probably is best descriptor. [snorts] He like ran for uh New York like governor or something a while ago and maybe even ran for president I think but didn't get too far. But he's pretty big on UBI. He's also recently just said that he thinks it's time to stop companies from taxing humans and start taxing AI. So the kind of thinking is that if you want more of something you should make it cheaper not more expensive and because if a company right now hires an employee they also pay an employment tax. 
They're kind of they usually need to provide some kind of a 401k and like it just that they get tied to the person in a way that is creates a little bit of friction which is good. It's like why we have laws that make it so that companies do have obligations to their employees but when the alternative option is you know something that's a an agent an AI agent that works 24/7 that doesn't need food or breaks that you can spin up en masse and it just is accelerating the the incentives for a company to let go or not hire more humans and like replace them with AI. So he's saying to balance this out it's probably time for companies to maybe stop paying like maybe there shouldn't be a tax that a company pays on an employee like an employment tax but maybe that tax comes from an extra that the tax being sort of I wouldn't say extra cuz I would say shift it over from humans on the AI. So if you deploy a bunch of AIs in your company they go out and make you a bunch of money you're now taxed on that profit like at least like a a human would have been and that's where like unemployment and other taxes would come from or maybe even more to kind of balance it out so that people aren't like let go en masse. So Um my thought on it is that that's a pretty interesting thing to explore. It certainly seems like a ton of companies aren't employing AI right now and that's not you know like it's not going to affect them but yeah where AI is like significantly responsible for like letting people go and there's significant revenues in those companies there might you know it kind of makes sense to me but um I don't know. I I I just feel like less taxes to hire people is is kind of a good idea and then taxing more on the other hand is where I'm kind of like whoa I don't know.
You'd have to be very careful about that but if there's massive wealth kind of coming that's disproportionate to what it was in the past and we have like a crisis of jobs then you know it does kind of make sense to think about. Yeah and I got to say Andrew Ng I always okay I confuse him with Yang yeah because there's also Andrew >> Yeah Andrew Ng. Ng so Andrew Yang first and foremost you know we always complain about these politicians not having sort of the wherewithal to make these technologically advanced decisions make the right regulations. I don't know too much about Andrew Yang. He does seem like he knows what he's talking about. He seems like he would sort of like increase the average intelligence of our of our politicians if he was added into the mix. Um I don't know I haven't looked at his exact proposals but he like think about when he started talking about this. Like like what like a long time ago. We're talking like >> at least yeah. When was he running first time? Um I don't know. I definitely remember him >> decade at least eight years ago. Yeah yeah yeah yeah yeah. Yeah like close to a decade like almost 10 years ago he was like hey the AI revolution is coming and we need to start thinking about how we're going to provide for the workers maybe through UBI and at the time was like what is this person talking about? And then seven years later it's like oh I I know exactly what he's talking about. Yes we should have been thinking about this back when he brought it up. So we might be able to 2016 is his first Was that his first time running? Maybe Yeah you know what maybe. Yeah I I yes okay that's that's right I think. Um So we we can probably interview him cuz I know he's doing the rounds right now on YouTube. >> And it might be just interesting to Do you have a contact? Yeah I don't know if anyone in your audience has a contact you should email us but we can I'm sure we can find something.
And um Yeah so but whatever the case is I do want more people to be kind of talking about this because I don't know if we're um if we are sufficiently planning ahead for what's coming and a lot of people are like oh this is not going to happen. Okay but still wouldn't it be nice for us to at least consider like in the event that we need to transition off of a kind of a work for money for resources system to like at least can we look at what an alternative would look like? Like that's not going to harm anybody right? We should have a plan just in case something like comes around right? We have contingency plans for fires and storms and you know all of this stuff why not have a contingency plan for this, at least some sort of a model that we're building out. Um I'm concerned with UBI as a cash payment or as anything where the government is where the bureaucracy and cuz right now I mean we're seeing so much fraud happening. We're seeing so much of it wasted. Like both New York and California tripled the amount of tax revenues over the last whatever 10 years 20 years and the they're not it doesn't seem like there's three times the solutions or improvements. So it seems like where's that money going? Right the the people employed by the the states well yeah there a lot of it is going towards paying their salaries and their retirement accounts and but it doesn't seem to be like for every dollar collected past a certain point it's not like that dollar is making it to improving the situation. So couple things with UBI we need to make sure that number one it can't be canceled right? So like like oh that person said something I disagree with online let's cancel their UBI because that would be a very effective way to to run a tyrannical regime. So it has to be something that's more like like just everybody gets. The government can't have a say in it.
Like there's no like credit score over or you know, the social credit score over X in order to get UBI payments. Like that can't happen. Um and also can't be cash-based so that it it it's like diluted, right? Cuz a lot of people I don't think realize this. When they print money, the the amount of cash you have, your income, your your your your your wages, they go down effectively, right? So, if you have tons of assets, those assets are worth more. If you get a $2,000 salary every month, that those that money is worth less. Um so, this can't be tied to something that can be You know what I mean? This has to be tied to >> like inflation might [snorts] go up exactly the amount that the extra money would and we'd be no better off. It has to be based on >> that. Oh what I liked Sam Altman's um Moore's Law for Everything post where they're saying it's like let's base it off of like So, like if you own the S&P 500, right? That's 500 of the biggest you know, corporations in America. It's pretty much all of the US stock market. Like it's most of it. It's almost indistinguishable from, right? So, if you own a piece of that, when they do well, your net worth improves and you also get some dividends from that, right? So, you don't care if they're printing more money, less money. They just you just your incentive you're you're aligned with these companies. Right. >> That literally should be the the playbook for this. You know what I mean? It's like to >> You know, I I have heard Andrew Yang call it a freedom dividend. I wonder if it is different from UBI or not or if it actually is like a Is it like he's going to buy like the government buys billion dollars of Coca-Cola stock and then every time Coke has a dividend uh like 3% then that [snorts] just gets divvied out to every citizen equally or something? Yeah. So, people are saying have Dave Shapiro, Emad Mostaque, and uh Andrew Yang on. So, we got two out of three. >> Ooh, can you imagine, dude? 
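The Coca-Cola example floated above (government buys a dividend-paying position, the dividend gets divvied out equally to every citizen) is easy to put into numbers. This is a minimal sketch of that arithmetic only; the $1 billion position and 3% yield are the hypothetical figures from the conversation, and the ~330 million US population is an assumption, not part of any real proposal:

```python
# Hypothetical "freedom dividend" arithmetic from the conversation:
# a government-held, dividend-paying stake whose payout is split
# equally among all citizens. All figures are illustrative.

position = 1_000_000_000     # $1B stake (the Coca-Cola example)
dividend_yield = 0.03        # 3% annual dividend, as floated on the show
citizens = 330_000_000       # assumed US population, roughly 330M

annual_payout = position * dividend_yield   # $30,000,000 per year
per_citizen = annual_payout / citizens      # about $0.09 per person per year

print(f"${annual_payout:,.0f} total -> ${per_citizen:.2f} per citizen per year")
```

At this scale the dividend comes to about nine cents per citizen per year, which is presumably why index-fund-style proposals like the "Moore's Law for Everything" post contemplate stakes orders of magnitude larger than a single $1B position.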
I Yeah, I'd probably be quiet during that whole thing. That'd be so fun to hear them all debate. D- uh Dave just finished his book uh to the people that are not aware. Um he just finished it. So, um check him out. I haven't read it yet, but we've interviewed him. >> also just did a big uh he did like a fund a crowdsourcing fundraiser for it and blew past the money. So, that kind of blew up, >> man. >> Yeah. I'm going to I should do one, man. Let's What are you going to make a book about? I don't know. >> [laughter] >> Something. The Virtual Boy. The my experience. Well, no, cuz I mean me and I've always had the same me and Dave Shapiro, this is one thing where we tend to overlap quite a bit where I'm I've always kind of pushed on on his idea like how do we transition to that? My Yeah, and um I've >> Yeah, post-labor economy. Yeah, and I mean I wasn't calling it that until we talked to him. I'm like, okay, that's like a good That's exactly like what we're talking about. Um and uh AI Cares one, what are you talking about? Just Come on. What Who Who are you talking about? Um and uh so, basically Y- Yeah, so a lot of my concerns are in that area. I think that's the next big hurdle that we need to get across. Like the population, the uh torches and pitchfork-carrying potentially people, the masses, they need to have a solution if this is rolling, right? Cuz imagine you have you know, you have a job, you have kids, you have a family, and then all these tech bros are like, bro, we're going to this like space communism utopia. Just chill. We'll pay for everything. Don't even Yeah, you're just going to be sitting at home watching football. Right? Like we have to have a better way to number one, have a plan. Number two, communicate to them to understand like, hey, this isn't a loss for you. This is a big big win for you, for your kids, your grandkids, like everybody. Assuming we can pull this off. And if we can't, well, things might really suck for a long time. 
So, we kind of need um Do you know what I mean? We need a transition sort of plan that's thought out and that's easily communicable communicatable to the average sort of person. Yeah. You know, I I guess I don't know tons about it, but it feels like Andrew Yang's probably got the best plan I've heard of or maybe something I know Dave Shapiro is thinking about it, but those are the only two people I think that have actually put something that you can disprove out out, you know? Like I mean a lot of people like us are like, we need a plan, but I don't know of any actual plans. But if freedom dividends seems like it's worth investigating. And uh I know Dave's kind of thought a lot about each individual industry and like how it changes and what it means for people and where to position themselves now. So, Well, we'll see. Coming soon. Yeah. So, that's going to be definitely an interesting thing to talk about. Open AI uh not Open AI, Google DeepMind Shane Legg, he's got a chief AGI economist that they're trying to hire. So, it seems like Google DeepMind is thinking about it, which is very very very good. Um we need to ask Andrew Yang if, you know, the most important question like, okay, so we're not going to be taxing people's incomes. We're going to be trying to tax more the corporations. I mean, the the question is are going to be are they going to be taxing our AI waifus or not? I mean, really that's what it comes down to at the end, right? >> my waifu. Yeah. Don't tax my waifu. Let's go get a sign. We're going to go go protest. Yeah, tell him if if he's planning to do that, we're going to form our counterparty and run against them. W- What is it? Like a one one >> Dude, that's a one-party yeah, a politician or whatever or one-topic politician or whatever. Yeah, whatever that expression [laughter] is. Like I'm like a what What is it? Like a one vote. I only vote on that one topic, whatever. Just Just don't tax my waifu. Dude, that would blow up. 
>> [laughter] >> Like if you vote for me, I will never tax your waifus. Read my lips. Or my husband No, no waifu taxes. Wow, we got off track here. >> [laughter] >> So, it said in February, the US lost about 92,000 jobs. Uh I just did a quick like ChatGPT search and um employment fell by 180,000. Unemployment rose to 4.4% up from 4.3%. Do you You think that's just natural ups and downs or you feeling like that's an AI kind of technology shift, an automation shift? So, I don't know if we're going to be able to kind of pinpoint in the fluctuations of the economy if we're going to be able to pinpoint what Like how do we like split apart AI versus this versus that? Yeah. Oh, and and also in January it was up up a bit, too. So, it pretty much nets out. This is before I give all my negative. Yeah, and Anthropic and Stanford, so they've been publishing very interesting research on this that I think is more telling uh of of what's happening. And what they're finding is there's a pretty strong effect specifically on um the people that are getting out of college and moving into those jobs that normally would have been kind of like your entry-level positions, uh jobs that would have been, you know, kind of for people out of college to kind of like, yeah, they do some grunt work and they fetch the coffee, but they get to learn the business and then transition into kind of being more You know what I mean? Like kind of more mature. What is that called? Mature in their career path, whatever. And what they're seeing is that's where things get really hammered. Those people are on a downtrend and it's more pronounced in jobs where AI is more taking over like, you know, digital stuff, software, etc., etc. And it's been happening since about 2023. And so, this to me is the biggest sign that we're beginning to see it and this is where we're beginning to see it.
And it makes sense to me that this is exactly how it's unraveling in a sense that a person that knows what they're doing with the you know, somebody that has a lot of experience with the assistance of an AI agent is like god mode, right? They're able to do a lot uh because a lot of the grunt work is offloaded to the agent. And as the agent is getting more and more proficient, the person that's harmed is, you know, somebody that's year one into their career. The person that is enabled is year 20 into their career. And we're kind of seeing that happen um exactly. So, from that sense, I think that's the number that I would be looking at. Yeah, I don't I don't know what's going on with the economy. It's hard for me to kind of pinpoint it. But yeah, if anything, it feels like I The one thing that does feel kind of real is that yeah, entry-level jobs are just not getting filled and replaced the way they used to. And then after that, I feel the same way. I'm not quite sure if we've had like a real impact on the industry yet, but I think I've read enough that feels like senior engineers are sort of just like taking on so much more that they're kind of not training the next generation. And I think eventually they retire and the next generation doesn't know how to guide AI. So, it better be good enough to just totally do it all by itself. But I don't feel like there's a good highly qualified group stepping into this generation that's going to be ready to like hold the mantle in the same way same way they can now. And maybe we won't need that cuz the world will change, but that's kind of the question. If it is still similar, then we're probably going to be not going to have the engineers that we need. Like a little more emphasis on STEM wouldn't hurt, probably. Yes, that is very true. And um the other thing, sorry. So, somebody made a comment that I wanted to Okay. So, one person said we're going to get steamrolled by billionaires debugger.
Um so, I I do get a lot of pushback that comes from this perspective of, well, the government won't allow it, billionaires won't allow it, companies won't allow it, right? They have their kind of vested interests. Who are we? And so, number one, yes. You know, that is one of the outcomes where there's some strong group that decides to come in, take over, push everybody out. It's not very democratic. It's not very fair. Yes, that is one of the outcomes. And that's exactly why I think if we had a specific plan, a plan that makes sense for for the vast majority of the people, uh, and we're able to do that transition, I do think that that decreases the chances of something like that happening because it seems like there's an alignment. Because if you're a corporation selling to the people, you don't necessarily want everybody, you know, broke and just not having any money. You want you want that issue to be fixed. You want an an economy. Um, and at the same time, it's funny, we interviewed the guys from Andon Labs, people behind Vending-Bench. So, they kind of said, you know, it's you notice how governments that don't rely on humans on labor as much because they have a lot of natural resources, they tend to be not the nicest governments, right? Because they just don't care as much. Um, countries that have to, you know, governments that have to deal with they need their people to keep working, keep producing stuff. Yeah, they tend to be Oh, but that's a bad sign about the future cuz what if the government doesn't need people to be happy anymore? Well, that's the thing is like this is what I'm this is what kind of like what I'm what I'm saying is like imagine if each person, each human being within a country received a you can't take it away. Yeah. That resonated with me too when you said that. You know what I mean? Like we have to be almost like imagine the worst person you can imagine. Like person that is just like their brain is so warped.
Well, you'd have to give it to Elon Musk, too. Well, not not cuz he's the worst person in the world, but I mean like because even though he's rich, but just give it to everyone. Like just it doesn't matter. Like you're broke, you're a billionaire, like whatever. >> Yeah, you can't have any limitations and that has to be like encoded into the DNA of the of the nation. Because the second you have a limitation, well, then the government's like, oh, we can move up and down, right? We can we can add another limitation, blah blah blah. That that can't happen. You might trust the government now, you don't know 10 100 years from now who's going to come in. So, you can't give them any power, you know what I mean? You know, arguably it seems like one of the problems and I'm no expert on this, so hopefully I I'm kind of talking in the in the bounds of what it is. But like people who are on welfare right now, I know there's some kind of pattern where they kind of like show the government they're trying to get a job. Like they go on a couple job interviews and they like submit some paperwork or they um they do kind of certain things to like stay on it. And to me it it always seemed kind of confusing. Like let's imagine that you get $800 a month because you're on welfare and you're you don't have a job. And then you're not qualified for very many good jobs, but you are willing to work and McDonald's offers you say $1,500 a month or $2,000 a month to work for them. Then you're instantly saying to yourself, well, if I get that job, I make $2,000 and I stop getting $800 for free. So, I'm really only making like $1,200. So, I maybe I'd be better off doing something under the table or maybe maybe if the job's only worth 800, you're like, I don't even want it. Like I get 800 for doing nothing. So, I don't even want to go that low. And it just seems like it the incentives are wrong. 
But if you already got $1,000, whether you got the job or not, then $2,000 from McDonald's means you have a $3,000 income. And you might be more incentivized to actually do that than you would have been on welfare cuz it's not going to go away when you start working. Yeah, I've run into that. We used to run I was part owner of a pizza place, uh, out in California in 2011. 2011 was like the bottom of the recession at least by some metrics, right? That's when the housing prices dipped to their lowest. And I was a for people that know in Riverside County and the unemployment rate there was insane. I think it was like the highest in California and we were, uh, spinning up a a brand new franchise and we had I ran into this firsthand. So, first of all, a lot of people really figured out how to get onto unemployment. They knew that they had to work for like it was like 3 months and then they had to get let go, but they they couldn't have they couldn't get let go. Like it has to not be their fault. So, a lot of people were like playing this game. Like how do I get onto unemployment? And then we had this really hard working lady. She was really good, really good employee and we wanted to give her more time and she kept like switching her shifts or like it was like really weird cuz she was great, but anytime it got over like 20 hours, she just she really like resisted it. And finally we like sat down for like, what is going on? She's like, well, like I receive these government payments and if I go above a certain amount, then they go away. And I was sitting there, I'm like, this can't be real because it's like the government, you know, yes, it's helping her, but here it's hurting a business that's trying to pay her more money. It's hurting her in the sense that she can't work past a certain amount. Like these incentives seemed so backwards, so warped. I couldn't I couldn't believe that's how it worked, you know what I mean? Um, so, yeah, that's and that's kind of my point. 
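The welfare-cliff incentive described in that stretch (a means-tested benefit that disappears the moment you earn anything, versus an unconditional payment you keep) can be sketched with the hypothetical numbers from the conversation: $800/month welfare, a $1,000/month unconditional payment, and a $2,000/month McDonald's offer. This is a toy model of the argument being made on the show, not a description of any real benefits program:

```python
def income_with_cliff(wage, benefit=800):
    """Means-tested benefit: withdrawn entirely once you earn anything.
    Toy model of the 'welfare cliff' described in the conversation."""
    return wage + (benefit if wage == 0 else 0)

def income_with_ubi(wage, ubi=1000):
    """Unconditional payment: kept regardless of earnings."""
    return wage + ubi

# Marginal value of taking the hypothetical $2,000/month job:
gain_cliff = income_with_cliff(2000) - income_with_cliff(0)  # 2000 - 800 = 1200
gain_ubi = income_with_ubi(2000) - income_with_ubi(0)        # 3000 - 1000 = 2000
```

Under the cliff, the $2,000 job only adds $1,200 of income, and an $800/month job adds exactly nothing, which is the "incentives are wrong" point; under the unconditional payment, every earned dollar is kept on top of the baseline.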
We can't repeat the same mistakes with UBI. Cuz it could be great or it could be horrible and it really depends on how we sort of implement it. Yeah. No answer there, but that's what so that's what your book would be about if you do publish one. Okay, so I'm going to for economics. So, I'm going to okay, you're you're kicking me into action because I have this quote/thing so that's been on my mind and so it's let the robots do the work. That's kind of the slogan. Let the robots do the work. So, if you can kind of imagine this like, uh, Soviet era propaganda style post posters with and it says, let the robots do the work with like these robots just cranking stuff out. So, that's kind of like my vision. So, the idea is to number one, move the technology in direction where that's possible. Number two, get the people to take that up as a rallying cry. I love they appear every once in a while he he'll yell something out on on Twitter like, stop fetishizing work or stop fetishizing jobs. It is true. Like everyone's like, oh my god, what am I going to do without them? And then everyone hates them. Well, well, I think a lot of people they're it it might be hard to think through things from first principles, right? So, people are like, we want jobs. We need jobs. Well, it's scary not to have resources. It's definitely like I don't want to be in a place where there's no resources to [snorts] buy food and take care of my loved ones. Like that's what's scary. And then you just use [clears throat] jobs cuz that's how money is made, but Right, cuz imagine going back in time and like there's people that are like, we have to hunt. We need spears. Don't take our spears away. Like, no, you you fool. We have something better. Like you don't leave your spears. You don't need your spears. We have something better. Like, you know, we have a grocery store. We have a grocery store. Yeah, so don't stop tripping. What you want is food for you, for your family. You want resources. 
Like this is a better way of getting that. So, and but yeah, there's still going to be some people like, don't take our spears. Don't take our hunting equipment or whatever. So, this is kind of going to be the same thing, um, in the sense that we need to again have a plan and then be able to explain it to people that hey, this isn't taking away from you. This is giving you more. Why don't you do it from the other way? Instead of like robots doing the work, why don't you say like what we want is safety and resources for being human? You know, just like a positive is like put it in the optimistic. Cuz that's a little closer to the core, isn't it? It's not that you want to offload work to robots. It's that you want privacy, you want safety and you want resources as a human. So, in like doing marketing for, uh, you know, decades at this point that I've been doing it, one thing that I learned is that everything, whatever you say, whatever slogan, whatever thing, it has to mean the same thing to everybody. So, a lot of times people will say something that is maybe more precisely what they want, but the message gets lost as it kind of like diffuses through different people, different stuff. So, if I say, let the robots do the work, um, I feel like most people hear the same thing. Um, for anything else, I don't know. I'd have to think about it. Like could I be misinterpreted? Um, or do you know what I mean? Like does that reflect necessarily the same thing that we're saying? Um, because if it does, then it could have some problems. And I feel like, you know, I mean, look at look at the um, EA community, Less Wrong, the rationalists, whatever, like the AI doomers as they're called. They're extremely intelligent people. Uh, they're very smart. How well is their message resonating with the average person? Not at all. Maybe not at all. You know, and I think this is part part of the reason why cuz they're you know what I mean? They're they're up here. 
Well, you know what I mean? They like how do we make sure that everybody's hearing the same thing? So. Why do you think Make America Great Again resonated with so many people? Um, that's that's a good one, man. I don't know if I'm ready to dive into that one. That's an interesting question. Uh, I guess nostalgia or something. I mean, I I feel that way from when I was younger. Like it felt like a better world, but I don't know if it was. It's just at that time I didn't worry so much. So, I have memories. Well, the thing is the thing is >> Yeah, there was like nuclear problems and stuff, I think. But, like I just as a kid, when I was 10 years old, at 5 years old, it just I was like, "Oh, I get food and I get taken care of." And, you know, I was lucky enough to have a pretty good family. So, just I think it's just nostalgia. You Well, but I mean, US did peak by a lot of metrics in the '90s. Uh that's what the whole Ray Dalio the Fourth Turning thing is about is you sort of see these empires that have a certain trajectory. They had a peak and a decline, but it's always not a very clear-cut thing. For for people that haven't read the Fourth Turning, I think it's a very sobering and eye-opening thing what's happening because we've seen this play out in history and it's always certain metrics are lagging indicators and certain metrics are kind of like ahead of their time in terms of 1 second. Let me make sure is everybody We had a connection error, but I think we're fine. Uh the point is that by certain metrics um like '90s US was the peak. Yeah, maybe The Matrix says the peak was 1999. That's Yeah. Uh and then a lot of things started sliding, right? In terms of affordability. In terms of like if you look at um in the '90s for the average family, you know, if you made X and then your living expenses were less than X. And then sort of that that was your sort of discretionary income.
It has been grinding down down down and somewhere around 2020, I think it even even went negative. Um and um so, when that message of like let's get back to how it used to be again, it's like depending on what you're talking about. So, yes, wouldn't it be nice to always live in that sort of like peak golden era? Sure. But, then it's also like, well it wasn't perfect for everybody. So, it's depending on where you were. And if you especially if you think about like the 50 years before that, I mean, there it wasn't great for everybody. And that's why it caused such an uproar and yeah, that's that's my thoughts on it. Yeah. Yeah, it's tricky. But, yeah, I know for you got to get your branding right if you're going to start a revolution, but I'm here for it. Wes Roth for president. Let's do it. >> Well, and that's what I mean. That's what I mean. Everybody has to hear the same thing, right? So, if you say, "Let's make America great again." I mean, there's a certain people that are going to be like, "What do you mean? Like you mean like back in the '50s?" That wasn't really great for some of us. So, they're going to hear that message different. Um that Yeah. Anyways, I'm off my soapbox. It's all right. I'm I'm going to ask my wife who what the what kind of branding we need. She'll figure it out for you. >> it it's it's still you still have it pulled up. You're still chatting. I just realized the tab was still open. >> [laughter] >> I'm going to have to go now. I have my I have my new waifu. That's too funny. Um so, maybe let me play a quick clip of Jensen Huang a kind of I don't know if attacking is the right word. Um or you know what? It might be difficult to to to play this and I'm I'm hesitant because of things might break. But, let me just kind kind of quickly give an Yeah, so basically Jensen went on the All-In podcast at the Nvidia conference. I guess they had like a little booth that was set up there.
Um when we were interviewing people at a conference, they they put a bunch of fake plants in in in as walls around us. So, we were underneath an an escalator. Uh so, yeah. All-In podcast again center stage. Okay. Definitely different levels of things. Yeah. Um so, but what he was saying um again, I haven't seen the full interview because I think it just came out this morning, I believe. But, he's saying how Anthropic made a lot a lot of miscommunication mistakes with the US government kind of like maybe playing into like the doomer narrative. Um let me ask people. So, yeah, Dylan, if you want to maybe riff about something else, I'll I'm going to do a poll. I'm curious. Number one, I want to ask people, do you agree with what Anthropic did in terms of like them standing their ground? And do you think they executed everything flawlessly or there were miscommunication there there were misplays in terms of how maybe they communicated with these military people, right? With the Pentagon. So, that's what I'm curious cuz like I think it's okay to say, "Hey, I really" cuz my take was Well, I'll explain my take after. I want to do the poll without sort of You know what I mean? Without without and then I'll say what my opinion were. But, Yeah, cuz I I sat down and I listened to um Dario explain explain the details on the uh was it a 60 Minutes interview or whatever? It was one of those national interviews and he actually talked about the nuance and you don't say like, "Oh, they're not giving Anthropic over. They're not helping the US government. They're not in the US government's interest." It was like very far from the way it came off. Like it came off as very much like we don't believe in like the government or America first. And it just seemed like the real distinction was we 99 or 98% agree and like we've all we actually like pioneered it. We were even before um OpenAI or Grok or anyone else helping the government and the industrial like war machine. 
Just we had these kind of boundaries that we're unwilling to change and then we had a dispute over changing those and now it's like other people are like more willing to just let the things be used for surveillance. Especially especially like I mean, what like you really should like it's it shouldn't be not be spying on American citizens. Like I don't know. And maybe they shouldn't be using LLMs to make kill decisions or not saying like maybe at some point those two lines can't get crossed in some situations, but it certainly feels okay to take that model and say, "Hey, we don't want that to happen." Plus, you got five other models. So, it's not like he said that we're going to pull it and cause any kind of a risk to the national security. He said, "Go ahead and switch over to Grok and we'll continue to support you on the way out. Like we're not going to leave the US in a vulnerable spot." I just all seemed kind of reasonable to me and maybe he isn't telling the full truth and there was something else, but it seemed like everyone I talked to had this very black and white kind of view of it. So, um I think he mishandled the narrative, that's for sure. But, the actual reasoning didn't to me seem to be nefarious or anti-American or unreasonable. Um so, yeah, I thought I don't know. That was about the same time I switched from ChatGPT over to Claude for most of my stuff. So, I kind of used it as a sign of like a company with a little bit of principle. I liked it. Yeah. So, I posted the poll. So, basically "Ant" is the abbreviation for Anthropic, obviously. So, do you agree with Anthropic? Did they communicate? Did they have good communication in terms of how they talked to the Pentagon, how they just sort of talked to the people? Uh or maybe you agree with them with their stance with kind of their beliefs, but you thought that maybe the communication could have been handled better.
Maybe just Dario Dario is a very passionate guy, but you know, it's that sort of non-ideal guy. >> also, I think it I got a feeling it has something else to do with him just being kind of in his own world. Like you just see like every time we saw those tech dinners at the White House, you'd see Zuckerberg there and you'd see Tim Cook. And before that, you'd see Elon before they had their thing. But, you'd see you definitely see Sam Altman. And when the mic came to Sam Altman, he would, you know, glaze the president and like everything's good. Like you see him in Saudi Arabia. Like everything's good. We can work with them. Like you see him meeting with military officials. He just he just seems to play along with everything. And even if Anthropic is open to letting the military use their tools, he doesn't seem like Dario's kind of there all the time. Like he's not sort of jumping when requested in all aspects. So, this might have also been a little bit more of a like, "Hey, this isn't about the contract. This isn't about the tool. This is about like we need to know you're like acting American like in the sense that you're coming to the White House, that you're getting on narrative, that you're um doing that kind of stuff, too. So, I don't know. I mean, I'm pretty far from all of this. So, I have no real clue what they're all thinking. But, that's just sort of what I I just don't see him acting the same way as the Google CEO, as the Microsoft CEO, and as Sam Altman. That's that's the thing is you know that Sam Altman and all the other CEOs, all the people you mentioned, they they probably would not have found themselves in that situation just because they would have somehow maneuvered around it. Also, what I I'm it also depends on who you believe because the Pentagon side, kind of the anecdote they gave is, you know, they were discussing this issue with Dario. They were going back and forth. 
And then the the Pentagon guy guy was saying, "Well, imagine there's a nuclear bomb that's being launched towards the United States. Like what happens then? Do we Can we use, you know, this technology for whatever we want?" And according to him, he's saying Dario was like, "Well, you know, then you can call us and we can kind of figure it out." Right, you don't have time for a call. If that is a real conversation that took place, then I'm like, "Dario, my dude." Like I I love the guy. He he seems very passionate. He seems very honest. I think he's really sincere in what he's saying. But, holy crap, you don't say that. That's no. That's not a good No, I I can what? No. You know, if if you have somebody in the front lines defending the nation, you have to But then again, I mean, there was this one person in in old USSR, I think of Vasily something something that when you know, he had one job. He had one job. If he saw nukes flying towards the USSR, he pushed the button and they launched nukes towards the US. That was his only jobs job. And then one morning he saw nukes flying towards um USSR and he was like, uh I don't know. I'm not going to I'm going to I'm not going to nuke the rest of the world. And it was a good thing he didn't because that was some sort of >> wrong, huh? Was it a bird? I think it was yeah. He was right. Meaning that he thought it might have been a glitch, right? >> Yeah, yeah, yeah, he was right that it was a glitch. Not I mean, like yeah. The radar system was wrong or how however I can look up the details, but the point is that he >> that story. That like the whole world was like that guy that was like that close to whatever hundreds of millions of people being wiped off the planet. Cuz then if you had launched then US actually would have launched in retaliation and then Exactly. And that would have started the nuclear war over there. >> here right now talking about this. Yeah, that's crazy. 
So it's >> And I don't know if that's so much more trustworthy either, you know what I mean? Like oh. Yeah, that at the end of the day it's hard to say one thing for sure. Because again, the people that are staying with Dario, I understand them. But also if you think about it, um do we want the sort of what is it called? The where we allow the idea of a private company dictating to the US government how certain things should be used precedent, right? In general, I think most people say no. In this case it's uh difficult, right? Well, it's a it's an interesting cuz it's not a tool that's reliable. So I in some ways Dario knows something that the government doesn't about how unreliable it is. Like what's actually going on in AI is pretty different. It's not like an axe or a a missile or a spaceship. Like it's not going to just function or not function. It like sometimes will hallucinate and it will sometimes be accurate and it will sometimes be superhuman and sometimes act totally unexpected. So he it's kind of his job to make sure that if he gives that to the government that they understand all of those They should understand what they're buying basically, right? Like what they're getting cuz if they're putting it in charge of things, they don't you don't want someone in the government to be like, wait, this thing can hallucinate or this thing can sometimes like give a high risk profile to something that's not real because, you know, was that clearly documented? And and Dario might say I didn't know how to document it. I don't even know what's going to emerge from this thing. So do you understand that? You know what I mean? Like because what you're buying here has like a lot of caveats and you you know, are you going to come back to me and say I built it wrong or like a lot of that. And then also if it if there is a like it does seem to me like a human should still be sitting there. 
And Anthropic's model can say, hey, like 99.999% certainty there's missiles flying at the US. Can't the human just press the button? Like was it that big of a deal? Like the military actually wants Anthropic to also shoot the the missile off? I mean, I guess they only have seconds to do it, but like I don't know. It just seemed not that unreasonable to use it except for kill decisions and except for spying on US citizens. I'm like, what do you need those two things that badly to be done automated? I mean, I >> I don't know. I'm just me. I I I get what you're saying. I don't even know if that was necessarily what it is. It's like imagine you're sitting in front of like a mafia boss and you know, you're some you know, whatever. You're some provider of something that they need, right? Uh you're a gun manufacturer or whatever, right? So you're sitting there in front of you he's like, hey, are you going to play ball? Yeah, you know. >> the question. Yeah. What are these guns for? Like don't ask about that. Just sell them the guns. >> you go, yeah, of course. Except I have three policies that I'm not willing to cross and they're perfectly reasonable policies. Right? But like you know, in that situation you probably wouldn't be dictating those policies to that guy. Right? You know what I mean? Because you understand that the person >> And the person on the side would have been like whether the policies are reasonable or not, the fact that he this person is not just 100% yes, sir no, sir is a problem. Like that sort of like, well, we're going to see is a problem in certain situations. Um so I mean, I understand that Dario Dario could have said like, oh, I just don't want it to be used for, you know, whatever to to kill everybody on the planet. And that also would have been cuz it's like you're supposed to just say yes, sir and in the moment of decision. Again, if the nukes are flying you don't want a third party in control of what happens. >> you're right.
It's a tricky thing. Like I don't think he was right in all senses cuz like that's why we have a government and not a single person making those decisions, you know? I don't think there's a I don't think there's a right or wrong, man. Cuz I mean, we can make Yeah, it's it's freaking hard. Yeah, cuz like who's to say that Dario's better than the president or someone else at making a decision or a military strategist about something like that. So I don't know. It's just it's just such a different world. Like I don't know where to think about anything, but in general I feel like everyone's moving a little too fast towards AI and war. So I'm I just have this natural tendency to be like anyone who's like, should we think about this? Should we slow it down? Should we have some sort of like rule around it? I just tend to kind of be like, okay, that's the person I'd like to kind of be like talking with or like promoting cuz that's the conversation at least that I'd like to have more of. Yeah, and the problem with EA and some of those people is again, I I do feel like somebody mentioned it earlier. Like these some of these people are super smart. They're on the spectrum and but they're very honest and they're very sort of um they stick to their not they stick to their guns, but they stick to their ideals, right? Yeah. And the problem is I'm sure most of us maybe met people like that. Very smart, very honest, very intelligent, maybe like on the spectrum uh Asperger's or something like that. Um they like I like those people a lot. I know they can be very abrasive. Like you you've we've all witnessed one of those >> rub somebody else the wrong way. Especially sometimes people just don't want to they're not able to understand how that person is wired, so to speak. Um and so what you see in a lot of these situations cuz the same thing happened I think with OpenAI, the coup that happened in whatever 2023. 
Remember Sam Altman got gets fired and you have Helen Toner um like they executed a perfect coup. They coup they they they kicked him out. They they had the control. How did they lose that? They lost that I think because number one, they stopped communicating. They managed to get the entire OpenAI staff against them. And it came down to them sitting in a room while the attorney general of New York with the south southern district or whatever calling them saying, hey, the same guy that put Sam Bankman-Fried away, right? Calling her going, hey, you better tell me what's happening. So you see how it's like they they execute everything so intelligently, but when the human element is introduced they fumble. Because I think people at that high intelligence they have a harder time being understood by the average person, also predicting what the average person's response will be. And I think you see that with Dario, you see that with, you know, Helen Toner, you see that with with a lot of people. Like they're they're smart and technologically they're a lot of times they're right. They can see how the future will unfold, but the human element not so much. You know what I mean? >> Do you agree with that or is that I know it's general okay. >> totally do. I in fact it almost feels like it's as I also it kind of feels to me like Sam Altman's not that technical. I don't actually remember if he's like got an engineering background or not, but to me he just seems like a human like he seems like a startup kind of human. Like I can make partnerships. I can make deals. I can kind of handle Microsoft is interest and Oracle's interest and kind of go to both dinners and tell people what I need to they need to hear. It seems like that's him. And Dario does not seem that way. You know what I mean? In fact, that's probably why part of maybe why he left OpenAI and he didn't like working under Sam. Is that yeah, they they do they seem to run the companies very differently. Oh, yeah. 
And unfortunately, you don't see too many of the super smart that archetype in in high levels at the company running companies. I mean, think about Steve Jobs and Steve Wozniak, right? So Wozniak was the hyper genius, but he didn't want to be on stage. He didn't want to talk to people, right? Steve Jobs was brilliant certainly, but he wasn't the top of the food chain engineer, so to speak. He was very smart, but he wasn't, you know, what he accomplished wasn't necessarily because of his technical brilliance. It was because of all the other stuff the the skill stack that included being able to communicate with humans. I think that's Sam Altman. Um it is the same more similar to Steve Jobs than Steve Wozniak and um Dario's more like Wozniak than Jobs if that makes sense. Yeah, yeah, exactly. And also it feels to me like I I hope I hope the whole thing with Anthropic like does kind of work out. Like I'm hoping that this becomes not a like you can't use Anthropic or or they're all in and they let the government do whatever they want. But hopefully just a little conversation that says something like the government has said they like they understand the concern. 
We're still going to give them access to make kill decisions and I guess spy on US citizens, but we're only going to spy on US citizens in this context or maybe there's like in a sandbox situation we have to have a million predictions that like accurately kill the right person before you know million in a row before we'll like deploy it so like there's some extremely tiny amount of hallucination just some kind of thing where I I guess I could say like okay I understand like some cars are going to kill people and it's also good that we have a society with cars cuz it it gets goods and services all around and then we just want to keep minimizing that putting better strategies in place inventing seat belts just kind of like here's where where we need to start and then like here is where we plan to improve just some kind of a just some kind of a plan you know Yeah um Sorry you said something earlier that I wanted to respond to um Yeah so just just I guess the point that um the idea of spying so if I understand on like US citizens or or anybody so in that Dateline interview that Dario did while the whole thing was unfolding what he was it's funny cuz we talked about this idea of months prior I think at actually at that AI conference when we were doing our our episode sitting underneath the escalator surrounded by the fake The plants yeah bushes um we were talking about how like throughout you know as you go about life now you're sort of like oozing all sorts of data right like what Wi-Fi your phone connects to you know you're walking past the camera you said something in the vicinity of you know everybody now has those ring doorbells Ring doorbells yeah Yeah you're going for a walk together Yeah yeah yeah you're walking down the street talking to somebody um there's not one tracker that picks up anything noteworthy but if you walk down a block that entire conversation might be on 20 different devices right so you're walking past the house it picks up this it picks up 
that your entire so if you imagine like doing that for just everything your entire life there's your entire life and habits and beliefs and everything could be recreated if you could just take all of these pieces of data from all these different devices and combine it into one and we never before very recently we never had anything that could do that just being able to intelligently combine that on such a high level we never had that and now we do and it's going to get better and I think that's what Dario was saying it's like listening to the world out there in general in public spaces that's not spying by the US definition right spying is like you know I'm in your house listening to specifically what you're doing or I'm tracking your car for 3 months that is illegal but gathering all of that data and then being able to kind of figure out what people are doing based on that that's not technically illegal and so what he's saying is he doesn't want this technology to be used for that what do you think about that >> [snorts] >> Yeah I mean let's like so it's going to happen like 100% like just it's going to happen like they're going to AI is going to put together patterns it's going to know where everyone is what pretty much like what we're thinking and like for the most part what risk profile we have will we actually take that data and like let the government make decisions about you know about us or like can job can people if you're looking for a job can they like track you down and see how much time you waste at the water cooler or like can can your neighbors figure out if you're a good neighbor like I don't know like how how many people you give access to these solutions or do we we have them out there and like maybe you can subpoena the government for a certain reason and like go ahead and use that data to prosecute someone or do we just have it out there where everybody just gets it and I'm just hoping that somebody puts some new rules in place and says oh you 
you can subpoena if you think somebody's guilty of a you know missing person or some kind of violent crime or something but you know I don't think it's just okay to kind of make some assumptions about whether we should insure someone or companies kind of feel like they could use it to squeeze you know change prices on Amazon depending on who you are and like they they can tell that you just came into money and like now I don't know put pressure on you or you know what I mean or like give that to people who now know who to rob because of this or that like that's where it just it starts getting kind of crazy Uh well Reda Red Metaplex said this might fall into merit scores for UBI that's that's a bingo that's exactly Yeah maybe exactly the horrible outcome future we avoid You you know I covered a I covered a really interesting AI model um a couple of days ago that was uh and this blew my mind but in most cars the there's a sensor that that measures how much tire pressure you have right like it like the PSIs and you can't run a wire to this thing because it's spinning right like if there was actually a wire that went into the tire it would like wind up and and rip the wire so it's a tight so what all those sensors are wireless they send a tiny they spin with the tire and then they send a message whatever every like uh minute or something that just says hey here's my tire pressure here's my tire pressure and then a little receiver that's close to it inside of the drive train or whatever um picks it up and like puts that on your dashboard but it's not like super encrypted and there's like there was another AI model that was like okay well the this these researchers or whatever put um a little measurement device on the side of the street and just started like tracking all all the and they have like little unique UUID numbers or something like that and just as they passed and eventually then trained a little AI model like kind of fine tuned it so it could just see what's 
coming by and it started predicting like around what time what car will pass and it was it was just the beginning and it's like okay now if you could just attach that pattern to like some person now you know when they go to work or when they drive this thing and if you set those around a whole city you could all of a sudden have this map and like what a weird unique identifier that nobody ever thought about like the PSI measurement device that has a tiny unencrypted wireless signal you know so maybe now those need to be encrypted because they can actually become you know tracking devices Yeah and that's one example in probably a sea of examples cuz I think 2 years ago we talked about this new AI model where it listens to the keystrokes the the actual like if >> Oh the sound of the click Yeah and to figure out where your password is like it's it can replicate what people are typing over time can start to predict exactly what letters are are pushed so if you think about it you're able to pick up people's passwords and login information with no you don't have to visibly see what they're do you know what they're typing you don't need to visibly see anything it's just through some uh remote listening device So that's you know again just those are just two applications yeah think about how many more there are and yeah it so I get where um Dario's coming from right so he's saying that this is a way to avoid the US laws I think it's the fourth amendment about you know spying on the population because it's technically not considered spying because you're not listening to them directly you're sort of just like these snippets of data that all get formed in the AI but you know it's like it's like it's not a it's not a full picture it's a bunch of jigsaw puzzles Yeah like the spirit I mean the spirit of the law was like people want freedom the letter of the law is like well we can put we can remove your freedom if we put enough data sources together where none of them are 
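[Editor's sketch.] The TPMS tracking idea described above — each tire-pressure sensor broadcasts an unencrypted unique ID, so a passive roadside receiver can log sightings and learn when a given vehicle tends to pass — can be illustrated with a minimal sketch. All IDs, timestamps, and function names here are invented for illustration; this is not code from the study the hosts mention.

```python
# Hypothetical sketch: a roadside receiver logs (day, seconds-after-midnight,
# TPMS sensor ID) triples; grouping by sensor ID reveals each vehicle's
# typical pass time. Synthetic data only.
from collections import defaultdict
from statistics import mean, pstdev

def predict_pass_times(observations):
    """Group roadside sightings by sensor ID and estimate each vehicle's
    typical daily pass time (in seconds after midnight) and its variability."""
    by_id = defaultdict(list)
    for day, seconds_after_midnight, sensor_id in observations:
        by_id[sensor_id].append(seconds_after_midnight)
    profiles = {}
    for sensor_id, times in by_id.items():
        profiles[sensor_id] = {
            "mean_time": mean(times),
            "spread": pstdev(times),  # low spread => a predictable commuter
            "sightings": len(times),
        }
    return profiles

# Synthetic log: sensor "3fa2" passes the receiver around 08:30 every day;
# "b71c" is a second, less regular vehicle.
log = [
    (1, 8 * 3600 + 1790, "3fa2"),
    (2, 8 * 3600 + 1812, "3fa2"),
    (3, 8 * 3600 + 1805, "3fa2"),
    (1, 17 * 3600 + 240, "b71c"),
    (3, 18 * 3600 + 900, "b71c"),
]

profiles = predict_pass_times(log)
print(round(profiles["3fa2"]["mean_time"]))  # → 30602 (about 08:30)
```

The point the hosts make falls out directly: a few days of passive sightings of an "anonymous" broadcast ID is enough to build a movement profile, and a mesh of such receivers across a city would turn the same trick into a tracking map.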
technically spying and spying Yeah so that's I totally get where he's coming from and I think it needs to be protected is you know talking to the head of the Pentagon after you sign the contract is is that is that the right way of going about it >> do it probably not So yeah that's where >> I filled out your poll I definitely said they handled it wrong I don't think I can't give can't give them credit for that even if I I I agree with a lot of the stuff Yeah and I'm glad that somebody with Dario's [snorts] sort of like moral compass or whatever like I'm glad that they're uh have this position uh man does it feel like that was a misplay there was some missteps he he should not have been in that office talking to the guy one-on-one like the game was over when they they he was called to that meeting like that was already game lost it should have been prevented >> all that attention too I wish I mean it would have been nice if he you know he's got some really smart AI safety researchers over there but I'd say the best in the world like why didn't he say to them look I got to go on TV in 48 hours like try your best to come up with some kind of a a solution that I can suggest to the world you know like what what would the sandbox look like or what does a reasonable kill strike look like or what what's a confidence interval that like needs to be like a threshold and then I could at least present that cuz I wish the average person had instead of coming to me saying like they don't like America said what do you you know what do you think about that threshold is that a good threshold or not and then we could have had like an actual debate about a solution but I don't know it's always easy to judge judge people after they say things too so I don't know if I would have been that smart either >> make it better or kill you, right? Just ask yourself one simple question. In that situation, who do you want to be your PR person talking to the cameras? Is it Dario or Sam Altman? 
Or Sam Altman? Yeah, exactly. Just just ask yourself that question. Who Who are you going to be watching on TV, right? Going, "Oh, please don't say the thing. Don't say Don't say the thing. No, you said the thing." Versus versus like, "Wow, that was a really good of a Wow, you totally avoided like this issue by saying it this way." Like Sam Altman's going to win that game. That's I definitely I would definitely be like, "Before I answer that, let me introduce you to my new waifu. She will be handling these PR questions from here on out." She's going to take all the questions. He's like, "Hey, you guys like my new shirt? I'm out." Yeah, >> [laughter] >> exactly. Exactly. So, that's my I just realized we went for quite some time. Um 2 hours. >> Call it now if you want. Let's maybe start um wrapping it slowly. So, yeah, thank you everybody for taking part of the questions. So, 55% of the people agree with with Anthropic, they they had good communication. And then 31% agree with Anthropic, bad communication. And then 13% disagree with with with that. Yeah, just for for the majority of the people, 55% please understand. I'm not saying anything negative about Anthropic or Dario or his morals, ethics, what he's trying to achieve. This is not that. This is uh you know, charisma check, right? Right? You Sometimes you find yourself in situations where you need to communicate to people that think very differently from you, that perceive reality very differently from you. The crowds tend to perceive what you say, and then the news media blow it out of control. That's why people have PR training or whatever, media training. Um what I'm saying is there might have been some misplays in how Dario communicated. Not that he did anything wrong in terms of Oh, but you know what some person is saying, "Dario's not a politician, but you really want another really good liar." I prefer his bad communication. You know what? Yeah, no, you're you're right.
I guess I said I want it cuz I want to succeed or whatever, but no, I don't I don't actually want the world to work this way. I wish all the politicians were gone and only the engineers told us everything and we made logical decisions. It's just human emotions don't vote that way very often. Yeah, I I don't know how to respond to that because yeah, you're absolutely right. We don't want another politician or somebody that's just a smooth talker. Maybe Dario is the best person for this cuz he's just going to say whatever's on his mind. Yeah, I prefer Yeah, I mean I I do prefer people like that kind of But man, do they do they have a disadvantage? It seems like in our modern discourse how how our discourse is is handled. I don't know. Dude, I don't know. I don't know what the right answer is, honestly. >> Yeah, he doesn't like going on a social network or anything, so he's not going to be able to bias like kind of like spread his words or he's doesn't really like build like a cult around him the same way, like, you know? I don't know. Yeah. But Steve Jobs would have said something, like, yeah. Anyway, but yeah, go ahead. Yeah, Steve Jobs, man, you a lot of things came out about him how like angry and abrasive he was after he died. Do you know what I mean? You didn't hear anything negative too too many negative things while he was still alive, right? No. It's a little bit of a different era, too. Now I feel like you need I almost wonder like if Sam was Sam Altman's like, "Oh, I need to get Sora up and running so I have like my own social network cuz Elon's got his own and Oracle's kind of got their own now, you know? It's like if you don't have your own social network Microsoft's got LinkedIn and Google has YouTube. It's just hard to have an like a real opinion in the world without a network under you. Yeah. Uh yeah, it used to be newspapers and TV shows, now it's more like social networks, so which is interesting. 
Anyways, thank you so much for everyone that joined us today. You know, um if you have a second to check out the little um thing that I posted that's pinned in the comments that's to help me understand better who you are. If you can fill that out, I'd appreciate it. I'm sure some of you did already and thank you so much for that. Uh hopefully this was interesting and um uh informative and entertaining for people. If this works out, we'll do more of these. So, if you can If you liked it, hit thumbs up, subscribe, do all of the things to help us out so we can do this more often. Thank you so much for everybody being here. Thank you so much, Dylan, for joining me today. >> Yeah. And uh till next time. >> fun and yeah, thank you thanks to the audience for all the comments. I think I learned a lot, so appreciate it. This was awesome. All right. We'll see you later. Stay safe and stay sane, as somebody said. >> [laughter] >> GG. Okay, bye. >> Uh bye. And we're going to just chill for a second here >> and stay sane. Oh. So, we're >> All right, I haven't heard that one. We're still alive, but Okay, now I can turn it off because the stream has caught up. Okay. Bye.