Library / In focus

Back to Library
The TED AI Show · Civilisational risk and strategy

Is AI destroying our sense of reality? with Sam Gregory

Why this matters

This episode strengthens first-principles understanding of alignment risk and the strategic conditions that shape safe outcomes.

Summary

This conversation examines core safety through Is AI destroying our sense of reality? with Sam Gregory, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Perspective map

Mixed · Society · High confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).

Start · End

Across 26 full-transcript segments: median 0 · mean -2 · spread -140 (p10–p90 -80) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.

Slice bands
26 slices · p10–p90 -80

Mixed leaning, primarily in the Society lens. Evidence mode: interview. Confidence: high.

  • Emphasizes alignment
  • Emphasizes safety
  • Full transcript scored in 26 sequential slices (median slice 0).

Editor note

Useful mainstream bridge episode for teams that need a shared baseline quickly.

ai-safety · ted-ai-show · core-safety · technical · intro · public-understanding

Play on sAIfe Hands

Episode transcript

YouTube captions (auto or uploaded) · video a6XeGqLjRGU · stored Apr 2, 2026 · 735 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/is-ai-destroying-our-sense-of-reality-with-sam-gregory.json when you have a listen-based summary.

Show full transcript
Okay, picture this: we're in Miami. There's sun, sand, palm trees, and a giant open-air mall right on the water. Pretty nice, right? But then, late one Monday night in January of this year, things get weird. Cop cars swarm the mall, like dozens of them. I'm talking six city blocks shut down, lights flashing, people everywhere, and no one knows what's going on. The news footage hits the internet, and naturally the hive mind goes into overdrive. Speculation and conspiracy theories are flying, and one idea takes hold: aliens. Folks are dissecting grainy helicopter footage in the comments, zooming in, analyzing it frame by frame to find any evidence of aliens. So I thought: I'm a TikToker, what if I brought this online fever dream to life and shared it with the masses using the latest tools? I created a video: towering shadowy figures silently materializing amidst the flashing lights of the police cars, an alien invasion in the middle of Miami's Bayside Marketplace. Just a bit of fun, I thought. Some got the joke, a Miami twist on Stranger Things, but I watched as other people flocked into my comment section to declare this as bona fide evidence that aliens do in fact exist. Now you might be wondering what actually happened: a bunch of teenagers got into a fight at the mall, the police showed up to break it up, and that's all it took to trigger this mass hysteria. It was too easy. Too easy to make people believe something happened that never actually happened, and that's kind of terrifying. I'm Bilawal Sidhu, and this is the TED AI Show, where we figure out how to live and thrive in a world where AI has changed everything. [Music]

So I've been making visual effects, blending realities on my computer, since I was a kid. I remember watching a show called Mega Movie Magic, which revealed the secrets behind movies' special effects. I learned about CGI and practical effects in movies like Star Wars, Godzilla, and Independence Day. I was already into computer graphics, but seeing how they could create visuals indistinguishable from reality was a game changer. It sparked a lifelong passion to blend the physical and digital worlds. Several years ago I started my TikTok channel. I'd upload my own creations and share them with hundreds and thousands, and now millions, of viewers. I mean, just five years ago, if I wanted to make a video of giant aliens invading a mall in Miami, it would have taken me a week and at least five pieces of software. But this aliens video took me just a day to make, using tools like Midjourney, RunwayML, and Adobe Premiere, tools that anyone with a laptop can access.

Since ChatGPT came on the scene in late 2022, there's been a lot of talk about the Turing test, where a human evaluator tries to figure out if the person at the other end of a text chat is a machine or another human. But what about the visual Turing test, where machines can create images that are indistinguishable from reality? And now OpenAI have come out with Sora, a video generation tool that will create impressively lifelike video from a single text prompt. It's basically like ChatGPT or DALL·E, but instead of text or images, it generates video. And don't get me wrong, there are other video generation tools out there, but when I first saw Sora, the realism blew my socks off. I mean, with these other programs you can make short videos, like a couple seconds long, but with Sora we're talking minute-long videos. The 3D consistency with those long dynamic camera moves definitely impressed me. There's so much high-frequency detail, and the scene is just brimming with life. And if we can just punch a single text prompt into Sora and it'll give us full-on video that's visually convincing to the point that some people could mistake it for something real, well, you could imagine some of the problems that might stem from that. So we're at a turning point: not only have we shattered the visual Turing test, we're reshattering it every day. Images, audio, video, 3D, the list goes on.

I mean, you've probably seen the headlines: AI-generated nude photographs of Taylor Swift circulating on Twitter. A generated video of Volodymyr Zelensky surrendering to the Russian army. A fraudster successfully impersonating a CFO on a video call to scam a Hong Kong company out of tens of millions of dollars. And as bad as the hoaxes and the fakes and the scams are, there's a more insidious danger: what if we stop believing anything we see? Think about that. Think about a future where you don't believe the news, you don't trust the video evidence you see in court, you're not even sure that the person on the other end of the Zoom call is real. This isn't some far-flung future; in fact, I'd argue we're living in it now. So given that we're in this new world where we're constantly shattering and reshattering the visual Turing test, how do we protect our own sense of reality? I reached out to Sam Gregory to talk me through what we're up against. Sam is an expert on generative AI and misinformation, and is the executive director of the human rights network WITNESS. His organization has been working with journalists, human rights advocates, and technologists to come up with solutions that help us separate the real from the fake.

Sam, thank you for joining us. I have to ask you: as we're seeing these AI tools proliferate, just over the last two years, are we correspondingly seeing a massive uptick of these visual hoaxes?

The vast majority are still these shallow fakes, because anyone can make a shallow fake. It's trivially easy, right, just to take an image, grab it out of Google Search, and claim it's from another place. What we're seeing, though, is this uptick that's happening in a whole range of ways. People are using this generative media for deception. So you see images sometimes deliberately shared to deceive people, right? Someone will share an image claiming it's, you know, of an event that never happened. And then we're seeing a lot of audio, 'cause it's so trivially easy to make, right? A few seconds of your voice and you can churn out endless cloned voice. We're not seeing so much video, right? And that's a reflection that really doing complex video recreation is still not quite there.

Yeah, video is significantly harder, at least for the moment, and I personally hope that it would stay pretty hard for a while, though some of these generations are getting absolutely wild. I had a bit of an existential moment looking at this one video from Sora. It's the underwater diver video. For anyone who hasn't seen this, there's a diver swimming underwater, investigating this historic, almost archaeological spaceship that's crashed into the seabed, and it looked absolutely real. And I was thinking through what that would have taken for me to do the old-fashioned way, and I was just gasping that this was just a simple prompt that produced this immaculate one-minute video. I'm kind of curious, have you had such a moment yourself?

It's funny, because I was literally showing that video to my colleagues, and I didn't cue them up that it was made with Sora, because I wanted to see whether they clicked that it was an AI-generated video, because I think it's a fascinating one. It's kind of on the edge of possibility. There's definitely a kind of a moment that's happening now for me, and it's really interesting, 'cause we first started working on this like five or six years ago, and we were just doing what we described as "prepare, don't panic," really trying to puncture people's hype, particularly around video deepfakes, because people kept implying that they were really easy to do and that we were surrounded by them, and the reality was it wasn't easy to fake convincing video, and to do that at scale. So certainly for me, Sora has been a click moment in terms of the possibility here, even though it feels like a black box, and I'm not quite sure how they've done it, how accessible this is actually going to be, and how quickly.

So related to this, a lot of these visual hoaxes tend to be whimsical, even innocuous, right? In other words, they don't cause serious harm in the real world and are almost akin to pranks. But some of these visual hoaxes can be a lot more serious. Can you tell me a little bit about what you're seeing out there?

The most interesting examples right now are happening in election contexts globally, and they're typically people having words put in their mouths. In the recent elections in Pakistan, in Bangladesh, you had candidates saying "boycott the vote" or "vote for the other party," right? And they're quite compelling at a first glance, particularly if you're not very familiar with how AI can be used, and they're often deployed right before an election. So those are clearly, in most cases, malicious; they're designed to deceive. And then you're also seeing ones that are kind of these leaked-conversation ones, so they're not visual hoaxes. So you've got really quite deceptive uses happening there, either directly just with audio, or at the intersection of audio with animated faces, or audio with the ability to make lip sync with a video.

If I wanted to ask you to zoom in on one single example that's disturbed you the most, something that exemplifies what you are the most worried about, what would it be?

I'm going to pick one that is actually a whole genre, and I'm going to describe this genre because I think it's the one that people are familiar with, but once you start to think about it, you realize how easy it is to do this. And that is: pretty much everyone has seen Elon Musk selling a crypto scam, right? Often paired up with a newscaster, your favorite newscaster or your favorite political figure. In every country in which I work, people have experienced that. They've seen that video where the newscaster says, "Hey Elon, come on and explain how you follow this new crypto scam," or "Come on, political candidate, and explain why you're investing in this crypto scam."

For anyone who hasn't seen it, these are just videos with a deepfake Elon Musk trying to guilt you into buying crypto as a part of their Bitcoin giveaway program.

And so the reason I point to that is not because it has massive human rights impacts or massive news impacts, but because it's so commodified. But we have this bigger question of how it plays into our overarching understanding of what we trust, right? Does this undermine people's confidence in almost any way in which they experience audio or video or photos that they encounter online? Does it just reinforce what people want to believe, and for other people, just let them believe that nothing can be trusted? [Music]

We're going to take a quick break. When we come back, we're going to talk with Sam about how we can train ourselves to better distinguish the real from the unreal, using a little system he calls SIFT. More on that in just a minute. [Music]

We're back with Sam Gregory of WITNESS. Before the break, we were talking about how these fake videos are starting to erode our trust in everything we see. And yeah, maybe you can find flaws in a lot of these videos, but some of them are really, really good, and nobody's zooming in at 300% looking for those minor imperfections, especially when they're scrolling through a feed, right? Like before their morning commute or something.

Yeah, and you're hitting on the thing where I think the news media has often done a disservice to people about how to think about spotting AI, right? We put such an emphasis on, "you should have spotted the pope had his ring finger on the wrong hand in that puffer jacket image," or "didn't you see that his hair didn't look quite right on the hairline," or "didn't you see he didn't blink at the regular rate?" And it's just so cruel, almost, to us as consumers, to expect us to spot those things. We don't do it. I don't look at every TikTok video in my For You page and go, let me just look at this really carefully and make sure someone's not trying to deceive me. And so we've done a disservice often, because people point out these glitches and then they expect people to spot them, and it creates this whole culture where we distrust everything we look at, and we try and apply this sort of personal forensic skepticism, and it doesn't lead us to great places.

All right, I want to talk about mitigation. How do we prepare, and what can we do right now?

When we first started saying "prepare, don't panic," it was five or six years ago, and it was in the first deepfakes hype cycle, which was like the 2018 elections, when everyone was saying deepfakes are going to destroy the elections, and I don't think there was a single deepfake in the 2018 US elections of any note. Now let's fast-forward to now, right, 2024. When we look around the world, the threat is clear and present now, and it's escalating. So "prepare" is about acting, listening to the right voices, and thinking about how we balance out creativity, expression, human rights, and doing that from a global perspective, because so much of this conversation is often also very US- or Europe-centric.

So what can we do now? The first part of it is: who are we listening to about this? I often get frustrated in AI conversations. You get this very abstract discussion around AI harms and AI safety, and it feels very different from the conversation I'm having with journalists and human rights defenders on the ground, who are saying, "I got targeted with a non-consensual sexual deepfake," "I got my footage dismissed as faked by a politician because he said it could have been made by AI." So as we prepare, the first thing is: who do we listen to? And we should listen to the people who actually are experiencing this.

Then we need to think: what do we need to help people understand about how AI is being used? This kind of question of the recipe. And I use the recipe analogy because I think we're not in a world where it's AI or not. Even in the photos we take on our iPhones, we're already combining AI and human, right? The human input, then the AI modifications that make our photos look better. So we need to think: how do we communicate that AI was used in the media we make? We need to show people how AI and human were involved in the creation of a piece of media, how it was edited, and how it's distributed.

The second part of it is around access to detection. The thing that we've seen is there's a huge gap in access to the detection tools for the people who need them most, like journalists and election officials and human rights defenders globally. So they're kind of stuck. They get this piece of video or an image, and they are doing the same things that we're encouraging ordinary people to do: look for the glitches, take a guess, drop it in an online detector. And all of those things are as likely to give a false positive or a false negative as they are to give a reliable result that you can explain.

So you've got those two things: an absence of transparency explaining the recipe, and gaps in access to detection. And neither of those will work well unless the whole of the AI pipeline plays its part in making sure the signals of that authenticity, and the ability to detect, are retained all the way through. So those are the three key things that we point to: transparency done right, detection available to those who need it most, and the importance of having an AI pipeline where the responsibility is shared across the whole AI industry.

I think you covered three questions beautifully right there. So a key challenge is telling what content is generated by humans versus synthetically generated by machines, and one of the efforts you're involved in is the appropriately named Content Authenticity Initiative. Could you talk a bit about how that plays into a world where we will have fake content purporting to be real?

Yes. About five years ago, there were a couple of initiatives founded by a mix of companies and media entities, and WITNESS joined those early on to see how we could bring a human rights voice to them. One of them was something called the Content Authenticity Initiative, which Adobe kicked off, and another was something called the Coalition for Content Provenance and Authenticity; the shorthand for that is C2PA. So let me explain a little more about what C2PA is. It's basically a technical standard for showing what we might describe as the provenance of an image or a video or another piece of media, and provenance is basically the trail of how it was created, right? This is a standard that's being increasingly adopted by platforms. In the last couple of months you've seen Google and Meta adopt it as a way they're going to show people how the media they encounter online, particularly AI-generated or edited media, was made. It's also a direction that governments are moving in.

Some key things that we point to around standards like the C2PA: the first is that they are not a foolproof way of showing whether something was made with AI or made by a human. What I mean by that is, they tell you information, but we know that people can remove that metadata, for example; they can strip out the metadata. And we also know that some people may not add it in, for a range of reasons. So we're creating a system that allows additional signals of trust, or additional pieces of information, but no single confirmation of authenticity or reality. I think it's really important that we be clear that this is in some sense a harm-reduction approach. It's a way to give people more information, but it's not going to be conclusive in a silver-bullet kind of way.

The second thing we need to think about with these is that we need to really make sure that this is about the how of how media was made, not the who of who made it. Otherwise we open a back door to surveillance; we open a back door to the ways this will be used to target and criminalize journalists and people who speak out against governments globally.

Beautifully said, especially on the last point. I noticed Tim Sweeney had some interesting remarks about all of the content authenticity initiatives happening; he's described it as sort of surveillance DRM, where you cannot upload a piece of content, right? Like, if people like you aren't pushing on this direction, we may well end up in a world where you cannot upload imagery onto the internet without having your identity tied to it, and I think that would be a scary world indeed.

The thing that we have consistently pushed back on in systems like C2PA is the idea that identity should be the center of how you're trusted online. It's helpful, right? And many times I want people to know who I am. But if we start to premise trust online on individual identity as the center, and require people to do that, that brings all kinds of risks that we already have a history of understanding from social media. That's not to say we shouldn't think about things like proof of personhood, right? Like, how do we understand that someone who created media was a human? That may be important as we enter an AI-generated world. But that's not the same as knowing that it was Sam who made it, not a generic human who made it. So I think that's really important.

It's a slippery slope indeed, and a really good point on the distinction between validating you're a human being versus validating you are Sam Gregory. It's a very subtle but crucial distinction. Let's move over to fears and hopes. Back in 2017 you felt the fear around deepfakes was overblown; clearly now it is far more of a clear and present danger. Where do you stand now? What are your hopes and fears at the moment?

So we've gone from a scenario in 2017 where the primary harm was the one that people didn't discuss, which was gender-based violence, and the harm everyone discussed, political usage, was non-existent, to a scenario now where the gender-based violence has got far worse, and targets everyone from public figures to teenagers in schools all around the world, and the political usage is now very real. And the third thing is, you have people realizing there's this incredibly good excuse for a piece of compromising media, which is just to say, "Hey, that was faked," or, "Plausibly, I can deny that piece of media by saying that it was faked." Those three are the core fears that I experience now, that have translated into reality.

In terms of hopes: I don't think we've acted sufficiently yet on those three core problems. We need to address those, and we need to make sure that we criminalize the ways in which people target primarily women with non-consensual sexual deepfakes, which are escalating. In the second area of fears, the fears around their misuse in politics and to undermine news footage and human rights content, I think that's where we need to lean into a lot of the approaches like the authenticity and provenance infrastructures like the C2PA, the access to detection tools for the journalists who need it most, and then smart laws that can help us rule out some usages, and make sure that it is clear that some uses are unacceptable. And then the third area, that's the hardest one, because we just don't have the research yet about what is the impact of this constant drip, drip, drip of "you can't believe what you see" and "here we can only reach an 84% probability that it's real or false," which is not great for public confidence. But we also don't know how this plays into this broader societal trust crisis we have, where already people want to lean into almost plausible believability on stuff they care about, or just plausibly ignore anything that challenges those beliefs.

I think you brought up a really good point; it's almost like the world is fracturing into the Multiverse of Madness, I like to call it, where people are looking for whatever validation will confirm their beliefs. At the same time, it can result in people being jaded, where they're just going to be detached: "Well, I don't trust anything." So I'm curious, how do you see consumer behaviors changing in this world where the visual Turing test gets shattered over and over again, for all sorts of different, more complex domains? Are people going to get savvier? What do you think is going to happen to society in such a world?

So we have to hope that we walk a fine line. We're going to need to be more skeptical of audio and images and video that we encounter online, but we're going to have to do that with a skepticism that's supported by signals that help us. What I mean by that is, if we enter a world where we're just like, "Hey everyone, everything could be faked, it's getting better every day, look out for the glitch," then we enter a world where people's skepticism quite rightly will accelerate, because all of us will experience being deceived on a daily basis. And I think it's very legitimate for us to then feel like we can't trust anything. In the ideal world, everyone's labeling what's real or fake, but when that's not happening, what do people do? I always go back to basic media literacy. I use an acronym that was invented by an academic called Mike Caulfield: SIFT. S stands for "stop," because it's basically stop before you're emotionally triggered, whenever you see something that's too good to be true. I stands for "investigate the source," which is: who shared this? Is it someone I should trust? The F stands for "find alternative coverage": did someone already write about this and say, "Wait, that's not the pope in a puffer jacket; in reality, that's an AI image"? And the fourth part, which is getting complicated, is T for "trace the original," which used to always be a great way of doing it in the shallow-fake era, because you'd find that an image had been recycled, but it's getting harder now. So when I look at the knife edge we've got to walk, it's to help people do SIFT in an environment that is structured to give them better signals that AI was used, where the law has set parameters about what is definitely not acceptable, and where all the players in that AI pipeline are playing their part to make sure that we can see the recipe of how AI and human were used, and that it's as easy as possible to detect when AI was used to manipulate or create a piece of imagery, audio, or video.

I really like SIFT. I think that's also very good advice for people when they come across something that is indeed too good to be true; very often we will be like, "Oh well, that's interesting," and go about our day. The devices we use every day aren't foolproof, right? They've got vulnerabilities. There is this game of whack-a-mole that happens with patching those vulnerabilities, and now we've got these cognitive vulnerabilities, almost. And on the detection side, the tools are going to need to keep improving, because people are going to find ways to use the detectors to create new generators that evade them, and so that game of whack-a-mole will continue. But that isn't to say that all hope is lost. We can adapt, and we can still have an information landscape where we can all thrive together. That's the future I want.

The way we describe it at WITNESS, we talk about fortifying the truth, which is that we need to find ways to defend that there is a reality out there.

Thank you so much, Sam. I will certainly sleep easier at night knowing there are people like you out there making sure we can tell the difference between the real and unreal. Thank you so much for joining us.

Sam Gregory and I had this conversation in mid-March, and a few days later there was another development: YouTube came out with a new rule. If you have AI-generated content in your video and it's not obvious, you have to disclose it's AI. This move from YouTube is an important one, the kind Sam and his colleagues at WITNESS have been advocating for. It shifts the onus onto creators and platforms, and away from everyday viewers, because ultimately it's unfair to make all of us become AI detectives, scrutinizing every video for that missing shadow or impossible physics, especially in a world where the visual Turing test is continually being shattered. And look, I'm not going to sugarcoat this: this is a huge problem, and it's going to be difficult for everyone. Folks like Sam Gregory have their work cut out for them, and massive organizations like TikTok, Google, and Meta do too. But listen, I'm going to be back here this week, and the week after that, and the week after that, helping you figure out how to navigate this new world order, how to live with AI, and yes, thrive with it too. We'll be talking to researchers, artists, journalists, academics who can help us demystify the technology as it evolves. Together we're going to figure out how to navigate AI before it navigates us. This is the TED AI Show. I hope you'll join us. [Music]

The TED AI Show is a part of the TED Audio Collective and is produced by TED with Cosmic Standard. Our producers are Ella Feder and Sarah McCrae. Our editors are B Ban Chang and Alejandra Salazar. Our showrunner is Ivana Tucker, and our associate producer is Ben Montoya. Our engineer is Asia Parar Simpson. Our technical director is Jacob Winnick, and our executive producer is Eliza Smith. Our fact checker is Christian Aparta. And I'm your host, Bilawal Sidhu. See y'all in the next one. [Music]
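Sam's point about C2PA provenance, and his caveat that the metadata can be stripped, can be made concrete with a small sketch. In JPEG files, C2PA manifests travel inside JUMBF boxes embedded in APP11 marker segments. The heuristic below only checks whether such a segment appears to be present; it does not verify cryptographic signatures or parse the manifest (real verification would use a C2PA SDK), and the function name and fake-JPEG example are illustrative assumptions, not part of the standard.

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic: scan a JPEG's APP11 segments for JUMBF/C2PA signatures.

    Returns True if an APP11 (0xFFEB) segment containing a 'jumb' or
    'c2pa' byte string is found. Absence of the marker proves nothing:
    as noted in the episode, provenance metadata can be stripped.
    """
    if not data.startswith(b"\xff\xd8"):  # SOI: not a JPEG at all
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # reached entropy-coded image data; stop scanning headers
        marker = data[i + 1]
        if marker == 0xD9:  # EOI
            break
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            i += 2  # standalone markers carry no length field
            continue
        # Segment length is big-endian and includes its own two bytes.
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + seg_len]
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True  # APP11 segment carrying a JUMBF box
        i += 2 + seg_len
    return False
```

A quick synthetic check: a minimal fake "JPEG" built from an SOI marker, one APP11 segment whose payload contains `jumb`, and an EOI marker makes the function return True, while a bare SOI+EOI file returns False. This detects presence only; trusting the manifest's contents requires signature validation against the C2PA trust list, which is exactly the pipeline-wide responsibility Sam describes.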

Related conversations

AXRP

3 Jan 2026

David Rein on METR Time Horizons

This conversation examines core safety through David Rein on METR Time Horizons, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Spectrum + transcript · tap

Slice bands

Spectrum trail (transcript)

Med 0 · avg -0 · 108 segs

AXRP

7 Aug 2025

Tom Davidson on AI-enabled Coups

This conversation examines core safety through Tom Davidson on AI-enabled Coups, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Spectrum + transcript · tap

Slice bands

Spectrum trail (transcript)

Med 0 · avg -5 · 133 segs

AXRP

6 Jul 2025

Samuel Albanie on DeepMind's AGI Safety Approach

This conversation examines core safety through Samuel Albanie on DeepMind's AGI Safety Approach, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Spectrum + transcript · tap

Slice bands

Spectrum trail (transcript)

Med 0 · avg -4 · 72 segs

AXRP

1 Dec 2024

Evan Hubinger on Model Organisms of Misalignment

This conversation examines technical alignment through Evan Hubinger on Model Organisms of Misalignment, surfacing the assumptions, failure paths, and strategic choices that matter most for real-world deployment.

Same shelf or editorial thread

Spectrum + transcript · tap

Slice bands

Spectrum trail (transcript)

Med -6 · avg -7 · 120 segs

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.