TED Talks · Civilisational risk and strategy · Spotlight · Released: 18 Feb 2026

What you know that AI doesn’t | Priyanka Vergadia

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from TED Talks. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
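As a rough illustration of that tinting, a slice score can be mapped onto the amber → cyan → white strip by linear interpolation between the nearest anchor colours. This is a minimal sketch: the score range (−100 to +100), the hex values for amber and cyan, and the `tint` helper name are all assumptions, not the site's actual palette or scale.

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB triples, t in 0..1."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

# Anchor colours for the strip (illustrative values, not the site's palette).
AMBER = (255, 191, 0)    # most risk-forward end
CYAN = (0, 188, 212)     # mixed midpoint
WHITE = (255, 255, 255)  # most opportunity-forward end

def tint(score, lo=-100, hi=100):
    """Map a slice score onto the amber -> cyan -> white strip."""
    t = (score - lo) / (hi - lo)   # normalise to 0..1
    t = min(max(t, 0.0), 1.0)      # clamp out-of-range scores
    if t < 0.5:
        return lerp(AMBER, CYAN, t * 2)        # risk-forward half
    return lerp(CYAN, WHITE, (t - 0.5) * 2)    # opportunity-forward half
```

A score at the lower bound returns pure amber, the midpoint returns cyan, and the upper bound returns white; anything outside the range is clamped to the nearest end.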


Across 6 full-transcript segments: median 0 · mean −2 · spread −24–9 (p10–p90 0–9) · 17% risk-forward, 83% mixed, 0% opportunity-forward slices.
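The headline figures above can be reproduced with a small summary helper. A minimal sketch: the per-slice scores below are hypothetical values chosen only to illustrate the shape of the calculation, and the ±10 "mixed" band threshold and nearest-rank percentile method are assumptions (the page does not publish either).

```python
import statistics

def summarise(scores, band=10):
    """Summary stats for slice scores; |score| <= band counts as 'mixed'
    (the band threshold is an assumption for illustration)."""
    n = len(scores)
    srt = sorted(scores)
    # Nearest-rank percentiles, adequate for a handful of slices.
    p10 = srt[max(0, round(0.10 * (n - 1)))]
    p90 = srt[min(n - 1, round(0.90 * (n - 1)))]
    return {
        "median": statistics.median(scores),
        "mean": statistics.fmean(scores),
        "p10_p90": (p10, p90),
        "risk_forward": sum(s < -band for s in scores) / n,
        "mixed": sum(-band <= s <= band for s in scores) / n,
        "opportunity_forward": sum(s > band for s in scores) / n,
    }

# Hypothetical slice scores: one strongly risk-forward slice among
# five near-neutral ones, roughly matching the headline proportions.
stats = summarise([-24, -1, 0, 0, 5, 9])
```

With these invented scores, the median is 0, the mean rounds to −2, and exactly one of six slices (17%) falls below the risk-forward threshold, with the rest mixed.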

Slice bands
6 slices · p10–p90 0–9

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 6 sequential slices (median slice 0)

Editor note

Auto-ingested from daily feed check. Review for editorial curation under intake methodology.

ai-safety · ted-talks

Play on sAIfe Hands

On-site playback is enabled when an episode-level media URL is connected. This entry currently has only a show-level source URL, not an episode-level media URL, so it points to a source page.

Episode transcript

YouTube captions (TED associates this talk with a public YouTube mirror) · video eR3IsKVLorg · stored Apr 10, 2026 · 147 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/what-you-know-that-ai-doesnt-priyanka-vergadia.json when you have a listen-based summary.

Well, 71 percent of Americans believe that AI will cause massive job losses. Algorithms are getting smarter, faster, more capable every single day. My work puts me at the heart of this anxiety, where I bring AI applications to market for big tech companies and I help customers and businesses really take the potential of this technology further for their businesses. And through it all, I have seen brilliant professionals second-guess themselves as AI gets smarter. But let me tell you this one fundamental truth about AI. AI is excelling at identifying patterns. It understands data. We humans excel at understanding what these patterns actually mean in this beautifully chaotic world of human behavior. And even as these models and algorithms get stronger over time, this will stay true. Why? Because we understand things that cannot be quantified. Context, intent, unspoken emotions, cultural nuances. This depth of understanding comes from lived experiences that AI cannot replicate. So today I'll share with you three stories from my experience to prove this point that AI understands data and we understand experiences. And the key here is to not compete with AI, but to work with it while staying irreplaceably human. So how do we do that? Well, I was recently at a conference and met Sarah, a product manager. Her team has built an AI-powered analytics dashboard that's telling them very clearly that 80 percent of their users are only using basic features, and 20 percent are using advanced features here and there. Now Sarah looks at this data and she's like, OK, logically it makes sense. But she's questioning it. And this is the part I really love. She didn't just trust the algorithm as-is. She picked up the phone and called their 20 clients that were their top clients and asked them why they're not using these advanced features. 
Not to her surprise, she finds that they actually want to use these features, but they cannot find them because they are buried in some menu options, and the documentation isn't clear as well. Now, AI identified the pattern: that people are not using advanced features, but it totally missed the why behind it. Sarah's team goes in, rebuilds the entire experience, makes these features easier to find, and a few months later, the advanced feature adoption skyrockets. AI saw the symptom. Sarah diagnosed the disease. Now, the lesson that we take away from this example is clear. We've got to question the question. When AI recommends something, we need to ask why? If we continue to do that, we will be successful. On another occasion, I was working with a customer, Marcus, who is increasing sales efficiency using AI tools for their sales teams, analyzing the data through emails and engagement. And their AI tool is telling them that one of the biggest deals they have has a 95 percent probability to close. This was looking amazing. The data was saying positive sentiment, lots of engagement, but Marcus wanted to dig deeper and make sure that the deal happens. When he looks at the human element of this deal, he finds that ... Not the same people are showing up to these meetings. It's different stakeholders every time, and the responses in the emails have gotten vague and more corporate. AI is reading all of this activity as engagement. But really, there's something else going on behind the scenes. He dug a little further and identifies that the customer is going through a restructuring. And three teams thought that they owned the decision to make this purchase. If Marcus didn't get into this human element of the deal, the deal would never happen. AI identified the activities. Marcus measured meaning in those activities. So the lesson to learn from this story is you need to read the room, not just the dashboard. 
Understand those micro-expressions, the social cues in the room, the what are people saying, how are they nodding. We've all been in meetings where somebody says, "That's interesting." Are they politely dismissive or genuinely curious? Well, our emotional radar knows that. AI doesn't. I was with a friend recently, her name is Priya, and she works to use social media as a platform to help brands grow their revenue. Her AI tool is telling her to post fashion-hack videos, those videos where you get a lot of fashion tips out, for one of the brands. And she did that and they saw great engagement, lots of follower growth. But when talking to the team, they identified that none of that follower growth and engagement on social media was leading to sales or revenue. They were building the wrong audience. They were attracting bargain hunters, that was exactly opposite of the person who would pay 200 dollars to buy an ethically made jacket. This was what this brand makes. Now AI was optimizing for followers and engagement. Priya knew they were making the wrong audience, so she flips the switch. She stops taking AI-recommended content, instead, starts building content that is showing sustainable cost of building these fashion items. She started showing stories of artisans that were making these clothes. Now AI in this case was optimizing for activity and engagement. Priya optimized for building a community. And they started seeing the sales skyrocket. So the lesson that we learn here is always pause and ask, what is the story behind this data? And only we can do that. So if you see all these examples, there's one thing very common. The future doesn't belong to humans or AI. It belongs to humans that work closely with AI while staying irreplaceably human. Our ability to read the room, our ability to look at emotions, that is irreplaceable. Our ability to empathize with people, that's irreplaceable. So the next time ... 
You're feeling anxious about AI taking your job, remember that AI can identify patterns. Only we, and you can identify the human behind it. Thank you. (Applause)

Counterbalance on this topic

Ranked using the mirror rule described in the methodology: picks sit closer to the opposite side of your score on the same axis, with lens alignment preferred. Each card plots you and the pick together.
