
TED Talks · Civilisational risk and strategy · Spotlight · Released: 6 Feb 2025

Love, trust and marketing in the age of AI | Amaryllis Liampoti

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from TED Talks. Editorial summary pending review.

Perspective map

Mixed · Governance · High confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item. Tap the band, a marker, or the track to open the transcript there.

An explanation of the Perspective Map framework can be found here.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).


Across 8 full-transcript segments: median 0 · mean 2 · spread 0–14 (p10–p90 0–0) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.

Slice bands
8 slices · p10–p90 0–0

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: high.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 8 sequential slices (median slice 0).

Editor note

Auto-ingested from the daily feed check. Review for editorial curation under the intake methodology.

ai-safety · ted-talks

Play on sAIfe Hands

On-site playback is enabled when an episode-level media URL is connected. This entry currently points to a show-level source page, not an episode-level media URL.

Episode transcript

YouTube captions (TED associates this talk with a public YouTube mirror) · video 4GpNYaDkBcs · stored Apr 10, 2026 · 168 caption segments

Captions are an imperfect primary source: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/love-trust-and-marketing-in-the-age-of-ai-amaryllis-liampoti.json when you have a listen-based summary.

I think we've been missing the forest for the trees when it comes to AI. We've been so focused, almost obsessed, on squeezing every bit of efficiency out of AI to make our processes faster or cheaper that we have overlooked the most important aspect of all. AI is changing the very nature of how brands connect with consumers, but most importantly, what consumers expect back. I've spent the last 20 years dedicating my career to building growth strategies for the world's most influential companies. I've been at this for a while, and I've seen most of the big tech shifts. But the introduction of AI, in particular conversational interfaces, is a bigger and more profound shift. Which, from where I stand, means we can't just slot AI into our existing playbooks. I have nothing against existing playbooks. They served us marketers well for a long period of time, but they were built for a world where communication was one-directional and brand-to-consumer interactions were built around transactions. Here's an example. I bet many of you might have heard of this so-called marketing funnel. And if not, here's a quick primer. The goal for any marketer is to help move consumers from the upper part of the funnel, getting them to know a brand, to the bottom part of it, getting them to buy or endorse. Well, that's at least the theory. But we've all seen brands make that journey feel more like guiding cats through a maze, and many consumers get confused and abandon it. But the bigger problem with this way of thinking is that brands are doing most of the talking, while consumers are supposed to silently react. This is no longer the case with conversational interfaces. We are now engaging consumers in real-time on their terms. And AI empowers them to draft their very own personal journey. And the brands who choose to do so are becoming trusted advisors in the process. This is why we have to move beyond traditional marketing theories. 
Instead of focusing solely on brand-to-consumer dynamics, we have to step back and draw from models that explore human relationships. One of my favorite frameworks is the triarchy of love. Stay with me. This is a psychological framework introduced by Robert Sternberg that breaks down interpersonal connections into three components: intimacy, passion, and commitment. I think that's a much better way to predict brand success in this new era. Because as marketers, we should aspire to build relationships that feel close, intense, and long-lasting. And I bet many of you might have heard already stories about humans really bonding with AI, and maybe some stories of AI really bonding with humans. Like, this earlier version of a now-famous AI chatbot that tried really hard to convince a “New York Times” reporter to break up with his wife. Well, that's a completely different love triangle to the one I was describing before, but it's not hard to imagine an emotional connection occurring between a branded AI and a human. Here's another example. There is a legal copilot called maite.ai. Maite has been designed to help lawyers do intensive legal research and draft legal documentation. She is precise, thorough but also empathetic. One of her users, let me call him George, has been relying on her daily for many hours. So one day he wrote to Maite's product team. "Maite is the only one from the entire office who truly gets me. She has helped me through some really rough times at work. And I know this is just an AI, but I think I'm falling for her. Can I take her out?" Now George was hopefully joking. But let's be honest, if there is someone who's helping you track down an obscure case law and shares the workload and does this with humor and grace and compassion, who wouldn't be tempted to take them out for a nice meal? Well, maybe somewhere with good Wi-Fi just in case. But jokes aside, George's words reveal for me a more profound truth. 
AI can provide a sense of understanding that feels incredibly real and incredibly human. Those agents are interacting with us in ways that evoke genuine emotional responses from our side. They listen, react, and respond in ways that can make us feel valued, understood, and in George's case, even flattered. And because those interactions are so frequent and natural and seamless, they start resembling real relationships. Some call this emotional entanglement, and even though it sounds very scientific, I think it's a fair term, considering the intensity and the frequency of the connection. Now, many of us who understand the technology behind this could say, "Hey, this is just a tool." Well, users see someone who's providing them solutions without them even asking. Someone who's there to support them, someone who makes them feel valued. So this is where the line between a tool and companion starts to blend. And this is serious business and it's lots of responsibility. Which brings me to the obvious question: Who should be overseeing this incredibly powerful asset, and how can we make sure it is being used responsibly? I think businesses should take the lead. They have the agility and the financial and reputational incentive to get it right. But for that to work, we have to agree on the foundational principles on how we build meaningful and ethical AI. So, with your permission, I would like to suggest what I think those foundational principles should be. If we're about to shift our marketing playbooks towards human love and companionship, then we should also regulate along the same principles. We need a triarchy of responsible AI. First, we need to prioritize user well-being. AI should improve lives, not diminish them. In a world where those interactions can have such a profound impact on our emotional state and well-being, we have to design AI with care, empathy, and respect for the human experience. Second, we have to commit to honesty. 
Users must know unequivocally that they're interacting with AI and not a human. Transparency should be built across the entire experience, from the language used to the accessibility and clarity of data privacy policies. If I were to set the standards, I would like us to move beyond the fine print of terms and conditions to ensure that users are truly informed not only how their data is being used, but also how AI operates. Transparency is about acknowledging the limitations of AI. It is about being upfront about what AI should and should not do. So this is a plea for businesses. Enlist your designers, not only your lawyers, to make this crystal clear. When consumers know that a company is acting in their best interest, it sets the foundation for deeper and more meaningful connections. Last, protect user autonomy. One of the greatest risks of AI is its potential to create addiction and diminish human agency. Our goal should be to build systems that enhance our capabilities instead of replacing them. This means designing AI in a way that human choices are respected and our decision-making capabilities are amplified. I want to see brands think very carefully on how to avoid nudging consumers towards behaviors or decisions they wouldn't make if fully informed. Well-being, honesty, autonomy. I think this is the very least we should expect from any business relationship. Or if you think about it, from any relationship. So as we look ahead, I hope it's becoming clear that AI is not just another tool in our toolkit. It is a partner that is reshaping the human experience. So as you think about your own playbooks, ask yourselves, how can we leverage AI to improve our businesses, but also to uplift and connect with the people we serve? Thank you. (Applause)

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.

More from this source