TED Talks · Civilisational risk and strategy · Spotlight · Released: 18 Dec 2025

Why are people starting to sound like ChatGPT? | Adam Aleksic

Why this matters

Auto-discovered candidate. Editorial positioning to be finalized.

Summary

Auto-discovered from TED Talks. Editorial summary pending review.

Perspective map

Mixed · Governance · Medium confidence · Transcript-informed

The amber marker shows the most Risk-forward score. The white marker shows the most Opportunity-forward score. The black marker shows the median perspective for this library item.

An explanation of the Perspective Map framework is set out in the methodology.

Episode arc by segment

Early → late · height = spectrum position · colour = band

Risk-forward · Mixed · Opportunity-forward

Each bar is tinted by where its score sits on the same strip as above (amber → cyan midpoint → white). Same lexicon as the headline. Bars are evenly spaced in transcript order (not clock time).
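As a sketch of how such tinting could work (the score range and hex colours below are illustrative assumptions, not taken from the site), a slice score can be clamped and linearly interpolated across an amber → cyan → white strip:

```python
def lerp(a, b, t):
    """Linear interpolation between two RGB tuples, t in [0, 1]."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

def tint_for_score(score, lo=-100, hi=100):
    """Map a perspective score to a colour on an assumed
    amber -> cyan -> white strip (palette values are illustrative)."""
    amber, cyan, white = (255, 191, 0), (0, 255, 255), (255, 255, 255)
    t = (max(lo, min(hi, score)) - lo) / (hi - lo)  # clamp, normalise to [0, 1]
    if t < 0.5:
        return lerp(amber, cyan, t * 2)        # risk-forward half of the strip
    return lerp(cyan, white, (t - 0.5) * 2)    # opportunity-forward half
```

With this sketch, the most risk-forward score lands on amber, a perfectly mixed score on cyan, and the most opportunity-forward score on white, matching the legend above.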

Start → End

Across 6 full-transcript segments: median 0 · mean -1 · spread -80 (p10–p90 00) · 0% risk-forward, 100% mixed, 0% opportunity-forward slices.
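The summary statistics above can be reproduced from the per-slice scores. The helper below is a hypothetical sketch (the all-zero score list is illustrative); it uses Python's `statistics.quantiles` with `n=10` to obtain the p10 and p90 values:

```python
import statistics

def slice_summary(scores):
    """Summarise per-slice perspective scores: median, mean,
    and the width of the p10-p90 band."""
    qs = statistics.quantiles(scores, n=10, method="inclusive")
    p10, p90 = qs[0], qs[-1]
    return {
        "median": statistics.median(scores),
        "mean": statistics.fmean(scores),
        "spread": p90 - p10,  # width of the central p10-p90 band
    }
```

For six identical slice scores this yields a median, mean, and spread of zero, consistent with a uniformly mixed episode.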

Slice bands
6 slices · p10–p90 00

Mixed leaning, primarily in the Governance lens. Evidence mode: interview. Confidence: medium.

  • Emphasizes governance
  • Emphasizes safety
  • Full transcript scored in 6 sequential slices (median slice 0).
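Sequential slicing of a transcript can be sketched as follows (a hypothetical helper, assuming the 113 caption segments noted below are split in order into roughly equal slices):

```python
def sequential_slices(segments, n=6):
    """Split an ordered list of caption segments into n roughly
    equal sequential slices, preserving transcript order."""
    k, r = divmod(len(segments), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)  # first r slices take one extra segment
        out.append(segments[start:end])
        start = end
    return out
```

Applied to 113 segments, this would give five slices of 19 segments and one of 18, with no segment dropped or reordered.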

Editor note

Auto-ingested from daily feed check. Review for editorial curation under intake methodology.

ai-safety · ted-talks

Play on sAIfe Hands

On-site playback is enabled when an episode-level media URL is connected. This entry currently has a show-level source URL, not an episode-level media URL.

Episode transcript

YouTube captions (TED associates this talk with a public YouTube mirror) · video ZkXrTHpnQrQ · stored Apr 10, 2026 · 113 caption segments

Captions are an imperfect primary: they can mis-hear names and technical terms. Use them alongside the audio and publisher materials when verifying claims.

No editorial assessment file yet. Add content/resources/transcript-assessments/why-are-people-starting-to-sound-like-chatgpt-adam-aleksic.json when you have a listen-based summary.

Full transcript
How sure are you that you can tell what's real online? (Laughter) You might think it's easy to spot an obviously AI-generated image, and you're probably aware that algorithms are biased in some way. But all the evidence is suggesting that we're pretty bad at understanding that on a subconscious level. Take, for example, the growing perception gap in America. We keep overestimating how extreme other people's political beliefs are, and this is only getting worse with social media, because algorithms show us the most extreme picture of reality. As an etymologist and content creator, I always see controversial messages go more viral because they generate more engagement than a neutral perspective. But that means we all end up seeing this more extreme version of reality, and we're clearly starting to confuse that with actual reality. The same thing is currently happening with AI chatbots, because you probably assume that ChatGPT is speaking English to you, except it's not speaking English, in the same way that the algorithm's not showing you reality. There are always distortions, depending on what goes into the model and how it's trained. Like we know that ChatGPT says "delve" at way higher rates than usual, possibly because OpenAI outsourced its training process to workers in Nigeria who do actually say "delve" more frequently. Over time, though, that little linguistic overrepresentation got reinforced into the model even more than in the workers' own dialects. Now that's affecting everybody's language. Multiple studies have found that, since ChatGPT came out, people everywhere have been saying the word "delve" more in spontaneous spoken conversation. Essentially, we're subconsciously confusing the AI version of language with actual language. But that means that the real thing is, ironically, getting closer to the machine version of the thing.
We're in a positive feedback loop with the AI representing reality, us thinking that's the real reality, and regurgitating it so the AI can be fed more of our data. You can also see this with the algorithm through words like "hyperpop," [not a] part of our cultural lexicon until Spotify noticed an emerging cluster of similar users in their algorithm. [When] they identified it and introduced a hyperpop playlist, however, the aesthetic was given a direction. Now people began to debate what did and did not qualify as hyperpop. The label and the playlist made the phenomenon more real by giving them something to identify with or against. And as more people identified with hyperpop, more musicians also started making hyperpop music. All the while, the cluster of similar listeners in the algorithm grew larger, and Spotify kept pushing it more, because these platforms want to amplify cultural trends to keep you on the app. But that means we also lose the distinction between a real trend and an artificially inflated trend. And yet, this is how all fads now enter the mainstream. We start with a latent cultural desire. Maybe some people are interested in matcha, Labubu or Dubai chocolate. The algorithm identifies this desire and pushes it to similar users, making the phenomenon more of a thing. But again, just like how ChatGPT misrepresented the word "delve," the algorithm is probably misrepresenting reality. Now more businesses are making Labubu content because they think that's the desire. More influencers are also making Labubu trends because we have to tap into trends to go viral. And yet, the algorithm is only showing you the visually provocative items that work in the video format. TikTok has a limited idea of who you are as a user, and there's no way that matches up with your complex desires as a human being. So we have a biased input. And that's assuming that social media is trying to faithfully represent reality, which it isn't. 
It's only trying to do what's going to make money for them. It's in Spotify's interest to have you listening to hyperpop, and it's in TikTok's to have you looking at Labubus, because that's commodifiable. So again, we have this difference between reality and representation, where they're actually constantly influencing one another. But it's incredibly dangerous to ignore that distinction, because this goes beyond our language and consumptive behaviors. This affects the world we see as possible. Evidence suggests that ChatGPT is more conservative when speaking the Farsi language, likely because the limited training texts from Iran reflect the more conservative political climate in the region. Does that mean that Iranian ChatGPT users will think more conservative thoughts? Elon Musk regularly makes changes to his chatbot Grok when he doesn't like how it's responding, and then uses his platform X to artificially amplify his tweets. Does that mean that the millions of Grok and X users are subconsciously being trained to align with Musk's ideology? We need to constantly remember that these aren't neutral tools. Everything that ends up in your social media feed or in your chatbot responses is actually filtered through many layers of what's good for the platform, what makes money and what conforms to the platform's incorrect idea about who you are. When we ignore this, we view reality through a constant survivorship bias, which affects our understanding of the world. After all, if you're talking more like ChatGPT, you're probably thinking more like ChatGPT as well, or TikTok or Spotify. But you can fight this if you constantly ask yourself: Why? Why am I seeing this? Why am I saying this? Why am I thinking this? And why is the platform rewarding this? If you don't ask yourself these questions, their version of reality is going to become your version of reality. So stay real. (Cheers and applause)

Counterbalance on this topic

Ranked with the mirror rule in the methodology: picks sit closer to the opposite side of your score on the same axis (lens alignment preferred). Each card plots you and the pick together.
