I have a lot of conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier especially is difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?
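
If a hands-on demo helps, here is the idea taken to a deliberately silly extreme: a tiny bigram chain in Python that strings words together purely from “which word followed which” statistics. The corpus and names are made up for illustration, and this is nothing like a real transformer, but it makes the core point concrete: fluent-looking text can come out of a process that has no representation of meaning or intent anywhere.

```python
import random
from collections import defaultdict

# Toy "language model": it only learns which word tends to follow which.
# A bigram chain is vastly simpler than a real LLM, but it illustrates
# the point: text is generated from statistics of word sequences,
# with no model of meaning or intent anywhere in the code.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# For every word, record which words followed it in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start="the", length=12):
    """Produce a 'sentence' by repeatedly picking an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:          # dead end: no observed continuation
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the dog chased the cat sat on the mat . the cat"
```

The output often reads like plausible English, yet nothing in the program “knows” what a cat or a mat is. Real LLMs are enormously more sophisticated next-token predictors, but the “convincing without understanding” point carries over.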

  • Hegar@kbin.social · 8 months ago

    Part of the problem is hyperactive agency detection - the same biological bug/feature that fuels belief in the divine.

    If a twig snaps, it could be nothing or it could be someone. If it’s nothing and we react as if it was someone, no biggie. If it was someone and we react as if it was nothing, potential biggie. So our brains are biased towards assuming agency where there is none, to keep us alive.