I saw people complaining that companies have yet to find the next big thing with AI, but I'm already seeing countless products offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?

I'm not great at understanding the business side of this, and I've been out of the news for a long time, so I would really appreciate it if someone could ELI5.

  • BlameThePeacock@lemmy.ca · 7 months ago

    That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.

    • slazer2au@lemmy.world · 7 months ago

      Until a product goes through marketing and they slap 'Using AI' into the blurb even when it doesn't use any.

      • agamemnonymous@sh.itjust.works · 7 months ago

        LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.

        People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.

        • 0x30507DE@lemmy.today · 7 months ago

          People can also stop talking and think for a second about what they're actually saying, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you'd stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it's been trained on, without verifying that the information is good (or even relevant), and it can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5, because that's what the data said, so it has to be right.
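
          As a toy sketch of that failure mode (a made-up bigram lookup, nowhere near how a real transformer works; the corpus and pick_next() here are invented for illustration):

          ```python
          from collections import Counter

          # Tiny "training data" in which the wrong statement about 2 + 2
          # happens to be the majority.
          corpus = "2 + 2 = 5 . 2 + 2 = 5 . 2 + 2 = 4 .".split()

          # Count which token tends to follow which.
          bigrams = Counter(zip(corpus, corpus[1:]))

          def pick_next(word):
              """Return the most common follower of `word` in the corpus."""
              candidates = {pair: n for pair, n in bigrams.items() if pair[0] == word}
              return max(candidates, key=candidates.get)[1]

          # Pattern-matched "answer": whatever most often followed "=" in training.
          print("pattern-matched:", pick_next("="))  # -> 5, because the data said so
          # Actually doing the math:
          print("computed:", 2 + 2)                  # -> 4
          ```

          Real models predict from learned probabilities over tokens rather than a lookup table, but the "most likely continuation wins" behavior is the same.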

          I’m aware this is a drastic oversimplification, and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it’s oversold to all hell, and is definitely not even close to intelligent.

          • agamemnonymous@sh.itjust.works · 7 months ago

            You haven’t really looked into multi-agent setups at all, have you? Basically any system of multiple agents can double-check itself.
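
            Roughly, the pattern looks like this (a sketch only; ask() is a hypothetical stand-in for whatever model call you use, and the prompts are invented):

            ```python
            def ask(prompt: str) -> str:
                """Hypothetical stand-in for a call to an LLM API or local model."""
                raise NotImplementedError("wire this up to your model of choice")

            def answer_with_check(question: str, max_retries: int = 2) -> str:
                draft = ask(f"Answer concisely: {question}")
                for _ in range(max_retries):
                    # A second agent (or the same model in a critic role) reviews the draft.
                    verdict = ask(
                        f"Question: {question}\nProposed answer: {draft}\n"
                        "Reply PASS if the answer is correct, otherwise explain the error."
                    )
                    if verdict.strip().startswith("PASS"):
                        return draft
                    # Feed the critique back so the generator can revise.
                    draft = ask(
                        f"Question: {question}\nPrevious answer: {draft}\n"
                        f"Critique: {verdict}\nGive a corrected answer."
                    )
                return draft
            ```

            Whether the critic is a second model or the same model in a reviewer role is a design choice; the point is that the first thing it vomits out isn't taken as the final answer.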

            Additionally, none of this conflicts with my original point. If you train a human on bad data, they’ll GIGO (garbage in, garbage out) too. I know plenty of humans who have confidently told me objectively false things because they had bad training data.