• LifeInMultipleChoice@lemmy.ml · 18 hours ago

    So if you give a human 10 tasks and they complete 3 correctly, do 5 incorrectly, and fail to complete the other 2 altogether… and then you give those same 10 tasks to the software and it does 9 correctly and fails to complete 1, what does that mean? In general I’d say the tasks need to be defined, because right now I can give people plenty of tasks that language models can solve and they can’t, but language models still aren’t “AGI” in my opinion.

    • hendrik@palaver.p3x.de · 17 hours ago

      Agreed. And these tasks can’t be tailored to the AI just so it has a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner. Or at least do something comparable to a human. Just wording emails and writing boilerplate code isn’t enough in my eyes, especially since it still struggles to do even that. It’s the “general” that is missing.

      • Free_Opinions@feddit.uk · 3 hours ago

        > It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary and return with some groceries and cook dinner.

        This is more about robotics than AGI. A system can be generally intelligent without having a physical body.