• hsdkfr734r@feddit.nl · 22 hours ago

      An LLM cannot think like you and me. It isn’t able to solve entirely new problems, and it doesn’t have a concept of the world: it paints hands without knowing what a hand does.

      It is a system that learns the rules of something during training by tuning the coefficients of its huge stack of equations. Within its area it can be better than a human, and I guess it can be good for tedious, repetitive tasks. Nevertheless, it is just a huge coefficient matrix.
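      To make “coefficient matrix” concrete, here is a minimal sketch (the vocabulary, weights, and dimensions are all made up; real models stack billions of coefficients across many layers with nonlinearities):

      ```python
      import numpy as np

      # Toy next-token "model": a single weight matrix mapping a one-hot
      # context vector to scores over a tiny invented vocabulary. Everything
      # the model "knows" lives in the numbers inside W.
      vocab = ["the", "cat", "sat", "mat"]
      rng = np.random.default_rng(0)
      W = rng.normal(size=(len(vocab), len(vocab)))  # the coefficient matrix

      def next_token(last_word: str) -> str:
          x = np.zeros(len(vocab))
          x[vocab.index(last_word)] = 1.0       # one-hot encode the context
          scores = W @ x                        # pure matrix arithmetic
          probs = np.exp(scores) / np.exp(scores).sum()  # softmax
          return vocab[int(np.argmax(probs))]

      print(next_token("cat"))  # output depends entirely on the weights in W
      ```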

      But it can only reproduce what is in the training data: you need lots of already-solved examples there. It doesn’t work for entirely new problems.

      (That’s also the reason why LLMs don’t give good answers to questions about specialized niche topics: when there are only one or two studies, there simply isn’t enough training data for the LLM.)

    • superglue@lemmy.dbzer0.com · 1 day ago

      Right? I see comments all the time about it just being glorified pattern recognition. Well… that’s what humans do as well: we recognize patterns and then predict the most likely outcome.

      • KairuByte@lemmy.dbzer0.com · 19 hours ago

        That is one of many things a human brain does. This is like saying the color red is a rainbow because the rainbow has red in it.

          • KairuByte@lemmy.dbzer0.com · 10 hours ago

            How? You’re focusing on one thing a human does and using it to argue how human-like LLMs are, while ignoring everything else humans do. You’re missing the forest for the trees.

            • superglue@lemmy.dbzer0.com · 7 hours ago

              I didn’t say that at all. What I said was that LLMs solve problems just like a human does: pattern recognition. Then I asked you to provide an example of one thing a human does that doesn’t boil down to pattern recognition. The words we speak and type are patterns. The decisions we make are based on patterns we learned in the past. That’s really all I meant by it.

              • KairuByte@lemmy.dbzer0.com · 6 hours ago

                LLMs don’t solve problems; that’s the point being made here. Many other algorithms do indeed solve problems, but those are very niche, as the algorithms were explicitly designed for those situations.

                While yes, humans excel at pattern recognition, sometimes to the point of it being a problem, there are many things we do that have little to do with patterns beyond being tangentially involved. Emotions, for instance, don’t inherently follow patterns: they can, but they aren’t directly tied to them. Exploration also doesn’t come from pattern recognition.

                If you need examples of why people flat-out say LLMs aren’t solving problems, look at the recent “how many r’s in strawberry” failure, which has admittedly been “fixed” in many models.
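                A toy illustration of why that failure happens (the subword split and IDs below are invented for the example; real tokenizers split differently): the model receives opaque token IDs, not letters, so it has no direct view of the characters it would need to count.

                ```python
                # An LLM never sees "strawberry" letter by letter; it sees token IDs.
                text = "strawberry"
                tokens = ["straw", "berry"]     # hypothetical subword split
                token_ids = [17042, 9412]       # made-up IDs: all the model gets

                print(text.count("r"))          # 3 -- trivial at character level
                # The model, however, only receives [17042, 9412] and has no
                # built-in way to inspect the characters inside each token.
                ```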

                  • superglue@lemmy.dbzer0.com · 5 hours ago

                  At the end of the day, LLMs take in historical data and use it to predict what comes next, just like humans do. But I guess we can disagree and leave it at that.
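                  In the spirit of that claim, a minimal sketch of “take in historical data and predict what comes next” (a bigram counter over toy data, not how a real LLM works internally):

                  ```python
                  from collections import Counter, defaultdict

                  # "Historical data": count which word follows which, then always
                  # predict the most frequent successor.
                  history = "the cat sat on the mat the cat slept".split()

                  follows = defaultdict(Counter)
                  for prev, nxt in zip(history, history[1:]):
                      follows[prev][nxt] += 1

                  def predict(word: str) -> str:
                      return follows[word].most_common(1)[0][0]

                  print(predict("the"))  # "cat" -- follows "the" twice, vs "mat" once
                  print(predict("cat"))  # "sat" -- ties break by insertion order
                  ```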