
  • 98% and 98% are identical values, but the machine can use them to describe two separate words’ accuracy.

    It doesn’t have language. It’s not emulating concepts. It’s emulating statistical averages.

    “pie” to us is a delicious dessert with a variety of possible fillings.

    “pie” to an LLM is 32%. “cake” is also 32%. An LLM might say “cake” when it should be “pie”, because it doesn’t know what either of those things is, aside from their placement next to terms like flour, sugar, and butter.
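
    Roughly what that looks like in code. This is a toy sketch, and the probabilities are invented to match the made-up 32% figures above, not pulled from any real model:

    ```python
    import random

    # Toy next-token distribution after something like "flour, sugar, butter... bake a".
    # These weights are invented for illustration only.
    next_token_probs = {
        "pie":  0.32,
        "cake": 0.32,
        "tart": 0.18,
        "loaf": 0.18,
    }

    # The model just samples by weight; it has no idea what a pie *is*,
    # so "cake" comes out almost exactly as often as "pie" does.
    tokens, weights = zip(*next_token_probs.items())
    print(random.choices(tokens, weights=weights, k=10))
    ```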


  • First of all, I’m about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually under keywords like AI “emergent behavior” and “overfitting”. More specifically: how emergent behavior doesn’t really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.
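
    To make the overfitting half concrete, here’s a minimal toy sketch of my own (not from any of those papers): a degree-7 polynomial pushed through 8 noisy points reproduces its training data perfectly and is useless everywhere else.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 8)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 8)

    # Degree-7 polynomial through 8 points: it memorizes the training data exactly...
    overfit = np.polynomial.Polynomial.fit(x_train, y_train, deg=7)

    # ...but off the training points it swings wildly away from the true curve.
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)

    print("train error:", np.abs(overfit(x_train) - y_train).max())  # ~0
    print("test error: ", np.abs(overfit(x_test) - y_test).max())    # much larger
    ```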

    Anyways, humans don’t assign numerical values to words and phrases for the purpose of building a statistical model of a response to a statistical-model input.

    Humans suck at math.

    Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize the data altogether. The values are incredibly diverse and have many attributes to them.

    Humans do not hallucinate entire sets of documentation or describe company policies that don’t exist to customers, because we understand the branching complexity and nuance of each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit, such as entertainment. Unlike an LLM, which meant that shit it said but just doesn’t know any better. Just doesn’t know, period.
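
    A toy sketch of that “tracing stacks of related concepts back to the root” idea, using a concept graph I invented for illustration. Notice that “pie” and “cake” share ancestors but stay distinct entries, which is exactly what the flat 32%/32% numbers throw away:

    ```python
    # Invented concept graph: each word points at its parent concepts.
    concept_parents = {
        "pie":        ["baked good", "dessert"],
        "cake":       ["baked good", "dessert"],
        "dessert":    ["food"],
        "baked good": ["food"],
        "food":       [],
    }

    def trace_to_root(word: str) -> list[str]:
        """Walk every related concept reachable from `word` back to the root."""
        stack, seen = [word], []
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.append(node)
                stack.extend(concept_parents.get(node, []))
        return seen

    print(trace_to_root("pie"))   # ['pie', 'dessert', 'food', 'baked good']
    ```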

    Maybe an LLM could approach that at some scale if each word had its own model with massively more data behind it, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on Earth. In fact, that aligns pretty well with OpenAI’s statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They’re probably underestimating the costs by orders of magnitude.)
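
    The diminishing-returns point shows up clearly in a toy power-law scaling curve. The exponent and constants here are invented, though published scaling-law work reports this same general shape:

    ```python
    # Toy scaling curve: loss ~ compute**-alpha, constants invented for illustration.
    ALPHA = 0.05

    def toy_loss(compute: float) -> float:
        return compute ** -ALPHA

    for exp in range(0, 25, 4):
        print(f"compute 1e{exp:02d} -> loss {toy_loss(10.0 ** exp):.3f}")
    # Each 10,000x jump in compute shaves off less loss than the one before it:
    # the "more money and electricity than has ever existed" problem.
    ```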