
  • Eranziel@lemmy.world to Technology@lemmy.world · The GPT Era Is Already Ending · 18 days ago

    This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers that require multiple, dedicated grid-scale nuclear reactors.

    I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.
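    For a rough sense of scale, here’s a back-of-envelope sketch. Every number in it is my own round assumption for illustration, not a figure from the article.

    ```python
    # Back-of-envelope scale check; all numbers are assumed round figures.
    REACTOR_GW = 1.0    # assumed output of one grid-scale nuclear reactor
    NUM_REACTORS = 5    # "multiple" dedicated reactors, assumed
    AVG_HOME_KW = 1.2   # assumed average continuous draw of one household

    datacenter_gw = REACTOR_GW * NUM_REACTORS
    homes = datacenter_gw * 1_000_000 / AVG_HOME_KW  # GW -> kW, then per home

    print(f"{datacenter_gw:.0f} GW is roughly the continuous draw of "
          f"{homes / 1e6:.1f} million homes")
    # -> 5 GW is roughly the continuous draw of 4.2 million homes
    ```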

  • You are making it sound far simpler than it actually is. Recognizing what a thing is is the essential first problem: is that a child, a ball, a goose, a pothole, or a shadow that the cameras see? It would be absurd, and an absolute show-stopper, if the car stopped for dark shadows. (A toy sketch of this trade-off follows at the end of this comment.)

    We take for granted the vast amount of work the human brain does in this problem space. The system has to identify and categorize what it’s seeing; otherwise it’s useless.

    That leads to my actual opinion on the technology, which is that it’s going to be nearly impossible to have fully autonomous cars on roads as we know them. It’s fine if everything is normal, which is most of the time. But software can’t recognize and correctly react to the thousands of novel situations that can happen.

    They should be automating trains instead. (Oh wait, we pretty much did that already.)
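    Here is the toy sketch of that recognition trade-off. Nothing in it is real autonomous-driving code; the labels, confidences, and threshold are all invented for illustration.

    ```python
    # Toy illustration: any fixed confidence cutoff on a classifier's output
    # trades phantom braking against missed obstacles.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str         # e.g. "child", "goose", "shadow"
        confidence: float  # classifier certainty, 0.0 .. 1.0

    BRAKE_FOR = {"child", "ball", "goose", "pothole"}
    THRESHOLD = 0.8  # raise it and the car misses real hazards;
                     # lower it and it brakes for every dark shadow

    def should_brake(d: Detection) -> bool:
        return d.label in BRAKE_FOR and d.confidence >= THRESHOLD

    # A shadow misclassified as a child with high confidence: phantom stop.
    print(should_brake(Detection("child", 0.85)))  # True
    # A real child detected with low confidence: no stop at all.
    print(should_brake(Detection("child", 0.60)))  # False
    ```

    No single threshold fixes both failure modes; the hard part is the recognition itself, which humans do effortlessly.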

  • It’s hard to trust a firm that is explicitly being paid by the company it’s investigating. I could be convinced that the firm is actually a neutral third party and that its investigation was unbiased if it had a track record of finding fault with its clients a significant portion of the time. (I haven’t done the research to see if that’s the case.)

    However, you have to ask yourself: how many companies would choose to hire a firm with that track record? Wouldn’t you pick one more likely to side with you?

    The way to restore credibility is an actually independent third-party investigation: a firm chosen by the accuser, perhaps, or something like binding arbitration. Even better, a union that can fight for the employees on somewhat even footing with the company.

  • The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based on statistics, nothing more, nothing less (see the sketch at the end of this comment).

    It doesn’t know what it’s saying, nor does it understand the subject matter, what a human is, or what a hallucination is and why it has them. These models are fundamentally incapable of even perceiving the problem, because they perceive nothing aside from text in and text out.
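    A minimal sketch of that “guess the next most likely word” idea: a bigram model built from raw counts. Real LLMs are deep neural networks trained on enormous corpora, not lookup tables, but the output is the same kind of thing, a probability distribution over next tokens with no meaning attached.

    ```python
    # Toy next-word predictor built from nothing but counting statistics.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word: str) -> str:
        """Sample a next word in proportion to how often it followed `word`."""
        counts = follows[word]
        return random.choices(list(counts), weights=list(counts.values()))[0]

    print(next_word("the"))  # most often "cat"; sometimes "mat" or "fish"
    ```

    The table has no concept of what a cat is; it only has counts. Scaling the statistics up by many orders of magnitude makes the guesses dramatically better, but it doesn’t add a component where understanding happens.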