Don’t look for statistical precision in analogies. That’s why it’s called an analogy, not a calculation.
No, this is the equivalent of writing off calculators if they required as much power as a city block. There are some applications for LLMs, but if they cost this much power, they’re doing far more harm than good.
Your first paragraph assumes that labour costs are the same in both markets and that there is little development or tooling cost to setting up that manufacturing base locally. Both are false, and both of those are really the reason overseas manufacturing is a thing in the first place.
Exactly this, and rightly so. The school’s administration has a moral and legal obligation to do what it can for the safety of its students, and allowing this to continue unchecked violates both of those obligations.
I agree that LIDAR or radar are better solutions than image recognition. I mean, that’s literally what those technologies are for.
But even then, that’s not enough. LIDAR/radar can’t help it identify its lane in inclement weather, drive well on gravel, and so on. These are the kinds of problems whose difficulty automakers severely downplay, along with just how much a human driver actually does.
You are making it far simpler than it actually is. Recognizing what a thing is is the essential first problem. Is that a child, a ball, a goose, a pothole, or a shadow that the cameras see? It would be absurd, and an absolute showstopper, if the car stopped for dark shadows.
We take for granted the vast amount that the human brain does in this problem space. The system has to identify and categorize what it’s seeing, otherwise it’s useless.
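To make the "shadow vs. child" problem concrete, here's a toy sketch of the decision a perception stack has to make. All of the class names, scores, and the threshold are invented for illustration; real systems fuse many sensors and are vastly more complex, but the underlying dilemma is the same:

```python
# Hypothetical detector output for one camera frame: class scores for a
# dark patch on the road ahead. The numbers are made up for illustration.
scores = {"shadow": 0.46, "pedestrian": 0.41, "pothole": 0.13}

# Classes worth braking for (again, a simplification for this sketch).
BRAKE_CLASSES = {"pedestrian", "pothole"}

def decide(scores, brake_threshold=0.5):
    """Naive policy: brake only if a brake-worthy class clearly wins.

    With ambiguous scores like the ones above, neither choice is safe:
    braking for every shadow halts traffic, while ignoring a 41%
    pedestrian score is unacceptable. A human resolves this ambiguity
    effortlessly; software has to be told exactly where to draw the line.
    """
    label, p = max(scores.items(), key=lambda kv: kv[1])
    if label in BRAKE_CLASSES and p >= brake_threshold:
        return "brake"
    return "continue"

print(decide(scores))  # top class is "shadow", so the car keeps going
```

The point of the sketch is that the hard part isn't the `max()` call; it's producing scores good enough that this decision is ever safe to automate.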
That leads to my actual opinion on the technology, which is that it’s going to be nearly impossible to have fully autonomous cars on roads as we know them. It’s fine if everything is normal, which is most of the time. But software can’t recognize and correctly react to the thousands of novel situations that can happen.
They should be automating trains instead. (Oh wait, we pretty much did that already.)
Even talking about it this way is misleading. An LLM doesn’t “guess” or “catch” anything, because it is not capable of comprehending the meaning of words. It’s a statistical sentence generator; no more, no less.
He can give himself whatever titles he likes; that doesn’t mean he makes any positive technical contribution.
Machine learning has many valid applications, and there are some fields genuinely utilizing ML tools to make leaps and bounds in advancements.
LLMs, aka bullshit generators, are where the huge majority of corporate AI investment has gone in this latest craze, and they’re one of the poorest applications. Not to mention the steaming pile of ethical issues with their training data.
Very nice writeup. My only critique is of the claim that companies needed to “lay off workers to stop inflation.” I have no doubt that some (many?) managers believed that to be the case, but there’s rampant evidence that the spike in inflation we’ve seen over this period was largely due to corporate greed hiking prices, not to increased costs from hiring too many workers.
There is no way that arming Taiwan results in Taiwan starting a war of aggression against China.
What arming Taiwan does is make it an increasingly bad idea for China to invade Taiwan. It’s a deterrent to make sure the nuclear power who constantly threatens Taiwan (read: China) doesn’t think they can just go and take what they want without consequence, and probably commit a little genocide on the side.
You know, like Ukraine.
This. Satire would be writing the article in the voice of the most vapid executive saying they need to abandon fundamentals and turn exclusively to AI.
However, that would be indistinguishable from our current reality, which would make it poor satire.
The issue with “Human jobs will be replaced” is that society still requires humans to have a paying job to survive.
I would love a world where nobody had to do dumb labour anymore, and everyone’s needs are still met.
What part of “we paid these guys and they said we’re fine” do you not understand? Why would they choose, pay, and release the results from a firm they didn’t trust to clear them?
I’m not saying it’s rotten, but the fact that the third party was unilaterally chosen and paid for by LMG makes all the results pretty questionable.
It’s hard to trust a firm that is explicitly being paid by the company they’re investigating. I could be convinced that they are actually a neutral third party and that their investigation was unbiased if they had a track record of finding fault with their clients a significant portion of the time. (I haven’t done the research to see if that’s the case.)
However, you have to ask yourself - how many companies would choose to hire a firm which has that track record? Wouldn’t you pick one more likely to side with you?
The way to restore credibility is to have an actually independent third-party investigation. A firm chosen by the accuser, perhaps. Or maybe something like binding arbitration. Even better, a union that can fight for the employees on somewhat even footing with the company.
The fundamental difference is that the AI doesn’t know anything. It isn’t capable of understanding, and it doesn’t learn in the same sense that humans learn. An LLM is a (complex!) digital machine that guesses the next most likely word based essentially on statistics; nothing more, nothing less.
It doesn’t know what it’s saying, nor does it understand the subject matter, or what a human is, or what a hallucination is or why it has them. It is fundamentally incapable of even perceiving the problem, because it does not perceive anything aside from text in and text out.
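The "guesses the next most likely word" point can be shown with a deliberately tiny sketch. This is a toy bigram model, not an actual LLM (real models use neural networks over tokens, not word lookup tables), and every probability below is invented, but the mechanism is the same in kind: sample the next word from a learned distribution, with no meaning involved anywhere:

```python
import random

# Toy "language model": for each word, a distribution over likely next
# words. In a real LLM these probabilities come from a trained neural
# network; here they are simply made up for illustration.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
    "barked": {"loudly": 1.0},
}

def generate(start, length, rng):
    """Emit up to `length` more words by repeatedly sampling from the
    next-word distribution. Note there is no step where the program
    considers what any of these words mean."""
    words = [start]
    for _ in range(length):
        dist = bigram_probs.get(words[-1])
        if dist is None:
            break  # no known continuation
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the", 3, random.Random(0)))
```

The output is often grammatical precisely because grammar is statistical regularity, which is also why fluency alone tells you nothing about understanding.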
Do you think every paper writer would comply? Do you think that the actually problematic writers, like those cutting so many corners that they directly paste ChatGPT results into their paper, would comply?
I don’t know about the regulatory side, but Boeing gutted their experienced engineering corps starting about 10 years ago. In the pursuit of profit of course. I think we’re seeing the effects of that finally coming to the fore.
My understanding of the role of the regulatory agencies for stuff like this is that they can ground a model of plane if they believe there’s a systemic issue. Like we saw with the MAX.
This article and discussion are specifically about massively upscaling LLMs. Go follow the links and read OpenAI’s CEO literally proposing data centers which require multiple, dedicated grid-scale nuclear reactors.
I’m not sure what your definition of optimization and efficiency is, but that sure as heck does not fit mine.