They don’t care if they earn money for the next 5-7 years.
And they will hit the point where a great model does human work for less than a monthly salary. It’s just a matter of time.
I’m incredulous.
There was that thread asking what people are using LLMs for, and it pretty much came down to “softening language in emails”.
For most jobs LLMs can provide a small productivity bump.
IMO if an LLM can do most of your job then you’re not producing much value anyway.
I am honestly very very curious: how?
LLMs are not advancing enough anymore. There just isn’t any more useful human-generated text to train new models on. The net is already full of AI-generated slop. OpenAI currently spends $2.35 to make $1. It’s fundamentally unsustainable.
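For the curious, here is the plain arithmetic behind that ratio. The $2.35-per-$1 figure above is the only input; everything else is just the math it implies:

    # Plain arithmetic on the spend-to-revenue ratio quoted above.
    revenue = 1.00          # dollars earned
    cost = 2.35             # dollars spent to earn them
    net = revenue - cost    # -1.35 lost per dollar of revenue
    margin = net / revenue  # -135% net margin
    print(f"net: ${net:.2f} per $1 of revenue (margin {margin:.0%})")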
It cost a billion dollars to develop solar cells before they even sold the first product.
They cost $100,000 when they first went on sale.
They go for under 10 bucks per square today.
And it’s like that for any technology ever invented.
It’s also like that for nearly every technology that has failed. For every Amazon that ran in the red until it grabbed enough market share to make a profit, there are 1000 firms that went tits-up, never having turned a profit. (Actual constant may vary from 1000, but it’s pretty damn big regardless).
Solar panels are useful though.
Yes, but solar cells are in the end very simple products made of very simple resources, with a limited task: converting one type of energy into another. That said, there is still research into making them more efficient and cheaper, and that research isn’t cheap.
But generative AI / LLMs take an insane amount of resources to train and maintain, are complex to create, serve a very complex task, and a slight increase in quality takes progressively more resources (say, 10% better quality for 50% more energy use - I don’t have the numbers anymore, but IIRC they were even worse). A better LLM would therefore be much, much more expensive, while people are apparently already underwhelmed with the latest models.

With the growing competition, fast-rising costs and meagre quality updates, while they’re already unable to financially sustain themselves right now, I truly don’t see it. Honestly, this is why I think Microsoft is cramming their subpar Copilot into everything - to sort of justify all the money they pumped into this.
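To make the scaling point concrete, here’s a toy sketch. The 10%-for-50% figure is the rough recollection above, and the power-law form is just an assumption used to extrapolate it, not an actual measured scaling law:

    import math

    # If +10% quality costs +50% energy, a power-law fit gives an
    # exponent k = log(1.5) / log(1.1), roughly 4.25.
    k = math.log(1.5) / math.log(1.1)
    for gain in (0.10, 0.20, 0.50):    # hypothetical quality gains
        cost_factor = (1 + gain) ** k  # implied energy/cost multiplier
        print(f"+{gain:.0%} quality -> ~{cost_factor:.1f}x the energy")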
Without enough funding, they absolutely will care.
That’s between $33 billion and $47 billion at current costs. Someone needs to fund that.
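Back-of-the-envelope on where that range comes from: pairing it with the “next 5-7 years” above implies an annual burn of roughly $6.6-6.7 billion (my inference from the quoted figures, not a sourced number):

    # Implied annual burn, derived from the quoted totals and timespan.
    low_total, high_total = 33, 47   # billions, from the range above
    low_years, high_years = 5, 7     # "next 5-7 years"
    print(low_total / low_years)     # 6.6   -> ~$6.6B per year
    print(high_total / high_years)   # ~6.71 -> ~$6.7B per year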
I’d also note that their models seem to be getting worse, with outright irrelevant answers, worse performance, failures to follow instructions, etc. Stanford and UC Berkeley did a months-long comparison, and even basic math is going downhill.
I’d rather say that it’s a matter of exponentially increasing funding and computing power.