A software developer and Linux nerd, living in Germany. I’m usually a chill dude, but my online persona doesn’t always reflect my true personality. Take what I say with a grain of salt; I usually try to be nice and give good advice, though.

I’m into Free Software, selfhosting, microcontrollers and electronics, freedom, privacy and the usual stuff. And a few select other random things, too.

  • 4 Posts
  • 798 Comments
Joined 6 months ago
Cake day: June 25th, 2024



  • You’re right. The current LLM approach has some severe limitations. If we ever achieve AGI, it’ll probably be built on something that hasn’t been invented yet. Most experts also seem to predict it’ll take some years and won’t happen overnight. I don’t really agree with the “statistical” part, though. I mean, that doesn’t rule anything out… I haven’t seen any mathematical proof that a statistical predictor can’t be AGI… That’s just something non-experts often say… But current LLMs do have other, very real limitations.

    Plus, I don’t have that much use for something that does the homework assignments for me. If we’re dreaming about the future anyways: I’m waiting for an android that can load the dishwasher, dust the shelves and do the laundry for me. I think that’d be massively useful.


  • Why does OpenAI “have” everything and just sit on it, instead of writing a paper or something? They have a watermarking solution that could help make the world a better place and get rid of some of the slop out there (rough sketch of the idea at the end of this comment)… They have a definition of AGI… Yet they release none of it…

    Some people even claim they already have a secret AGI. Or that ChatGPT 5 will surely be it. I can see how that increases the company’s value, so there’s an incentive not to tell the truth. But with all the other things, it’s just silly not to share anything.

    Either they’re even more greedy than the Metas and Googles out there, or all the articles and “leaks” are just unsubstantiated hype.
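
    For anyone wondering what such a watermarking scheme could even look like: the published research ideas (not necessarily whatever OpenAI has internally) bias the model towards a pseudorandom “green list” of words during generation, and a detector that knows the key checks whether suspiciously many words are green. A toy sketch of just the detection side:

    ```python
    # Toy "green list" watermark detector, in the spirit of published research
    # (e.g. Kirchenbauer et al.); NOT OpenAI's actual, unreleased scheme.
    # Idea: generation nudges the model towards a pseudorandom subset of words
    # seeded by the previous word; unwatermarked text lands near 50% green.
    import hashlib
    import math

    SECRET_KEY = "demo-key"  # assumed to be shared between generator and detector

    def is_green(prev_word: str, word: str) -> bool:
        """Deterministically put ~half of all words on the green list,
        with the split depending on the previous word and a secret key."""
        digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> tuple[float, float]:
        """Return (fraction of green words, z-score against the 50% baseline)."""
        words = text.lower().split()
        if len(words) < 2:
            return 0.0, 0.0
        n = len(words) - 1
        hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
        return hits / n, (hits - 0.5 * n) / math.sqrt(0.25 * n)

    if __name__ == "__main__":
        frac, z = green_fraction("some text to test for the watermark ...")
        print(f"green fraction: {frac:.2f}, z-score: {z:.2f}")
        # A large positive z-score would suggest the text was watermarked.
    ```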



  • I wonder where this will lead. I mean, the usual strategy for selling something is to look at your customers, see what they want or need, and give them roughly that… which makes them buy your product. And I can see that in some of Microsoft’s products. But recently that doesn’t seem to be super important any more when it comes to the operating system. They’ve done this before: used their market share on the desktop to push their agenda, even if their customers didn’t like any of it, or alternated between improvements and a worse Windows version in between… But especially with Windows 11 it doesn’t seem to me that they care any more. Do they still have a lock on desktop computers like they used to, so they can afford that? Because I’m hearing more complaints than before…









  • What’s the correct term in casual language? “Correctness”? But that has the same issue… I’m not a native speaker…

    By the way, I forgot my main point. I think that paper generator was kind of a joke. At least the older one, which predates AI and uses a “hand-written context-free grammar” (toy example of the idea below).

    And there are projects like Papergen and several others. But I think what I was referring to was the AI Scientist, which does everything from brainstorming research ideas to simulating experiments and writing reports. It’s not meant to be taken seriously, in the sense that you’d publish the generated results. But it seems pretty creative to me to write a paper about an artificial scientist…
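
    For a rough idea of how the older, grammar-based generator works: you write a context-free grammar by hand and recursively expand it into academic-sounding filler. A toy version of the technique (my own made-up mini grammar, nothing to do with the real one):

    ```python
    # Toy paper-text generator based on a hand-written context-free grammar.
    # Nonterminals are dict keys; anything else counts as a terminal word.
    import random

    GRAMMAR = {
        "SENTENCE": [["We", "VERB", "that", "NOUNPHRASE", "is", "ADJ", "."],
                     ["The", "NOUN", "can", "VERB", "ADJ", "results", "."]],
        "NOUNPHRASE": [["the", "ADJ", "NOUN"], ["our", "NOUN"]],
        "NOUN": [["framework"], ["methodology"], ["heuristic"], ["algorithm"]],
        "VERB": [["demonstrate"], ["argue"], ["confirm"], ["evaluate"]],
        "ADJ": [["robust"], ["scalable"], ["Bayesian"], ["highly-available"]],
    }

    def expand(symbol: str) -> list[str]:
        """Recursively expand a symbol until only terminal words remain."""
        if symbol not in GRAMMAR:          # terminal word
            return [symbol]
        words = []
        for part in random.choice(GRAMMAR[symbol]):
            words.extend(expand(part))
        return words

    if __name__ == "__main__":
        for _ in range(3):
            print(" ".join(expand("SENTENCE")))
    ```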


  • Right, the public and journalists often lump everything together under the term “AI”. When there’s really a big difference between a domain-specific pattern-recognition task that machine learning can do with >99% accuracy… and an ill-suited use case where an LLM gets slapped on.

    For example, I frequently disagree with people using LLMs for summarization. That seems to be something a lot of people like, and I think they’re particularly bad at it. All my results were riddled with inaccuracies, and sometimes the model would miss the whole point of the input text. It rarely summarized at all: it just picks a topic/paragraph here and there and writes a shorter version of that. That misses what a summary is about: giving me the main points and the conclusion, reducing the detail, and roughly outlining how the author got there. I think LLMs just can’t do it.

    I like them for other purposes, though.



  • If you do it right, you can have that AI replace the whole complicated pirating and downloading process. I think someone already came up with a paper-writer AI. You just give it the topic, and it fabricates a whole paper, including nice diagrams and pictures. 😅

    Yeah, but that also made me worry. I wonder how AI and science mix. Supposedly, some researchers use AI, especially “Retrieval-Augmented Generation” (information retrieval) and such. I’m not a scientist, but I haven’t had much luck with AI and factual information. It just makes a lot of stuff up, to the point where I’m better off without it.
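
    As far as I understand it, RAG just means: fetch the documents most relevant to the question first, then hand them to the model as context so it makes less up. A minimal toy sketch of the idea (keyword overlap instead of proper vector embeddings, and the actual LLM call left out):

    ```python
    # Minimal sketch of Retrieval-Augmented Generation (RAG).
    # Real systems use vector embeddings and an actual LLM call; this toy
    # scores relevance by keyword overlap and only prints the final prompt.

    DOCUMENTS = [
        "IPFS addresses content by its hash, not by its location.",
        "Context-free grammars can generate surprisingly convincing filler text.",
        "Retrieval-augmented generation grounds an LLM answer in retrieved documents.",
    ]

    def score(question: str, document: str) -> int:
        """Crude relevance score: number of shared lowercase words."""
        return len(set(question.lower().split()) & set(document.lower().split()))

    def retrieve(question: str, k: int = 2) -> list[str]:
        """Return the k documents that overlap most with the question."""
        return sorted(DOCUMENTS, key=lambda d: score(question, d), reverse=True)[:k]

    def build_prompt(question: str) -> str:
        """Assemble the augmented prompt that would be sent to the model."""
        context = "\n".join(f"- {doc}" for doc in retrieve(question))
        return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

    if __name__ == "__main__":
        print(build_prompt("How does IPFS address content?"))
    ```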


  • I think you’re all making it look a bit worse than it is. I downloaded a few PDFs via IPFS and it worked for me (quick snippet at the end of this comment). And I was happy it provided me with what I needed at the time. I can’t comment on reliability or other nuances. It was also slow in my case, but I took that as the usual trade-off: you usually get either speed or anonymity, not both. And there are valid use cases for denylists, for example viruses, malware, CSAM and spam. I’d rather not have my node spread those. It’s complicated, and I’d say the same in public. I think what matters is what you do and implement, not whether you say you comply with regulation and the DMCA…

    Thanks for the links, I’ll have a look.
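
    By the way, in case anyone wants to try the IPFS route themselves: the lazy way is a public HTTP gateway. A small sketch (the CID below is just a placeholder, and gateway availability varies; a local kubo node exposes the same interface on 127.0.0.1:8080):

    ```python
    # Fetch a file from IPFS through a public HTTP gateway.
    # The CID below is a placeholder; substitute the content ID you actually want.
    import urllib.request

    GATEWAY = "https://ipfs.io/ipfs/"               # any public gateway works
    CID = "QmExampleExampleExampleExampleExample"   # placeholder content ID

    def fetch(cid: str, out_path: str) -> None:
        """Download the content behind an IPFS CID and save it locally."""
        with urllib.request.urlopen(GATEWAY + cid, timeout=60) as response:
            data = response.read()
        with open(out_path, "wb") as f:
            f.write(data)
        print(f"wrote {len(data)} bytes to {out_path}")

    if __name__ == "__main__":
        fetch(CID, "paper.pdf")
    ```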