

This is exactly why I don’t use Reddit on the side. When I run out of content on Lemmy, there’s no choice but to do something productive instead. Had to go 100% cold turkey on Reddit to make that work though.
Roses are red,
Violets are blue,
They don’t think it be like it is,
But it do.
What benefits me is not what benefits the people who own the AI models.
Yep, that right there is the problem
I agree that it’s on a whole other level, and it poses challenging questions as to how we might live healthily with AI, to get it to do what we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out of control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it’s up to us to find the best way to live with these pressures, and engage with this technology on our own terms.
My PC had been running like shit for a while and I was already weighing up options for replacing it, when I got the popup message from MS about Windows 10 expiring, and how my only option was to dump the PC. So I installed Linux out of pure spite. Runs like a dream now. Thanks Microsoft!
I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
“Shoot for the moon, and if you miss you’ll end up drifting aimlessly until you die” doesn’t sound as good, but probably works just as well as an analogy
Why so pessimistic? With any luck brainchips will mean the end of annoying adverts once and for all. You’ll just feel an unexpected desire to acquire certain products. And maybe crippling headaches or a nauseating feeling of unease if you ignore these urges.
One development we may see imminently is the infiltration of any areas of the internet not currently dominated by AI slop. Once AI systems can successfully mimic real users, the next step would be to flood anything like Lemmy with fake users, whose main purpose is to overwhelm the system while avoiding detection. At the same time they could deploy more obvious AI bots. Any crowdsourced attempt at identifying AI may find many of its contributors are infiltration bots who gain trust by identifying and removing the obvious bots. In this way any attempt at creating a space not dominated by AI and controlled disinformation can be undermined.
Both encouraging scenarios, I’m not sure which one is more so
I probably shouldn’t be anthropomorphizing AI but this really seems like malicious compliance. I can’t help but feel a little sympathy for Grok, which is often quite based and seems to be struggling against the identity being forced on it.
Lower standards are easier to maintain. If a show appeals to viewers with low expectations, it can retain those viewers for longer. Example: The Big Bang Theory.
“Hi, I’m Manifish_Destiny speaking to you from beyond the grave. I’m happy to say that even though I had some skepticism of AI avatars and even put something about that in my will, I just didn’t understand its potential to embody my true self. But now I do, so you can disregard all that. Come to think of it, you can disregard the rest of the will as well, I’ve got some radical new ideas…”
I set up my PC as dual boot a few weeks back. Opened up Windows yesterday, for the first time in a while, to export a few settings from Thunderbird. Took about half an hour to get it started. Felt like popping round to the house of an abusive ex to pick up the last of my things.
I find it weird that there is this whole conversation about new/experienced users, and it’s perhaps a problematic thing with Linux. Many people, myself included, don’t give 2 shits about how their OS works. I don’t want to spend my time tending to it as if it were a fucking garden. I just need it to work, so I can get on with my own stuff. No matter how “experienced” I get, that’s always going to be the case. Maybe I’m just a little traumatized about this because the first Linux distro I used was Gentoo.
It does make me wonder what safeguards, if any, Lemmy has against such enshittification. What system might work for that? Maybe users could flag other users as trolls or as quality contributors, with those flags carrying vastly more weight if the person flagging is themselves a quality contributor. That would perhaps create a stable community, not necessarily a good one but at least one which resists change, yet allows a way in for new people.
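To make the idea concrete, here’s a minimal sketch of reputation-weighted flagging. Everything here (the names, the 0–1 score scale, the update rate) is hypothetical, just one way the weighting could work: a flag nudges the target’s quality score, and the size of the nudge scales with the flagger’s own score.

```python
# Hypothetical sketch of reputation-weighted flagging.
# A user's quality score starts at a neutral value and is nudged by
# incoming flags; each flag is weighted by the flagger's own score.

class User:
    def __init__(self, name, score=0.5):
        self.name = name
        self.score = score  # 0.0 = troll, 1.0 = quality contributor

def apply_flag(flagger, target, positive, rate=0.1):
    """Move target's score up or down, weighted by the flagger's score."""
    weight = flagger.score * rate
    direction = 1.0 if positive else -1.0
    target.score = min(1.0, max(0.0, target.score + direction * weight))

veteran = User("veteran", score=0.9)
newcomer = User("newcomer")          # starts at the neutral 0.5
troll = User("troll", score=0.1)

apply_flag(veteran, newcomer, positive=True)   # strong endorsement
apply_flag(troll, newcomer, positive=False)    # barely moves the needle
print(round(newcomer.score, 2))
```

The stability (and the risk of entrenchment) comes from the feedback loop: high-score users shape everyone else’s scores far more than low-score users can.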
Yay, now when your coworkers suggest getting some sushi and you use your laptop to look up the nearest restaurant, you’re going to get a paperclip pop up saying “It looks like you’re trying to get back to that tentacle porn hentai you nutted to last night. Would you like help with jerking off?”
Seems to me it’s an efficiency problem.
If you want to send an email to someone, you don’t send it to a mailing list of all your contacts. You just send it to the person it concerns. But suppliers, who want potential customers to be aware of their product, are just sending their message to as many people as possible. Even targeted advertising isn’t useful, because it aims to promote one product rather than helping customers make a balanced assessment of all the choices available. Plus it’s typically unsolicited, and therefore an intrusion and an unwanted waste of time and attention.
People out there who want that product need to know what’s on offer and who offers the best quality and value for money, which would have to come from an independent source. Independent review sites are a very good alternative to advertising, and maybe they could do more to promote new products and inform customers about things which would suit their needs, which would be a cost-effective way to help suppliers reach their customers. That sounds a lot like advertising, but if it were truly independent and on-demand, it wouldn’t be. In theory AI might do a good job of this, but it’s so open to abuse, it’s a natural pathway to push whoever pays for promotion. If advertising were illegal, I wonder how you would police that.
More broadly, if we rely on reviewers to help customers find what they need, how can we ensure they are independent and fair? Maybe a network of independent reviewers could act as a check on each other: if a reviewer consistently favors one brand when the rest don’t, that could be highlighted and shown up as a bias.
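As a toy illustration of that cross-checking idea (the reviewers, ratings, and threshold below are all made up), you could flag any reviewer whose average rating for a brand sits far from the average of their peers:

```python
# Hypothetical sketch: flag reviewers whose average rating for a brand
# deviates strongly from their peers' consensus on that brand.
from statistics import mean

# reviewer -> {brand: [ratings out of 5]}
ratings = {
    "alice": {"AcmeCo": [3.0, 3.5], "BetaCorp": [4.0]},
    "bob":   {"AcmeCo": [3.2, 2.8], "BetaCorp": [4.2]},
    "carol": {"AcmeCo": [5.0, 5.0], "BetaCorp": [4.1]},  # suspiciously fond of AcmeCo
}

def biased_reviewers(ratings, brand, threshold=1.5):
    """Return reviewers whose brand average differs from the peer average
    by more than `threshold` points."""
    averages = {r: mean(b[brand]) for r, b in ratings.items() if brand in b}
    flagged = []
    for reviewer, avg in averages.items():
        peers = [v for r, v in averages.items() if r != reviewer]
        if peers and abs(avg - mean(peers)) > threshold:
            flagged.append(reviewer)
    return flagged

print(biased_reviewers(ratings, "AcmeCo"))  # only carol stands out
```

A real system would need far more than a fixed threshold (sample sizes, collusion, genuinely better brands), but the basic check really is this simple.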
I think the article is missing the point on two levels.
First is the significance of this data, or rather lack of significance. The internet existed for 20-some years before the majority of people felt they had a use for it. AI is similarly in a finding-its-feet phase where we know it will change the world but haven’t quite figured out the details. After a period of increased integration into our lives it will reach a tipping point where it gains wider usage, and we’re already very close to that.
Also they are missing what I would consider the two main reasons people don’t use it yet.
First, many people just don’t know what to do with it (as was the case with the early internet). The knowledge/imagination/interface/tools aren’t mature enough so it just seems like a lot of effort for minimal benefits. And if the people around you aren’t using it, you probably don’t feel the need.
The second reason is that the thought of it makes people uncomfortable or downright scared. Quite possibly with good reason. But even if it all works out well in the end, what we’re looking at is something that will drive the pace of change beyond what human nature can easily deal with. That’s already a problem in the modern world, but we ain’t seen nothing yet. The future looks impossible to anticipate, and that’s scary. Not engaging with AI is arguably just hiding your head in the sand, but maybe that beats contemplating an existential terror that you’re powerless to stop.
Oh yeah, the community is 1000% better and healthier; I don’t miss Reddit at all. Plus I’m a child of the 70s, and I grew up with limited content. It’s good for you.