• 0 Posts
  • 21 Comments
Joined 4 months ago
Cake day: March 12th, 2025





  • I agree that it’s on a whole other level, and it raises challenging questions about how we might live healthily with AI: how to get it to do the things we don’t benefit from doing, while we keep doing what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out-of-control capitalism, where many of the forces at play have no interest in serving the best interests of humanity. As individuals, it’s up to us to find the best way to live with these pressures and to engage with this technology on our own terms.



  • I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.




  • One development we may see imminently is the infiltration of whatever areas of the internet aren’t yet dominated by AI slop. Once AI systems can reliably mimic real users, the next step would be to flood anything like Lemmy with fake users whose main purpose is to overwhelm the system while avoiding detection, while also deploying more obvious AI bots. Any crowdsourced attempt at identifying AI may then find that many of its contributors are infiltration bots, which gain trust by identifying and removing the obvious ones. In this way, any attempt at creating a space not dominated by AI and controlled disinformation can be undermined.





  • “Hi, I’m Manifish_Destiny speaking to you from beyond the grave. I’m happy to say that even though I had some skepticism of AI avatars and even put something about that in my will, I just didn’t understand its potential to embody my true self. But now I do, so you can disregard all that. Come to think of it, you can disregard the rest of the will as well, I’ve got some radical new ideas…”



  • bampop@lemmy.world to Privacy@lemmy.ml · "You need to try Linux" · +6 / -1 · 2 months ago

    I find it weird that there’s this whole conversation about new vs. experienced users, and it points to something problematic about Linux. Many people, myself included, don’t give two shits about how their OS works. I don’t want to spend my time tending to it as if it were a fucking garden; I just need it to work so I can get on with my own stuff. No matter how “experienced” I get, that’s always going to be the case. Maybe I’m just a little traumatized because the first Linux distro I used was Gentoo.




  • Seems to me it’s an efficiency problem.

    If you want to send an email to someone, you don’t send it to a mailing list of all your contacts; you send it to the person it concerns. But suppliers, who want potential customers to be aware of their product, just send their message to as many people as possible. Even targeted advertising isn’t much use, because it aims to promote one product rather than helping customers make a balanced assessment of all the choices available. And it’s typically unsolicited, which makes it an intrusion and a waste of time and attention.

    People who want that product need to know what’s on offer and who offers the best quality and value for money, and that information would have to come from an independent source. Independent review sites are a very good alternative to advertising, and maybe they could do more to promote new products and tell customers about things that would suit their needs, which would be a cost-effective way to help suppliers reach their customers. That sounds a lot like advertising, but if it were truly independent and on-demand, it wouldn’t be. In theory AI might do a good job of this, but it’s so open to abuse that it becomes a natural channel for pushing whoever pays for promotion. If advertising were illegal, I wonder how you would police that.

    More broadly, if we rely on reviewers to help customers find what they need, how can we ensure they are independent and fair? Maybe a network of independent reviewers could act as a check on each other: if a reviewer consistently favors one brand when the rest don’t, that pattern could be highlighted and shown up as bias.
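
    To make that concrete, here’s a minimal sketch of the kind of check I mean, assuming reviewers score products from 1 to 5. The data, the threshold, and the simple peer-average comparison are all made up for illustration; a real system would need something far more robust.

```python
from collections import defaultdict
from statistics import mean

# (reviewer, brand, score) tuples -- entirely made-up example data
reviews = [
    ("alice", "AcmePhone", 5), ("alice", "AcmePhone", 5),
    ("bob", "AcmePhone", 3), ("carol", "AcmePhone", 2),
    ("alice", "OtherPhone", 2), ("bob", "OtherPhone", 4),
    ("carol", "OtherPhone", 4),
]

def flag_bias(reviews, threshold=1.5):
    """Flag (reviewer, brand) pairs whose average score differs from the
    average of all other reviewers of that brand by more than `threshold`
    points (an arbitrary cut-off chosen just for illustration)."""
    by_pair = defaultdict(list)   # (reviewer, brand) -> that reviewer's scores
    by_brand = defaultdict(list)  # brand -> (reviewer, score) for everyone
    for reviewer, brand, score in reviews:
        by_pair[(reviewer, brand)].append(score)
        by_brand[brand].append((reviewer, score))

    flags = []
    for (reviewer, brand), scores in by_pair.items():
        peer_scores = [s for r, s in by_brand[brand] if r != reviewer]
        if not peer_scores:
            continue  # nobody to compare against
        gap = mean(scores) - mean(peer_scores)
        if abs(gap) > threshold:
            flags.append((reviewer, brand, round(gap, 2)))
    return flags

print(flag_bias(reviews))
# -> [('alice', 'AcmePhone', 2.5), ('carol', 'AcmePhone', -2.33), ('alice', 'OtherPhone', -2.0)]
```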


  • bampop@lemmy.world to Technology@lemmy.world · *deleted by creator* · +4 / -3 · edited · 4 months ago

    I think the article is missing the point on two levels.

    First is the significance of this data, or rather the lack of it. The internet existed for 20-some years before the majority of people felt they had a use for it. AI is similarly in a finding-its-feet phase: we know it will change the world, but we haven’t quite figured out the details. After a period of increasing integration into our lives it will reach a tipping point where it gains much wider usage, and we’re already very close to that.

    They’re also missing what I would consider the two main reasons people don’t use it yet.

    First, many people just don’t know what to do with it (as was the case with the early internet). The knowledge/imagination/interfaces/tools aren’t mature enough, so it just seems like a lot of effort for minimal benefit. And if the people around you aren’t using it, you probably don’t feel the need.

    The second reason is that the thought of it makes people uncomfortable, or downright scared. Quite possibly with good reason. But even if it all works out well in the end, what we’re looking at is something that will drive the pace of change beyond what human nature can easily cope with. That’s already a problem in the modern world, but we ain’t seen nothing yet. The future looks impossible to anticipate, and that’s scary. Not engaging with AI is arguably just burying your head in the sand, but maybe that beats contemplating an existential terror that you’re powerless to stop.