• 0 Posts
  • 85 Comments
Joined 8 months ago
Cake day: May 7th, 2024



  • Science communicators who make complex things accessible to the general public are a critical component of building and maintaining public support for scientific institutions. If we want science to serve public interests rather than corporate ones, we need to establish public funding for it, which requires a public understanding of what scientists are doing and why it’s valuable.

    A blog I very much like and keep recommending talks about both the importance of this and the differing viewpoints within academic culture (specifically about history, but many of the concepts apply to sciences in general). It also has cat pictures.

    This isn’t the first time I’ve heard about toxic culture in universities (Section “The Advisor”). Again, the entry is about graduate programs in the humanities, but it’s not just a humanities-specific issue.

    I personally didn’t know about HowStuffWorks (I was under the misconception that it was just a YouTube format, and I generally don’t watch a whole lot of YouTube), but checking it out now, I definitely missed out, and I think it fits the criteria of field-to-public communication.

    To drive such a valuable contributor to such despair that they no longer want to live at all is a disservice to the public, a threat to what good their institution can do (which, for all its toxicity, probably also produced valuable research) and, most of all, a crime against that person. I hope those responsible are held accountable, but I also hope that public scrutiny can bring about improvements in academic culture so that his death might still do some good in the end.


  • > AGI and ASI are what I am referring to. Of course we don’t actually have that right now, I never claimed we did.

    I was talking about the currently available technology though, its inefficiency, and the danger of tech illiteracy leading to overreliance on tools that aren’t yet “smart” enough to warrant that reliance.

    I agree with your sentiment that it may well some day reach that point. If it does and the energy consumption is no longer an active concern, I do see how it could justifiably be deployed at scale.

    But we also agree that “we don’t actually have that right now”, and with what we do have, I don’t think it’s reasonable. I’m happy to debate that point civilly, if you’re interested in that.

    > It is hilarious and insulting you trying to “erm actually” me when I literally work in this field doing research on uses of current gen ML/AI models.

    And how would I know that? Everyone on the Internet is an expert, how would I come to assume you’re actually one? Given the misunderstanding outlined above, I assumed you were conflating the (topical) current models with the (hypothetical) future ones.

    > Go fuck yourself

    There is no need for such hostility. I meant no insult, I just misunderstood what you were talking about and sought to correct a common misconception. Seeing how the Internet is already full of vitriol, I think we’d all do each other a favour if we tried applying Hanlon’s Razor more often and looked for human error as an explanation instead of concluding malice.

    I hope you have a wonderful week, and good luck with your ongoing research!







  • I was contesting the general logic of this sentiment:

    > Which “experts” do you need for what’s common knowledge?

    I took this to mean “If common knowledge suggests an obvious understanding, an expert’s assessment can add no value, as they would either agree or be wrong.” Put differently: “If it seems obviously true to me, it must be true in general.”

    TL;DR: If you think you know more than experts on a given topic, you’re most likely wrong.

    On a fundamental level, this claim holds no water. Experts in a given field are usually aware of the “common knowledge”. They also usually have special knowledge, which is what makes them experts. If they claim things that contradict “common knowledge”, it’s more likely that their special knowledge includes additional considerations a layperson wouldn’t be aware of.
    The Appeal to Authority fallacy applies when the person in question isn’t actually an authority on the subject but merely prominent or versed in some other context; it doesn’t work as a universal refutation of “experts say”.

     

    For this specific case, I’m inclined to assume there is some nuance I might not know about. What seems obvious to me is that large, central power plants are both easier targets and more vulnerable to total disruption if part of their machinery is damaged. A distributed grid of solar panels, on the other hand, may be more resilient: the rest can continue to function even if some panels are destroyed, and they are harder to spot, making efforts to disrupt the power supply far more expensive in terms of resources.

    However, I’m not qualified to assess the expertise of the people in question, let alone make an accurate assessment myself. Maybe you’re right and they’re grifters talking bullshit. But I’d be wary of assuming so just because it seems true.




  • Until it does, we shouldn’t exacerbate the climate and resource issues we already have by blindly buying into the hype and building more and larger corporate-scale power gluttons to produce even more heat than we’re already dealing with.

    “AI” has potential; ideas like machine assistance with writing letters or improving security by augmenting human alertness are all nice. Unfortunately, it also has destructive potential for things like surveillance, even deadlier weapons, or accelerating the wealth extraction of those with the capital to invest in building the aforementioned power gluttons.

    Additionally, it risks misuse and overreliance, which is particularly dangerous at the current stage, where it can’t entirely replace humans (yet) and the resulting issues may not become apparent until they’ve already done damage.

    Unless and until the abilities of AI reach the point where they can compensate for tech illiteracy and we no longer need to worry about the exorbitant heat production, it shouldn’t be deployed at scale at all. Even then, its use needs to be scrutinised and regulated, and that regulation appropriately enforced (which basically requires significant social and political change, so good luck).




  • Ideally, the most reasonable approach for public information organs, in my opinion, would be to use all the channels that are available: Mastodon, Bluesky, Threads, but also X for the share of people that can’t be arsed to move (or don’t want to, because the people and communities they care about haven’t). I’d even count Facebook, Instagram and Reddit among those channels, as much as I resent those companies, as well as Lemmy and the other fediverse services (I’m not super informed here), a blog, RSS feeds, and maybe an email subscription service too, just to be sure.

    In fact, I think diversifying your presence would be a great thing in general - platform exclusivity is turning out to be quite a toxic and disadvantageous concept. Well, it has been for a while, but it’s starting to become more visible.

    The real restriction is of course the technical infrastructure and personnel to maintain all these presences. You could use a content distribution system that takes a picture, a long text and a short summary and generates appropriate posts for all these platforms (a rough sketch of that idea follows below), but you’d still need people monitoring and responding on the various platforms, ideally people sufficiently familiar with the respective culture to communicate effectively.
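
    To make the idea a bit more concrete, here is a minimal, hypothetical Python sketch of such a distribution layer: it takes one source item (picture, long text, short summary, canonical link) and derives a per-platform post within a rough character budget. The platform list, the budgets and all names here are illustrative assumptions, not real platform limits or API calls; actually publishing (and attaching the picture) would still go through each platform’s own API.

```python
from dataclasses import dataclass

@dataclass
class SourceItem:
    image_path: str  # picture to attach (platform-specific upload omitted here)
    long_text: str   # full announcement, e.g. for Mastodon/Lemmy/a blog
    summary: str     # short teaser for character-limited platforms
    link: str        # canonical URL (e.g. the blog post) to point back to

# Rough, illustrative character budgets -- not the platforms' real limits.
PLATFORM_LIMITS = {
    "mastodon": 500,
    "bluesky": 300,
    "x": 280,
    "lemmy": 10_000,
}

def render_post(item: SourceItem, platform: str) -> str:
    """Pick the long text or the summary depending on the platform's budget,
    truncate if necessary, and always append the canonical link."""
    limit = PLATFORM_LIMITS[platform]
    body = item.long_text if len(item.long_text) + len(item.link) + 1 <= limit else item.summary
    post = f"{body} {item.link}"
    if len(post) > limit:  # last resort: hard-truncate but keep the link intact
        post = post[: limit - len(item.link) - 2].rstrip() + "… " + item.link
    return post

if __name__ == "__main__":
    item = SourceItem(
        image_path="flood_warning.jpg",
        long_text=(
            "Severe weather advisory: heavy rain is expected tonight and tomorrow "
            "morning. Residents of the river district should move vehicles to higher "
            "ground, avoid the riverside paths, and prepare for possible road closures. "
            "Emergency services will post updates every two hours. Please check on "
            "neighbours who may need assistance."
        ),
        summary="Flood warning for the river district tonight.",
        link="https://example.org/advisory/123",
    )
    for platform in PLATFORM_LIMITS:
        print(f"--- {platform} ---")
        print(render_post(item, platform))
```

    In this toy example the long text fits the Mastodon and Lemmy budgets but not the Bluesky or X ones, so those two fall back to the short summary; a real system would add per-platform media upload, scheduling and the human monitoring mentioned above on top of this.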