It’s not configurable through the UI, but if you’re the admin of an instance you can change the character limit with some fairly simple source code tweaks.
Ah, of course - that’s unfortunate, but thanks for the pointer.
I’m not well versed in the field, but I understand that large tech companies which host user-generated content match the hashes of uploaded content against a list of known bad hashes as part of their strategy to detect and tackle such content.
Could a strategy like that be adopted as a first pass to improve detection and reduce the compute load of running every file through an AI model? A rough sketch of what I mean is below.
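For illustration, here’s a minimal Python sketch of that two-stage approach. It uses a plain SHA-256 digest and a hard-coded blocklist purely for simplicity - real deployments use perceptual hashes (e.g. PhotoDNA or PDQ) that survive resizing and re-encoding, sourced from an external clearinghouse - and every name below is hypothetical.

```python
import hashlib

# Hypothetical blocklist; the placeholder entry stands in for digests that
# would really come from an external hash-list provider.
KNOWN_BAD_HASHES: set[str] = {
    "0" * 64,  # placeholder, not a real digest
}

def classify_with_model(file_bytes: bytes) -> str:
    """Stand-in for the expensive AI second pass."""
    return "needs_ml_review"

def moderate_upload(file_bytes: bytes) -> str:
    # Cheap first pass: exact digest lookup against the blocklist.
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        return "blocked"
    # Only files the blocklist doesn't recognise pay the cost of the model.
    return classify_with_model(file_bytes)

print(moderate_upload(b"example upload contents"))
```

The catch with an exact-match first pass is that changing a single byte defeats it, which is why production systems rely on perceptual hashing that tolerates small alterations to an image.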
I would say that for an action to be considered censorship in the strictest sense, it would need to be the suppression of information as imposed and enforced by a monopolistic authority.
If the State were to declare a book banned, that would be censorship because the State establishes itself as the single totalising authority over the people in the territory it governs. Should you contravene that ruling and possess the material in question, you’re opening yourself up to the threat of violence until you start respecting it. You’re not able to opt out; the single authority imposes itself and its ruling on you.
Meanwhile, on federated social media there are many concurrently operating instances with different rulesets and federations. If the instance you’re part of decides to defederate from another and you’re unhappy with the decision, you can move to an instance which still federates with the one in question. You’re able to opt out of that ruling without consequence.
Plus, even if you decide not to move instance, the content hosted by the defederated instance will still be available by visiting that instance directly.
Defederation doesn’t meaningfully suppress information, whereas censorship does.
There actually is a Web 3.0, and it predates the cryptocurrency-oriented conceptualisation of “Web3” by quite some time.
Web 3.0 is otherwise known as the Semantic Web, a set of standards developed by the W3C for formally representing (meta)data and relationships between entities on the internet, and for facilitating the machine-reading and exchange thereof.
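For a concrete taste of what those standards look like in practice, here’s a small Python sketch using the rdflib library to build RDF triples - the subject-predicate-object statements at the heart of the Semantic Web - and serialise them to Turtle, one of the W3C exchange formats. The example.org identifiers are placeholders.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
# Each statement is a (subject, predicate, object) triple.
g.add((EX.alice, RDF.type, FOAF.Person))
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))

# Serialise to Turtle, a standard Semantic Web text format.
print(g.serialize(format="turtle"))
```

Shared vocabularies like FOAF (friend of a friend) are exactly the sort of machine-readable relationship schemas the Semantic Web was built around.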
It’s fairly silly that this course of action is the consequence of a desire to manipulate search engine results, but at least they’re archiving the articles before taking them down.
To address the headline, though, I don’t think that anybody reputable ever seriously claimed that the internet was forever in a literal sense - we’ve been dealing with ephemerality and issues like link rot from the beginning.
It was only ever commonplace to say the internet was forever in the sense that fully retracting anything once posted could range from difficult to impossible after it’d been shared a few times.
Only in the modern era dominated by corporations offering a platform in perpetuity have we been afforded even the illusion of dependable permanence, and honestly I’m much more comfortable with the notion of less widely distributed content being able to entropy out of existence than with a permanent record of everything ever made public.
I could understand upgrading so frequently at the advent of mainstream smartphones, where two years of progress actually did represent a significant user experience improvement - but the intergenerational improvements for most people’s day-to-day use have been marginal for quite some time now.
Once you’ve got web browsers and website-equivalent mobile apps performing well, software keyboards which keep up with your typing, high-definition video playback without dropped frames, graphics processing sufficient to render whatever your game of choice is for the train journey to work, a battery which lasts a day of moderate to intense use, and a screen resolution so high that you can’t differentiate the pixels even by pressing your eyeball to the glass, you’ve covered most people’s media consumption for the form factor. There’s not much else to offer after that.
The bill says that commercial entities serving pornography are required to perform age verification in one of three ways: by verifying a driver’s license, by verifying another piece of government-issued identification, or by using any commercially viable age verification mechanism.
So, yeah, I’d imagine compliance looks like either uploading a photograph or scan of an identity card or document for the site operators to check, or uploading it to an affiliated service which does age verification on their behalf.
Which is obviously horrendous from a privacy and information security standpoint for the consumer, and exposes the site operator to costs and legal risk associated with verifying and storing sensitive personal information.
It’s not as though the existence and mechanisms of piracy are a coveted secret. There’s a decent chance that they’ll learn about and attempt it independently, and the method they learn about online might expose them to greater risk than if they did it with more consideration.
On that basis, I think that knowledge transfer is at worst harm reduction. If it’s immoral, which I don’t believe it is, then at the very least your intervention could prevent them from being preyed upon by some copyright troll company when they do it despite your silence or protestations.