But also, in a world where such a law did exist, it would naturally force every third party to write their contracts in a way that allows the eventual release of the source code, or lose out on the deal and, with it, the money.
It might be feasible, but it’s a bit awkward to implement because WireGuard is stateless and doesn’t know whether a client is offline or just hasn’t sent any traffic for a while.
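If you want to approximate it anyway, the latest-handshake timestamp is about the only signal WireGuard gives you. A minimal sketch, assuming an interface named `wg0` and root access to the `wg` CLI; the 3-minute threshold is my own guess, based on WireGuard rekeying roughly every 2 minutes while traffic flows:

```python
# Rough liveness check for WireGuard peers via the last-handshake time.
# Needs root (or equivalent capabilities) to run `wg show`.
import subprocess
import time

HANDSHAKE_TIMEOUT = 180  # seconds; under traffic, peers re-handshake about every 2 minutes

def stale_peers(interface: str = "wg0") -> list[str]:
    """Return public keys of peers whose last handshake is older than the timeout."""
    out = subprocess.run(
        ["wg", "show", interface, "latest-handshakes"],
        capture_output=True, text=True, check=True,
    ).stdout
    stale = []
    now = time.time()
    for line in out.splitlines():
        pubkey, timestamp = line.split()
        # A timestamp of 0 means the peer has never completed a handshake.
        if int(timestamp) == 0 or now - int(timestamp) > HANDSHAKE_TIMEOUT:
            stale.append(pubkey)
    return stale

if __name__ == "__main__":
    for peer in stale_peers():
        print(f"peer {peer} looks offline (or just idle)")
```

The catch, of course, is that “offline” and “merely idle” look identical from the other end, which is exactly why this stays awkward.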
That’s kind of weird, because the reason I never bothered with (self-hosted) VPNs before WireGuard is that it was the first one that just worked. Granted, due to its nature, you don’t get a lot of feedback when things don’t work, but it’s so simple in principle that there’s not a lot that can go wrong. For external VPNs like this, it should just be: load config, double-check, done.
That’s a bummer. I’ve been using the forked version as well, and even that dev has been annoyed enough with Google Play that it’s only released on F-Droid nowadays.
Personally, I don’t think releasing only on F-Droid is an issue, because the people interested in Syncthing wouldn’t be deterred by that even if they aren’t already using it, but I totally get why that might sap the last bit of motivation the dev has.
I made similar coasters a while ago, just with a stone tile as the base. Despite enduring a cup of hot tea every day, mine is holding up very well so far and stays clean.
The PETG did deform slightly after months of use, which is both good and bad. On one hand, it has now molded itself to perfectly fit the specific cup I use, but that also means it’s become a bad fit for every other cup or glass.
Your comment about using TPU has given me an idea, though: make the tile reversible, with the current PETG on one side and flex material on the other.
I’d be more concerned as well if this were an overnight change, but I’d say the rollout is slow and gradual enough that giving it more time would just invite more procrastination rather than solutions. Particularly for those following the news, which all sysadmins should, the steady reduction in certificate lifespans has been going on for a while now, with the clear goal of making automation the only viable path forward.
I’ll also go out on a limb and guess that a not-insignificant number of people only think that their “special” case can’t be automated. I wouldn’t even be surprised if many of those cases could be solved by a bog-standard reverse-proxy setup.
Part of this might be my general disdain for sysadmins who don’t know the first thing about technology and security, but I can’t help noticing that the article is weirdly biased:
> Over the past couple of days, these unsung heroes who keep the internet up and running flocked to Reddit to bemoan their soon-to-be increasing workload.
It’s kind of weird to heap that much praise on random Reddit users, who may or may not actually be sysadmins, when they haven’t kept up with the news, or to put any weight on Reddit comments in the first place.
Personally, I’m much more partial to the opinions of actual security researchers and hope this passes. All publicly reachable services should use automated renewals with short lifespans. If that isn’t possible for internal devices for some weird reason, that’s what private CAs are for.
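And a private CA isn’t even much work. A minimal sketch using Python’s `cryptography` package; the name and the 10-year lifetime are just illustrative, and in practice you’d guard the key far more carefully and sign short-lived leaf certificates from it:

```python
# Minimal private root CA using the "cryptography" package (pip install cryptography).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Internal Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

root = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject == issuer
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)

with open("root-ca.pem", "wb") as f:
    f.write(root.public_bytes(serialization.Encoding.PEM))
```

Distribute `root-ca.pem` to your internal devices’ trust stores and the short-lifespan problem stays entirely in-house.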
Personally, I watch the channels of the creators I like and slowly grow my list of channels through their recommendations. My bookmark goes straight to the subscriptions page, and I have uBlock filters for all the unwanted recommendations.
I couldn’t stand having an algorithm decide what I watch.
At the very least it failed in a way that’s obvious, by giving you contradictory statements. If it had left you with only the wrong statements, that’s when “AI” becomes really insidious.
They’ve also had a partnership with iFixit for a while now, allowing them to sell genuine replacement parts.
The process still isn’t what I’d call repair-friendly, but I’ve been able to replace the screen of my Pixel 5 without much trouble. What bothers me most is the use of adhesive and too many parts being bundled together so they can only be replaced in bulk.
Even better, there’s a clean split between the game and the framework they’ve built for it, so people can actually build their own games or tools using osu!framework, and some already have.
Which is neat, because based on what I’ve seen trying the new client, it’s really performant and, of course, low-latency.
Considering the movie industry is currently at a point where it even punishes paying customers with low-quality 720p for daring to use the “wrong” browser, I don’t think it will figure out anytime soon that there’s a market for high-quality, DRM-free media.
There really should be a right to adequate human support that isn’t hidden behind multiple barriers. As you said, it can be a time-saver for the simple stuff, but there’s nothing worse than the dread of knowing your case is going to need some explanation and an actual human who can do more than just follow a flowchart.
Depends a bit on the clients.
Assuming you have only one desktop and one mobile client, you should never run into any issues. Multiple KeePassXC clients are fine as well, as long as Syncthing always has another device it can sync with.
Most amazingly, this setup is also unexpectedly resilient against merge conflicts and can sync even when two copies have changed. You wouldn’t expect that from tools relying on third-party file syncing.
I still try to avoid it, but every time it happened by accident, I could just merge the changes automatically without losing data.
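For the curious: Syncthing renames a losing copy instead of overwriting it, and KeePassXC’s CLI can merge that copy back. A rough sketch, with an illustrative vault path, assuming both copies share the same password (`keepassxc-cli` prompts for it):

```python
# Fold Syncthing conflict copies back into the main KeePassXC database.
# Syncthing renames a conflicting copy to e.g.
#   passwords.sync-conflict-20240101-123456-ABCDEFG.kdbx
# and keepassxc-cli's "merge" command reconciles it into the original.
import subprocess
from pathlib import Path

VAULT = Path.home() / "Sync" / "passwords.kdbx"

for conflict in VAULT.parent.glob("passwords.sync-conflict-*.kdbx"):
    # --same-credentials: unlock the conflict copy with the main database's key.
    subprocess.run(
        ["keepassxc-cli", "merge", "--same-credentials", str(VAULT), str(conflict)],
        check=True,
    )
    conflict.unlink()  # safe to drop once its entries are merged in
```

The GUI offers the same merge under Database → Merge From Database, if you’d rather not script it.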
Oh yeah, you’re right. Both are degradation in some way, just through entirely different causes.
Technically you can do everything through email, because everything online can be represented as text. Doesn’t mean you should.
PRs also aren’t just a simple back-and-forth anymore: tagging, assignees, inline reviews, CI with checks, progress tracking, and yes, reactions. Sure, you can kinda hack all of that into a mailing list, but at that point it becomes really clunky and abuses email even more for something it was never meant to handle. Having a purpose-built interface for all that is just so much nicer.
I’m sorry to be blunt, but mailing lists just suck for group conversations and are a crutch that only gained popularity due to the lack of better alternatives at the time. While the current solutions also come with their own unique set of drawbacks, it’s undeniable that the majority clearly prefers them and wouldn’t want to go back. There’s a reason why almost everyone switched over.
I’d guess because the same argument could be made for the website you’re on right now. Why use that when we could just use mailing lists instead?
More specifically: sure, Git is decentralized at its core, but all the tooling that has been built around it, like issue tracking, is not. Suggesting we go back to email, even if some projects still use it, isn’t the way forward.
I believe there are some services, including some selfhosted ones, that allow you to quickly create (and later delete) unique aliases.
That said, I was surprised that these dictionary spam attacks don’t really happen all that much, at least based on my own experience. Most of the ambient drive-by spam my server receives targets email addresses belonging to domains I don’t even own. Blocking those and a few Sieve scripts gets rid of 99% of spam for me.
Interestingly, there was one time I received spam to a bogus address on my own domain: a while back, one of my actual email addresses got leaked (thanks, Sega), and a few months later that address got copied into another data set, but with a typo, which I assume was caused by someone using OCR.