How does that even work for those hosting their own? Do I just give myself Bluesky+? Because all those features I already have by virtue of hosting my own data.
When you make an outgoing connection, its source is a random local port in the 30000+ range or something like that, and when the remote server replies, it replies back to that port. But if your rules then treat the response to that port as matching a port forward rule, it never reaches the NAT rule that would remangle the packet in the correct way to preserve the connection.
So, the server wants to go out and uses port 33333, the router NATs it and rewrites it as outgoing from your public IP on, say, port 44444, then the remote server replies back, and the router just shoves port 44444 at your server as-is because of the port forward. Your server's like, I don't know anyone interested in port 44444, and drops it, while the client is waiting on port 33333 to hear back and never does, until it times out.
In iptables terms, that's what `--ctstate ESTABLISHED,RELATED` handles, and why you see it in NAT examples.
It probably makes all traffic that would normally be NAT'd out hit a port forward instead, which breaks the connection: the returning SYN-ACK gets treated as a brand new connection, which creates a new port mapping that's incompatible with the original outgoing SYN, and the handshake fails.
Try allowing all ports <10000 or something like that; you'll likely find it works again.
You need to accept all established traffic before any other rules, without further processing, or at least that's how it is with iptables. No idea what interface that is, but if it's OpenWRT, it does become iptables under the hood.
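Here's a minimal sketch of that ordering in raw iptables (the interface name and internal host below are made up for illustration):

```
# accept replies to connections the LAN initiated, before any port-forward rules run
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# the actual port forward, which should now only match genuinely new inbound connections
iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:8080
iptables -A FORWARD -i wan0 -p tcp -d 192.168.1.10 --dport 8080 -m conntrack --ctstate NEW -j ACCEPT

# masquerade outgoing LAN traffic so replies map back through the same conntrack entry
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
```

If that first ESTABLISHED,RELATED rule comes after the forwards (or isn't there at all), the replies get evaluated as brand new inbound connections, which is exactly the failure described above.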
You can look at it the other way around too: Linus made a kernel, and enough people liked it that they built Linux distributions around it, and it kept growing.
A lot of FOSS projects started as someone's personal project they released (sometimes literally just to have stuff on their GitHub to be more hirable in a job search), then became insanely popular rapidly and now power entire ecosystems.
Not all projects start with the ambition to become a big thing, and that's usually how the really good stuff starts off.
The Lounge started off as some users getting interested in Shout (which was just some guy's pet project), and we forked it because we had a pile of patches to fix issues with it. I worked on it purely to serve my own purposes (just enough to IRC on the go without dealing with reconnecting to ZNC all the time and draining battery), and now it's an active project a lot of IRC networks use as a guest client. There was no intent to disrupt the IRC client landscape, I still used HexChat back then, but now it has secured a permanent spot in my open tabs as it does for many people. It's actually a pretty good IRC client now.
Mine's got all the slots filled and I think I still have spare PCIe lanes; Threadrippers are nuts.
To add: a lot of cert providers also offer ACME, so while the primary user of ACME is LetsEncrypt, you can use the same tech and validations with other vendors too.
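For instance, most ACME clients let you switch CAs just by swapping the directory URL; a sketch with certbot (the URL here is a placeholder, not a real endpoint):

```
# certbot defaults to Let's Encrypt, but --server accepts any ACME directory URL
certbot certonly --standalone -d example.com \
  --server https://acme.example-ca.example/directory
```

The domain validation (HTTP-01, DNS-01, etc.) works the same way regardless of which CA is on the other end.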
It'll never be abused nor fall into the wrong hands. Never. And then it does, and they act like nobody could have foreseen it. It's infuriating.
All the data collection going on, it’ll backfire spectacularly eventually.
They’re usually local hardware but configured and managed via cloud services. Although I’ve seen people using EC2 instances as firewalls for some cursed enterprise reasons, which I guess does make it a firewall in the cloud.
They're gonna have to pay me to waste my time with this trash.
Mixing brands is a non-issue; you just lose some features like integration of everything with everything, so more manual configuration. But that's about it.
You can have your TMHI connect over Ethernet to a switch, and from there you can run your wired connections, your point-to-point links, and your mesh network all off that switch. If you need more ports, add another switch.
That said, I'm pretty sure Ubiquiti has stuff for all those needs; it's just pricier than the random crap you can buy at Best Buy.
Why is it always !technology@lemmy.world in particular?
I can't even think of a way to play devil's advocate here: there's no world in which this is good for anyone, and even the benefits for Google are highly questionable given the trust they lose.
No matter what people think of legacy media and news, they’re still important and sometimes the only source of information. Seeing them missing from searches really makes you question what else they’re hiding from you.
IMO a lot of what makes nice self-hostable software is just clean and sane software in general. A lot of stuff ends up either trying so hard to be easy that it can't scale up, or being so unbelievably complicated that it can't scale down. Don't make me set up an email server and API keys for services needed by features I won't even use.
I don't particularly mind needing a database and Redis and the like, but if you need MySQL and PostgreSQL and Redis and memcached and an Elasticsearch cluster, and some of it is Go, some of it is Ruby, and some of it is Java with a sprinkle of someone's Erlang phase… no, just no, screw that.
What really sucks is when Docker is used as a band-aid to hide all that insanity under the guise of easy self-hosting. It works, but it's still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.
My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.
~Not really. All the features of that tool are basic functions we've had since back when LibreOffice was still OpenOffice.~
~Since this converts to Markdown, it's inherently a very lossy conversion. What's hard to pull off is preserving the full formatting when converting to an odt or something.~
Someone pointed out it doesn’t just convert word documents to Markdown, it can also transcribe and OCR, so I guess it does have some usefulness!
Legal content that violates community or instance rules but is otherwise harmless, like spam, is kept in the modlog just fine. People don't generally browse the modlog since you can't interact with the stuff anyway: no upvotes (more likely, tons of downvotes), no commenting. It's not much better than going to the Internet Archive to undelete a post elsewhere.
More extreme content like CSAM usually gets deleted and then purged instead; purging is a feature that properly goes in and wipes the content permanently.
I'm concerned about DRM violating my rights. But apart from that, media is largely for consumption: there are very few reasons to need to edit a movie or something, and the laws at least attempt to cover fair use. DJs remix songs and stuff just fine. Or news articles: you'd mostly want to quote them, which is well defined in the legal framework. It's important to remember that open-source doesn't imply free of charge: there is paid GPL software.
Open-source is important in software because it's much more complex, and you can end up in a situation where software you bought doesn't work because the company refuses to fix it, or straight up stops working because the company went bankrupt 10 years ago and things have changed too much. Proprietary software is a black box that can be doing literally anything, and legally you're not even really allowed to reverse engineer it to make sure it does what it says it does.
Stallman started the free software movement out of frustration with a printer driver that he knew how to fix, but the company wouldn’t give him the source code so he could fix it, and I believe at the time it would also have been illegal to reverse engineer it and patch it, or at the very least it was against the license. And that’s also my reason for using open-source software: not because I want free stuff, but because I want libre stuff that I can fix and maintain. Most people won’t, and that’s where the sharing clause comes in: someone else that can patch it will, and everyone can just use that.
Ideally things would be free and widely available but that’s too commie for most people and we’re headed in the polar opposite direction. Buuut there’s always the high seas where you can set your own moral compass.
They'd get sued whether they do it or not, really. If they don't, they get sued by those who want privacy-invasive scanning. If they do, they're gonna get sued when it inevitably lands someone in hot water because they took pictures of their naked child for the doctors.
Protecting children is important, but it can't come at the cost of violating everyone's privacy and making you guilty until proven innocent.
Meanwhile, children just keep getting shot at school and nobody wants to do anything about it, but oh no, we can’t do anything about that because muh gun rights.
The key there is that the switch does most of the work in hardware, so you can have 1G flowing between all ports with no CPU usage. That means the internal 1G port doesn't matter as much, and the hardware acceleration lets it handle routing across VLANs efficiently without involving the internal port much. Those internal switches can usually handle VLANs and basic NAT nearly entirely on their own.
With a single external 2.5G port you lose that, because traffic has to go into the router and back out to the switch to cross VLANs, so it's effectively a 1.25G link. And it needs to be a managed switch too, since the router doesn't come with a built-in one anymore. The best you can do is software VLANs, but then the other device also needs to use the VLAN tags explicitly, since there's no switch to give you untagged ports.
I'm struggling to think what one can even do with just two Ethernet ports of different speeds. It's begging to be used as a gateway, VPN or firewall, but you can't, because you'll top out at 1G anyway. And assuming one of them is the LAN side, it'll presumably be going to a switch, so the router will never see local LAN traffic anyway, only traffic passing through it, which hits the bandwidth limitation.
I guess technically one could bond the WiFi and the 1G link to make use of the 2.5G link? Or use it as an AP: give it 2.5G upstream and pass through to another AP down the line using the 1G port.
Very questionable specs.
E: it occurred to me this looks like a potentially really good standalone AP if you give it 2.5G upstream and then branch off to another device down the line, like some Ubiquiti ones do. But the form factor is ugly as hell for ceiling mounting…
It’s a machine that used to be well oiled but management’s been deferring maintenance for decades, the oil’s gross, it’s leaking everywhere and overheating, it’s barely hanging on, and the manufacturer’s long been out of business.