

You can read the IAEA’s press releases for each attack. They go through the precise function and nature of each building and assess the potential danger. Though they haven’t updated for the US’s latest bombing.
My understanding is that it’s technically against their TOS but loosely enforced. They don’t specify precise limits since they probably change over time and region. Once you get noticed, they’ll block your traffic until you pay. Hence you can find people online that have been using it for years no problem, while other folks have been less lucky.
Basically their business strategy is to offer too-good-to-be-true free services that people start using and relying on, then charging once the bandwidth gets bigger.
It used to be worse, and all of cloudflare’s services were technically limited to HTML files, but selectively enforced. They’ve since changed and clarified their policy a bit. As far as I’ve ever heard, they don’t give a toss about the legality of your content, unless you’re a neo Nazi.
I’m guessing the cloudflared daemon isn’t connecting to Jellyfin. You want to use `http://`. Also, is `jellyfin` the hostname of the VM? Using `localhost` or `127.0.0.1` might be a better way to specify the same VM without relying on DNS for anything.
Personal opinion, but I wouldn’t bother with fail2ban: it’s a bit of effort to get it working with Cloudflare Tunnel, and it’s easy to lock yourself out. Cloudflare’s own Zero Trust feature would be more secure and only needs some fiddling around in Cloudflare’s dashboard.
It runs basically the same PebbleOS, so they’ll work with any app that works with the original Pebbles. They plan to keep using the community app hosting at https://apps.rebble.io/. There’s also GadgetBridge, which is compatible. Eric mentioned on HN that they intend to release an official open source library that can be used to build other companion apps too.
Yeah the mobile app is open source too https://github.com/pebble-dev/mobile-app
I had a 5 II too, used LineageOS for years, worked great. Doesn’t totally solve the battery or fingerprint reader issues. My screen got the dreaded green lightsaber too. The nail in the coffin was Australia turning off 3G, so it can’t make calls anymore. (It wasn’t officially sold here, so they didn’t bother loading it with VoLTE profiles.)
Seems weird to have a separate app read sent and received messages? Is it poking holes in the Messages app sandbox?
Yeah fair. I tried setting it up, but honestly probably not worth the effort in home networks. Problem is browsers don’t know that the other end of the unbound DNS server is DoH, so it won’t use ECH. Even once set up, most browsers need to be manually configured to use the local DoH server. Once there’s better OS support and auto config via DDR and/or DNR it’ll be more worth bothering with.
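For anyone wanting to try anyway: unbound has served DoH natively since 1.12. A minimal sketch of the server side, assuming you already have a TLS certificate for the box (paths and addresses here are placeholders, check the unbound.conf man page for your version):

```
server:
    # listen for DoH on port 443 in addition to plain port 53
    interface: 127.0.0.1
    interface: 127.0.0.1@443
    https-port: 443
    http-endpoint: "/dns-query"
    # TLS cert/key for the DoH listener (placeholder paths)
    tls-service-pem: "/etc/unbound/doh-cert.pem"
    tls-service-key: "/etc/unbound/doh-key.pem"
```

Browsers then still need to be pointed at `https://<your-host>/dns-query` manually, which is exactly the pain point above.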
Do you have the local unbound server respond to DoH so that the browser also uses encrypted client hello?
Consider something like the aoostar R1 with Intel N100. Small and low power like a commercial consumer NAS but cheaper and you can chuck whatever OS you want.
Would you consider making the LLM/GPU monster server a gaming desktop? Depending on how you plan to use it, you could have a beast gaming PC that can do LLM/Stable Diffusion stuff when not gaming. You can install loads of AI stuff on Windows, arguably more easily.
I’ve been using pcloud. They do one time upfront payments for ‘lifetime’ cloud storage. Catch a sale and it’s ~$160/TB. For something long term like backups it seems unbeatable. To the point I sort of don’t expect them to actually last forever, but if they last 2-3 years it’s a decent deal still.
I use rclone to upload my files, honestly not ideal though, since it’s meant for file synchronisation, not backups. Also they are dog slow: downloading my 4 TB takes ~10 days.
I’d view it as the longer you can keep using the current pair, the longer you can save money towards the eventual replacement.
My 10 year old ITX NAS build with 4 HDDs used 40W at idle. Just upgraded to an Aoostart WTR Pro with the same 4 HDDs, uses 28W at idle. My power bill currently averages around US$0.13/kWh.
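For a sense of what that 12 W idle difference is worth, here’s the back-of-envelope yearly cost using the figures above:

```python
# Rough yearly running cost at the rate quoted above (US$0.13/kWh),
# assuming the box idles 24/7 at the measured draw.
RATE = 0.13            # $/kWh
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float) -> float:
    """Yearly electricity cost in dollars for a constant draw."""
    return watts / 1000 * HOURS_PER_YEAR * RATE

old = yearly_cost(40)  # old ITX build
new = yearly_cost(28)  # Aoostar WTR Pro
print(f"old ${old:.2f}/yr, new ${new:.2f}/yr, saving ${old - new:.2f}/yr")
# → old $45.55/yr, new $31.89/yr, saving $13.67/yr
```

So the upgrade saves on the order of $14/year in power here; at European electricity prices it'd be a few times that.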
I’ve always just wiped my work laptop and installed Linux.
Oh boy you’re gonna love Seal https://github.com/JunkFood02/Seal
Another aspect is the social graph. It’s targeted for normies to easily switch to.
Very few people want to install a communication app, open the compose screen for the first time, and be met by an empty list of who they can communicate with.
https://signal.org/blog/private-contact-discovery/
By using phone numbers, you can message your friends without needing to have them all register usernames and tell them to you. It also means Signal doesn’t need to keep a copy of your contact list on their servers, everyone has their local contact list.
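The core idea is set intersection: the client checks which of its local contacts are registered without uploading its address book in the clear. A toy sketch of the naive hashed version (this is NOT what Signal actually ships; phone numbers have so little entropy that plain hashes can be brute-forced, which is why Signal runs the comparison inside SGX enclaves, per the blog post above):

```python
import hashlib

def h(number: str) -> str:
    """Hash a phone number. Toy only: real phone numbers are easily
    brute-forced from a hash, hence Signal's enclave-based design."""
    return hashlib.sha256(number.encode()).hexdigest()

# Server side: set of hashes of registered numbers (made-up numbers)
registered = {h(n) for n in ["+15551230001", "+15551230002"]}

# Client side: the contact list never leaves the phone in the clear
my_contacts = ["+15551230001", "+15559999999"]
matches = [n for n in my_contacts if h(n) in registered]
print(matches)  # → ['+15551230001']
```

The point is that the server only ever sees hashes, and the client learns exactly which contacts it can already message.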
This means private messages for loads of people, their goal.
Hey, we know this account sent this message and you have to give us everything you have about this account
It’s a bit backwards, since your account is your phone number, the agency would be asking “give us everything you have from this number”. They’ve already IDed you at that point.
They should bring back the original https://www.youtube.com/watch?v=tPYbzfwIJRA
To me, Linux isn’t standardized, since anything outside the kernel can be swapped out. Want a GUI? There are competing standards, X vs Wayland, each with multiple implementations with different feature sets. Want audio? There’s ALSA or OSS, and on top of those PulseAudio, JACK, or PipeWire. Multiple desktop environments, which don’t just change the look and feel but also how apps need to be written. Heck, there are even multiple C/POSIX libraries to choose from.
It certainly can be a strength for flexibility, and distros attempt to create a stable, reliable setup out of one chosen set of components.
OpenAI noticed that Generative Pre-trained Transformers get better when you make them bigger. GPT-1 had 117 million parameters. GPT-2 bumped it up to 1.5 billion. GPT-3 grew to 175 billion. Now we have models with over 300 billion.
To run, every generated word requires doing math with every parameter, which nowadays is a massive amount of work, running on the most power hungry top of the line chips.
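To put a rough number on that: a common back-of-envelope rule is about 2 FLOPs per parameter per generated token (one multiply and one add as each weight is used). A quick sketch using the GPT-3 figure from above:

```python
# Back-of-envelope: ~2 FLOPs per parameter per generated token
# (one multiply + one add per weight in a dense forward pass).
params = 175e9                  # GPT-3 parameter count, from the text
flops_per_token = 2 * params    # rough compute per generated token
print(f"~{flops_per_token / 1e9:.0f} GFLOPs per token")
# → ~350 GFLOPs per token
```

That's hundreds of billions of operations for every single word, which is why these models monopolise top-of-the-line accelerators.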
There are efforts to make smaller models that are still effective, but we are still in the range of 7-30 billion to get anything useful out of them.