So many answers!!
First it was planets from Ursula Le Guin’s Hainish cycle.
Now it’s the names of birds visiting my feeder: chickadee, titmouse, mockingbird, etc.
HBA in IT mode.
Got it, thanks.
Why are people still doing hardware RAID!?
Didn’t know about ENA mirroring. Thanks! I’m tickled by the idea that all the paywalled journals are not backed up. If we ever have a planet-wide catastrophe, we’ll have to rebuild using the open articles only!
That’s a lot of data to be archiving! What’s the archiving action responsible for this, or what group? I work with SRA and GEO daily for work, so this is interesting to see on lemmy.
It uses the Calibre database but isn’t a frontend per se.
Thanks for clearing that up.
As far as I’m aware, Calibre-Web serves as a web front-end for Calibre. I think you might have to install plugins manually on the desktop version, but they should be active when importing a book through Calibre-Web, especially DeDRM.
Fascinating! Where do you order the lenses? I’m in the US and I can’t find a place cheaper than Zenni where I can get my lenses and frames for <$20.
btrfs or zfs send/receive. Harder to do if already established, but by far the most elegant, especially with atomic snapshots to allow versioning without duplicate data.
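For anyone new to this, a minimal sketch of the zfs variant (pool, dataset, and snapshot names here are made up for illustration):

```shell
# Take an atomic snapshot of the dataset, then replicate it elsewhere.
zfs snapshot tank/data@2024-06-01
zfs send tank/data@2024-06-01 | zfs receive backup/data

# Later snapshots can be sent incrementally (-i): only the delta between
# the two snapshots crosses the wire, which is what makes versioning cheap.
zfs snapshot tank/data@2024-06-02
zfs send -i tank/data@2024-06-01 tank/data@2024-06-02 | zfs receive backup/data
```

Pipe the send stream through ssh to replicate to another machine. btrfs has the equivalent `btrfs send`/`btrfs receive` pair working on read-only snapshots.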
I wonder how well it supports Debian, etc.
I wouldn’t think so. 5400 rpm drives might last longer if we’re specifically thinking about mechanical wear. My main takeaway is that WDC has the best reliability, and I would go by the largest sample sizes, i.e. the final chart, which you also point out. As others have noted, there is no metadata included with these results, for example the location of each drive (rack- and server-room-specific data), which would control for temperature, vibration, and other potential confounders. It’s also likely that as new servers are brought online, different SKUs are bought in batches, e.g. all 10 TB Seagate IronWolf. I don’t know why they haven’t tried linear or simple machine-learning models to provide some explanatory insight into this data, but I’m nevertheless deeply appreciative that it’s available at all.
Backblaze reports HDD reliability data on their blog. Never rely on anecdata!
https://www.backblaze.com/blog/backblaze-drive-stats-for-q1-2024/
Ah, thank you for explaining. I understand where you’re coming from. Nevertheless, from the point of view of a small NAS, RAIDZ1 is much more space and cost efficient, so I think there is room for “pets” in the small homelab or NAS.
I get that. But I think the quote refers to corporate infrastructure. In the case of a mail server, you would have automated backup servers that kick in, and you would simply pull the failed mail server from the rack.
Replacing drives based on SMART messages (pets) means you can do the replacement on your own time and schedule the resilvering or whatever at your convenience. I think that is less burdensome than having a drive fail when you’re quite busy and being stressed about the system running in a degraded state until you have time to replace it.
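In case it helps anyone, checking SMART status is a one-liner with smartmontools (the device path is just an example):

```shell
# Overall health verdict for the drive.
smartctl -H /dev/sda

# Vendor attribute table; rising Reallocated_Sector_Ct or
# Current_Pending_Sector counts are common early-warning signs.
smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending'
```

Running smartd in the background gets you the email/log warnings proactively instead of having to check by hand.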
We have an all-in-one keyboard and mouse, with labeled function keys to start streaming services in Chrome’s kiosk mode. Obviously, navigating with a mouse is in some ways more work than a remote, but it’s actually much faster. Similarly, typing a search is way faster with a keyboard. A side benefit is that its larger size means it won’t get lost in the couch cushions.
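For the curious, the kiosk-mode launch is just a Chrome flag; bind a line like this to a function key in your desktop environment or window manager (the URL is an example):

```shell
# Open a streaming site full-screen with no browser chrome.
google-chrome --kiosk "https://www.netflix.com"
```

Esc or Alt+F4 gets you back out, depending on your setup.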
I have a low power nuc that I use to watch TV. All streaming services + KODI or whatever. I don’t know why I would use some proprietary dongle. I prefer FOSS.
I mean if it’s homelab, it’s ok to be pets. Not everything has to be commoditized for the whims of industry.
Eh? TV boxes? Just use a web browser. What is a TV box?
MPD and whatever terminal MPD client you like.