• 1 Post
  • 48 Comments
Joined 2 years ago
Cake day: June 11th, 2023







  • carzian@lemmy.ml to Selfhosted@lemmy.world · Server for a boat
    6 months ago

    You’ve gotten a lot of good answers, so I’m going to do some out-of-the-box thinking - maybe it will spark a few ideas.

    Goal:

    • self-hosted server on a boat

    Issues:

    • size
    • power
    • corrosion

    So if I were going to do this myself, I’d start with a Pelican case or similar watertight container. We don’t want the equipment getting wet, and we don’t want it exposed to the salty air.

    I’d probably pick a USFF computer, like a Dell 9020, or maybe a Framework motherboard. To get the storage, I’d get a SATA expansion card to add multiple SATA ports to the computer. Then it’s a matter of getting a bunch of SSDs and powering them. I think the 12 V goal is going to be too restrictive - most laptops need 19 V to charge - so I’d just bite the bullet and get an inverter. If you’re really tight on power you could go with a Pi, but the Framework motherboard/USFF both use mobile processors and shouldn’t draw too much while idle.
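    For a rough sense of what that means for the power budget, here’s a quick back-of-the-envelope sketch. Every wattage in it is an assumption, not a measurement, so plug in your own numbers:

    ```python
    # Rough daily energy estimate for an idle USFF box + SSDs.
    # All figures below are assumptions for illustration.
    IDLE_W_COMPUTER = 10.0        # assumed idle draw of a USFF/Framework board
    IDLE_W_PER_SSD = 1.5          # assumed idle draw per SATA SSD
    NUM_SSDS = 4
    INVERTER_EFFICIENCY = 0.85    # losses going 12 V DC -> AC -> 19 V DC

    idle_w = IDLE_W_COMPUTER + NUM_SSDS * IDLE_W_PER_SSD
    battery_side_w = idle_w / INVERTER_EFFICIENCY
    wh_per_day = battery_side_w * 24

    print(f"Idle load at the computer: {idle_w:.1f} W")
    print(f"Draw on the 12 V side:     {battery_side_w:.1f} W")
    print(f"Energy per day:            {wh_per_day:.0f} Wh")
    ```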

    Any wires that pass into the case should go through waterproof bulkheads.

    Personally I’d nix the HDMI-out requirement. It’s one more port to keep track of, and it complicates the self-hosting. If you want it for media streaming to a TV, then I’d recommend a Roku and just run a Jellyfin server on the computer. If you want it for server debugging, I wouldn’t bother running it out of the case.

    The last thing I’d do is figure out cooling. For this I’d probably create some sort of closed-loop heat exchanger from the case to either the outside air or the lake/ocean itself. This could be as simple as a pump running water through two radiators, one in the case and the other outside or just dumped overboard. If you know your power usage ahead of time, you might be able to get away with a Peltier element dumping the heat outside the case.
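    If you want to sanity-check the Peltier idea, the math is just comparing the heat you need to move against the module’s rating. The numbers below are assumptions, not from any particular datasheet:

    ```python
    # Rough feasibility check for cooling the case with a Peltier (TEC).
    # All numbers are assumptions; use your real heat load and a real datasheet.
    server_heat_w = 25.0          # heat dumped into the case ≈ electrical draw
    tec_max_heat_pumped_w = 60.0  # Qmax of a hypothetical TEC module
    tec_cop = 0.5                 # rough coefficient of performance under load

    can_keep_up = server_heat_w < tec_max_heat_pumped_w
    tec_electrical_w = server_heat_w / tec_cop  # power the TEC itself burns

    print(f"TEC can move the heat: {can_keep_up}")
    print(f"Extra draw from the TEC itself: ~{tec_electrical_w:.0f} W")
    ```

    The catch that sketch makes obvious: the TEC can easily burn more power than the server it’s cooling, which is why the pumped-water loop is usually the better option if you have the room for it.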

    I’d probably put this all on its own power system, get a solar panel, battery, inverter, etc. It could even get topped off by the boat’s system if it needs extra juice.
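    Sizing that power system is also just arithmetic. A minimal sketch, assuming a 12 V bank and made-up numbers for load, sun hours, and losses:

    ```python
    # Minimal battery/solar sizing sketch. Every number here is an assumption;
    # substitute your measured load and local sun hours.
    load_w = 30.0            # average continuous draw at the battery
    autonomy_days = 2        # days it should run with no sun
    usable_fraction = 0.5    # lead-acid ~50% usable; LiFePO4 closer to 80%
    peak_sun_hours = 4.0     # per day, depends on season and latitude
    panel_derate = 0.75      # wiring, controller, dirt, panel angle losses

    wh_per_day = load_w * 24
    battery_wh = wh_per_day * autonomy_days / usable_fraction
    battery_ah_at_12v = battery_wh / 12
    panel_w = wh_per_day / (peak_sun_hours * panel_derate)

    print(f"Daily load:   {wh_per_day:.0f} Wh")
    print(f"Battery bank: {battery_wh:.0f} Wh (~{battery_ah_at_12v:.0f} Ah at 12 V)")
    print(f"Solar panel:  {panel_w:.0f} W minimum")
    ```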

    Also, whatever you do, I’d figure out a way to ensure you’re giving your system a clean and steady 12 V.



  • “The cause is a new SATA specification which includes the ability to disable power to the hard disk. When you look at the SATA power connection on the back of your hard drive, there are 15 pins that make contact with your power supply. It’s the third pin that delivers a 3.3V signal that disables the drive. What we need to do is prevent that third pin from making contact with the power cable.”

    Some hot-swap hard drive bays use this feature; it’s definitely more common in enterprise scenarios or in USB HDD enclosures.



  • carzian@lemmy.ml to Selfhosted@lemmy.world · current best HDD-model choice
    6 months ago

    I’ve always liked the Ultrastar line. It used to be made by HGST, and then WD bought them. I’m specifically using the HC530 14 TB. The line has a long history of being very reliable enterprise drives.

    I’ve bought mine from both goharddrive and serverpartsdeals. Both are reliable resellers of used storage. They’ll warranty the drives for 2 or 5 years depending on which you go with. Prices are ~$130-$150.

    Be aware you might need to do the electrical-tape-over-the-power-pin hack depending on your setup.

    PS: One of the listings for the HC530 on goharddrive or serverpartdeals is incorrectly labeled as HC520. Just pay close attention.


    As far as RAID goes, RAID 10 is currently very popular for its speed and drive-failure tolerance. Remember, RAID is not a replacement for the 3-2-1 backup rule. RAID has some fault tolerance for bad hard drives, but it doesn’t protect against a failed RAID card, fire, flood, theft, acts of god, etc.

    You can also look into ZFS and TrueNAS if you feel so inclined. Be aware that if you go with this setup, ECC RAM is basically a requirement.
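    If it helps with planning, usable capacity and guaranteed failure tolerance for the common layouts are easy to compute. This is a simplified sketch - real ZFS/RAID overhead will shave a bit more off, and the drive count and size are just example numbers:

    ```python
    # Simplified usable-capacity comparison for common layouts.
    # Ignores filesystem/metadata overhead, so treat results as upper bounds.
    def usable(layout: str, num_drives: int, drive_tb: float):
        """Return (usable TB, guaranteed number of drive failures tolerated)."""
        if layout == "raid10":                    # striped mirrors
            return num_drives / 2 * drive_tb, 1   # 1 guaranteed, more if lucky
        if layout in ("raid6", "raidz2"):
            return (num_drives - 2) * drive_tb, 2
        if layout in ("raid5", "raidz1"):
            return (num_drives - 1) * drive_tb, 1
        raise ValueError(f"unknown layout: {layout}")

    for layout in ("raid10", "raidz1", "raidz2"):
        tb, failures = usable(layout, num_drives=6, drive_tb=14)
        print(f"{layout:7s}: {tb:5.0f} TB usable, survives {failures} failure(s) guaranteed")
    ```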






  • That’s definitely something to be aware of, but the vdev expansion feature was merged and will probably be released this year.

    Additionally, it looks like the author’s main gripe is that the current way to expand is to add more vdevs. If you plan this out ahead of time, then adding more vdevs incrementally isn’t an issue; you just need to buy enough drives for a vdev. In homelab use this might be an issue, but if OP is planning on a 40-drive setup, then needing to buy drives in groups of 2-3 instead of individually shouldn’t be a huge deal.


  • You need to research RAID 1, 6, 10 and ZFS first. Make an informed decision and go from there. You’re basing the number of drives on (uninformed) assumptions, and that’s going to drive all of your decisions the wrong way. Start by figuring out your target storage amount and how many drive failures you can tolerate.



  • Ah ok. I’ve run OPNsense and pfSense both virtualized in Proxmox and on bare metal, at two workplaces now and at home. I vastly prefer bare metal. Managing it in a VM is a pain. The NIC passthrough is fine, but it complicates configuration and troubleshooting. If you’re not getting the speeds you want, there are now two systems to troubleshoot instead of one. Additionally, you now need to worry about keeping your hypervisor up and running in addition to the firewall. This makes updates and other maintenance more difficult. Hypervisors do provide snapshots, but OPNsense is easy enough to back up that it’s not really a compelling argument.

    My two cents is to get the right equipment for the firewall and run it bare metal. Having more CPU is great if you want to do intrusion detection, DNS filtering, VPNs, etc. on the firewall. Don’t feel like you need to virtualize everything.