• 0 Posts
  • 40 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Droolio@feddit.uk to Selfhosted@lemmy.world · TIL - Caddy · 2 days ago

    Do ignore me then - I assumed you might know the reference, and I only meant it in good humour. :) (Without spoiling anything - in the unlikely event you might someday watch it - Mr Milchick is a character who uses ‘big words’. Your choice of words struck a chord.) I will say though, you’re seriously missing out. The cinematography alone is brilliant and the acting exceptional.




  • You’re limiting yourself somewhat if you’re not able to plug in multiple drives at the same time. Otherwise, I might suggest mergerfs for basic JBOD. A single-drive ZFS pool can only detect bit rot, not repair it - you’d need redundancy for self-healing. SnapRAID - ideal for offline setups - would be the next step up if you could dedicate one of your drives to parity.
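
    Something like this, roughly - a minimal sketch, with made-up mount points and drive names, so adjust to your setup:

    ```
    # /etc/fstab - pool two data drives into one mergerfs mount (example paths)
    /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
    ```

    ```
    # /etc/snapraid.conf - a third drive dedicated to parity (example paths)
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1
    data d2 /mnt/disk2
    ```

    A `snapraid sync` after adding files, plus an occasional `snapraid scrub`, is what actually catches the rot (and `snapraid fix` repairs it).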

    In your position, I’d do Duplicacy backups split/spanned over multiple backup drives (however you connect them).

    It has a pretty cool Erasure Coding feature that protects individual chunks from bit rot and possibly even bad sectors, plus the whole database-less architecture makes it very robust. De-duplication, strong compression, and encryption. Plus you can keep historic snapshots, so you avoid the risk of accidentally sync’ing ransomware over the top.

    Edit: the CLI is free for personal use, and is source-available. Written in Go and extremely performant.
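
    For instance, this is roughly what it looks like with the CLI (the storage path and the 5:2 data/parity split here are just examples - check the docs for the exact options):

    ```
    # initialise a repository with Erasure Coding (5 data shards + 2 parity shards)
    duplicacy init -erasure-coding 5:2 myrepo /mnt/backup1/duplicacy-storage

    duplicacy backup          # create a snapshot
    duplicacy check -chunks   # verify chunk integrity
    ```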




  • Multiple backups may be kept.

    Nice work, but if I may suggest - it lacks hardlink support, so it’s quite wasteful in terms of disk space, and the number of ‘tags’ (snapshots) you can keep will be extremely limited.

    At least two robust solutions that use rsync+hardlinks already exist: rsnapshot.org and dirvish.org (both written in Perl). There’s definitely room for backup tools that produce plain copies, instead of packed chunk data like restic and Duplicacy, and a Python or even Bash-based tool might be nice, so keep at it.
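
    (For reference, the core trick those tools rely on is rsync’s --link-dest - unchanged files in a new snapshot become hardlinks into the previous one, so each extra snapshot costs almost nothing. A bare-bones sketch, with made-up paths:)

    ```
    # new snapshot; unchanged files are hardlinked against the last run
    rsync -a --delete --link-dest=/mnt/backup/latest \
        /home/me/ /mnt/backup/2025-01-31/

    # repoint 'latest' at the snapshot we just made
    ln -sfn /mnt/backup/2025-01-31 /mnt/backup/latest
    ```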

    However, I liken backup software to encryption - extreme care must be taken when rolling and using your own. Whatever tool you use, test test test the backups. :)





  • There’s no point doing anything fancy like that - WireGuard over Tailscale is pretty pointless, as Tailscale is literally WireGuard with NAT traversal and authentication bolted on. Unless you enable subnet routing, it doesn’t get much more secure than that.

    And even if you do enable subnet routing (which you might wanna do if you need access to absolutely everything), you can use Tailscale ACLs to keep tighter control - say, only allowing access from specific (tagged) devices.
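
    Something along these lines (tag name and subnet are made up - the real policy lives in your tailnet’s admin console):

    ```
    // HuJSON policy: only devices tagged 'trusted' may reach the home subnet
    {
      "acls": [
        { "action": "accept", "src": ["tag:trusted"], "dst": ["192.168.1.0/24:*"] }
      ],
      "tagOwners": {
        "tag:trusted": ["autogroup:admin"]
      }
    }
    ```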





  • 100% this. OP, whatever solution you come up with, strongly consider disentangling your backup ‘storage’ from the platform or software, so you’re not ‘locked in’.

    IMO, you want something universal that works with both local and ‘cloud’ storage (ideally off-site on your own, or a family member’s or friend’s, NAS; far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago, I no longer worry about how robust my backups are, as I can practice 3-2-1 on my own terms.
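
    For a (hypothetical) flavour of what that looks like in practice with Duplicacy - one set of backups, two independent storage backends (names and URL are examples):

    ```
    # add a second, copy-compatible off-site storage
    duplicacy add -copy default offsite myrepo sftp://user@friends-nas/backups

    duplicacy backup                          # back up to the default storage
    duplicacy copy -from default -to offsite  # replicate snapshots off-site
    ```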




  • You should take it to a data recovery specialist if the data is really, really important, but for lightly-damaged sectors you want ddrescue (oldie but goodie), HDDSuperClone (no longer developed), or OpenSuperClone (a fork of HDDSuperClone, more actively developed).

    You can combine some of these tools with commercial programs like DMDE, UFS Explorer, or R-Studio - to target specific files for a quick result - but basically it’s best to get a full disk image off the bad drive onto another drive first.
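
    If you do go the DIY route, a typical ddrescue session looks something like this (device and paths are examples - always image to a different, healthy drive):

    ```
    # pass 1: grab the easy data fast, skip the slow scraping phase
    ddrescue -n /dev/sdb /mnt/rescue/disk.img /mnt/rescue/disk.map

    # pass 2: go back for the bad areas - direct access, 3 retries
    ddrescue -d -r3 /dev/sdb /mnt/rescue/disk.img /mnt/rescue/disk.map
    ```

    The mapfile is what lets you stop and resume without re-reading the good areas.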