

BiglyBT for manual dls on desktop, qBittorrent with the arrs
100% this. OP, whatever solution you come up with, strongly consider disentangling your backup ‘storage’ from the platform or software, so you’re not ‘locked in’.
IMO, you want something universal that works with both local and ‘cloud’ storage (ideally off-site on your own/a family member’s/a friend’s NAS; far less expensive in the long run). Trust me, as someone who came from CrashPlan and moved to Duplicacy 8 years ago, I no longer worry about how robust my backups are, as I can practice 3-2-1 on my own terms.
While you can do command-line stuff with Clonezilla, I think what they’re referring to is the text-based guided interface, which doesn’t actually differ much from the Rescuezilla GUI; Rescuezilla only looks marginally prettier. However, Rescuezilla does bundle a few other useful tools and a desktop environment, so it’s still a bit nicer to use.
Yep, I guess it depends on how much data of interest is on the drive. You can hook it up to dmde with a ddrescue/OpenSuperClone-mounted drive, which can let you index the filesystem while it streams content to the backup image. It reads and remembers sectors already copied, and you can target specific files/folders so you don’t have to touch most of the drive.
You should take it to a data recovery specialist if the data is really important, but for lightly damaged sectors you want ddrescue (oldie but goodie), HDDSuperClone (no longer developed), or OpenSuperClone (a fork of HDDSuperClone that’s more actively developed).
You can combine some of these tools with commercial programs like dmde, UFS Explorer, or R-Studio to target specific files for a quick result, but fundamentally it’s best to get a full disk image off the bad drive and onto another drive/image.
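If you go the full-image route, a typical ddrescue workflow is two passes sharing the same mapfile. The device name and paths below are placeholders - confirm yours with lsblk before running anything:

```
# pass 1: copy the easy sectors quickly, skipping the scraping phase (-n)
sudo ddrescue -n /dev/sdX rescue.img rescue.map

# pass 2: go back for the bad areas with direct access (-d), retrying up to 3 times (-r3)
sudo ddrescue -d -r3 /dev/sdX rescue.img rescue.map
```

The mapfile records which sectors have already been copied, so you can stop and resume without re-reading the drive.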
No that’s either HDD Regenerator or SpinRite. Clonezilla is a sector-by-sector disk imaging program. (SpinRite et al are good for keeping old drives running for longer but if you want to do data recovery and really value your data, ddrescue or HDDSuperClone is what you want.)
It’s good enough for recent releases (note you may have to open a port for passive XDCC) but because it’s not easy to automate, even public trackers + *arrs (w/ VPN) are just more convenient.
Occasionally you might find releases on IRC and not on public trackers, and vice versa, so it’s good to have a backup. I prefer scene releases, as it’s easy to find specific stuff with the XDCC search sites (e.g. dot eu and sun), with maybe a li’l help from srrdb to verify CRCs.
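If you’d rather check CRCs yourself than trust filenames, a quick stdlib-only Python sketch (compare the result against what srrdb lists for the release):

```python
import zlib

def file_crc32(path):
    """Compute a file's CRC32 in 64 KiB chunks, returned as the usual
    8-hex-digit uppercase string (the format srrdb displays)."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            crc = zlib.crc32(chunk, crc)
    return f"{crc & 0xFFFFFFFF:08X}"
```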
Don’t forget, you can also use SRV records to point a domain at another target, which also lets you omit the port number. So connecting to server.org, say, can point to mc.server.org:25565 under the hood.
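For Minecraft specifically, Java clients look up the `_minecraft._tcp` service, so the zone entry would look roughly like this (hostnames are made up; TTL/priority/weight are just typical values):

```
; _service._proto.name      TTL  class type priority weight port  target
_minecraft._tcp.server.org. 3600 IN    SRV  0        5      25565 mc.server.org.
```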
This prolly isn’t what Hypixel are doing, as everything’s likely on the same network and their router/firewall is just forwarding traffic to different machines, but SRV is one way to redirect a Minecraft connection (and you could combine the technique with subdomains).
Suppose one way around it would be to rent a cheap VPS in the UK and piggyback off its connection?
Otherwise, there’s a season 2 ‘NORDiC’ version going around which actually has English audio - just with various scandiwegian soft subs - so it’s defo out there. DM me if you still struggle to find it.
More people should use BiglyBT and its Swarm Merging feature. You get the ability to seed or download chunks from peers across separate torrent files.
It’s a shame, because if more people used it, the BiglyBT devs might add hash-based merging (with v2 torrents) instead of just size-based. Hybrid/v2 merging is still possible, but matching on file size alone is less reliable, and it only applies to files larger than 50MB.
Some kinda auto v1/v2/hybrid private<->public torrent maker plugin for BiglyBT would be… bigly.
If qBittorrent/qbittorrent-nox is bound to your VPN interface, then 1) your VPN needs to support port forwarding, and 2) forwarding a port on your router is pointless and unnecessary. Your only ways around it are to switch to a VPN that supports port forwarding, or to drop the VPN and port forward on the router.
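For reference, on headless qbittorrent-nox the interface binding lives in qBittorrent.conf. The key names below are my recollection from qBittorrent 4.x, and tun0 is an assumed OpenVPN/WireGuard interface name - verify both against your own setup before editing:

```
[BitTorrent]
Session\Interface=tun0
Session\InterfaceName=tun0
```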
Actually, ufw has its own separate issue you may need to deal with. (Or bind ports to localhost/127.0.0.1 as others have stated.)
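As a sketch, restricting incoming torrent traffic to the VPN interface with ufw might look like this - tun0 and port 6881 are assumptions, so substitute your own interface and forwarded port:

```
sudo ufw default deny incoming
sudo ufw allow in on tun0 to any port 6881 proto tcp
sudo ufw allow in on tun0 to any port 6881 proto udp
```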
The next best alternative would be BiglyBT’s Swarm Merging feature (which works similarly, and amazingly well on v1 torrents considering it only stores a precise file size instead of a hash in Vuze/Bigly’s own DHT). I’ve been able to ‘complete’ numerous separate torrents where availability was <1.
BiglyBT already supports v2 but dunno if Swarm Merging works with such torrents yet.
Thank you for posting this, hadn’t heard of it before.
Yes, I also work in IT.
The paid GUI version is extremely cautious with auto-updates (it’s basically a wrapper for the CLI) - perhaps a bit too cautious. The free CLI version is also very cautious about making sure your backup storage doesn’t break.
For example, they recently added zstd compression, yet existing storages stay on lz4 unless you force it - and even then, the two compression methods can coexist in the same backup destination. It’s extremely robust in that regard (to the point that if you started forcing zstd compression, or created a new zstd backup destination, you could use the newest CLI to copy the data back to the older lz4 method and revert). And of course you can compile it yourself years from now.
The licence is pretty clear - the CLI version is entirely free for personal use (commercial use requires a licence, and the GUI is optional). If you don’t like the licence, that’s fine, but it’s hardly ‘disingenuous’ when it is free for personal use, and has been for many years.
IMHO, Duplicacy is better than all of them at all those things - multi-machine, cross-platform, zstd compression, encryption, incrementals, de-duplication.
Wouldn’t you be on CGNAT though? How are they blocking it - at the DNS level? Have you tried a CNAME record that points your own domain to the actual duckdns domain? Just curious how/why they might be doing this.
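By the CNAME idea I mean something like this in your own domain’s zone (names here are placeholders):

```
home.example.org. 300 IN CNAME yourname.duckdns.org.
```

If they’re blocking duckdns.org lookups at the DNS level, resolving via your own domain sidesteps that filter.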
If they’re supposed to be binary-identical data (same file checksums), you can use BiglyBT’s Swarm Merging feature - without manually copying (which isn’t as reliable, due to the start/end of the files not aligning with the chunk boundaries).
If they’ve been modified in any way though, this won’t work. However, you might be able to use its Swarm Discovery to find other torrents with the same data and complete with Swarm Merging.
If you can’t find the original .torrent, one way to find it again is to use the BiglyBT client’s Swarm Discoveries feature to search its DHT for the exact file size in bytes (of the main media file within). You may be able to find one or more torrents and simultaneously seed them with Swarm Merging too.
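Note you need the exact byte count, not the rounded size a file manager shows (e.g. ‘1.4 GB’). Easy enough to get with stdlib Python:

```python
import os

def exact_size(path):
    """Return the file's exact size in bytes - the number to paste
    into BiglyBT's Swarm Discoveries size search."""
    return os.path.getsize(path)
```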
As well as the force recheck method others have mentioned, you can also tell BiglyBT to use existing files elsewhere when adding the torrent, which can copy the data onto there for you without risking overwriting the original files.