• 1 Post
  • 81 Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • In general, I prefer unprivileged LXC to a full VM unless there’s some specific requirement that countermands that preference (like running an appliance or a non-Linux OS).

    What I tend to do is create a new container for each service (unless there’s a related stack). If the service runs on Docker, I’ll install that right inside the container and manage it with docker compose. Installing Docker directly from get.docker.com instead of the distro’s built-in packages means it pretty much works every time.

    Since each service is in its own container, restoring backups is pretty service-specific. If you wanted some kind of central control plane for docker, you could check out swarm mode.
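
    In case it’s useful, the install inside a fresh Debian or Ubuntu container usually boils down to the upstream convenience script; the container ID below is just an example, and nesting is the feature Docker typically needs inside LXC:

    # on the Proxmox host: allow nesting for the container (example VMID)
    pct set 105 --features nesting=1

    # inside the container: install Docker from upstream
    apt update && apt install -y curl
    curl -fsSL https://get.docker.com | sh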



  • Without knowing a little more, it’s tough to say what’s going on, but I suspect when you recreated the storage, you connected it to a slightly different place from last time. What’s the output of cat /etc/pve/storage.cfg? The dump, images, private, snippets, and template directories are auto-created when you assign those roles to a storage pool in the PVE Datacenter.

    Seeing the content of storage.cfg and maybe mount would help get this sorted, I think.
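
    For reference, a directory-type entry in /etc/pve/storage.cfg looks roughly like this (the storage name and path here are made up):

    dir: usb-backup
            path /mnt/pve/usb-backup
            content backup,iso,vztmpl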






  • I’m making some assumptions, namely that you’re using an unprivileged LXC container and the mount point is a bind mount.

    Unprivileged LXC containers shift user ID numbers so that an escape won’t result in root access on the host. The root user (uid 0) in the container is actually uid 100000 from the perspective of the Proxmox host.

    What I usually do is set ownership of my bind mounts to that high-numbered ID (so something like chown -R 100000:100000 /path/to/bind/mount) from Proxmox. Then the root user in the container will be able to set whatever permissions you need directly.
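
    Rough sketch of the host-side steps (the pool path and container ID are made up for the example):

    # on the Proxmox host
    chown -R 100000:100000 /tank/media
    pct set 112 -mp0 /tank/media,mp=/mnt/media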


  • Since you’re interested in this kind of DIY approach, I’d seriously consider thinking the whole process through and writing a simple script for this that runs from your desktop. That will make it trivial to do an automatic backup whenever you’re active on the network.

    Instead of cron, look into systemd timers: with a monotonic setting like OnStartupSec=1min on a user timer, you can fire off your script after, say, one minute of being on your desktop.
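
    A minimal sketch of how the units might look (the unit and script names are just placeholders):

    # ~/.config/systemd/user/backup-pull.timer
    [Unit]
    Description=Pull backup shortly after login

    [Timer]
    OnStartupSec=1min
    Unit=backup-pull.service

    [Install]
    WantedBy=timers.target

    # ~/.config/systemd/user/backup-pull.service
    [Unit]
    Description=Pull backup from the server

    [Service]
    Type=oneshot
    ExecStart=%h/bin/pull-backup.sh

    Enable it once with systemctl --user enable --now backup-pull.timer and it should fire each time your session’s systemd instance starts.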

    Thinking through the script in pseudo code, it could look something like:

    rsync -avzh "$server_source" "$desktop_destination" || curl -d "Backup failed" ntfy.sh/mytopic

    This would pull the backup from your server to your desktop and, if the backup failed, use a service such as ntfy.sh to notify you of the problem.

    I think that would pretty much take care of all of your requirements and if you ever decided to switch systems (like using zfs send/recv instead of rsync), it would be a matter of just altering that one script.



  • tvcvt@lemmy.ml to Selfhosted@lemmy.world · Homelab Organization · 6 months ago

    Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them directly in a pinch. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.

    For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
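
    Just to give a flavor of the syntax, a bare-bones HTTP reverse proxy might look like this (hostnames and addresses are made up):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http-in
        bind *:80
        acl is_wiki hdr(host) -i wiki.example.lan
        use_backend wiki if is_wiki

    backend wiki
        server dokuwiki 192.168.1.20:80 check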



  • If you want an image, it doesn’t matter what the underlying file system is. You should be able to use a tool like Clonezilla and get a 1:1 copy. Depending on how you’ve set up partitioning, you could also use sgdisk to set up the proper partitions, zfs send/recv for the data portion of the drive, and then install a boot loader. That’s probably the way I’d go in this instance.
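
    Very roughly, and assuming a pool called tank with the new disk at /dev/sdb (adjust everything to your actual layout before running anything):

    # copy the partition table from the old disk, then give the copy fresh GUIDs
    sgdisk --replicate=/dev/sdb /dev/sda
    sgdisk --randomize-guids /dev/sdb

    # move the data portion with ZFS (new pool name and partition are examples)
    zpool create newtank /dev/sdb3
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs recv -F newtank

    After that you’d still reinstall your boot loader on the new disk (grub-install or whatever your setup uses).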






  • Not sure how your stack works together, but sudo will let you run particular commands as a different user, and you can be pretty specific with the privileges. For example, you can have a script that’s only allowed to run docker compose -f /path/to/compose.yml restart containername as a user in the docker group. Maybe there’s some Docker-specific approach, but this should work with traditional Unix tools and a little scripting.
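
    One way that might look in practice, with placeholder usernames (webhook is the caller, deploy is the docker-group user; the compose path and container name are whatever yours are):

    # /etc/sudoers.d/restart-container (edit with visudo -f)
    webhook ALL=(deploy) NOPASSWD: /usr/bin/docker compose -f /path/to/compose.yml restart containername

    Then the calling script runs sudo -u deploy docker compose -f /path/to/compose.yml restart containername and nothing else is permitted.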