• 1 Post
  • 13 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • So currently I haven’t re-added any of the data-storing ZFS pools to the Datacenter storage section (wanted to understand what I’m doing before trying anything). Right now my storage.cfg reads as follows (without having added anything):

    zfspool: virtualizing
            pool virtualizing
            content images,rootdir
            mountpoint /virtualizing
            nodes chimaera,executor,lusankya
            sparse 0
    
    zfspool: ctdata
            pool virtualizing/ctdata
            content rootdir
            mountpoint /virtualizing/ctdata
            sparse 0
    
    zfspool: vmdata
            pool virtualizing/vmdata
            content images
            mountpoint /virtualizing/vmdata
            sparse 0
    
    dir: ISOs
            path /virtualizing/ISOs
            content iso
            prune-backups keep-all=1
            shared 0
    
    dir: templates
            path /virtualizing/templates
            content vztmpl
            prune-backups keep-all=1
            shared 0
    
    dir: backup
            path /virtualizing/backup
            content backup
            prune-backups keep-all=1
            shared 0
    
    dir: local
            path /var/lib/vz
            content snippets
            prune-backups keep-all=1
            shared 0
    

    Under my ZFS pools (same on each node), I have the following:

    The “holocron” pool is a RAIDZ1 of 4x8TB HDDs, “virtualizing” is a mirror of 2x2TB SSDs, and “spynet” is a single 4TB SSD (NVR storage).
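
    (For reference, the rough shape of those pools at creation time would have been something like the below - the device names are just placeholders, not my actual disks:)

    zpool create holocron raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd   # 4x8TB HDDs, RAIDZ1
    zpool create virtualizing mirror /dev/sde /dev/sdf                 # 2x2TB SSDs, mirrored
    zpool create spynet /dev/nvme0n1                                   # single 4TB SSD (NVR)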

    When you say to “add a fresh disk” - you just mean to add a resource to a CT/VM, right? I trip on the terminology at times, haha. And would it be wise to add the root ZFS pool (such as “holocron”) or to add specific datasets under it (such as “Media” or “Documents”)?

    I’m intending to create a test dataset under “holocron” to test this all out before I put my real data through any risk, of course.
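
    If it helps to be concrete, I’m picturing something like this for the test run (the dataset name and content types are just my guess at what’s appropriate):

    # create a throwaway dataset to experiment with
    zfs create holocron/testdata

    # register it with PVE as ZFS-backed storage on all three nodes
    pvesm add zfspool holocron-testdata --pool holocron/testdata \
        --content images,rootdir --nodes chimaera,executor,lusankya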


  • Ah, I see - this is effectively the same as the first image I shared, but via shell instead of GUI, right?

    For my NFS server CT, my config file is as follows currently, with bind-mounts:

    arch: amd64
    cores: 2
    hostname: bridge
    memory: 512
    mp0: /spynet/NVR,mp=/mnt/NVR,replicate=0,shared=1
    mp1: /holocron/Documents,mp=/mnt/Documents,replicate=0,shared=1
    mp2: /holocron/Media,mp=/mnt/Media,replicate=0,shared=1
    mp3: /holocron/Syncthing,mp=/mnt/Syncthing,replicate=0,shared=1
    net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.1,hwaddr=BC:24:11:62:C2:13,ip=192.168.0.82/24,type=veth
    onboot: 1
    ostype: debian
    rootfs: ctdata:subvol-101-disk-0,size=8G
    startup: order=2
    swap: 512
    lxc.apparmor.profile: unconfined
    lxc.cgroup2.devices.allow: a
    lxc.cap.drop:
    

    For full context, my list of ZFS pools (yes, I’m a Star Wars nerd):

    NAME                                    USED  AVAIL  REFER  MOUNTPOINT
    holocron                               13.1T  7.89T   163K  /holocron
    holocron/Documents                     63.7G  7.89T  52.0G  /holocron/Documents
    holocron/Media                         12.8T  7.89T  12.8T  /holocron/Media
    holocron/Syncthing                      281G  7.89T   153G  /holocron/Syncthing
    rpool                                  13.0G   202G   104K  /rpool
    rpool/ROOT                             12.9G   202G    96K  /rpool/ROOT
    rpool/ROOT/pve-1                       12.9G   202G  12.9G  /
    rpool/data                               96K   202G    96K  /rpool/data
    rpool/var-lib-vz                        104K   202G   104K  /var/lib/vz
    spynet                                 1.46T  2.05T    96K  /spynet
    spynet/NVR                             1.46T  2.05T  1.46T  /spynet/NVR
    virtualizing                           1.20T   574G   112K  /virtualizing
    virtualizing/ISOs                       620M   574G   620M  /virtualizing/ISOs
    virtualizing/backup                     263G   574G   263G  /virtualizing/backup
    virtualizing/ctdata                    1.71G   574G   104K  /virtualizing/ctdata
    virtualizing/ctdata/subvol-100-disk-0  1.32G  6.68G  1.32G  /virtualizing/ctdata/subvol-100-disk-0
    virtualizing/ctdata/subvol-101-disk-0   401M  7.61G   401M  /virtualizing/ctdata/subvol-101-disk-0
    virtualizing/templates                  120M   574G   120M  /virtualizing/templates
    virtualizing/vmdata                     958G   574G    96K  /virtualizing/vmdata
    virtualizing/vmdata/vm-200-disk-0      3.09M   574G    88K  -
    virtualizing/vmdata/vm-200-disk-1       462G   964G  72.5G  -
    virtualizing/vmdata/vm-201-disk-0      3.11M   574G   108K  -
    virtualizing/vmdata/vm-201-disk-1       407G   964G  17.2G  -
    virtualizing/vmdata/vm-202-disk-0      3.07M   574G    76K  -
    virtualizing/vmdata/vm-202-disk-1      49.2G   606G  16.7G  -
    virtualizing/vmdata/vm-203-disk-0      3.11M   574G   116K  -
    virtualizing/vmdata/vm-203-disk-1      39.6G   606G  7.11G  -
    

    So you’re saying to list the relevant four ZFS datasets in there, but as virtual drives (as seen in the “rootfs” line) instead of as bind mounts? Or rather, as “storage backed mount points” from here:

    https://pve.proxmox.com/wiki/Linux_Container#_storage_backed_mount_points
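
    (If I’m reading that page right, converting one of those to a storage-backed mount point would look something like the below - the size is just an example, and I assume this allocates a fresh, empty subvol on the “ctdata” storage rather than reusing my existing data:)

    # replaces the mp1 bind-mount entry with a new 16G subvol mounted at /mnt/Documents
    pct set 101 -mp1 ctdata:16,mp=/mnt/Documents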

    Hopefully I’m on the right track!



    Oh, I didn’t think the gap window was a bug - I was just acknowledging it, and I’m OK with it.

    Definitely some ideas for the future one day, but with my current time, architecture, and folks depending on certain services (and my own sanity, given the many months I’ve already spent on this), I’m not really looking to redo anything or wipe drives.

    Just want to make the best of my ZFS situation for now - I know it can’t do everything that Ceph and GlusterFS can do.



  • Hmm, alright - yeah my other nodes have the same ZFS pools already made.

    For adding a virtual drive, you mean going to this section, and choosing “Add: Hard Disk” then selecting whatever ZFS pool I would have added under the prior screenshot, under the highlighted red “Storage” box? Will the VM “see” the data already in that pool if it is attached to it like this?
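
    (I think the CLI equivalent of that GUI step would be something like the below - the size is just an example, and it presumably creates a brand-new, empty zvol rather than exposing files already in the pool:)

    # add a new 32G virtual disk to VM 200, allocated from "holocron"
    # (assuming "holocron" has already been added as a zfspool storage)
    qm set 200 -scsi1 holocron:32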

    Sorry for my ignorance - I’m a little confused by the “storagename:dataset” thing you mentioned?

    And for another dumb question - when you say “copy the data into a regularly made virtual drive on the guest” - how is this different exactly?

    One other thing comes to mind - instead of adding the ZFS pools to the VMs, what if I added them to my CT that runs an NFS server via a Mount Point (GUI) instead of the bind-mount way I currently have? Of course, I would need to add my existing ZFS pools to the Datacenter “Storage” section in the same way as previously discussed (with the weird content categories).
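
    (In storage.cfg terms, I’m guessing that entry would end up looking roughly like this, mirroring my existing ones - the content types still feel like an odd fit for plain file data:)

    zfspool: holocron
            pool holocron
            content images,rootdir
            mountpoint /holocron
            nodes chimaera,executor,lusankya
            sparse 0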




    Yep, that’s also been a concern of mine - I don’t have replication coming from the other nodes either.

    When you say let PVE manage all of the replication - I guess that’s what the main focus of this post is - how? I have those ZFS data pools that are currently just bind-mounted to two CTs, with the VMs mapping to them via NFS (one CT being an NFS server). It’s my understanding that bind-mounted items aren’t supported to be replicated alongside the CTs to which they are attached.
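
    (For my own notes: I believe PVE-managed replication is set up per guest with pvesr, something like the below - the target node, job ID format, and schedule are just examples from my reading of the docs:)

    # replicate CT 101 (the NFS server) to node "executor" every 30 minutes
    pvesr create-local-job 101-0 executor --schedule "*/30"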

    Is there some other, better way to attach them? This is where that italics part comes in - can I just “Add Storage” for these pools and then add them via GUI to attach to CTs or VMs, even though they don’t fit those content categories?


    I think so, but I already have a great deal of data stored on my existing pools - and I wanted the benefits of ZFS. Additionally, it’s my understanding that Ceph isn’t ideal unless you have dedicated node-to-node direct connections, which I don’t have - there aren’t enough PCIe slots in each node for additional NICs.

    But thanks for flagging - in my original post(s) elsewhere I mentioned in the title that I was seeking to avoid Ceph and GlusterFS, and forgot to mention that here. 🙃




  • Former USAF JAG here (lawyer). I was always a tech geek, undergrad major was in MIS actually, but I didn’t enjoy coding. Always ran Plex on the side, built my own computers, etc. Grew up with my Dad using Linux everywhere (I found this annoying as I just wanted to play games on Windows).

    I didn’t enjoy law (surprise!). I was disillusioned with the criminal justice system too. Quit the law in 2020. Then suddenly had quality time by global happenstance to rethink my life path.

    I work in IT now. Restarted at the bottom of a new career but I’m in deep nerd territory now - Proxmox servers, Home Assistant, networks with VLANs, OPNsense router, 22U server rack, Linux as my daily driver, etc.

    Much happier now.


  • pr0927@lemmy.world to Selfhosted@lemmy.world · Post your Servernames!

    As a huge fan of Star Wars content from before Disney got involved and poisoned it (notable exceptions of Rogue One, Andor, some of the animated shows, etc.), I utilize warship names from the Expanded Universe (now called “Legends”) - what I like to call True Star Wars.

    My main server is Chimaera. My backup server, which also doubles as an NVR, is Lusankya. My separate mostly-NAS server away from my server rack is Admonitor.

    I have sci-fi themed names (not all Star Wars - two other franchises represented here, virtual kudos to those who can identify) for the storage pools too (using TrueNAS SCALE on all three servers):

    • Chimaera (Main Server)
      • Star-Forge (Apps/VM Pool)
      • Holocron (Data Pool)
    • Lusankya (Backup + NVR Server)
      • Shadow-Broker (Apps Pool)
      • Resurrection (Backup Pool)
      • Spynet (NVR Pool)
    • Admonitor (NAS Server)
      • Mount-Tantiss (Apps Pool)
      • Datacron (Data Pool)