This sounds like something you should pursue on the project’s GitHub page: https://github.com/immich-app/immich
The problem with managing replication outside of PVE is ‘what do you do when you have to move the VM/CT to another node?’ Are you going to move the conf file manually? It’s too much manual work.
Let PVE manage all of the replication. Use Sanoid for stuff that isn’t managed by Proxmox.
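For the datasets Proxmox doesn’t touch, a minimal sanoid.conf sketch might look something like this (the dataset name and retention numbers are placeholders, adjust to taste):

```
# /etc/sanoid/sanoid.conf — hypothetical dataset, example retention values
[tank/media]
	use_template = production
	recursive = yes

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	yearly = 0
	autosnap = yes
	autoprune = yes
```

Pair it with syncoid on a cron job or systemd timer if you also want those snapshots replicated to another box.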
A Home Assistant integration could accomplish this for you. Not sure if it’s less work than the regular mobile clients, though.
I think fans of Nix and NixOS would agree.
This is really more of a home networking issue than anything having to do with self-hosting. Please consider posting this in one of the many Lemmy home networking communities.
This is a question probably better-suited for one of the Proxmox communities. But, I’ll give it a try.
Regarding your concerns about new SSDs and old VM configs: why not upgrade to PVE8 on the existing hardware? This would seem to mitigate your concerns about PVE8 restoring VMs from a PVE7 system. Still, I wouldn’t expect it to be a problem either way.
Not sure about your TrueNAS question. I wouldn’t expect any issues unless a PVE8 install brings with it a kernel driver change that is relevant to your hardware.
Finally, there are several config files that would be good to capture for backup. Proxmox itself doesn’t have a quick list, but this link has one that looks about right: https://www.hungred.com/how-to/list-of-proxmox-important-configuration-files-directory/
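If it helps, something like this grabs the directories that list calls out (these are the common paths on a stock install; double-check them against your own setup before relying on it):

```
# Run on the PVE host; adjust the paths to match your system
tar czf "/root/pve-config-$(hostname)-$(date +%F).tar.gz" \
  /etc/pve \
  /etc/network/interfaces \
  /etc/hosts \
  /etc/vzdump.conf \
  /var/lib/pve-cluster
```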
Is there any actual research? All I see are TikTok videos and Reddit comments.
I’d love another iOS option.
Nextcloud Photos performs okay, but the interface is very ‘meh’. Plus, the mobile client’s sync is a little unstable. On iOS, there’s no background sync at all.
This seems like the correct advice. If the container is on the same host as the data, there’s no need to access the data via Samba. In fact, it’s likely the container doesn’t contain the Samba client needed for such connectivity.
Assuming TrueNAS allows the containers to see local data, a bind mount is the way to go.
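For example, with a plain Docker Compose deployment, the bind mount is just a host path mapped into the container (service name and paths below are made up, only to show the shape):

```yaml
# docker-compose.yml — hypothetical service and host path
services:
  photos:
    image: example/photos:latest
    volumes:
      # host path on the TrueNAS pool : path inside the container
      - /mnt/tank/photos:/data
```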
wtf?
Right. You kind of want your bare metal OS as vanilla as possible. If you need to nuke and pave, you don’t need to worry about re-applying various configs. Additionally, on a theoretical level, if there’s a bug in something on the bare metal OS, the separation provided by VMs and containers should mean it doesn’t affect the apps in those VMs / containers.
That seems easier - at least to me - than keeping track of configs in text files or even Ansible playbooks.
This is good stuff. Has it been posted to the project’s GitHub (issue, discussion, etc.)?
Have you considered searching the GitHub issues?
I love the Enbrighten stuff. It’s not WiFi, but it’s local.
IMO, this is a discussion that should be taking place on the project’s GitHub. I’m going to lock the comments so I don’t get any more reports about commenters’ behavior.
I imagine this would be up to the application. What you’re describing would be seen by the OS as the device becoming unavailable. That won’t really affect the OS itself. But, it could cause problems with the drivers and/or applications that are expecting the device to be available. The effect could range from “hm, the GPU isn’t responding, oh well” to a kernel panic.
Curious how this is distinct from SimpleX.
While this post is in support of a self-hosting platform, the request itself is storage-related. I would recommend that you reach out to https://lemmy.world/c/zfs or https://discourse.practicalzfs.com/.