Seems the chapter for Jellyfin has been “coming soon” for 3 years, too bad.
I’m not saying it’s not true, but nowhere on that page does the word donation appear. And even if it did, the fact that it is described as a license, tied to a server or a user, causes a lot of confusion for me, especially combined with the fact that there is no paywall but registration is required.
Why use the terms license, server and user? Why not simply say donation, with the option of showing your support by getting exclusive access to a badge, like Signal does?
Again, I’m very happy Immich is free; it is great software and it deserves support, but this is just super confusing to me, and the buy.immich.app link does not clarify things, nor does that blog post.
Edit: typo
Hi and thank you so much for the fantastic work on Immich! I’m hoping to get a chance to try it out soon, with the first stable release!
One question on the financial support page: is it not a donation? There is a per-server and a per-user purchase, but I thought Immich was exclusively self-hosted, is it not? Or is this more like a way to say thanks while giving some hints as to how Immich is being used privately? Or is there a way to actually pay to have Immich host a server for you?
Thanks for clarifying!
From reading the comments, this is something related to Star Trek, but for people who’ve never watched the show, what is it?
Allegedly, the 5090 will have 32GB and the 5080 16GB, so I don’t see much room for the 5060 to have more than 8GB if the 5070 itself has 12GB.
I would have loved to see the 5080 with 24GB, the 5070 with 16GB and the 5060 with 12GB (at least), and for the 5060 to drop the 128-bit bus…
This would make sense, as the Ente server doesn’t do much given that all the photos are encrypted. All the intelligence is in the client apps.
Thanks for sharing your experience. Was XCP-ng considered as a migration target? Would you have some feedback to share on what made it unsuitable for you? Thank you!
They have a special migration tool from VMWare: https://docs.xcp-ng.org/installation/migrate-to-xcp-ng/#-from-vmware
This is the way.
Thorn, the company backed by Ashton Kutcher, which pushed to monitor all messages in the EU via Chat Control. No thanks.
I hear you, but how much time was Synology given? If it was no time at all (which seems to be what happened here?), that doesn’t even give Synology a chance, and that’s what I’m concerned with. If they get a month (give or take), then sure, disclose it, and too bad for them if they don’t have a fix; they should have taken it more seriously. But I’m wondering how much time they were even given in this case.
Was it that the talk was a last-minute change (replacing another scheduled talk), so the responsible disclosure was made in a rush without giving Synology more time to provide the patch before the talk was presented?
If so, who decided it was a good idea to present something regarding a vulnerability without the fix being available yet?
I’m not sure; I read that ZFS can help in the case of ransomware, so I assumed it would extend to accidental formatting, but maybe there’s a key difference.
I think these kinds of situations are where ZFS snapshots shine: you’re back in a matter of seconds with no data loss (assuming you have a recent snapshot from before the mistake); see the sketch below.
Edit: yeah no, if you operate at the disk level directly, no local ZFS snapshot could save you…
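To make the file-level case concrete, here’s a minimal sketch of the snapshot/rollback flow; the dataset name tank/data is a placeholder, and it just shells out to the standard zfs commands:

```python
import subprocess

def zfs(*args: str) -> None:
    # Run a zfs subcommand and raise if it fails.
    subprocess.run(["zfs", *args], check=True)

# Take a snapshot before doing anything risky.
zfs("snapshot", "tank/data@before-cleanup")

# ... the mistake happens: files deleted, dataset mangled ...

# Roll the dataset back to that snapshot. -r destroys any
# snapshots taken after it, which rollback needs to proceed.
zfs("rollback", "-r", "tank/data@before-cleanup")
```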
This. I will resume my recommendation of Bitwarden.
I didn’t say it can’t. But I’m not sure how well it is optimized for it. From my initial testing, it queues queries and submits them one after another to the model; I have not seen it batch-compute the queries, but maybe it’s a setup issue on my side. vLLM, on the other hand, is designed specifically for the concurrent multi-user use case and has multiple optimizations for it.
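For comparison, a minimal sketch of vLLM’s offline batched API (the model name and prompts are just examples; vLLM schedules the prompts together via continuous batching rather than strictly one after another):

```python
from vllm import LLM, SamplingParams

prompts = [
    "Explain ZFS snapshots in one sentence.",
    "What does q4_K_M quantization mean?",
]
params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM batches these requests internally instead of queuing them.
llm = LLM(model="mistralai/Mistral-Nemo-Instruct-2407")
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```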
I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (solely based on my own experience). Go for q4_K models, as they will run many times faster than higher-precision quants with little quality degradation. They typically come in S (Small), M (Medium) and L (Large) variants; take the largest that fits in your GPU memory. If you go below q4, you may see more severe and noticeable quality degradation.
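As a rough back-of-the-envelope check: weights take about params × bits / 8 bytes, so a 22B model at ~4.5 bits/weight (q4_K_M) is roughly 22 × 4.5 / 8 ≈ 12.4 GB, plus some overhead for context/KV cache, which is why it wants a 16GB card rather than a 12GB one.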
If you need to serve only one user at a time, Ollama + Open WebUI works great. If you need multiple users at the same time, check out vLLM.
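To give an idea of how simple the single-user case is, here’s a minimal sketch of calling Ollama’s local HTTP API from Python (the model name is whatever you’ve pulled; Ollama listens on localhost:11434 by default):

```python
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "mistral-nemo",  # any model pulled with `ollama pull`
        "prompt": "Summarize ZFS snapshots in two sentences.",
        "stream": False,          # one JSON reply instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```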
Edit: I’m simplifying it very much, but hopefully it is simple and actionable as a starting point. I’ve also seen great stuff from Gemma2-27B.
Edit2: added links
Edit3: a decent GPU regarding bang for buck IMO is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to recommend AMD GPUs, as they offer much more VRAM for the price, but they are not all well supported by ROCm and I’m not sure about compatibility with these tools, so perhaps others can chime in.
Edit4: you can also pair VS Code with the continue.dev extension so you have a Copilot-type LLM in your editor, backed by Open WebUI or Ollama; a config sketch is below.
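The shape of a continue.dev model entry pointing at a local Ollama instance looks roughly like this (from memory; the exact schema varies by extension version, and newer releases use a YAML config instead):

```json
{
  "models": [
    {
      "title": "Mistral Nemo (local)",
      "provider": "ollama",
      "model": "mistral-nemo"
    }
  ]
}
```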
I wouldn’t assume this is done with malice in mind, but maybe this is someone unaware of the importance of a formal license.
I’m wondering: could the same performance as the integrated RAM Intel used for Lunar Lake be achieved with the latest CAMM modules? The only integrated approach that really gets the most out of it is HBM; anything else seems like a bad trade-off.
So either you go HBM, with real bandwidth and latency gains, or CAMM, with decent performance and upgradeable RAM modules. But on-package RAM like Intel’s provides neither the HBM performance nor the CAMM modularity.
I’m not sure, because a bank being absent from such a list could mean either that its compatibility is known to be functional or that it is simply unknown. Those are very different things, and I would argue the difference is critical.
As a user, I would definitely care most about a list of functional banks, and that is what we have. What you propose, while it has its value, would not be actionable for users due to the ambiguity I raised above.