Just a stranger trying things.

  • 6 Posts
  • 172 Comments
Joined 1 year ago
Cake day: July 16th, 2023

  • I’m not sure, because a bank being absent from such a list could mean either that compatibility is known to be functional or that it is simply unknown. Those are very different, and I would argue that is a critical difference.

    As a user I would definitely care most about a list of functional banks, and that is what we have. What you propose, while it has its value, would not be actionable for users due to the ambiguity I raised above.



  • The Hobbyist@lemmy.zip to Selfhosted@lemmy.world · Immich - 2024 Recap 🎊
    edited 8 days ago

    I’m not saying it’s not true, but nowhere on that page does the word donation appear. And even if it is one, the fact that it is described as a license, tied to a server or a user, causes a lot of confusion for me, especially combined with the fact that there is no paywall but registration is required.

    Why use the terms license, server and user? Why not simply say donation, with the option of showing your support by getting exclusive access to a badge, like Signal does?

    Again, I’m very happy Immich is free; it is great software and it deserves support. But this is just super confusing to me, and the buy.immich.app link does not clarify things, nor does that blog post.

    Edit: typo


  • Hi and thank you so much for the fantastic work on Immich! I’m hoping to get a chance to try it out soon, with the first stable release!

    One question on the financial support page: is it not a donation? There is a per-server and a per-user purchase, but I thought Immich was exclusively self-hosted, is it not? Or is this more a way to say thanks while giving some hints as to how Immich is being used privately? Or is there a way to actually pay to have Immich host a server for you?

    Thanks for clarifying!



  • I didn’t say it can’t. But I’m not sure how well it is optimized for it. From my initial testing, it queues queries and submits them to the model one after another; I have not seen it batch-compute the queries, but maybe it’s a setup issue on my side. vLLM, on the other hand, is designed specifically for the concurrent multi-user use case and has multiple optimizations for it.
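
    For reference, this is roughly how I checked it: fire a few requests in parallel and compare per-request latency to wall-clock time. The endpoint, port and model tag below are assumptions for a default local ollama setup, so adjust them to whatever you actually run.

    ```python
    # Quick-and-dirty concurrency test against an OpenAI-compatible endpoint.
    # URL, port and model tag are assumptions for a default local ollama install.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "http://localhost:11434/v1/chat/completions"  # assumed ollama default
    MODEL = "mistral-nemo"                               # assumed model tag

    def ask(prompt: str) -> float:
        t0 = time.time()
        r = requests.post(URL, json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        }, timeout=300)
        r.raise_for_status()
        return time.time() - t0

    prompts = [f"Count to {n} slowly." for n in (50, 60, 70, 80)]
    start = time.time()
    with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
        latencies = list(pool.map(ask, prompts))
    print("per-request:", [f"{t:.1f}s" for t in latencies])
    print("wall clock:", f"{time.time() - start:.1f}s")
    # If requests are batched, wall clock is close to the slowest single request;
    # if they are queued one after another, it is roughly the sum of them.
    ```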


  • The Hobbyist@lemmy.zip to Selfhosted@lemmy.world · Self-hosting LLMs
    edited 2 months ago

    I run Mistral-Nemo (12B) and Mistral-Small (22B) on my GPU and they are pretty good. As others have said, GPU memory is one of the most limiting factors. 8B models are decent, 15-25B models are good and 70B+ models are excellent (solely based on my own experience). Go for q4_K models, as they will run many times faster than higher-precision quantizations with little quality degradation. They typically come in S (Small), M (Medium) and L (Large) variants; take the largest that fits in your GPU memory. If you go below q4, you may see more severe and noticeable quality degradation.
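
    As a very rough way to see what fits, here is the back-of-envelope math I use; the bits-per-weight and overhead numbers are just assumptions for q4_K-style quants, not exact figures, and real file sizes vary per model.

    ```python
    # Very rough VRAM estimate for a ~q4_K quantized model: weights plus a
    # fudge factor for KV cache and runtime buffers. Numbers are assumptions.

    def approx_vram_gb(params_billion: float, bits_per_weight: float = 4.8,
                       overhead_gb: float = 1.5) -> float:
        weights_gb = params_billion * bits_per_weight / 8
        return weights_gb + overhead_gb

    for name, size_b in [("8B", 8), ("Mistral-Nemo 12B", 12),
                         ("Mistral-Small 22B", 22), ("70B", 70)]:
        print(f"{name}: ~{approx_vram_gb(size_b):.1f} GB")
    ```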

    If you need to serve only one user at a time, ollama + Open WebUI works great. If you need multiple users at the same time, check out vLLM.
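
    For the multi-user case, a minimal vLLM sketch looks something like this; the model id is only an example, and vLLM will schedule the prompts together rather than strictly one after another.

    ```python
    # Minimal vLLM offline batched generation sketch (model id is an example;
    # pick whatever fits your VRAM).
    from vllm import LLM, SamplingParams

    prompts = [
        "Explain what quantization does to an LLM in one sentence.",
        "Summarize why VRAM matters for self-hosted models.",
        "Name one advantage of batched inference.",
    ]
    sampling = SamplingParams(temperature=0.7, max_tokens=128)

    llm = LLM(model="mistralai/Mistral-Nemo-Instruct-2407")  # example model id
    for out in llm.generate(prompts, sampling):
        print(out.outputs[0].text.strip())
    ```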

    Edit: I’m simplifying it very much, but hopefully it is simple and actionable as a starting point. I’ve also seen great stuff from Gemma2-27B.

    Edit2: added links

    Edit3: a decent GPU in terms of bang for buck, IMO, is the RTX 3060 with 12GB. It may be available on the used market for a decent price and offers a good amount of VRAM and GPU performance for the cost. I would like to recommend AMD GPUs, as they offer much more GPU memory for the price, but they are not all as well supported by ROCm and I’m not sure about compatibility with these tools, so perhaps others can chime in.

    Edit4: you can also use Open WebUI with VS Code via the continue.dev extension, so that you have a Copilot-style LLM in your editor.