I’m curious, how are you discovering new music this way? My understanding of Soulseek and Nicotine+ is that they’re great for finding music by artists you already know, but idk how they would work for discovery…?
I don’t know anything about GPU design but expandable VRAM is a really interesting idea. Feels too consumer friendly for Nvidia and maybe even AMD though.
I can’t believe someone has paid for that domain name for 23 years… O_O
I like the friendlier feeling of Seaford (the o shapes have a little tilt to them rather than being straight on the grid), but I’m guessing they leaned towards the most “generic” of the five because as a default font you want it to become “invisible” almost. I think a more unique font would stand out and then become a little grating over time given how much it would be seen.
Yup; hopefully there are some advances in the training space, but I’d guess that having large quantities of VRAM is always going to be necessary in some capacity for training specifically.
So I’m no expert at running local LLMs, but I did download one (the 7B Vicuña model recommended by the LocalLLM subreddit wiki) and try my hand at training a LoRA on some structured data I have.
Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.
I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or very tight. IMO, to get good results with local models you really need large quantities of VRAM and to be using 13B or larger models.
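For anyone wanting to try anyway: quantization is the usual trick for squeezing a model into a smaller card. A 7B model in fp16 is ~14 GB of weights alone (7B params × 2 bytes), which is why 10 GB is so tight. Here’s a rough sketch using the Hugging Face transformers + bitsandbytes stack; the model ID and settings are just illustrative, not exactly what I ran:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative model; any 7B causal LM on the Hub loads the same way.
model_id = "lmsys/vicuna-7b-v1.5"

# 4-bit quantization shrinks the weights from ~14 GB (fp16) to ~3.5 GB,
# leaving VRAM headroom for activations and the KV cache.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # matmuls still run in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spills layers to CPU RAM if VRAM still runs out
)
```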
Additionally, when you’re training a LoRA the model + training data gets loaded into VRAM. My training dataset wasn’t very large, and even so, I kept running into VRAM constraints with training.
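To make the LoRA point concrete: the adapter itself is tiny, and it’s everything else that eats VRAM. A minimal sketch with the peft library, continuing from the quantized model above (the rank and target modules are illustrative, not my actual config):

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                                  # adapter rank: small low-rank matrices
    lora_alpha=16,                        # scaling factor for adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections (illustrative)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wraps the frozen base model with trainable adapters. The base weights,
# activations, optimizer state, and the training batch all still sit in
# VRAM during training, which is where the constraints bite.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # usually well under 1% of total params
```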
In the end I concluded that in its current state, running a local LLM is an interesting exercise, but it’s only really great on enthusiast-level hardware with loads of VRAM (4090s, etc.).
Never heard of them, but just looking at a registrar comparison chart, their renewal costs are pretty high; for comparison, a .wiki renewal is $20 at Porkbun and $30 at Hover. Maybe they bundle in a lot of services that make the price worth it? But unless you’re taking full advantage of those (if they’re offered), you could def get a better deal elsewhere.
Namecheap has okay starting prices but man their renewal prices aren’t great compared to other registrars.
I just transferred all my domains out of Namecheap into Porkbun. I think Porkbun is 10 to 50 cents more expensive than Cloudflare, but they seemed a bit easier to use and supported all my TLDs. So far, a way better experience than Namecheap!
I have a personal Discord server that I drop links into - fully intending to get them out of Discord and into my notes someday, though let’s just say I’m quite behind on that.
Mostly I find it useful because I can drop a link in from my phone and quickly access it from my PC, or vice versa. There is some organization into channel types (food, music, games, etc), but these days I just use a general channel as a dumping ground and figure I’ll sort later, ha.