https://agnos.is/posts/tech-recruitment-is-out-of-control.html
This was my experience at the beginning of 2024. It was bad enough that I had to write a blog post about it.
Have you tried Matrix?
OpenWebUI connected to tabbyUI’s OpenAI-compatible endpoint. I will try reducing the temperature and see if that makes it more accurate.
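For reference, here’s a rough sketch of what a lower-temperature request against an OpenAI-compatible endpoint looks like. The model name and values here are placeholders, not my actual setup; this just builds the request payload rather than sending it:

```python
import json

# Build a chat-completion request body for an OpenAI-compatible endpoint
# (the kind OpenWebUI sends to a local backend). Model id is a placeholder.
def build_completion_request(prompt: str, temperature: float = 0.3) -> dict:
    """Lower temperature makes sampling more deterministic; values around
    0.2-0.5 tend to cut down on incoherent/rambling output."""
    return {
        "model": "qwen2.5-14b-exl2",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": 1024,
    }

payload = build_completion_request("Summarize this thread.", temperature=0.3)
print(json.dumps(payload, indent=2))
```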
Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to three-quarters of the way through a response, it would start outputting tokens in either a foreign language (Russian/Chinese in the case of Qwen 2.5) or things that don’t make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well, but I thought that might have been a template problem, because it seemed to be answering the question twice.
Otherwise, all settings are the defaults.
I tried it with both Qwen 14b and Llama 3.1. Both were exl2 quants produced by bartowski.
Perplexica works. It supports Ollama and custom OpenAI-compatible providers.
Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I’m using exl2 models off of HuggingFace, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? Is it max_seq_len in config.json?
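To be concrete about what I’m trying, here’s roughly the config fragment I have in mind. This is a sketch from memory, not verified against TabbyAPI’s current docs, and the model name is a placeholder:

```yaml
# Sketch of a TabbyAPI config.yml fragment (unverified).
# The keys I believe matter are max_seq_len and cache_mode
# under the model section.
model:
  model_dir: models
  model_name: Qwen2.5-14B-exl2   # placeholder model folder
  max_seq_len: 16384             # context length in tokens
  cache_mode: Q6                 # KV cache quantization (FP16/Q8/Q6/Q4)
```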
You can right-click the URL bar on sites that support the OpenSearch XML standard, which I guess is what they wanted to replace it with. But I don’t really know why they moved the button behind an about:config setting. It could at least be a checkbox or something to enable.
This brings back the “add custom search engine” button, which, for some reason, is hidden by default.
Depends on the continuity and who’s writing it, but often yes. He was notably portrayed this way in the Justice League cartoon.
I use a Misskey fork for microblogging and I can’t even get Lemmy posts to load. The profiles of communities do, but that’s it.
Ah right. What I really meant to ask was if it can do protocols other than http.
Which I don’t think it can…
Are you able to tunnel ports other than 80 and 443 through Cloudflare?
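What I’m hoping is possible is something like the cloudflared ingress config below. This is a sketch based on my understanding of Cloudflare Tunnel, not something I’ve tested; the tunnel id and hostnames are placeholders:

```yaml
# Sketch of a cloudflared config.yml (unverified; placeholders throughout).
# My understanding is that Tunnel can carry arbitrary TCP (e.g. SSH),
# not just HTTP(S), though clients then connect via `cloudflared access`.
tunnel: example-tunnel-id
credentials-file: /etc/cloudflared/creds.json

ingress:
  - hostname: ssh.example.com
    service: ssh://localhost:22      # non-HTTP service on port 22
  - hostname: web.example.com
    service: http://localhost:8080   # ordinary HTTP origin
  - service: http_status:404         # catch-all rule, required last
```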
The fork was originally created because upstream NewPipe elected not to include SponsorBlock functionality.
I don’t think the snap is an official Mozilla package.
Will existing projects have to adapt their codebases to work with ActivityPods? I assume yes.
So is there a way to follow someone on Threads now? Or at least get one’s instance to load a post? Where are the details of this beyond Zuckerberg’s post?
Didn’t they contribute networking stuff?
I think they opened up the client, but not the server part. They also use some goofy license.
They’re not really open source, no. But they do at least support open standards.
Where can I get a sub 400 AMD card with 26 GB of VRAM?