Thanks! It looks like they don’t specify any fine amounts, just saying that a fine is probably coming and could be levied before the leadership change in the EU body that handles fines.
Looks paywalled or something; can anyone provide a TL;DR?
Ah, interesting. Again, happy to help out if there’s anything I can contribute to. I can make a feature request on GitHub if there’s interest.
Is there any interest in getting local models to run using this? I’d rather not use Gemini, and then all the data can reside locally (and not require a login).
I’d be happy to work on this, though I’m a Python developer, not a TypeScript one.
I personally love PWAs; why the hate for them? I think more apps should be PWAs instead.
N95 masks are protective, and to a certain (most likely lesser) degree, KN95 masks are also protective.
I don’t believe this is quite right. They’re capable of following instructions that aren’t in their training data but look like things that were; that is, they can probabilistically interpolate between what they saw in training and what you prompted them with, which is why prompting can be so important. Chain of thought is essentially automated prompt engineering: if the model has seen a similar process (e.g. from an online help forum or study materials), it can emulate that process with different keywords and phrases.

The models themselves, however, are not able to perform “a is to b, therefore b is to a,” arguably the cornerstone of symbolic reasoning. This is in part because they have no state model or true grounding, only the probability of observing a token given some context. So even with chain of thought, it is not reasoning; it’s just doing very fancy interpolation over the words and phrases in the initial prompt to generate a prompt that is probably going to give a better answer, not because of reasoning, but because of a stochastic process.
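To make the “probabilities of a token given some context” point concrete, here is a toy sketch (my own illustration, with made-up logits, not anything from an actual model): the model scores candidate next tokens, turns the scores into a probability distribution, and samples from it. Everything downstream, including chain of thought, is built on this one stochastic step.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical logits for the context "Paris is the capital of".
# These numbers are invented for illustration.
logits = {"France": 5.0, "Texas": 1.0, "pizza": -2.0}
probs = softmax(logits)

# Generation samples from the distribution: usually "France",
# occasionally something else. There is no reasoning step, just sampling.
choice = random.choices(list(probs), weights=list(probs.values()))[0]
```

The point of the sketch is that nothing in this loop represents a symbolic fact like “capital-of is a relation”; the model only ever emits whatever token is probable given the context.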
Bookmarked and will come back to this. One thing that may be of interest to add is AMD cards with 20 GB of VRAM. I’d suppose that would be Qwen 2.5 34B, maybe with a less strict quant or something.
Also, it may be interesting to look at the AllenAI Molmo-related models. I’m kind of planning to do this myself but haven’t had time yet.
So glad I have Tesla shorts lol
Maybe also eggcorn?
https://m.youtube.com/watch?v=F12LSAbos7A&t=467s&pp=ygULTWFsYXByb3Bpc20%3D
Not OP, but I looked her up:
She’s a “former Philippines mayor, accused of ties to Chinese criminal syndicates and money laundering” (Reuters). I guess the tech part is the SIM card thing?
While this is true, algorithmic feeds virtually guarantee that echo chambers already exist within a platform. Fascists won’t leave YouTube because they feel it’s “too woke” or offering varying viewpoints; they’ll leave because the people they already watch there tell them to go to the other service. So I think it’s possible Elon attracts the fascists, destroys YouTube’s ability to monetize that part of its algorithm, and YouTube consequently has to improve service for everyone else to keep other fringe echo chambers from following suit.
They don’t, but with quantization and distillation, as well as fancy use of fast SSD storage (they published a paper on this exact topic last year), you can get a really decent model to work on-device. People are already doing this with things like OpenHermes and Mistral (granted, 7B models, but I could easily see Apple doubling RAM, optimizing models with the research paper I mentioned above, and getting 40B models running entirely locally). If the start of the network is good, a 40B model could take care of the vast majority of Siri queries without ever reaching out to a server.
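A quick back-of-envelope check on why quantization matters here (my arithmetic, not Apple’s published numbers): weight memory is roughly parameter count times bits per weight, so dropping from 16-bit to 4-bit weights cuts the footprint 4x.

```python
def model_gb(params_billion, bits_per_weight):
    """Approximate weight memory in decimal GB: params * bits / 8 bytes."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 7B model at 4-bit quant fits in phone-class RAM...
seven_b_4bit = model_gb(7, 4)     # ~3.5 GB
# ...while a 40B model at 4-bit still needs ~20 GB,
# hence the SSD-streaming tricks and a RAM bump.
forty_b_4bit = model_gb(40, 4)    # ~20 GB
forty_b_fp16 = model_gb(40, 16)   # ~80 GB unquantized (fp16)
```

These figures cover weights only; activations and KV cache add more on top, which only strengthens the case for the tricks mentioned above.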
For what it’s worth, according to their WWDC notes, they’re basically trying to do this.
Not even a summary of what’s on Wikipedia, usually a summary of the top 5 SEO crap webpages for any given query.
Depends. If they get access to the code OpenAI is using, they could absolutely try to leapfrog them. They could also just be looking at ways to get near GPT-4 performance locally, on an iPhone. They’d need a lot of tricks, but succeeding there would be a pretty big win for Apple.
Almost. If you own a share of a company, you own a share of something tangible, namely literal company property or IP. Even if the company went bankrupt, you’d own a sliver of its real assets (real estate, computers, patented processes). So while you may be speculating on the wealth associated with the company, it is not a scam in the sense that it is tied to something tangible. The sole value of a cryptocurrency is its speculative value; it is not tied, in theory or in practice, to anything of comparable realized value. A dividend is just a return on profit made from realized assets (the aforementioned real estate or other company property or processes), but the stock itself is intrinsically tied to literal ownership of those profit-generating assets.
Except, you know, the stock being tied to ownership in a company that sells real goods or services. Definitely problems with how stocks are traded, but they’re quite different from crypto.
I mean, you can model a neuronal activation numerically, and in that sense human brains are remarkably similar to hyperdimensional spatial computing devices. They’re arguably higher-dimensional, since they integrate not just over input strength but over physical space and time as well.
I think in general the goal is not to stuff more information into fewer qubits, but to stabilize more qubits so you can hold more information. The problem is in the physics of stabilizing that many qubits for long enough to run a meaningful calculation.
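The reason more stable qubits beat denser encoding is the exponential scaling of the state space: an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles the information the state can encode. A one-line illustration:

```python
def amplitudes(n_qubits):
    """Number of complex amplitudes describing an n-qubit pure state: 2**n."""
    return 2 ** n_qubits

# Each extra qubit doubles the state space: 10 qubits -> 1024 amplitudes,
# 20 qubits -> ~1 million. Holding that many qubits coherent long enough to
# finish a calculation is the physics problem the comment describes.
small = amplitudes(10)  # 1024
big = amplitudes(20)    # 1048576
```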
TLDR:
The Qwen models are popular and DeepSeek R1 seems good, and both are based in China. There is investment because investors think that’s where future commerce will be (which to me looks a little like the dot-com bubble, but whatever). Lack of chips could be a problem.