Watch time is pretty important on YouTube afaik; initial clicks themselves don’t count for that much
What? Since when does Valve prohibit companies from redirecting customers to non-Valve purchasing flows? Because that’s what this ruling is about: it says Apple can’t prohibit apps from telling users to go buy off-platform for lower prices. Valve isn’t doing that with Steam afaik; actually, I’m not aware of any other platform that does this
Hadn’t heard of that but wouldn’t surprise me
Immich might not hold up to Google Photos in every aspect yet, but I was and still am blown away by how much better face detection and grouping works. I cannot believe how ridiculously bad that feature is in Google Photos: you just have to pray that it works, and if it messes up, it’s extremely annoying to fix. In Immich, it works exactly as you’d expect.
“The planning thing in poems blew me away,” says Batson. “Instead of at the very last minute trying to make the rhyme make sense, it knows where it’s going.”
How is this surprising, like, at all? LLMs predict only a single token at a time for their output, but to get the best results, of course it makes absolute sense to internally think ahead, come up with the full sentence you’re gonna say, and then just output the next token needed to continue that sentence. It’s going to redo that process for every single token, which wastes a lot of energy, but for the quality of the results this is the best approach you can take, and it felt kinda obvious to me that these models must be doing this on one level or another.
I’d be interested to see if there’s massive potential for efficiency improvements by making the model able to access and reuse the “thinking” it has already done for previous tokens
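To make the idea concrete, here’s a toy Python sketch of the difference between recomputing every position’s state on every step and caching it. Everything here (the fake “model”, the token values) is invented for illustration; real transformers do this with attention key/value caches:

```python
def compute_state(prefix):
    """Stand-in for the expensive per-position computation (attention etc.)."""
    compute_state.calls += 1
    return hash(tuple(prefix))  # fake hidden state

compute_state.calls = 0

def generate_naive(prompt, n_new):
    tokens = list(prompt)
    for _ in range(n_new):
        # Recompute the state of every position on every step.
        states = [compute_state(tokens[:i + 1]) for i in range(len(tokens))]
        tokens.append(states[-1] % 100)  # fake "next token" choice
    return tokens

def generate_cached(prompt, n_new):
    tokens = list(prompt)
    # Compute each prompt position once, then reuse.
    cache = [compute_state(tokens[:i + 1]) for i in range(len(tokens))]
    for _ in range(n_new):
        tokens.append(cache[-1] % 100)
        cache.append(compute_state(tokens))  # only the newest position
    return tokens

naive_out = generate_naive([1, 2, 3], 5)
naive_calls = compute_state.calls          # 3+4+5+6+7 = 25 state computations
compute_state.calls = 0
cached_out = generate_cached([1, 2, 3], 5)
cached_calls = compute_state.calls         # 3 + 5 = 8 state computations
assert naive_out == cached_out             # same output, far less work
```

The caveat is that this only reuses computation, not the “plan” itself; the planning the article describes lives inside those states, so caching them is arguably already a partial version of what I’m describing.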
Huh? I’m streaming from my Jellyfin just fine when I’m on the go, with no tailscale or other VPN set up
No, they didn’t. They explicitly said that you’re free to not upgrade for now in the announcement.
They have a section in their TOS that says they can block you from using the printer if you don’t upgrade, which sucks, but that’s a generic clause; it doesn’t mean they’ll make use of it here, and from their communication I don’t suspect they will, at least for the time being.
not a common feature in proprietary software
Just so you know, the GDPR mandates that you can, at any time, get a full export of all your personal data in a common, machine-readable format from anyone who’s processing it. It is laudable though to have that integrated as a feature in the software, rather than jumping through hoops contacting support etc.
Why not simply say donation?
It’s about setting expectations. The wording is chosen because they believe that paying open source developers for their work should be the norm, not the exception. Calling it a donation would not do that justice. Their wording is saying “Here’s the software, we’ll trust you to pay us for it if it brings you value and you can afford it”. It’s an explicit expectation to pay, unless you have good reasons not to, which is also fine but should be the exception. Whereas a donation is very much optional and not the default expectation by nature.
In the end it’s just a semantic difference; it’s all about making expectations clear, even if there’s no enforcement around them.
I agree that this way of displaying the data is appropriate, but it would be nice to have a very visible indicator of this. Some kind of highlighted “fold” line or something at the very bottom of the chart, maybe. If I can deduce the units from context, and the trend is more interesting than absolute numbers, then I’m not going to look at the axes most of the time
This is not at all relevant to the comment you’re responding to. Your choice of password manager doesn’t change that whatever system you’re authenticating against still needs to have at least a hash of your password. That’s what passkeys are improving on here
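A minimal Python sketch of that point, using the stdlib’s scrypt (parameters are illustrative, not a security recommendation): to verify a password at all, the server has to keep at least a salted hash, and that hash is exactly what a breach can leak for offline cracking.

```python
import hashlib, hmac, os

def register(password: str):
    """What a password-based server must store: a salt plus a hash."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**12, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = register("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("wrong password", salt, digest)
# A passkey flow would instead store only a public key server-side and
# verify a signed challenge, so a breach leaks nothing password-like.
```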
You need both ends of the cable connected, so the phone is out. And even on PC, I’m not sure if it would work with the USB drivers in-between the software and the actual ports
Reading the article, it seems like it will actually be opt-in for everyone
The algorithm is actually tailored to find out if/when you fall asleep while watching videos, and then recommends longer videos in autoplay when it believes you are, because they’ll get to play you more ads and cash out more.
You might be misremembering / misinterpreting a little there. This behavior is not intentional, it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary, with loads of short videos you’ll have way more opportunities to see pre-roll ads, whereas with longer videos you’re limited to just the mid-roll spots in that video. So YouTube doesn’t really have an incentive to make it work like that; it’s just accidental.
Here’s the Spiffing Brit video on this, which I think you might have gotten this idea from: https://youtu.be/8iOjeb5DTZI
Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats no matter how effective they actually are. I’m just saying that this example you’ve brought is not really that.
It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.
This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.
That also means that no, not everything an LLM produces has to have been in its training dataset, they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real world concepts, which suggests a deeper kind of understanding than what the “stochastic parrot” moniker wants you to believe.
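Even a toy bigram model makes that point concrete: trained only on next-word statistics, it can emit word sequences that appear nowhere in its training data. The corpus and sentences below are made up for illustration:

```python
from collections import defaultdict

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

# "Train": record which word has been seen following which.
follows = defaultdict(set)
for sentence in corpus:
    for a, b in zip(sentence, sentence[1:]):
        follows[a].add(b)

def generate(start, length):
    """Deterministic generation: always pick the alphabetically first option."""
    out = [start]
    for _ in range(length - 1):
        options = sorted(follows[out[-1]])
        if not options:
            break
        out.append(options[0])
    return out

novel = "the cat sat on the rug".split()
# Every adjacent word pair in `novel` was seen during training...
assert all(b in follows[a] for a, b in zip(novel, novel[1:]))
# ...but the full sentence itself appears in neither training sentence.
assert novel not in corpus
# The model's own output is likewise not a verbatim copy of the corpus.
assert generate("the", 6) not in corpus
```

An LLM is vastly more sophisticated than this, but the basic property is the same: it composes from learned statistics rather than retrieving stored documents.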
LLMs do not make decisions.
What do you mean by “decisions”? LLMs constantly make decisions about which token comes next, that’s all they do really. And in doing so, on a higher, emergent level they can make any kind of decision that you ask them to; the only question is how good those decisions are going to be, which in turn entirely depends on the training data, how good the model is, and how good your prompt is.
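To read “decision” concretely: at every step the model turns scores over its vocabulary into probabilities and picks one token. A sketch with toy numbers and a hypothetical three-word vocabulary:

```python
import math, random

vocab = ["yes", "no", "maybe"]   # hypothetical tiny vocabulary
logits = [2.0, 1.0, 0.1]         # made-up model scores for the next token

# Softmax: scores -> probabilities
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy "decision": take the most likely token.
choice = vocab[probs.index(max(probs))]
print(choice)  # "yes"

# Weighted sampling is the other common decision rule.
random.seed(0)
sampled = random.choices(vocab, weights=probs)[0]
assert sampled in vocab
```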
That kind of window has been around for a long time already. Also, let me introduce you to window awnings
It still protects you from your passwords being compromised in any way except through a compromise of the password manager itself. Yes, it’s worse than keeping them separate, but it’s also still much better than not having 2fa at all.
You’re not wrong, but the way you put it makes it sound a little too intentional, I think. It’s not like the camera sees infrared light and makes a deliberate choice to display it as purple. The camera sensor has red, green and blue pixels, and it just so happens that these pixels are receptive to a wider range of the light spectrum than their counterparts in the human eye, including some infrared. Infrared light apparently triggers the pixels in roughly the same way that purple light does, and the sensor can’t distinguish between infrared light and light that actually appears purple to humans, so that’s why it shows up like that. It’s just an accidental byproduct of how camera sensors work, plus the budgetary decision not to include an infrared filter in the lens to prevent it.
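As a rough numeric sketch of that mechanism (the sensitivity numbers below are invented; real sensor response curves vary per camera): if near-IR triggers the red and blue photosites strongly but green only weakly, a naive readout lands in purple/magenta territory:

```python
# Hypothetical relative response of R/G/B photosites to near-IR light.
ir_response = {"R": 0.8, "G": 0.2, "B": 0.7}

# Naively scale to 8-bit channel values, as a simple pipeline might:
rgb = tuple(round(v * 255) for v in (ir_response["R"],
                                     ir_response["G"],
                                     ir_response["B"]))
print(rgb)  # strong red + strong blue, weak green -> purple/magenta
```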
I think if we are going to support the idea of an open web, we need to be consistent about it.
Not convinced. This feels like the paradox of tolerance in slightly different shape.
Well, yeah, kind of, at this point. LLMs can be interpreted as natural language computers.