Knowing Nvidia’s exorbitant pricing, I think I’ll keep Intel’s Arc B580 in my wishlist.
Three raccoons in a trench coat. I talk politics and furries.
Other socials: https://ragdollx.carrd.co/
You might just want to use Kaggle tbh
I heard that he’s an ethereal being from another dimension that has already faded away from our plane of existence, so the police are wasting their time looking for him and should close the case.
They did test those block towers to see if they were resistant to earthquakes, and they were still standing after a test comparable to the strongest earthquake in California. Though I agree that, compared to the other options available, it does look way less safe and efficient.
QnQ pwease down’t ask me abouwt Tiananmen Squawe, that’s vewy mean…
It’s a known problem - though of course, because these companies are trying to push AI into everything and oversell it to build hype and please investors, they usually try to avoid recognizing its limitations.
Frankly I think that now they should focus on making these models smaller and more efficient instead of just throwing more compute at the wall, and actually train them to completion so they’ll generalize properly and be more useful.
I would like to propose some changes to that title:
Microsoft CEO’s pay rises 63% to $79m,
despite [because of] devastating year for layoffs: 2,550 jobs lost [employees were fired by their greedy CEO] in 2024 [because he wanted more money]
Conservatives have already said that they want to inspect children’s genitals, so it’s only a matter of time until they start saying that they want to regularly inspect women’s genitals as well to “protect unborn children” (read: control women and fulfill their sick fetish)
I remember when scientists were more focused on making AI models smaller and more efficient, and research on generative models was focused on making GANs as robust as possible with very little compute and data.
Now that big companies and rich investors have seen the potential for profit in AI, the paradigm has shifted to “throw more compute at the wall until something sticks”, so it’s not surprising it’s affecting carbon emissions.
Besides that, it’s also annoying that most of the time they keep their AIs behind closed doors, and even in the few cases where the weights are released publicly, these models are so big that they aren’t usable for the vast majority of people, as sometimes even Kaggle can’t handle them.
Looked up her name on Twitter to see what people were saying about this, that was a mistake 🙄
A lot of people seem to hate her for whatever reason. She was far from perfect, but all things considered I think she did fine as CEO, and I never got the hate. It can’t be easy to manage a company as big and complex as YouTube.
“My science-based, 100% dragon MMO is already under development.”
Yeah, while I don’t doubt that noise pollution can affect one’s health, I have to wonder how much of this is just the placebo effect, like with people complaining that cellphone towers are giving them migraines or rashes.
Don’t a lot of people also keep their tax information as plain text on their PCs? If someone’s really worried about that stuff being leaked, I think it’s on them to download VeraCrypt or smth, and also not to use ChatGPT for sensitive stuff, knowing that OpenAI and Apple will obviously use it as training data.
Maybe they should try using Claude 3.5 Sonnet to write more secure code for their systems. I’ve heard it’s the best LLM out there when it comes to coding 🤡
No but you see he is a visionary! A real life Tony Stark!! He’ll do great things with that money like… Making Twitter X likes private for some reason…? I’m sure that cost a lot of money somehow /s
This did happen a while back, with researchers finding thousands of hashes of CSAM images in LAION-2B. Still, IIRC it was something like a fraction of a fraction of 1%, and they weren’t actually available in the dataset because they had already been removed from the internet.
You could still make AI CSAM even if you were 100% sure that none of the training images included it since that’s what these models are made for - being able to combine concepts without needing to have seen them before. If you hold the AI’s hand enough with prompt engineering, textual inversion and img2img you can get it to generate pretty much anything. That’s the power and danger of these things.
I wish science were as simple as taking the mean and confidence intervals.
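For what it’s worth, the “mean and confidence intervals” recipe really is just a few lines — which is kind of the joke. A minimal sketch (the sample numbers and the 1.96 z-value for a 95% normal-approximation interval are just illustrative):

```python
import statistics

# Hypothetical measurements, purely for illustration
data = [2.1, 2.5, 1.9, 2.3, 2.8, 2.0]

mean = statistics.mean(data)
# Standard error of the mean: sample stdev / sqrt(n)
sem = statistics.stdev(data) / len(data) ** 0.5
# 95% CI via the normal approximation (z ≈ 1.96)
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean = {mean:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Of course, real studies also have to worry about confounders, sampling bias, multiple comparisons, and everything else that doesn’t fit in six lines.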
Cope. The idea always sucked and made no sense. (Also I just hate Zuck and hope he gets Luigi’d 🙏)