

The episode is “My Screw Up” (S3E14) if anyone is wondering.
I might actually prefer “My Lunch” from S5 as an episode, but they are both fantastic.
Well, yes and no.
Quantum computers will likely never beat classical computing on classical algorithms, for exactly the reasons you stated: classical just has too much of a head start.
But there are certain problems where quantum algorithms are exponentially faster than the best known classical algorithms. Quantum computers should pull ahead on those problems fairly quickly, but we are still working on building reliable QCs. Also, we currently don’t know very many quantum algorithms with that degree of speedup, so as others have said there aren’t many use cases for QCs yet.
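To make “exponentially faster” concrete, here’s a toy Python comparison. The 2^n vs n^3 step counts are illustrative scalings picked for the example, not measurements of any real quantum or classical algorithm:

```python
# Toy illustration of what an "exponential speedup" means in practice.
# These costs are made-up scalings for illustration only: assume a classical
# method needs ~2^n steps and a quantum method needs ~n^3 steps for the same
# problem size n.

def classical_steps(n: int) -> int:
    return 2 ** n          # exponential in problem size

def quantum_steps(n: int) -> int:
    return n ** 3          # polynomial in problem size

for n in (20, 40, 60, 80):
    print(f"n={n:3d}  classical ~ {classical_steps(n):.2e} steps  "
          f"quantum ~ {quantum_steps(n):.2e} steps")
```

Even modest problem sizes make the exponential cost blow up, which is why those specific problems are where QCs could eventually matter.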
This isn’t a “comic book” universe, but the parahumans story universe (Worm and Ward) fits this pretty well.
Without spoiling too much of the story, characters all get powers in response to traumatic events. The powers they get also tend to reflect the type of trauma that occurred, so if they lost an arm they might get a healing power, or if they were trapped in a burning building they might get the ability to phase through walls and a resistance to fire. All of the powers follow this approach and stay within the internal rules of the setting.
After a few years the orbit will degrade enough that it’ll start to fall back to Earth. At that point, the satellite will either burn up completely on re-entry, or burn up partially with the rest falling to Earth.
Either way, each of these satellites will be completely gone from orbit after a few years.
ULA is already a private company. I don’t think the US government has done any of its own work to get to space since the Shuttle.
If this actually did lead to faster matrix multiplication, then essentially anything that runs on a GPU would benefit. That definitely could include games and physics simulations, along with a bunch of other applications (and yes, also AI stuff).
I’m sure the paper’s authors know all of that, but somewhere along the line the article just became “faster and better AI.”
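For anyone wondering what “faster matrix multiplication” even looks like, here’s a rough Python sketch of Strassen’s algorithm, an older, well-known example of the idea: 7 recursive block multiplications instead of 8, giving roughly O(n^2.81) instead of O(n^3). It assumes square matrices with power-of-two size; real libraries (and whatever the paper actually does) are far more sophisticated:

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Multiply square, power-of-two-sized matrices with Strassen's algorithm."""
    n = A.shape[0]
    if n <= 64:                      # below this size, plain multiplication wins
        return A @ B
    half = n // 2
    a, b = A[:half, :half], A[:half, half:]
    c, d = A[half:, :half], A[half:, half:]
    e, f = B[:half, :half], B[:half, half:]
    g, h = B[half:, :half], B[half:, half:]

    # 7 recursive multiplications instead of the naive 8
    m1 = strassen(a + d, e + h)
    m2 = strassen(c + d, e)
    m3 = strassen(a, f - h)
    m4 = strassen(d, g - e)
    m5 = strassen(a + b, h)
    m6 = strassen(c - a, e + f)
    m7 = strassen(b - d, g + h)

    top = np.hstack([m1 + m4 - m5 + m7, m3 + m5])
    bottom = np.hstack([m2 + m4, m1 - m2 + m3 + m6])
    return np.vstack([top, bottom])

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(strassen(A, B), A @ B)   # matches the standard product
```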
The above post is referencing/quoting a line from the show “It’s Always Sunny in Philadelphia,” which is why people are upvoting it.
I agree with many of the other commenters that OP debating their husband might not be the best idea.
But if that’s what they want, “Decoding the Gurus” did at least one Rogan-specific episode, and I think they do a better job of covering and dismantling Rogan’s rhetorical approach than the podcasts above.
Those stats are misleading though. Autopilot only runs on highways, which are much safer per mile even for human drivers.
Tesla is basically comparing its system, which only runs in pristine, ideal conditions, against an average human who has to deal with the real world.
As far as I’m aware they haven’t released safety per mile data from the FSD cars yet, and until they do I will remain skeptical about how much safer it currently is.
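To show why that comparison is apples-to-oranges, here’s a toy calculation with completely made-up crash rates (none of these numbers are real data):

```python
# Hypothetical crash rates per mile, invented purely to illustrate the
# selection-bias problem. Do not treat these as real statistics.
human_all_roads_rate = 1 / 2_000_000   # humans, all driving conditions
human_highway_rate   = 1 / 5_000_000   # humans, highway miles only
autopilot_rate       = 1 / 4_000_000   # system that only runs on highways

# Comparing the highway-only system against humans on *all* roads makes it
# look safer...
print(autopilot_rate < human_all_roads_rate)   # True

# ...but against humans on the same easy highway miles, it looks worse.
print(autopilot_rate < human_highway_rate)     # False
```

Same system, same data: the conclusion flips depending on which baseline you pick, which is why the per-mile headline stat doesn’t tell you much on its own.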
Yes, but notably you can design to reduce the risk of leaking hydrogen. If the areas around the tanks are designed to let any leakage vent before it reaches dangerous concentrations, the risk goes way down. Yes, hydrogen is flammable, so tanks of it are dangerous, but jet fuel is also quite flammable and we’ve used that for a long time.
This is all in contrast to the design of the Hindenburg, which was specifically built to hold onto a huge volume of hydrogen in the flammable regime.
I’m guessing that they are (falsely) equating it to the Hindenburg, when IMO it wouldn’t be much different safety-wise from current fossil-fuel-powered planes.
It’s not like they would be filling the wings and luggage compartment with free-floating hydrogen; it stays in its tank.
Good point! Obviously the solution here is to stop funding the science!
(/s)