A fan of Tesla might think that the automaker just can’t catch a break when it comes to its autonomous driving tech. It’s already subject to several federal investigations over its marketing and deployment of technologies like Autopilot and Full Self-Driving (FSD), and as of last week, we can add another to the list involving around 2.4 million Tesla vehicles. This time, regulators are assessing the cars’ performance in low-visibility conditions after four documented accidents, one of which resulted in a fatality.

The National Highway Traffic Safety Administration (NHTSA) says the new probe examines incidents in which FSD was engaged in reduced-visibility conditions: fog, airborne dust, or sun glare blinding the car’s cameras.

What the car can “see” is the big issue here. It’s also what Tesla bet its future on.

  • gravitas_deficiency@sh.itjust.works · 2 months ago

    Seriously though, wtf is up with Elon not liking LIDAR? I think pretty much every other manufacturer incorporates it into their higher-end driver assist stuff at this point.

    • snooggums@lemmy.world · 2 months ago

      First of all, Elon isn’t that smart.

      Second, it would cost more money to put multiple types of sensors on the car. Spending money bad!

      Personal speculation based on Elon’s past behavior follows:

      Plus he wanted to focus on visual recognition stuff, likely because it would have multiple possible income streams compared to a sensor that’s just good at keeping a car from running into things. Focusing on the visible light spectrum opens up possibilities for facial recognition, data collection by a fleet of Teslas (including the ones people bought) taking pictures, etc.

      Basically he wanted to focus on the one thing that seemed more profitable and didn’t want to spend money on that stupid thing that just kept the car from crashing.

      • skyspydude1@lemmy.world · 2 months ago

        I can tell you the real reason: cameras are cheap for the amount of stuff you can kinda-sorta manage with them. That’s literally it. There’s no other 4D chess game of data collecting or anything else. They’re cheap to add and integrate, and adequate for object detection in typical scenarios. No need to worry about the shape of the bumper or paint affecting the radar, no need to have a bunch of individual ultrasonics integrated into the bumper and the associated wiring/labor costs.

        I worked with them, and there were numerous times where they came to us asking for new sensors because their cameras were too shitty for what they wanted to do, then once they got a quote, they miraculously didn’t need them and figured it out. It happened with corner radars on the Y, it happened with them removing the front radars on everything, and it happened with the ultrasonics.

        They bet it all on cameras to mislead consumers and defraud investors into believing their cheap shit-boxes would be income-generating Robotaxis. Even worse, their own engineers had hard data showing that removing the radar would directly result in pedestrian/motorcyclist deaths, but they had to keep those bullshit production numbers going, so they took them out, and it’s directly resulted in dozens of likely preventable deaths.

        Anyone who’s ever worked with Tesla directly knows they’re an absolute fucking nightmare, and even compared to the shitshow of GM or Stellantis, the absolute blatant disregard for human life at that company is disgusting.

    • andrew_bidlaw@sh.itjust.works · 2 months ago

      He’s probably stuck on his decision to cut LIDAR and compensate with machine learning on camera inputs alone. That hasn’t brought him the edge he wanted. Still, he doubles down, since he risks nothing beyond being proclaimed wrong about that decision.

      • astrsk@fedia.io · 2 months ago

        It’s hilarious because every single time he speaks about some unique aspect of starship that goes against conventional rocketry wisdom, like “we don’t need flame trenches. You get more efficiency on flat ground”, we just have to wait a year or two and all of a sudden they’re adding back the thing they tried to do without (see tower 2 flame trench going in as we speak).

        • snooggums@lemmy.world · 2 months ago

          Then they brag about doing the thing everyone else was already doing as if it was some new concept and his Muskrats eat it up.

          • andrew_bidlaw@sh.itjust.works · 2 months ago

            He needs to risk something to care. As long as his bubble keeps floating, he can sell everything to institutions, businesses, and consumers. Handled with the baby mittens he currently enjoys, he can openly scam people and burn money with a flamethrower without any repercussions.

          • astrsk@fedia.io · 2 months ago

            It sucks because the talent and skill on display over there is insane and incredible; they really work so hard to achieve things never done before. But he has to speak and be the key-man PR idiot and diminish those amazing accomplishments.

    • Sequentialsilence@lemmy.world · 2 months ago

      I know a lot of companies go with RADAR over LIDAR because of reliability issues. RADAR is much more reliable because you can do it solid state, whereas LIDAR either has moving parts or is subject to IR bleed. However, solid-state LIDAR is finally becoming a thing, so LIDAR will start becoming more commonplace in the next few years.

      • bluGill@fedia.io · 2 months ago

        Humans are bad drivers as well. Technology should try to do better than humans, not accept the limitations of humans. When radar, lidar, and other sensors (possibly including things not yet invented) exist, we should use them to make cars safer.

        • burble@lemmy.dbzer0.com · 2 months ago

          You can still do better than human drivers with only visible-light cameras by using more of them, at different heights and angles than a person could pay attention to. I think mixing in other sensors and data sources would be better still, but they’re already getting more data than a human could.

      • FlowVoid@lemmy.world · 2 months ago

        Humans can move their heads to avoid glare. They can shield glare from their eyes with visors.

        Tesla cameras currently can’t do either.

      • troed@fedia.io · 2 months ago

        Musk is of course right. The “only” thing he forgot was that his vision-only model needs full human level artificial intelligence behind it to work.

        Very genius.

  • NutWrench@lemmy.ml · 2 months ago

    Maybe Tesla shouldn’t be allowed to call their enhanced cruise control “autopilot.” Everyone knows how “autopilots” are supposed to work.

    • Tarquinn2049@lemmy.world · 2 months ago

      Well, actually, that’s kind of the problem. It already does more than a real autopilot does. Autopilot in a plane can’t keep the plane from hitting moving objects; it’s not context-aware at all. It just flies a pre-programmed route and executes pre-programmed maneuvers. Literally the first release was already better than what autopilot really is.

      Planes are only safe because there is never supposed to be anything else anywhere near them. Which makes autopilot super easy. Which is why planes have had it since long before we had any context aware machines.

      Also, if “roadspace” were treated the same as “airspace”, with the amount of training and practice pilots get, plus “road traffic controllers” like air traffic controllers, self-driving would have had no trouble right from the get-go: pre-programmed routes, someone enforcing a generous minimum space between each vehicle, and any violation immediately and harshly reprimanded…

      Autopilot is relatively easy compared to self-driving; if anything, calling it autopilot was being under-ambitious.

    • FlowVoid@lemmy.world · 2 months ago

      Everyone thinks they know.

      But the autopilot on an aircraft or ship is often just a cruise control, maintaining a constant heading, speed, and (for aircraft) altitude. The pilot or skipper remains 100% responsible for course changes and collision avoidance.

  • Optional@lemmy.world · 2 months ago

    Sure just use the public for beta testing automobile safety.

    Any reasonable adult should be fine with that.

    Whoops. Sorry for your loss. Hey v2.0 is out!

  • jqubed@lemmy.world · 2 months ago

    Unlike the vast majority of its competitors that are giving their cars with autonomous driving capabilities more ways to “see” their surroundings, Tesla removed ultrasonic and other types of sensors in favor of a camera-only approach in 2022.

    This means there isn’t really any redundancy in the system, so if a Tesla with FSD enabled drives through dense fog, it may not have an easy time keeping track of where the road is and staying on it. Vehicles that not only have cameras but also radar and lidar will make more sense of their environment even through dense fog, although these systems are also affected by the elements. Inclement weather seems to sometimes make FSD go rogue.

    I didn’t realize they were using other sensors in the past and dropped them on newer models.

    Older Teslas had a combination of radar and cameras for Autopilot and driver assistance systems. With newer software versions launched after Tesla went down the “Pure Vision” route, it disabled the sensors in the older cars that had them from the factory. So even if you have FSD enabled in an older Tesla that has more than just cameras, only the cameras will be used when the car is driving itself.

    🤦‍♂️

    Didn’t want to develop two different versions of software I guess?

    • notfromhere@lemmy.ml · 2 months ago

      Aren’t vision cameras the only sensors we have that recognize lane markings? This article is bunk, making it seem like that’s not industry standard. RADAR can’t see paint on the road, and my understanding is LiDAR can’t either, not well enough for real-time lane detection at highway speeds.

      • El Barto@lemmy.world · 2 months ago

        It’s not only about seeing the markings. It’s also about recognizing potential colliding objects in less than ideal scenarios.

    • Oderus@lemmy.world · 2 months ago

      If FSD notices poor weather conditions, it will prompt you to take over; it will not just drive you off the road.

    • Bell@lemmy.world · 2 months ago

      The problem was the different sensors could sometimes disagree. Like, vision sees an obstacle but radar isn’t picking it up…which one does the software believe?

      And if you think vision has problems with things like rain and fog, try radar or lidar!

      Not mentioning the downsides of the other sensors always makes me suspicious of an article.

      The key argument for going vision-only is that it’s what humans do every day. Articles that leave that out also disappoint me.

      • bladerunnerspider@lemmy.world · 2 months ago

        It’s called consensus. Have three sensors and each get a vote. Typically these sensors are the same and thus can detect a failure or incorrect reading of one. This idea is used in IT around data backups and RAID configurations as well as aviation. And … I personally would just favor the radar. If vision says go and radar says stop… stop and avoid hitting that firetruck parked on the highway. Or that motorcyclist. Or any other bizarre vision-only, fatal crashes that this system has wrought.

        Also humans can hear things. So, not just vision.
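        The 2-of-3 consensus idea above can be sketched in a few lines. This is only an illustration of the voting rule described in the comment, not any real vehicle stack; the sensor names and the radar-wins bias are assumptions taken from the comment itself.

```python
def fuse(readings, threshold=2):
    """Consensus check across independent obstacle detectors.

    readings maps a sensor name (e.g. "camera", "lidar", "radar") to a
    bool: True means that sensor reports an obstacle ahead.
    Brakes when at least `threshold` sensors agree, or when the radar
    fires at all (the "just favor the radar" bias from the comment).
    """
    if readings.get("radar", False):
        return "brake"  # radar outranks the vote: stop for the parked firetruck
    votes = sum(readings.values())  # count sensors reporting an obstacle
    return "brake" if votes >= threshold else "continue"
```

        With identical sensors, two agreeing votes also flag the third as a likely failed or misreading unit, which is the same reasoning behind RAID parity checks and triple-redundant avionics.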