• @[email protected]

    The recent Not Just Bikes video about self-driving cars covers this subject really well; it’s very dystopian.

  • @[email protected]

    How do you admit to intentionally ignoring traffic laws and not get instantly shut down by the NTSB?

  • @[email protected]

    I work in a related field to this, so I can try to guess at what’s happening behind the scenes. Initially, most companies had very complicated non-machine learning algorithms (rule-based/hand-engineered) that solved the motion planning problem, i.e. how should a car move given its surroundings and its goal. This essentially means writing what is comparable to either a bunch of if-else statements, or a sort of weighted graph search (there are other ways, of course). This works well for say 95% of cases, but becomes exponentially harder to make work for the remaining 5% of cases (think drunk driver or similar rare or unusual events).
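
    To make the “bunch of if-else statements” idea concrete, a rule-based decision layer looks roughly like this toy sketch (every name and number here is invented for illustration, not anyone’s production code):

        from dataclasses import dataclass

        @dataclass
        class Observation:
            # Hypothetical, simplified world state produced by the perception stack.
            pedestrian_in_crosswalk: bool
            lead_vehicle_gap_m: float
            traffic_light: str  # "red", "yellow", or "green"

        def rule_based_action(obs: Observation) -> str:
            """Prioritized hand-written rules: easy to audit, hard to extend to rare cases."""
            if obs.pedestrian_in_crosswalk:
                return "stop"
            if obs.traffic_light == "red":
                return "stop"
            if obs.lead_vehicle_gap_m < 10.0:
                return "slow"
            return "proceed"

        print(rule_based_action(Observation(True, 30.0, "green")))  # -> "stop"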

    Solving the final 5% was where most turned to machine learning - they were already collecting driving data for training their perception and prediction models, so it’s not difficult at all to just repurpose that data for motion planning.

    So when you look at the two kinds of approaches, they have quite distinct advantages over each other. Hand engineered algorithms are very good at obeying rules - if you tell it to wait at a crosswalk or obey precedence at a stop sign, it will do that no matter what. They are not, however, great at situations where there is higher uncertainty/ambiguity. For example, a pedestrian starts crossing the road outside a crosswalk and waits at the median to allow you to pass before continuing on - it’s quite difficult to come up with a one size fits all rule to cover these kinds of situations. Driving is a highly interactive behaviour (lane changes, yielding to pedestrians etc), and rule based methods don’t do so well with this because there is little structure to this problem. Some machine learning based methods on the other hand are quite good at handling these kinds of uncertain situations, and Waymo has invested heavily in building these up. I’m guessing they’re trained with a mixture of human-data + self-play (imitation learning and reinforcement learning), so they may learn some odd/undesirable behaviors. The problem with machine learning models is that they are ultimately a strong heuristic that cannot be trusted to produce a 100% correct answer.

    I’m guessing that the way Waymo trains its motion planning model, or some bias in the training data, allows it to find some sort of exploit that makes it drive through crosswalks. Usually this kind of thing is solved by creating a hybrid system - a machine learning system underneath, with a rule-based system on top as a guard rail.
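
    A hybrid setup like that might look something like this sketch (purely illustrative; the random stub stands in for a trained planner):

        import random

        # Stand-in for a learned motion planner: in reality this would be a trained
        # model proposing a trajectory, not a random choice.
        def learned_planner(observation: dict) -> str:
            return random.choice(["proceed", "slow", "stop"])

        def guard_rail(observation: dict, proposed_action: str) -> str:
            """Rule-based safety layer that overrides the learned proposal when a hard rule applies."""
            if observation.get("pedestrian_in_crosswalk"):
                return "stop"  # the hard rule wins regardless of what the model suggested
            return proposed_action

        obs = {"pedestrian_in_crosswalk": True}
        print(guard_rail(obs, learned_planner(obs)))  # always "stop" here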

    Some references:

    1. https://youtu.be/T_LkNm3oXdE?si=_p499XuQeAlz9BYq
    2. https://youtu.be/RpiN3LyMLB8?si=Rkihso_88VECLUXa

    (Apologies for the very long comment, probably the longest one I’ve ever left)

  • @[email protected]

    Here we go again, blaming robots for doing the same thing humans do. Only at least the robots don’t flip you off when they try to run you over.

    • @[email protected]

      What a bullshit argument. One of the arguments for self-driving cars is precisely that they are not doing the same thing humans do. And why should they? It’s ludicrous for a company to train them on “social norms” rather than the actual laws of the road. At least when it comes to black-and-white issues like the one described in the article.

  • @[email protected]

    People, and especially journalists, need to get this idea of robots as perfectly logical computer code out of their heads. These aren’t Asimov’s robots we’re dealing with. Journalists still cling to the idea that all computers are hard-coded. You still sometimes see people navel-gazing on self-driving cars, working the trolley problem. “Should a car veer into oncoming traffic to avoid hitting a child crossing the road?” The authors imagine that the creators of these machines hand-code every scenario, like a long series of if statements.

    But that’s just not how these things are made. They are not programmed; they are trained. In the case of self-driving cars, they are simply given a bunch of video footage and radar records, and the accompanying driver inputs in response to those conditions. Then they try to map the radar and camera inputs to whatever the human drivers did. And they train the AI to do that.
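
    In code, that training loop is essentially behavior cloning. A minimal sketch with made-up toy data (not any company’s actual pipeline) looks like this:

        import torch
        from torch import nn

        # Toy behavior cloning: map sensor features to the recorded driver actions.
        features = torch.randn(256, 8)        # e.g. distances, speeds, detections per frame
        driver_actions = torch.randn(256, 2)  # e.g. recorded steering and braking

        policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
        optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        for epoch in range(100):
            optimizer.zero_grad()
            loss = loss_fn(policy(features), driver_actions)  # "do what the human did"
            loss.backward()
            optimizer.step()

        # The policy imitates whatever is in the data, including the bad habits:
        # if the recorded drivers rarely stopped for crosswalks, neither will it.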

    This behavior isn’t at all surprising. Self-driving cars, like any similar AI system, are not hard-coded, coldly logical machines. They are trained off us, off our responses, and they exhibit all of the mistakes and errors we make. The reason Waymo cars don’t stop at crosswalks is because human drivers don’t stop at crosswalks. The machine is simply copying us.

    • @[email protected]

      All of which takes you back to the headline, “Waymo trains its cars to not stop at crosswalks”. The company controls the input; it needs to be responsible for the results.

      • @[email protected]

        Some of these self-driving car companies have successfully lobbied to stop cities from ticketing their vehicles for traffic infractions. Here they are stating these cars are so much better than human drivers, yet they won’t stand behind that statement; instead, they are demanding special rules for themselves and no consequences.

    • snooggums

      The machine can still be trained to actually stop at crosswalks the same way it is trained to not collide with other cars even though people do that.

    • @[email protected]

      Whether you call it programming or training, the designers still designed a car that doesn’t obey traffic laws.

      People need to get it out of their heads that AI is some kind of magical monkey-see-monkey-do. AI isn’t magic, it’s just a statistical model. Garbage in = garbage out. If the machine fails because it’s only copying us, that’s not the machine’s fault, not AI’s fault, not our fault; it’s the programmer’s fault. It’s fundamentally no different than if they had designed a complicated set of logical rules to follow. Training a statistical model is programming.

      Your whole “explanation” sounds like a tech-bro capitalist news conference sound bite released by a corporation to avoid guilt for running down a child in a crosswalk.

      • @[email protected]

        It’s not apologia. It’s illustrating the foundational limits of the technology. And it’s why I’m skeptical of most machine learning systems. You’re right that it’s a statistical model. But what people miss is that these models are black boxes. That is the crucial distinction between programming and training that I’m trying to get at. Imagine being handed a 10 million x 10 million matrix of real numbers and being told, “here, change this so it always stops at crosswalks.” It isn’t just some line of code that can be edited.

        The distinction between training and programming is absolutely critical here. You cannot hand-wave away that distinction. These models are trained like we train animals. They aren’t taught through hard-coded rules.

        And that is a fundamental limit of the technology. We don’t know how to program a computer to drive a car. Instead, we only know how to make a computer mimic human driving behavior. And that means the computer can ultimately never perform better than an attentive, sober human, aside from some gains in reaction time and visibility. And any common errors that humans frequently make will be duplicated in the machine.

        • @[email protected]

          It’s obvious now that you literally don’t have any idea how programming or machine learning works, and thus you think no one else does either. It is absolutely not some “black box” where the magic happens. That attitude (combined with your oddly misplaced condescension) is toxic and honestly kind of offensive. You can’t hand-wave away responsibility like this when doing any kind of engineering. That’s like first-day Ethics 101 shit.

    • @[email protected]

      I think the reason non-tech people find this so difficult to comprehend is the poor understanding of what problems are easy for (classically programmed) computers to solve versus ones that are hard.

      if ( person_at_crossing ) then { stop }
      

      To the layperson it makes sense that self-driving cars should be programmed this way. After all, this is a trivial problem for a human to solve. Just look, and if there is a person, you stop. Easy peasy.

      But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?

      To me it’s this disconnect between the common understanding of computer capability and the reality that causes the misconception.
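
      To illustrate, that innocent-looking condition hides an entire perception problem underneath it; something like this hypothetical sketch sits behind it (the detector here is a stub standing in for a trained model):

          def detect_objects(camera_frame, lidar_points):
              # Stub: in a real system this is a trained neural network, not a rule.
              # It returns guesses with confidences, never certainties.
              return [{"kind": "pedestrian", "confidence": 0.83, "in_crosswalk": True}]

          def person_at_crossing(camera_frame, lidar_points, min_confidence=0.7) -> bool:
              """The 'easy' check from above, resting entirely on uncertain ML output."""
              return any(
                  d["kind"] == "pedestrian" and d["in_crosswalk"] and d["confidence"] >= min_confidence
                  for d in detect_objects(camera_frame, lidar_points)
              )

          print(person_at_crossing(camera_frame=None, lidar_points=None))  # -> True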

      • @[email protected]

        I think you could liken it to training a young driver who doesn’t share a language with you. You can demonstrate the behavior you want once or twice, but unless all of the observations demonstrate the behavior you want, you can’t say, “yes, we specifically told it to do that.”

      • @[email protected]

        You can use that logic to say it would be difficult to do the right thing for all cases, but we can start with the ideal case.

        • For a clearly marked crosswalk with a pedestrian in the street, stop
        • For a pedestrian in the street, stop.
      • snooggums

        But for a computer, how do you know? What is a ‘person’? What is a ‘crossing’? How do we know if the person is ‘at/on’ the crossing as opposed to simply near it or passing by?

        Most crosswalks are marked. The vehicle is able to identify obstructions in the road, and things on the side of the road that are moving towards the road, just like cross-street traffic.

        If (thing) is crossing the street, then stop. If (thing) is stationary near a marked crosswalk, stop and wait (x) seconds; if it still doesn’t move in a reasonable amount of time, then go.

        You know, the same way people are supposed to handle the same situation.
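
        Written out, that rule might look something like this sketch (invented names, grossly simplified state):

            def crosswalk_decision(thing_in_road: bool,
                                   thing_near_marked_crosswalk: bool,
                                   seconds_waited: float,
                                   patience_s: float = 3.0) -> str:
                """Stop for anything crossing; wait briefly for anything standing near a
                marked crosswalk; otherwise proceed."""
                if thing_in_road:
                    return "stop"
                if thing_near_marked_crosswalk and seconds_waited < patience_s:
                    return "stop_and_wait"
                return "proceed"

            print(crosswalk_decision(True, False, 0.0))   # -> "stop"
            print(crosswalk_decision(False, True, 1.0))   # -> "stop_and_wait"
            print(crosswalk_decision(False, True, 5.0))   # -> "proceed"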

          • snooggums

            Person, dog, cat, rolling cart, bicycle, etc.

            If the car is smart enough to recognize a stationary stop sign, then it should be able to ignore a permanently mounted crosswalk sign or indicator light at a crosswalk and exclude those from the things that might move into the street. Or it could just stop and wait a couple of seconds if it isn’t sure.

            • Dragon Rider (drag)

              A woman was killed by a self driving car because she walked her bicycle across the road. The car hadn’t been programmed to understand what a person walking a bicycle is. Its AI switched between classifying her as a pedestrian, cyclist, and “unknown”. It couldn’t tell whether to slow down, and then it hit her. The engineers forgot to add a category, and someone died.

              • snooggums

                It shouldn’t even matter what category things are when they are on the road. If anything larger than gravel is in the road the car should stop.

        • hissing meerkat

          Most crosswalks in the US are not marked, and in all places I’m familiar with, vehicles must stop or yield to pedestrians at unmarked crosswalks.

          At unmarked crosswalks and marked but uncontrolled crosswalks we have to handle the situation with social cues about which direction the pedestrian wants to cross the street/road/highway and if they will feel safer crossing the road after a vehicle has passed than before (almost always for homeless pedestrians and frequently for pedestrians in moderate traffic).

          If Waymo can’t figure out whether something intends, or is likely, to enter the highway, it can’t drive a car. That could be people at crosswalks, people crossing at places other than crosswalks, blind pedestrians crossing anywhere, deaf and blind pedestrians crossing even at controlled intersections, kids or wildlife or livestock running toward the road, etc.

      • Ice

        The difference is that humans (usually) come with empathy (or at least self-preservation) built in. With self-driving cars we aren’t building in empathy and self-preservation (or at least passenger preservation); we’re hard-coding in scenarios where the law says they have to do X or Y.

    • Justin

      It’s telling that Tesla and Google, worth over 3 trillion dollars, haven’t been able to solve these issues.

    • Noxy

      That all sounds accurate, but what difference does it make how the shit works if the real world results are poor?

    • @[email protected]

      Training self-driving cars that way would be irresponsible, because they would behave unpredictably and could be really dangerous. In reality, self-driving cars use AI only for some of the tasks it is really good at, like object recognition (e.g. recognizing traffic signs, pedestrians and other vehicles). The car uses all this data to build a map of its surroundings and tries to predict what the other participants are going to do. Then it decides whether it’s safe to move the vehicle, and the path it should take. All of these things can be done algorithmically; AI is only necessary for object recognition.
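
      As a rough sketch of that split (all names and thresholds invented for illustration), the deterministic planning sits on top of the ML detector’s output:

          from dataclasses import dataclass
          from typing import List, Tuple

          @dataclass
          class DetectedObject:
              kind: str                                   # e.g. "pedestrian", "vehicle" (from the ML detector)
              predicted_path: List[Tuple[float, float]]   # where we expect it to go next

          def plan(detections: List[DetectedObject]) -> str:
              """Deterministic decision on top of ML perception output."""
              for obj in detections:
                  if obj.kind == "pedestrian" and any(abs(x) < 2.0 for x, _ in obj.predicted_path):
                      return "stop"  # predicted to cross our path
              return "proceed"

          ped = DetectedObject("pedestrian", [(4.0, 2.0), (1.5, 0.5)])
          print(plan([ped]))  # -> "stop"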

      In cases such as this, just follow the money to find the incentives. Waymo wants to maximize their profits. This means maximizing how many customers they can serve as well as minimizing driving time to save on gas. How do you do that? Program their cars to be a bit more aggressive: don’t stop on yellow, don’t stop at crosswalks except to avoid a collision, drive slightly over the speed limit. And of course, lobby the shit out of every politician to pass laws allowing them to get away with breaking these rules.

      • @[email protected]

        According to some cursory research (read: Google), obstacle avoidance uses ML to identify objects, and uses those identities to predict their behavior. That stage leaves room for the same unpredictability, doesn’t it? Say you only have 51% confidence that a “thing” is a pedestrian walking a bike, 49% that it’s a bike on the move. The former has right of way and the latter doesn’t. Or even 70/30. 90/10.

        There’s some level where you have to set the confidence threshold to choose a course of action and you’ll be subject to some ML-derived unpredictability as confidence fluctuates around it… right?
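
        Roughly, yes; the threshold reduces to something like this toy decision rule (class names and numbers made up):

            def decide(confidences: dict, min_confidence: float = 0.5) -> str:
                """Act on the most likely class, but fall back to the cautious option
                whenever the classifier isn't confident enough."""
                best = max(confidences, key=confidences.get)
                if best == "pedestrian" or confidences[best] < min_confidence:
                    return "yield"  # uncertain objects are treated as having right of way
                return "proceed"

            print(decide({"pedestrian": 0.51, "cyclist": 0.49}))  # -> "yield"
            print(decide({"pedestrian": 0.10, "cyclist": 0.90}))  # -> "proceed"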

        • @[email protected]

          In such situations, the car should take the safest action and assume it’s a pedestrian.

          • @[email protected]

            But mechanically that’s just moving the confidence threshold to 100% which is not achievable as far as I can tell. It quickly reduces to “all objects are pedestrians” which halts traffic.

            • @[email protected]

              This would only be in ambiguous situations when the confidence level of “pedestrian” and “cyclist” are close to each other. If there’s an object with 20% confidence level that it’s a pedestrian, it’s probably not. But we’re talking about the situation when you have to decide whether to yield or not, which isn’t really safety critical.

              The car should avoid any collisions with any object regardless of whether it’s a pedestrian, cyclist, cat, box, fallen tree or any other object, moving or not.

  • @[email protected]

    Speaking as someone who lives and walks in SF daily, they’re still more courteous to pedestrians than drivers are, and I’d be happy if they replaced human drivers in the city. I’d be happier if we got rid of all the cars, but I’ll take getting rid of the psychopaths blowing through intersections.

  • tiredofsametab

    It is an offense in Japan not to stop if someone is waiting to enter the crosswalk (and technically you can’t proceed until they are fully off the entire street, though I’ve had assholes whip around me for not breaking the law). People do get ticketed for it (though not enough, honestly). I wonder what they would do here.

  • KillingTimeItself

    The funniest thing to me is that this probably isn’t even the fault of AI; this is probably the fault of software developers too lazy to actually write any semi-decent code that would do a good job of (not) being a nuisance.

    • @[email protected]

      Most developers take pride in what they do and would love to build in all the best features for launch.

      But that’s not possible. There’s a deadline and a finite budget for programmers. Ipso facto, a finite number of dev hours.

    • @[email protected]

      “Software developers too lazy”? More like company owners too greedy.

      Software developers don’t get a say in what gets done or not, profit and cost cutting do.

      Ethics is an important component of what every worker should do for a living, but we’re not there yet.

      • KillingTimeItself

        Software developers don’t get a say in what gets done or not, profit and cost cutting do.

        I mean, that’s true to an extent, but most software development teams are led by a fairly independent group. It’s so abstract that you can’t really control it directly; ultimately, there is somebody here with some level of authority and knowledge who should know to do better than this, but just isn’t doing it.

        Maybe the higher-ups are pressuring them, but you can’t push things back forever, and you most certainly can’t pull features forever; there is only so much you can remove before you are left with nothing.

        • @[email protected]

          You might have a say in how to implement the requirement, but in this case, if the company decided to follow societal norms and not laws, it’s 100% on the management. You might pin this on devs if they were pressured to release an unfinished product - sometimes the pressure is so big devs are afraid to admit it’s not really done, but in this case, it’s such a crucial part of the project I think it’s one of the first things they worked on.

          Realistically, it’s more profitable not to stop - customers are impatient, other drivers too, and pedestrians are used to that. To maximize profit, I’d rather risk some tickets than annoy other drivers or customers.

        • @[email protected]

          This hasn’t been true at any of the places I’ve worked.

          There’s always been some pressure from management, usually through project managers or business users, for urgency around certain features, timelines, releases, etc. Sometimes you’ll have a buffer of protection from these demands, sometimes not.

          One place I worked was so consistently relentless about the dev team’s delivery speed that it was a miserable place to work. There was never time to fix the actual pain points because there were always new features being demanded or emergency fixes required because most code bases were a wreck and falling apart.

  • arsCynic

    Mario needs to set these empty cars on fire.

  • @[email protected]

    The “social norms” line is likely because it was trained using actual driver data. And actual drivers will fail to stop. If it happens enough times in the training data and the system is tuned to favor getting from A to B quickly, then it will inevitably go “well it’s okay to blow through crosswalks sometimes, so why not most of the time instead? It saves me time getting from A to B, which is the primary goal.”
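
    As a toy illustration of that tuning (a purely hypothetical objective, not Waymo’s actual training setup): if the penalty for rolling a crosswalk is small relative to the time saved, “keep rolling them” is exactly what optimization will find.

        def trip_score(trip_time_s: float, crosswalk_violations: int,
                       time_weight: float = 1.0, violation_penalty: float = 5.0) -> float:
            # Hypothetical objective: heavily rewards short trips, weakly penalizes violations.
            return -(time_weight * trip_time_s) - (violation_penalty * crosswalk_violations)

        print(trip_score(600, 1))  # -605.0: rolled the crosswalk, saved ~15 s
        print(trip_score(615, 0))  # -615.0: stopped, and scores worse under this objective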

  • @[email protected]

    This is STUPID! I can’t WAIT for President MUSK to ELIMINATE all these Pesky Rules preventing AI Cars from MOWING DOWN CHILDREN In Crosswalks!

  • Skvlp

    Being an Alphabet subsidiary I wouldn’t expect anything less, really.

  • Phoenixz

    And again… If I break the law, I get a large fine or go to jail. If companies break the law, at worst they will get a small fine.

    Why does this disconnect exist?

    Am I so crazy to demand that companies are not only treated the same, but held to a higher standard? If I don’t stop at a zebra crossing, that is me breaking the law once. Waymo programming their cars not to do that is multiple violations per day, every day. It’s a company deciding they’re above the law because they want more money. It’s a company deciding to risk the lives of others to earn more money.

    For me, all managers and engineers who signed off on this and worked on it should be jailed, the company should be restricted from doing business for a month and required to immediately ensure all laws are followed, or else…

    This is the only way we get companies to follow the rules.

    Instead, though, we just allow companies to treat laws as suggestions, sometimes requiring small payments if they cross the line too far.

    • @[email protected]

      Do you have an example of a company getting a smaller fine than an individual for the same crime? Generally company fines are much larger.

    • @[email protected]

      Funny that you don’t mention company owners or directors, who are supposed to oversee what happens, are in practice the people putting on the pressure to make that happen, and are the ones liable before the law.

      • Phoenixz

        I thought that was obviously implied.

        If the CEO signed off on whatever is illegal, jail him or her too.

    • @[email protected]

      Why does this disconnect exist?

      Because the companies pay the people who make the law.

      Stating the obvious here, but it’s the sad truth.