Amazon Dives into Self-Driving Cars with a Bet on Aurora


From Wired:

Amazon Wednesday made perhaps its most significant move yet into the self-driving car space, announcing an investment in autonomous tech developer Aurora. For a company with one of the largest logistics operations on the planet, it’s about time.

“Autonomous technology has the potential to help make the jobs of our employees and partners safer and more productive, whether it’s in a fulfillment center or on the road, and we’re excited about the possibilities,” an Amazon spokesperson said in a statement. Amazon and Aurora declined to disclose the terms of the investment.

. . . .

Amazon has previously dabbled in this space. It was a partner on Toyota’s e-Palette concept project. It pitches its Amazon Web Services unit as a tool for autonomous vehicle developers and hosts an autonomous racing league for 1/18th-scale race cars. In 2017, The Wall Street Journal reported the company had formed a small team focused on driverless tech, and CNBC reported last month that it’s working with robo-trucking startup Embark to move some goods on the I-10 freeway. Amazon is developing robots to schlep groceries and food deliveries.

Link to the rest at Wired

PG hasn’t seen a lot of detailed analysis in the business press about the potential for liability coming back to bite the investors in self-driving cars.

In March of last year, an Uber self-driving car was involved in a fatal accident. The car hit and killed a woman who was walking her bicycle across the road, and it appeared that the Uber vehicle’s self-driving systems did not respond in any way calculated to avoid hitting her.

The California Department of Motor Vehicles has issued regulations governing self-driving cars in that state and has established a permitting process for operating those vehicles on public roadways.

While contemplating this matter, it occurred to PG that a vehicular murder committed by subtly adjusting a self-driving car’s software/firmware so that the vehicle homes in on a particular cell phone signal could make a nice book.

13 thoughts on “Amazon Dives into Self-Driving Cars with a Bet on Aurora”

  1. “While contemplating this matter, it occurred to PG that a vehicular murder committed by subtly adjusting a self-driving car’s software/firmware so that the vehicle homes in on a particular cell phone signal could make a nice book.”

    Heh, as one of my tall tales is set in 2332, I have a couple of people looking back into the past. As all their cars have a self-driving AI, one was wondering why anyone would want or need to manually drive a car. One of the reasons presented was that sometimes you don’t ‘want’ the car stopping – such as when a thief or carjacker steps out to block the road, knowing the AI will stop the car.

    As the car would record ‘everything’, there’d be a recording of why the occupant thought it better to run over something/someone than to let the AI stop the car. (Of course if you have no idea how to drive in the first place then taking control might put you in even greater danger.)

    • Robert Heinlein wrote an even more horrifying scenario, way back in 1961 with Stranger in a Strange Land (and may have thought of it much earlier, as Stranger took him several years to write).

      There is nothing to prevent a government from taking over a “self-driving” car (particularly as they always demand a back door into any technology). In Stranger, they use that ability to quietly disappear an annoying reporter.

      One can think of even worse, such as “tragic accidents” happening to inconvenient political figures.

      • David Weber’s Honor Harrington series has a group that overuses air car accidents to get rid of ‘problems’.

        If it’s at all possible, bet on mine having an override to their override …

        But I was more likely channeling ol’ Mad Max – where bodies don’t even count as speed bumps (at least no one slows down for them!) 😉

          • No, it was a complaint of the first group that too many people would think ‘they’ had done it when the shadow did one for them. 😉

            (So many wheels within wheels that were set spinning – until the wheels started coming off on someone’s plans!)

  2. I must have too much time on my hands and watch too many primetime TV shows, as the plot of a hacked AI in a car has shown up in at least four or five shows in the last three years. At least two were tied to “investor demos” where someone had been screwed out of the company and wanted revenge. Another two or so(?) were about revenge against someone, and the AI was a clean way to do it. Almost all talked about the ethical challenges of cars making decisions or privacy/security issues, but only at the most basic of levels. Sometimes the writers would get inventive with it being a virus, other times it was remote control of the car, other times it was hacked program code, etc.

    I seem to recall there were some weird elements with the bicyclist story, though… it wasn’t cut and dried “the AI didn’t stop,” I thought.

    For mass roll-outs the big ethical issues are still undecided, and honestly I’m not sure how anyone will solve them short of legislation. The largest is a variation on a classic philosophy/psych test: if you have a choice of a train (a car in this case) killing one person on track 1, killing five people on track 2, or killing the train passengers (or car driver) by derailing, which does the AI choose? Our own instinct is never self-harm except by accident (we swerve to avoid, not realizing what it will do) or impulse (we jump in front of a car to save a child). Replacing those choices with logic that the computer has time to compute requires us to program which choice is better. Yet we’re allowing the cars to hit the road without the answer… because in the event of a conflict, they prompt for driver intervention.

    Poly

    • My husband works for Cruise. The ethical dilemmas you’re discussing aren’t a real thing. The vehicle software is not anywhere near that complicated. If the car sees a potential accident, it will try to stop. It doesn’t have an internal debate about who should die.

      • If you mean the AI doesn’t hold a debate, you are correct. It isn’t alive, it doesn’t think at all; it just does what it is programmed to do. If there is a potential collision, smart cars are the ones that simply “stop”. Self-driving cars are programmed to “stop”, “swerve”, or “minimize” damage. Some of the regulators are concerned about the swerve parameters, because the car won’t enter the opposite lane (even if it is safe to do so) or go off-road. Others are okay with those parameters, which are essentially ethical decisions: do what you can within your own lane, but don’t put others at risk to save the car/driver. Most, however, are more concerned with the “minimize” calculation. Put crudely, it is safer to hit a smaller object than a larger one, all other things being equal. So, if the car can swerve to avoid a big SUV but will hit a small motorcycle instead, the default programming is to do so; a rough sketch of that kind of rule appears below. *That* is when the ethical issue comes up: the programming decision that the computer should swerve to lessen the impact, even though the SUV might have been fine while the motorcyclist (i.e. the small object) will be decimated.

        If you mean there’s no such concern, I’m sure Elon Musk, all the lawyers, regulators and advocacy groups, and the car companies themselves will be thrilled to hear it’s “not a thing”, since they’ve spent most of their regulatory time over the last five years talking about exactly that: how the computer makes a decision in the event of an imminent crash. Oddly, none of them will approve cars that simply “stop” and don’t “swerve” or “minimize”, unless they are simply smart cars instead of self-driving ones.
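
        To make that concrete, here is a minimal sketch of the “stop, then swerve, then minimize” preference described above. Everything in it is hypothetical – the names, the mass proxy, the ordering – and it only shows the shape of the decision, not Cruise’s, Tesla’s, or anyone’s actual logic:

        # Illustrative only: every name, number, and rule below is made up for this
        # comment thread; it is not any real vendor's code.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Obstacle:
            label: str       # e.g. "SUV", "motorcycle"
            mass_kg: float   # crude stand-in for "smaller object" in the minimize rule

        def choose_maneuver(ahead: Obstacle,
                            swerve_target: Optional[Obstacle],
                            can_stop_in_time: bool) -> str:
            """Prefer stopping, then a clean swerve, then 'minimize' as a last resort."""
            if can_stop_in_time:
                return "stop"        # braking always wins when physics allows it
            if swerve_target is None:
                return "swerve"      # the escape path is clear, so take it
            # The "minimize" rule: if a collision is unavoidable either way,
            # steer toward the smaller object. This comparison *is* the ethical choice.
            if swerve_target.mass_kg < ahead.mass_kg:
                return "swerve"
            return "hold"            # stay in lane and brake as hard as possible

        # choose_maneuver(Obstacle("SUV", 2500.0), Obstacle("motorcycle", 250.0), False)
        # returns "swerve" (toward the motorcycle), which is exactly the dilemma above.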

    • I seem to recall Knight Rider’s KITT got hacked a few times too. 😉

      One type of idiot neither human nor AI can save is the one who just steps into traffic expecting everyone to stop. I’m talking about the ones who don’t even look up, much less check whether anything is coming, before stepping off the curb. Yes, there may even be a crosswalk there – but they give no indication that they are going to take it, leaving the driver/AI no time to react.

      I’ve had it happen to me twice (both before the age of cellphones) and I’m hearing of it more now that the sheeple have those little screens to stare at instead of looking where they’re going …

  3. In March of last year, an Uber self-driving car was involved in a fatal accident. The car hit and killed a woman who was walking her bicycle across the road, and it appeared that the Uber vehicle’s self-driving systems did not respond in any way calculated to avoid hitting her.

    The woman pushed her bike right in front of the car, coming out of a darkened area into the car’s headlights. It’s not clear that even a human driver could have avoided hitting her.

    There are almost 3 million injurious car accidents in the US every year that don’t involve self-driving cars. If you think about it, we are holding autonomous vehicles to a particularly high standard, one we’re not living up to ourselves.
