show us the data —

This startup’s CEO wants to open-source self-driving car safety testing

Post Uber death, robo-taxi startup will share safety documents, processes, and code.

Mark Harris - Apr 24, 2018 11:00 am UTC

A lidar scanner. Credit: Voyage

Uber's fatal car crash last month continues to have repercussions, with self-driving taxi startup Voyage announcing today that it will open-source its safety procedures, documents, and code in the hope of avoiding future deaths.

"I had to spend time after [the Uber crash] calming people down, telling folks at our deployments that it was an isolated incident," says Voyage CEO Oliver Cameron in an exclusive interview with Ars Technica. "But the truth is that everyone in the industry is reinventing the technology and safety processes themselves, which is incredibly dangerous. Open source means more eyes, more diversity, and more feedback."

Starting today, Voyage will begin to share safety requirements, test scenarios, metrics, tools, and code that it has developed for its own Level 4 self-driving taxis. Five Voyage cars are currently deployed carrying passengers within two retirement communities in California and Florida.

The initial release, which Voyage calls Open Autonomous Safety (OAS), will take the form of a GitHub repository containing documents and code. The functional safety requirements are Voyage's interpretation of the ISO 26262 standard for automotive safety, updated for autonomous vehicles. "This is our internal driving test for any particular software build," says Cameron. "It lets us evaluate our designs and look for the different ways they can fail in the real world."

Stress-testing self-driving cars

The functional testing material has scenarios designed to stress-test cars in simulations and on streets. Voyage has developed step-by-step scenarios that detail how its cars should respond to hundreds of situations, primarily focused on suburban environments. Among them is the one that Uber failed when its car ran down Elaine Herzberg in March: a pedestrian jaywalking across a divided road.

"Having a template of what your vehicle should be doing in these situations helps inform how you build your technology," says Cameron. "But it takes a hell of a lot of time to think through all the scenarios and to write the software to the right level of quality. I hope the community adds scenarios for environments we don't yet support, such as high-speed freeways."

Because OAS scenarios are written from the viewpoint of how a car should ultimately behave, they should be applicable regardless of the vehicle being developed or the technology it uses, says Cameron. However, Voyage is also sharing its fault-injection tests—code written for specific components to simulate errors or damage that might take years to show up in reality.

"If you want to replicate taking a baseball bat to your $85,000 lidar, you probably don't want to actually take a baseball bat to your $85,000 lidar," says Cameron. "But you can model what that would look like with fault-injection testing to get to a better place."

Although Voyage's code will only work with specific sensors, many are common devices (such as Velodyne lidars and Bosch ECUs) that other self-driving companies also use. Voyage will also make its training curriculum and safety-driver handbooks available.

"Overall, we think it's a pretty comprehensive set of materials," says Cameron. "We hope that the industry adopts it as a standard or, at the very least, that a few companies will use it and hopefully prevent another Uber incident from occurring."

Being open to safety

This is not the self-driving car industry's—or even Cameron's—first shot at open sourcing. Udacity, the online learning company that Voyage spun out from, is building its own open source self-driving car. Another start-up, Comma.ai, has also released the software for its aftermarket autonomous car kit, following a tangle with regulators. And Chinese search giant Baidu is slowly developing an open source platform called Apollo that it hopes will become the Android of self-driving cars.

Cameron believes that OAS' focus on safety and testing, rather than driving features, will smooth its adoption in a highly competitive industry. One aspect of the release is likely to be controversial, though. Cameron wants companies to standardize and adopt Voyage's metrics for the performance of self-driving cars based on the percentage of any particular trip that is handled by human or robotic systems.

He admits this could prove an uphill struggle. "Everyone is giving different metrics to different people," says Cameron. "Savvy VCs these days know that disengagement rates [how often a car hands control back to a human operator] can be entirely flawed. They'll ask you, hey, what's your real disengagement rate? We know the number you gave to the California Department of Motor Vehicles; now give us your real one."
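The metric Cameron is proposing can be stated simply: the share of each trip's time spent under autonomous control rather than a human operator. Below is a minimal sketch of that calculation, assuming a hypothetical log of control-handover events; the event format is an illustrative assumption, not a published Voyage format.

```python
# Sketch of a trip-level autonomy metric: the fraction of trip time spent in
# autonomous mode, computed from a hypothetical log of control-handover events.
def autonomy_fraction(events, trip_end_s):
    """events: sorted list of (timestamp_s, mode) tuples, mode is 'auto' or 'manual'.
    Returns the fraction of the trip spent in autonomous mode."""
    auto_time = 0.0
    for (t, mode), (t_next, _) in zip(events, events[1:] + [(trip_end_s, None)]):
        if mode == "auto":
            auto_time += t_next - t
    return auto_time / trip_end_s if trip_end_s > 0 else 0.0


# Example: a 600-second trip with one 45-second manual takeover.
trip = [(0.0, "auto"), (300.0, "manual"), (345.0, "auto")]
print(autonomy_fraction(trip, 600.0))  # -> 0.925
```

The appeal, in Cameron's telling, is that a time-weighted share like this is harder to flatter than a raw disengagement count.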

Jon How, professor of astronautics at MIT, says, "Creating an open source library that enables teams to collaborate on developing even better solutions is a great idea, provided there is some oversight to ensure quality control. Hopefully, their decision might motivate others to also release their tools and test approaches."

Sam Lauzon is an engineer at the University of Michigan's Transportation Research Institute, currently developing his own open source automotive cybersecurity software called Uptane. "An open source library can be fantastic for companies to lower implementation time and costs, but at the same time, they're inheriting any bugs or flaws that go along with the distribution," he says.

Cameron agrees that the current OAS release will be rough around the edges: "Like any open source project, it won't be perfect at the beginning. But it does mean that everyone won't have to keep reinventing the wheel, at least from a safety perspective."

36 Reader Comments

  1. traumadog Ars Scholae Palatinae Makes sense having this data public - not sure what kind of "trade secrets" value there is in solving vehicle safety. And given the level of safety required, I'm sure people who make and own autonomous vehicles will eventually need to report safety data and metrics to a regulating agency, akin to what aircraft manufacturers, airlines, etc. do with the FAA.
  2. celerysandwich Ars Scholae Palatinae The more the merrier.
  3. raxadian Ars Centurion This seems like the most decent option.
  4. Fatesrider Ars Tribunus Militum et Subscriptor Having more eyes on these things than a single company can provide is definitely a good idea. Even without having any certainty that it's a problem, the fact is, one company can only do so much, and they tend to get into a corporate mindset that can hinder progress because the focus has narrowed too much - or gone down the wrong path.

    With eyes that aren't even part of the company on the data, there's no "disease vector" that infects the thinking of those NOT in the company who work with the same stuff.

    Collaborative efforts usually yield better results than black box projects - especially when their focus includes safety. I'm glad to see this happening. Otherwise the likelihood would be some unacceptable variability in the reliability of AV's between different car companies. That won't be good for anyone.

    Some things shouldn't be trade secrets. Safety is probably #1 on that list.
  5. WesGordon Ars Centurion Better transparency will certainly help. More cooperation across the industry seems essential too. A clear explanation for the Uber failure, for example, would go a long way – Uber's spin on the conditions wasn't very encouraging. On the face of it, that incident seems like a total failure of everything this technology promises. But why? Could this mistake/failure be repeated elsewhere, or was Uber just cutting corners? Public visibility should go hand-in-hand with testing on public roads.
  6. dwrd Ars Centurion Please tell me they have a plan to get Waymo and GM onboard. Without buy-in from the industry leaders, the only alternative will be regulation.
  7. dio82 Ars Tribunus Angusticlavius The big problem is that neural networks are inherently black boxes and they have lots of very hard cliff-edge effects. As soon as you run into a situation for which there is not a lot of data to train on, essentially an unknown-unknown, the neural network will fail HARD.

    There is no scientific way to prove the non-presence of a thing, and this is essentially what is being asked for here.

    The way how this is solved in other safety critical industries is to develop performance envelopes where one can guarantee under absolute certainty that the machine will never leave certain boundaries.

    If there were an easy way to prove safety of autonomous driving, UL, SGS, TUVs and so on would be lobbying hard for oversight of that proof. But right now, they are scratching their heads and trying hard to reduce the AI-knowledge gap.
  8. TracerDX Ars Centurion et Subscriptor I dunno about you guys but this is one open source project I'd be wary of.

    Quote:I hope the community adds scenarios for environments we don't yet support, such as high-speed freeways.


    That's all well and good until it's time to assign blame in an accident. I wouldn't want to have my name on that code repository when that time comes. Who needs corporate responsibility when you can blame unpaid volunteers?
  9. Action Attack Goose Wise, Aged Ars Veteran et Subscriptor It's all about data. My guess is they figure the more people or companies they get running their stuff, the more data is in the ecosystem for the models to train on. Open source is a fine idea, but it's absolutely a means to an end.
  10. Bongle Ars Scholae Palatinae TracerDX wrote:I dunno about you guys but this is one open source project I'd be wary of.

    Quote:I hope the community adds scenarios for environments we don't yet support, such as high-speed freeways.


    That's all well and good until it's time to assign blame in an accident. I wouldn't want to have my name on that code repository when that time comes. Who needs corporate responsibility when you can blame unpaid volunteers?
    If it's just a test-case list, I'm not sure how you can see a risk of liability.

    If it was the actual control code then yeah, I wouldn't necessarily want to be contributing code without knowing who was going to be using it.
  11. omdm1 Seniorius Lurkius The link to OAS goes nowhere on their site (I get a 404).
  12. IamSpartacus Ars Centurion It should've been like this from the beginning. Autonomous vehicles will have a huge impact on our future transportation grid. Such dramatic change, with such potential for harm needs to be open. Well done.
  13. gkorper Smack-Fu Master, in training This problem should be solved by regulation. Seriously, if you want a permit to operate autonomous vehicles on *public* roads, then one of the requirements should be that you release the raw sensor data for the 30 seconds preceding and following all disengagements and/or accidents. Too bad that due to that first sentence this will probably get down-voted to oblivion.
  14. halse Ars Tribunus Militum IIHS (Insurance Institute for Highway Safety) should consider getting into setting a standard test for autonomous vehicles. They seem quite reliable, without discernible bias, and already safety test pretty much every vehicle.
  15. Sure Smack-Fu Master, in training Quote:Post Uber death

    Ah, one can dream
  16. Bongle Ars Scholae Palatinae halse wrote:IIHS (Insurance Institute for Highway Safety) should consider getting into setting a standard test for autonomous vehicles. They seem quite reliable, without discernible bias, and already safety test pretty much every vehicle.
    A fully standard test might be pretty easy to game, as the whole VW diesel fiasco shows.

    Make your test too dynamic and real-world ("The IIHS intern drives your car for a week, and we measure its emissions"), and valid manufacturers might fail, while half-assed manufacturers might get lucky and pass.

    Make your test too strictly prescribed (as emissions tests are/were), and manufacturers will detect testing conditions and "teach to the test".

    They could build a mini city in the desert and set 200 AVs loose in it with random destinations. That might be a good test of vehicle-vehicle behaviour.
  17. AlexisR200X Ars Tribunus Militum opfreakx wrote:opensource the blockchain of the 90s. What problem cant they solve?

    Open source might be an old concept, but it makes far more sense to use it here, and to its credit it has helped define how much of today's internet works. Blockchain, on the other hand, has great potential, but so far it has mostly enabled speculative betting on cryptocurrencies, scams like ransomware, corrupt or hacker-riddled exchanges, a hugely wasteful operating model, and pretty crushing losses for the fools who have fallen for it. (Not to mention fueling the ongoing pricing craze for GPUs, RAM, and PSUs.)

    Honestly, it's rather unfair to even compare the two.
  18. Azethoth666 Ars Praefectus Fatesrider wrote:Having more eyes on these things than a single company can provide is definitely a good idea. Even without having any certainty that it's a problem, the fact is, one company can only do so much, and they tend to get into a corporate mindset that can hinder progress because the focus has narrowed too much - or gone down the wrong path.

    With eyes that aren't even part of the company on the data, there's no "disease vector" that infects the thinking of those NOT in the company who work with the same stuff.

    Collaborative efforts usually yield better results than black box projects - especially when their focus includes safety. I'm glad to see this happening. Otherwise the likelihood would be some unacceptable variability in the reliability of AV's between different car companies. That won't be good for anyone.

    Some things shouldn't be trade secrets. Safety is probably #1 on that list.
    If we do go down that road then it also needs to be the standard required for allowing a vehicle on the road. Each crash results in an NTSB etc. investigation that yields new scenarios.

    However, it all sounds like this guy's bullshit way to make money despite failing at making a self-driving car so far. Build a self-driving car or GTFO. Why would I want someone who is not making a self-driving car in charge of testing?

    I would rather see the real self driving car companies collaborate on a way to share scenarios across their different sensor suites and other techniques. Not everything will be broadly applicable. Or a way to specify scenarios that a vehicle needs to pass using its particular sensor suite and software. So a way to import something into their simulation software.

    Or maybe, we study the real world outcomes. Is this vehicle safe or not? How safe? Right now Tesla is statistically better than humans, but it fails at stuff I do not fail at like driving in a forward direction without hitting anything. Ever.
  19. river-wind Ars Praefectus et Subscriptor For anyone who hasn't read it, Mobileye's attempt at formulating a standard method of creating predictably safe autonomous vehicles:

    Formal Paper:
    https://arxiv.org/pdf/1708.06374.pdf

    More easily digestible version:
    https://newsroom.intel.com/newsroom/wp- ... rategy.pdf
  20. ProfessorGuy Ars Scholae Palatinae Anyone ever buy a used car? You know, one with some of the equipment broken or missing. So one of the headlights is out and the blinkers don't work, at least it'll still get me to work.

    But with autonomous vehicles, 'good enough' is useless, throw them away. So how long will these cars last? Chucking working cars in the garbage because one delicate system goes offline seems less than efficient.
  21. ProfessorGuy Ars Scholae Palatinae Azethoth666 wrote:Right now Tesla is statistically better than humans
    Careful, this is not true. Perhaps they are better than humans, but statistics does NOT tell us that. Not yet anyway.
  22. JohnW1234 Smack-Fu Master, in training dio82 wrote:The big problem is that neural networks are inherently black boxes and they have lots of very hard cliff-edge effects. As soon as you run into a situation for which there is not a lot of data to train on, essentially an unknown-unknown, the neural network will fail HARD.

    There is no scientific way to prove the non-presence of a thing, and this is essentially what is being asked for here.

    The way how this is solved in other safety critical industries is to develop performance envelopes where one can guarantee under absolute certainty that the machine will never leave certain boundaries.

    If there were an easy way to prove safety of autonomous driving, UL, SGS, TUVs and so on would be lobbying hard for oversight of that proof. But right now, they are scratching their heads and trying hard to reduce the AI-knowledge gap.

    I get a little concerned by companies' focus on identifying the obstacle. Every self-driving car should be designed to not crash into solid objects, right? In the case of the Uber fatality, there was a solid object in front of the car, which the LIDAR could pick up, but the car still hit the person. That should have been prevented regardless of the neural network, right? In the case of objects moving into the path of the car, can the car not track an unknown object on a collision course? That seems like the base case, not the advanced one.
  23. traumadog Ars Scholae Palatinae ProfessorGuy wrote:Anyone ever buy a used car? You know, one with some of the equipment broken or missing. So one of the headlights is out and the blinkers don't work, at least it'll still get me to work.

    But with autonomous vehicles, 'good enough' is useless, throw them away. So how long will these cars last? Chucking working cars in the garbage because one delicate system goes offline seems less than efficient.

    You do know auto repair shops exist for a reason. AND, many States mandate a safety inspection every year, where mission critical items - like headlights - get flagged for repair.

    Edit: I mean, why would replacing a sensor be any different than replacing a headlight? And at some point, I expect these items to be as standardized as headlight bulbs.
  24. river-wind Ars Praefectus et Subscriptor JohnW1234 wrote:I get a little concerned by companies' focus on identifying the obstacle. Every self-driving car should be designed to not crash into solid objects, right? In the case of the Uber fatality, there was a solid object in front of the car, which the LIDAR could pick up, but the car still hit the person. That should have been prevented regardless of the neural network, right? In the case of objects moving into the path of the car, can the car not track an unknown object on a collision course? That seems like the base case, not the advanced one.

    You're absolutely correct, and it's a problem that other companies' systems have already been shown to handle correctly. Uber's software failed hard in that case - there is no good reason the car should have hit the pedestrian, whether it recognized her as a human, a bike, a human-bike hybrid, or an unknown. She was a trackable, moving solid object which should have been avoided.
  25. dehildum Ars Scholae Palatinae TracerDX wrote:I dunno about you guys but this is one open source project I'd be wary of.

    Quote:I hope the community adds scenarios for environments we don't yet support, such as high-speed freeways.


    That's all well and good until it's time to assign blame in an accident. I wouldn't want to have my name on that code repository when that time comes. Who needs corporate responsibility when you can blame unpaid volunteers?

    This is not a new problem. There are many life critical systems in software already. I personally have been in a situation where I have had to answer the question "Did your code kill this person?" In my case, I could confidently state that no, it did not.

    Coding at this level requires skills that do not exist in typical agile code development. That is not to say that agile developers could not have the skills, but the agile development methodology itself is probably fundamentally incompatible with high-reliability code, as it basically bypasses the extensive fault consideration that other design methodologies allow for. Fault handling tends to be added on at the end or during testing, in my experience with these projects. A similar problem occurs with documentation: it shows up at the end and is usually incomplete, meaning that testing groups end up using unit test cases instead of independently writing comprehensive test suites and performing proper validation of the code.

    It is possible that I have simply seen badly run agile projects, but at this point I have seen a lot across different companies and industries, and the issue seems to be fundamental.
  26. andygates Ars Scholae Palatinae river-wind wrote:For anyone who hasn't read it, Mobileye's attempt at formulating a standard method of creating predictably safe autonomous vehicles:

    Formal Paper:
    https://arxiv.org/pdf/1708.06374.pdf

    More easily digestible version:
    https://newsroom.intel.com/newsroom/wp- ... rategy.pdf

    Seems that these standards will swirl around for a bit before getting glommed together into a legislative standard.
  27. Dr Gitlin Ars Legatus Legionis et Subscriptor omdm1 wrote:The link to OAS goes nowhere on their site (I get a 404).

    The link works for me. https://voyage.auto/open-autonomous-safety/
  28. Dvon-E Smack-Fu Master, in training Of note is the Open Cars project of the Open Research Institute, Inc. The first paper on Open Cars was published in 2017 in the Berkeley Technology Law Journal. The paper was a collaboration between ORI president Bruce Perens and Berkeley Law professor Lothar Determann.

    From the abstract:
    Quote:In our article, we examine facts and arguments regarding how open the car can, should and potentially will be, as a matter of technology, economics, public policy and law. To make our points, we will tell a tale of two cars: It may be open, it may be closed. It may be the best of cars, it may be the worst of cars. We do not aim for an exact prediction or recommendation regarding the degree of openness for future cars. Rather, we intend to start or contribute to the public discussion, and contribute to the strategic planning of companies, by highlighting the economic and policy interests as well as legal rules regarding the opening or closing of automotive designs.
  29. Goofazoid Ars Tribunus Militum et Subscriptor Not sure if this is legit, or too scifi: What if there is a vulnerability in the code that is spotted, but instead of reporting it, someone develops an exploit for it?
    e.g. The "hacker" uses the faulty code to take control (remote control or even just a set of instructions that take over) a large automated vehicle, say a bus, and then drives it through a crowd.

    Is this a realistic possibility?
  30. parnasus Smack-Fu Master, in training Quote:...you probably don't want to actually take a baseball bat to your $85,000 lidar...

    Reading this, it dawned on me just how open to vandalism and downright criminal (nefarious) activity a self-driving vehicle can be. Can you imagine a jilted spouse disabling the lidar for insurance, or a bunch of street thugs with those baseball bats mentioned above taking out a bunch of taxis?

    Edit: forgot the word "activity"
  31. Action Attack Goose Wise, Aged Ars Veteran et Subscriptor Goofazoid wrote:Not sure if this is legit, or too scifi: What if there is a vulnerability in the code that is spotted, but instead of reporting it, someone develops an exploit for it?
    e.g. The "hacker" uses the faulty code to take control (remote control or even just a set of instructions that take over) a large automated vehicle, say a bus, and then drives it through a crowd.

    Is this a realistic possibility?

    Possible? Sure. Likely to happen in the wild? Probably not. The version actually deployed in a vehicle would likely look a tad different and take the ideas from the open source instance rather than a copy/paste of the code base.
  32. Goofazoid Ars Tribunus Militum et Subscriptor Action Attack Goose wrote:Goofazoid wrote:Not sure if this is legit, or too scifi: What if there is a vulnerability in the code that is spotted, but instead of reporting it, someone develops an exploit for it?
    e.g. The "hacker" uses the faulty code to take control (remote control or even just a set of instructions that take over) a large automated vehicle, say a bus, and then drives it through a crowd.

    Is this a realistic possibility?

    Possible? Sure. Likely to happen in the wild? Probably not. The version actually deployed in a vehicle would likely look a tad different and take the ideas from the open source instance rather than a copy/paste of the code base.

    thanks, that's why I asked
  33. ReaderBot Ars Praefectus dehildum wrote:TracerDX wrote:I dunno about you guys but this is one open source project I'd be wary of.

    Quote:I hope the community adds scenarios for environments we don't yet support, such as high-speed freeways.


    That's all well and good until it's time to assign blame in an accident. I wouldn't want to have my name on that code repository when that time comes. Who needs corporate responsibility when you can blame unpaid volunteers?

    This is not a new problem. There are many life critical systems in software already. I personally have been in a situation where I have had to answer the question "Did your code kill this person?" In my case, I could confidently state that no, it did not.

    Coding at this level requires skills that do not exist in typical agile code development. That is not to say that agile developers could not have the skills, but the agile development methodology itself is probably fundamentally incompatible with high-reliability code, as it basically bypasses the extensive fault consideration that other design methodologies allow for. Fault handling tends to be added on at the end or during testing, in my experience with these projects. A similar problem occurs with documentation: it shows up at the end and is usually incomplete, meaning that testing groups end up using unit test cases instead of independently writing comprehensive test suites and performing proper validation of the code.

    It is possible that I have simply seen badly run agile projects, but at this point I have seen a lot across different companies and industries, and the issue seems to be fundamental.

    What does agile development have to do with anything in this article?
  34. ReaderBot Ars Praefectus Goofazoid wrote:Not sure if this is legit, or too scifi: What if there is a vulnerability in the code that is spotted, but instead of reporting it, someone develops an exploit for it?
    e.g. The "hacker" uses the faulty code to take control (remote control or even just a set of instructions that take over) a large automated vehicle, say a bus, and then drives it through a crowd.

    Is this a realistic possibility?

    Nobody is talking about open-sourcing the self-driving code. This article discusses open-sourcing the test cases.
  35. BloodNinja Ars Praetorian Quote:"An open source library can be fantastic for companies to lower implementation time and costs, but at the same time, they're inheriting any bugs or flaws that go along with the distribution," he says.

    As opposed to rolling their own bugs and flaws. Bugs invented here are so much better than other people's bugs.

    /s

    You're inheriting other people's bugs, as well as their ability to find, and fix, those bugs.

    There is literally no downside to this. It doesn't prevent firms from writing proprietary software, or from forking the OSS version.
