In recent years, autonomous vehicles have moved from fanciful science fiction to actual reality, with real cars driving on real streets with real people inside. However, as is usual with technology, when concept meets reality, things never go quite as intended. In particular, this past year has seen multiple fatal incidents involving autonomous vehicles at a range of levels of autonomy, from Tesla's Autopilot, which is nominally a Level 2 driver-assistance system (read here for more insight into autonomous vehicle levels), to Uber's self-driving car, which operates somewhere in the area of Level 4 or Level 5 autonomy.
In the Tesla incidents, drivers set the Autopilot feature to operate the vehicle with limited human oversight, leading to several fatal collisions with other vehicles or barriers when the autonomous system proved less capable than expected. In the Uber incident, the vehicle hit and killed a pedestrian, the first pedestrian fatality caused by an autonomous vehicle. While these are no doubt early days for the technology, they are also early days for the legal issues it raises. Just who is at fault when these vehicles cause deaths? The operator of the car? The manufacturer of the vehicle? The software or hardware vendors that enable autonomous systems? The passengers? The pedestrians? It turns out that answering this question of liability is just as tricky as understanding the autonomous technology itself.
The Uber AV Fatality
In this specific incident, on March 18, 2018, an autonomous car operated by Uber during real-world testing, with a human backup driver behind the wheel, struck and killed Elaine Herzberg in what is believed to be the first recorded pedestrian fatality involving a self-driving vehicle. The accident happened around 10 pm, when the pedestrian stepped into the road outside of a crosswalk while walking a bike. Neither the Uber vehicle's systems nor the backup driver noticed the pedestrian until it was too late. As a result of this tragic incident, Uber quickly suspended its self-driving operations while it investigated what happened. Real-world testing has since resumed.
This real-world fatality rattled the self-driving industry. Why didn't the car notice the pedestrian and stop? Why didn't the backup driver step in to prevent the collision? Who is at fault? What can be learned or corrected from this incident? Does liability rest with the technology or with the various humans involved?
Humans or Technology at Fault?
Before we can dive into the technology issues, we have to ask whether the people involved are fundamentally at fault for this incident. From what was known at the time, the individual crossed into the road without using a crosswalk and was walking a bike, which might have confused the internal system responsible for identifying potential hazards. While the car and backup driver should have been able to see the pedestrian, is it possible that the pedestrian bears some liability in this case? Perhaps, but a reasonable human driver paying attention to the road would most likely have noticed the pedestrian and at least swerved or braked to avoid a last-minute collision.
Likewise, perhaps the human backup driver is at fault. After all, the car was not operating without any humans in the vehicle. In the United States, current law generally requires a person to be in a moving car where they can control the wheel. In this situation, the person was there, but the car's autonomous mode was engaged. The driver was supposedly ready to take over control at any moment if the autonomous system ran into problems. This is not fully autonomous Level 5 operation, but rather limited autonomy that still relies on the human as a backup. In this case, the human failed to provide that backup. So is the human driver the liable party? Perhaps, but is it reasonable to assume that a person can go from being completely unaware of their surroundings, counting on the machine to do all the work, to handling a life-or-death situation with very little notice? It's hard to see how the human brain can instantly switch from unaware to acutely aware and able to handle a problem in such a short amount of time. Perhaps the entire assumption and expectation of the human backup driver is unreasonable.
Ruling out the two humans involved, the only remaining parties are the technology and the vehicle. Part of the reason this incident garnered so much attention is the question of whether the artificial intelligence that drives the car is ready to handle the real world. AI technology for cars is nascent and still very much in development, and many things can go wrong. Perhaps the machine learning models hadn't been fully trained on real-world situations. Maybe the sensors weren't capable of noticing a pedestrian walking a bike in the middle of the road at night. Maybe there were faults in the visual or sensor technology itself, such as glitches, smudges on lenses, or other perception issues. Maybe the computer system inside the vehicle didn't have enough time to process the fast-moving scene.
Is Uber at Fault?
Uber had incidents reported before this fatal accident during early tests of its self-driving vehicles. In Arizona, a number of its vehicles were involved in minor fender-benders and other traffic incidents that showcase the immature state of the technology. So before we jump to conclusions about the technology, perhaps Uber itself is liable? Arizona has attracted a lot of self-driving cars due to its less strict regulation and reporting requirements for autonomous vehicles. It could be that Uber was pushing its cars too far, too soon.
Did Uber know that its technology was not ready for prime time but put it into real-world use anyway, resulting in a fatality? Is this a case of corporate negligence, where the humans and vehicles are not directly at fault but the company is? Given the number of incidents reported before the fatal one, did Uber do a good enough job of stepping back and examining what happened in previous cases before continuing forward? Some might say not, giving Uber ultimate liability.
Much speculation after the accident centered on whether the AI and machine learning technology that powers autonomous vehicles is ready for the road. Autonomous cars depend on far more complex and sensitive systems than conventional vehicles. Should there be more testing, advancement, and regulation before these cars go on the road? In other industries, such as aviation, you wouldn't put a test plane in the air where it could hit another plane. In the case of autonomous vehicles, there are a lot of moving parts.
Machine learning is what powers autonomous vehicle systems, and the models must be trained on lots of real-world data to operate in real-world environments. They need to recognize all the features of the road environment, including roads, curbs, traffic control signals, and signs, as well as all the potential hazards they can encounter, including other vehicles, pedestrians, and other obstructions. Not only do they need to recognize the world around them, but they also need to act on that recognition, making the right decisions about how to navigate around or deal with obstructions.
Likewise, these autonomous vehicles depend on a wide array of sensors to work. Their computer vision systems are powered by cameras that can potentially be damaged, knocked out of alignment, covered in mud, salt, or debris, or otherwise impaired. They also use LIDAR and other sensors that can malfunction, develop physical or electrical faults, or otherwise produce impaired data. As those working with AI systems know all too well, bad data fed into a good model still produces bad output. You need good data everywhere to get good results, and the real world isn't known for always providing good data. So perhaps the complex interaction of all this technology, with its immature and constantly evolving machine learning models, sensitive hardware, and other factors, leads to brittleness in the real world, and perhaps all those technology vendors might be at fault here.
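The garbage-in, garbage-out problem is easy to demonstrate even without a real perception stack. The toy Python sketch below, in which every name, signal, and threshold is invented purely for illustration, scores how closely a simulated sensor reading matches a known "pedestrian" signature; a degraded sensor feeding noisy data into the very same detector can miss what a clean sensor catches:

```python
import math
import random

# Toy illustration of "garbage in, garbage out" in a perception pipeline.
# The "detector" below is a stand-in for a real model: it scores how closely
# a sensor reading matches a known pedestrian signature (cosine similarity).
# All of the values and thresholds here are invented for illustration only.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def detect_pedestrian(reading, template, threshold=0.9):
    """Return (confidence, detected) for one sensor reading."""
    confidence = cosine(reading, template)
    return confidence, confidence >= threshold

random.seed(0)
template = [random.gauss(0, 1) for _ in range(64)]  # the "pedestrian" signature
clean_reading = list(template)                      # an unobstructed sensor
# A smudged or misaligned sensor adds heavy noise to the very same scene:
noisy_reading = [x + random.gauss(0, 3) for x in template]

clean_conf, clean_hit = detect_pedestrian(clean_reading, template)
noisy_conf, noisy_hit = detect_pedestrian(noisy_reading, template)

print(f"clean sensor:    confidence={clean_conf:.2f}, detected={clean_hit}")
print(f"degraded sensor: confidence={noisy_conf:.2f}, detected={noisy_hit}")
```

Real perception systems are vastly more sophisticated than this sketch, but the principle holds: the model can only be as good as the data the sensors deliver to it.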
Other Potential Issues of Liability
Uber didn't have passengers in the car at the time of the fatal crash, but a passenger could conceivably contribute to such an incident. A passenger might take a selfie, cause a ruckus out of excitement at being in a self-driving car, or otherwise distract the backup driver so that they cannot stop the car in time. Ultimately, though, a driver is responsible for their car at all times, so even if there had been passengers in the car, it would still come down to the driver to monitor the road.
Bad road design could have contributed to the accident, too. Residents have reported to the city and to news outlets that the intersection where the accident happened is confusing: the bike lane cuts across a turn lane. If true, the road design could be the fault of the State of Arizona or the city, but perhaps that is shifting the blame just a bit too far.
The Complex Web of Liability
Maybe it's not such a good idea to pin the fault on just one person or one factor. The totality of the circumstances may add up to the reality that fault lies with many parties. Any time someone dies, people want someone to blame. What they don't want to hear is the possibility that no one is at fault. Accidents do happen, and perhaps all of this is simply the inevitable consequence of using machines instead of people to drive cars.
What makes an AI-involved accident any different from the many accidents that happen on the road every day? One of the biggest differences is that autonomous cars are new. AI-operated vehicles are rare to begin with, so a fatality like this is an exceedingly rare occurrence. People have worried about this sort of incident since self-driving cars were first developed, and now that it has happened, it has drawn enormous attention. Everything self-driving cars do is already under the spotlight, and the loss of a human life is so noteworthy that the world's media microscope will necessarily be focused on it.
Where Do We Go From Here?
In the aftermath of the accident, companies and consumers wondered whether this would be the end of self-driving vehicles. Self-driving cars have always been a step into the future, just one that many of us thought was much further out. Many people viewed Uber temporarily putting its self-driving program on hold as a sign that AI might no longer be allowed to drive cars. But things didn't stop; they just paused. Many companies now have self-driving cars and trucks on the road, and Uber's own self-driving cars have already returned to it.
Tragically, this incident is likely to be just one learning step in the development of autonomous cars. Before these vehicles gain widespread adoption, perhaps even more incidents like this will occur, with yet more autonomous vehicle accidents and fatalities. For the future of self-driving cars, there is a lot to think about. Laws, regulations, and development vary greatly between states and countries. We need to work on the proper development of these cars and their regulation, but we also need to learn from the experiences we have with AI-powered cars. The conversation will need to continue if we want to make autonomous vehicles a reality.
Ronald Schmelzer, columnist, is senior analyst and founder of the Artificial Intelligence-focused analyst and advisory firm Cognilytica, and is also the host of the AI Today podcast, SXSW Innovation Awards Judge, founder and operator of TechBreakfast demo format events, and an expert in AI, Machine Learning, Enterprise Architecture, venture capital, startup and entrepreneurial ecosystems, and more. Prior to founding Cognilytica, Ron founded and ran ZapThink, an industry analyst firm focused on Service-Oriented Architecture (SOA), Cloud Computing, Web Services, XML, & Enterprise Architecture, which was acquired by Dovel Technologies in August 2011.
Ron is a parallel entrepreneur, having started and sold a number of successful companies. The companies Ron has started and run have collectively employed hundreds of people, raised over $60M in venture funding, and produced exits in the millions. Ron was founder and chief organizer of TechBreakfast, the largest monthly morning tech meetup in the nation, with over 50,000 members and 3,000+ attendees at monthly events across the US, including Baltimore, DC, NY, Boston, Austin, Silicon Valley, Philadelphia, Raleigh, and more.
He was also founder and CEO of Bizelo, a SaaS company focused on small business apps, and was founder and CTO of ChannelWave, an enterprise software company that raised $60M+ in VC funding and was subsequently acquired by Click Commerce, a publicly traded company. Ron founded and was CEO of VirtuMall and VirtuFlex from 1994 to 1998, and hired the CEO before the company merged with ChannelWave.
Ron is a well-known expert in IT, Software-as-a-Service (SaaS), XML, Web Services, and Service-Oriented Architecture (SOA). He is well regarded as a startup marketing & sales adviser, and is currently mentor & investor in the TechStars seed stage investment program, where he has been involved since 2009. In addition, he is a judge of SXSW Interactive Awards and served on standards bodies such as RosettaNet, UDDI, and ebXML.
Ron is the lead author of XML and Web Services Unleashed (SAMS 2002) and co-author, with Jason Bloomberg, of Service-Orient or Be Doomed (Wiley 2006). Ron received a B.S. degree in Computer Science and Engineering from the Massachusetts Institute of Technology (MIT) and an MBA from Johns Hopkins University.