High Profile Driverless Car Crashes

This article lists the early crashes that involved driverless cars, i.e., a car that is not being driven by a human. In reality, there is no such thing as a driverless car: the driver is either a human or a computer. Therefore, the term self-driving car is more appropriate; however, the term driverless car seems to be in common use. In some cars, the driverless mode has to be engaged, for example, Tesla's Autopilot mode. Thus, this article does not cover the many crashes that have occurred in cars that support a driverless mode but were being driven by humans at the time of the crash.

[Image: Car crash]

The data currently available suggests that driverless cars are now safer than human-driven vehicles, though research is ongoing. Why? Some humans are bad at driving. Humans can be distracted, drive too fast and fail to think through their actions. Humans also get ill and tired, and can even fall asleep. Computer drivers, by contrast, have no interest in looking at a mobile phone; they stick to the speed limits, calculate their actions billions of times per second and never tire. However, computers can get ill (break down) and can have bugs in their software, i.e., no computer or human driver can ever be perfect.

Another problem for computers is the need to deal with the millions of different road conditions and possible incidents that could occur. This requires a level of intelligence (sometimes lacking in human drivers). Computers use some form of Artificial Intelligence (AI), often Machine Learning (ML), to drive a vehicle. This is followed by an enormous amount of testing. There are efforts to automate testing so that the ML can be tested in far more scenarios than is physically possible by driving a vehicle around (though real-world driving will always need to happen). An example of a testing initiative is the Safety Pool Scenario Database, a set of curated driving scenarios for the testing and validation of Advanced Driver Assistance Systems (ADAS), which would include self-driving.
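To illustrate the scenario-based approach, the sketch below runs a stand-in planner against a handful of hand-written scenarios and checks its responses. The scenario format, the plan_action() function and the pass/fail check are all hypothetical; they do not reflect the Safety Pool Scenario Database's actual schema or tooling.

```python
# A minimal sketch of scenario-based testing. The planner and the
# scenario schema are hypothetical stand-ins, for illustration only.

def plan_action(scenario):
    """Stand-in for the driving system under test. Returns the planned
    response, e.g. 'brake' or 'continue'."""
    # Toy rule: brake if the obstacle is within two seconds of travel.
    if scenario["obstacle_distance_m"] < scenario["ego_speed_mps"] * 2.0:
        return "brake"
    return "continue"

# Each scenario pairs an initial situation with the minimum safe response.
scenarios = [
    {"name": "pedestrian_crossing", "ego_speed_mps": 13.4,
     "obstacle_distance_m": 20.0, "expected": "brake"},
    {"name": "stationary_vehicle_ahead", "ego_speed_mps": 29.0,
     "obstacle_distance_m": 45.0, "expected": "brake"},
    {"name": "clear_road", "ego_speed_mps": 13.4,
     "obstacle_distance_m": 200.0, "expected": "continue"},
]

failures = [s["name"] for s in scenarios if plan_action(s) != s["expected"]]
print(f"{len(scenarios) - len(failures)}/{len(scenarios)} scenarios passed")
if failures:
    print("Failed:", ", ".join(failures))
```

A real scenario database holds many thousands of such cases, so the same loop can exercise a planner against far more situations than physical test driving ever could.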

List of Early Self-Driving Crashes

| Date       | Maker | Vehicle | Cause                                                        | Deaths |
|------------|-------|---------|--------------------------------------------------------------|--------|
| 2018-03-23 | Tesla | Model X | Collision with a faulty crash barrier, see 1                 | 1      |
| 2018-03-18 | Volvo | XC90    | Collision with a woman walking a cycle across a road, see 2  | 1      |
| 2018-01-22 | Tesla | Model S | Collision with a stationary fire truck, see 3                | 0      |
| 2017-03-24 | Volvo | XC90    | Human-driven car hits and overturns autonomous Uber car      | 0      |
| 2016-05-07 | Tesla | Model S | Collision with a tractor-trailer at an intersection, see 4   | 1      |

Further Information on Some of the Crashes

  1. Autopilot was engaged with the follow distance set to minimum; visual warnings and an audible warning were given. See the Tesla blog post archived on the Wayback Machine.
  2. Autonomously driven Uber knocked over a woman wheeling a cycle across the road; the woman appeared out of the darkness without warning. The pedestrian died from her injuries.
  3. Autopilot engaged, vehicle speed 65 mph. It appears the Tesla was travelling behind a large pickup truck that swerved unexpectedly to avoid the fire truck. The driver did not respond to the situation.
  4. Autopilot was engaged and the vehicle's cruise speed had been set to 74 mph; the NHTSA report did not find Autopilot at fault.

Data on ADAS and ADS

The U.S. National Highway Traffic Safety Administration (NHTSA) now collates data on incidents involving vehicles fitted with ADAS and ADS features, that is, Advanced Driver Assistance Systems (ADAS), for example, active lane-keeping assist, and Automated Driving Systems (ADS), i.e., self-driving systems. An initial data report was released on 15 June 2022; see the accompanying news item, NHTSA Releases Initial Data on Safety Performance of Advanced Vehicle Technologies.
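For anyone wanting to explore the published figures, the sketch below shows one way to summarise the incident reports with Python and pandas, assuming the data has been downloaded as a CSV file. The filename and column names are illustrative guesses, not the exact names used in NHTSA's published files; check the actual data release.

```python
# A short sketch of exploring NHTSA incident report data, assuming a
# downloaded CSV file. The filename and column names are illustrative.
import pandas as pd

incidents = pd.read_csv("nhtsa_ads_incident_reports.csv")

# Count reported incidents per manufacturer and tally any fatalities.
summary = (incidents
           .groupby("Make")
           .agg(reports=("Report ID", "count"),
                fatalities=("Highest Injury Severity",
                            lambda s: (s == "Fatality").sum()))
           .sort_values("reports", ascending=False))
print(summary.head(10))
```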

Update - Driverless Car Issues Continue

Several years on from the first reports of car crashes involving driverless car technology, the technology still has issues. Authorities in California halted Cruise driverless taxis after a pedestrian was hit by another car and then dragged under a Cruise vehicle. I wrote a paper examining the incident: A Robotaxi Artificial Intelligence Safety Failure. The examination of the Cruise accident and its aftermath showed that driverless technology extends beyond the technology in the vehicle. It encompasses the management, testing and incident-handling processes that are needed when things go wrong. If these procedures are weak, there are consequences for driverless vehicle manufacturers.

My paper pointed out that autonomous vehicle (AV) technology is unable to handle black swan events: unusual events that many human drivers would be able to deal with, but for which automated driving systems require human intervention. One example is the coning of robotaxis to disrupt their operation:

[Image: Robotaxi coning. Photo by Flickr user BuddyL]

Whereas a human driver can use their general intelligence to figure out how to deal with a black swan event, self-driving cars can only deal with situations that they have previously learnt. Even then, disruptions to this learnt environment can cause issues, as the coning example shows.
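One common engineering response to such out-of-distribution situations is to gate driving decisions on the perception system's confidence, falling back to a safe stop and human (remote) assistance when confidence is low. The sketch below illustrates the idea; the functions, scene labels and threshold are hypothetical, not any vendor's actual system.

```python
# A minimal sketch of a confidence-gated fallback for black swan scenes.
# All names and the 0.85 threshold are hypothetical, for illustration only.

CONFIDENCE_THRESHOLD = 0.85

def perceive(sensor_frame):
    """Stand-in for the perception stack. Returns a scene label and the
    model's confidence that the scene resembles its training data."""
    # A cone on the bonnet is not in the training data, so confidence drops.
    if "cone_on_bonnet" in sensor_frame:
        return "unknown_obstruction", 0.30
    return "clear_road", 0.97

def drive_step(sensor_frame):
    scene, confidence = perceive(sensor_frame)
    if confidence < CONFIDENCE_THRESHOLD:
        # Too unlike anything learnt: stop safely and hand over to a human.
        return "minimal_risk_stop; request remote assistance"
    return f"continue driving ({scene})"

print(drive_step(["clear_road"]))      # normal operation
print(drive_step(["cone_on_bonnet"]))  # black swan: human intervention
```

The weakness, as the coning protests showed, is that the fallback still depends on a human somewhere in the loop.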

Human drivers need to pass a driving test before being allowed to drive unsupervised. If we expect a car's self-driving capability to exceed human-level driving, then the automated driving system should be able to pass a human-level driving test. In the United Kingdom (UK), as in many parts of the world, humans must pass the Hazard Perception Test (HPT) and a practical vehicle test. It would make sense for a self-driving system to be able to pass similar tests. Another interesting consideration is how regulators determine when one manufacturer's system has achieved self-driving but another's has not; one system may successfully handle a black swan event that another cannot. As my paper suggested, maybe a single global standardised self-driving AI system is required instead of the current race between multiple companies. After all, a human can jump into many different vehicles and drive them. Should a self-driving system be equally adaptable?
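To make the testing idea concrete, the sketch below scores a system on a hazard-perception-style test: the sooner a developing hazard is flagged after it starts, the higher the score, loosely mirroring the HPT's banded scoring. The clip data and scoring bands are illustrative, not the official UK test's.

```python
# A simplified sketch of hazard-perception-style scoring. The clips and
# bands are hypothetical; the real HPT's scoring details differ.

def score_response(hazard_onset_s, response_s, window_s=5.0, max_score=5):
    """Score max_score down to 0: the sooner after hazard onset the
    system flags the hazard, the higher the score."""
    delay = response_s - hazard_onset_s
    if delay < 0 or delay > window_s:
        return 0  # flagged before the hazard developed, or too late
    band = window_s / max_score
    return max_score - int(delay // band)

# Hypothetical clips: when the hazard starts vs when the system flagged it.
clips = [("child_near_kerb", 12.0, 12.6), ("car_pulling_out", 8.0, 10.9)]
total = sum(score_response(onset, resp) for _, onset, resp in clips)
print(f"Score: {total}/{len(clips) * 5}")
```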

A universal AV AI

There would be advantages for regulators in a single universal self-driving system (a sketch of the idea follows the list):

  • Only one system to be validated and verified as safe for self-driving by regulators.
  • It would build upon the diverse experience of multiple organisations and engineers, i.e., pooled knowledge.
  • Testing in a global range of environments and Operational Design Domains (ODDs) beyond the limited geographic regions of current testing areas.
  • A standardised method for a system to handle unknown edge cases would be developed.
  • Trust and confidence in automated driving technology are not universal, hence the coning incident. The Insurance Institute for Highway Safety in America has found automated driving systems to be poor; maybe a different approach is required to help with consumer trust.
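A single universal system implies a standardised contract between the driving AI and the vehicle, so that, like a human driver, the same system can operate many different vehicles. The sketch below illustrates that idea with a hypothetical interface; none of the names represent an existing standard.

```python
# A minimal sketch of the "one driver, many vehicles" idea: a single
# driving policy targets a standardised vehicle interface, so any vehicle
# implementing the interface can be driven by the same system. All names
# here are hypothetical, not an existing standard.
from abc import ABC, abstractmethod

class VehicleInterface(ABC):
    """The standardised contract every vehicle would expose."""

    @abstractmethod
    def sensor_frame(self) -> dict: ...

    @abstractmethod
    def apply_controls(self, steering: float, throttle: float,
                       brake: float) -> None: ...

class UniversalDriver:
    """One driving system, reusable across any conforming vehicle."""

    def step(self, vehicle: VehicleInterface) -> None:
        frame = vehicle.sensor_frame()
        # Trivial placeholder policy: brake if anything is close ahead.
        if frame.get("obstacle_distance_m", 1e9) < 25.0:
            vehicle.apply_controls(steering=0.0, throttle=0.0, brake=1.0)
        else:
            vehicle.apply_controls(steering=0.0, throttle=0.3, brake=0.0)

class ExampleCar(VehicleInterface):
    """A vehicle maker only has to implement the interface."""
    def sensor_frame(self) -> dict:
        return {"obstacle_distance_m": 18.0}
    def apply_controls(self, steering, throttle, brake):
        print(f"steer={steering} throttle={throttle} brake={brake}")

UniversalDriver().step(ExampleCar())  # -> steer=0.0 throttle=0.0 brake=1.0
```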

Self-driving cars are being used, but they are not universally deployable. The ability to deploy in any ODD straight from the factory is still decades away, as black swan events can stop a self-driving car from operating.

See Also

  • For a full list of all the articles in Tek Eye see the full site alphabetical Index
