With the push for fully autonomous cars increasingly evident, how far down the road are we in anticipating and mitigating the risks involved, asks George Hall

The future of automotive transport is fast becoming a driverless prospect, but many hurdles remain if the safety of road users is to stay at the forefront of technological advances such as artificial intelligence (AI)-driven vehicles.
 
A race between two driverless electric cars recently took place in Buenos Aires. Both cars were created by Roborace, a company looking to set up regular races between driverless cars for entertainment purposes – but also to better understand and develop driverless technology. The benefits are immediately apparent from a health and safety perspective: if anything goes awry on the track, no driver means no injury.
Unfortunately for Roborace’s two Devbot cars, a crash did occur. One of the cars took a corner too fast and clipped it, damaging the car’s exterior and ultimately causing it to crash out. As this race (or experiment) took place ahead of the Buenos Aires Formula E race, there were spectators and race staff around the course who could have been injured.
 
Control measures
 
This risk of injury was successfully mitigated through several control measures, such as a speed limiter on the cars (one car topping out at 116mph) and crash barriers around the course. Further controls could also be introduced, for instance emergency braking systems and enforced safe distances from the track.
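As a rough illustration of how such controls might combine – a hypothetical sketch, not Roborace’s actual control software, and the braking distance is an assumed figure – a speed limiter and emergency braking check could look like this:

```python
# Hypothetical sketch only – not Roborace's actual control software.
# The braking distance below is an assumed figure for illustration.

SPEED_LIMIT_MPH = 116.0     # the cap reported at the Buenos Aires event
BRAKING_DISTANCE_M = 50.0   # assumed minimum clearance to an obstacle

def limited_speed(requested_mph: float) -> float:
    """Clamp the AI's requested speed to the configured limit."""
    return min(requested_mph, SPEED_LIMIT_MPH)

def should_emergency_brake(obstacle_distance_m: float) -> bool:
    """Trigger emergency braking when an obstacle is inside the safe envelope."""
    return obstacle_distance_m < BRAKING_DISTANCE_M

print(limited_speed(130.0))           # -> 116.0
print(should_emergency_brake(30.0))   # -> True
```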
 
Both cars were controlled by AI software – computer programs designed to drive the car, observe and monitor the local environment and, ultimately, ensure a safe drive. Devbot 2, however, did not manage this and crashed while travelling at high speed. Computers aren’t perfect, which I’m sure we all appreciate at some level.
 
Despite the car crashing and not finishing the race, because only a computer program was in control there were no human injuries to consider or deal with. Specialists can review what happened prior to the crash in the manner of a dashboard camera, but in much greater detail. Going forward, all that needs to be done is to get a new chassis up and running and reload the AI into the car, which will hopefully have learned from its mistakes the first time round. After all, the whole point is for these cars to learn and become better.
 
Assessing risk
 
The worrisome aspect is the fact that the cars are AI-controlled. Of course, AI can make life easier, but the concern is that each AI-controlled vehicle will, in essence, be different from the next. In a real-life situation on the road, how can a driver in control of a car predict what a driverless car is going to do if it is always learning, and vice versa? One car may do something completely different to another when put in a similar situation, in the same way that human drivers are prone to reacting differently.
 
Risk assessing against unpredictability is certainly challenging. In addition, any company investing in driverless technology will have to ask itself whether the risk scores it attains can be considered as low as reasonably practicable (ALARP). This term is often used in the regulation and management of safety-critical and safety-involved systems, the principle being that residual risk shall be reduced as far as is reasonably practicable.
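To make the ALARP judgment concrete, here is a minimal sketch of the likelihood-times-severity scoring commonly used in risk assessment; the 5x5 scale and the band thresholds are illustrative assumptions, not anything prescribed in this article:

```python
# Illustrative 5x5 risk matrix – the scales and band thresholds are assumptions.

def risk_score(likelihood: int, severity: int) -> int:
    """Classic matrix score: likelihood (1-5) multiplied by severity (1-5)."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    return likelihood * severity

def risk_band(score: int) -> str:
    """Map a score to a decision band used in ALARP-style reviews."""
    if score <= 4:
        return "broadly acceptable"
    if score <= 12:
        return "tolerable if ALARP"  # reduce further unless cost grossly outweighs benefit
    return "unacceptable"

# e.g. an unlikely (2) but catastrophic (5) failure of a driverless car
score = risk_score(likelihood=2, severity=5)
print(score, risk_band(score))  # -> 10 tolerable if ALARP
```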
 
Over time, through risk mitigation, the risk of death due to dangerous driving will reduce as a chaotic element – humans – is removed from the equation. The introduction of autonomous cars will be a slow culture shift for the world to contend with, although now that Dubai has approved single-occupant drones, perhaps lessons learned there will cascade down to road level.
 
Expect the unexpected 
 
Regarding risk management, this is a good mantra to follow. Another is, ‘if it could go wrong, it will’. If we begin to assess against the unexpected occurring, then some level of successful mitigation should be in place for driverless automotive transport. A recent report on the Deepwater Horizon disaster summed it up nicely: most organisations and individuals focus on the possibility of successfully mitigating an issue, when more should be focusing on the possibility of failure instead.
 
But then, should the principle of ALARP start creeping in, a business has to make the call as to when it stops researching mitigating measures or controls. If a risk cannot be mitigated further because doing so becomes a financial sinkhole, or demands resources that far outstrip the risk appetite of the company in charge, then issues could arise.
 
No company in charge of driverless technology would ever mark a potential risk that may impact human lives as ALARP. The more tests that are done, through the slow introduction and culture shift discussed previously, the better the technology becomes, until one day the majority of journeys will be undertaken entirely by faithful AI companions. After all, wouldn’t it be nice after a day at work to simply ask your car to take you home?
 
Malicious attacks
 
A worst-case scenario for something like this would be a hack attack on a connected car. Instead of driving you home, the car simply goes where it is told. This could be a harmless prank by a friend or a chilling precursor to a terrorist attack.
 
Airlines and their regulatory bodies, such as ICAO and EASA, are already considering how to mitigate a terrorism-driven hack on their planes and the components that make them up; a false flight plan uploaded to an aircraft’s systems could result in a major safety incident. Shipping companies have also been on the receiving end of hacks that could have endangered life – take the example of Maersk and the Petya (NotPetya) malware.
 
Should a vehicle have control stripped away and placed in the hands of a terrorist organisation, the loss of life could be catastrophic, especially given the speed with which information can be shared across the globe these days. Terrorist cells could, upon discovery of a loophole or successful hack, share how they achieved control with other members of their organisation, resulting in a hugely coordinated attack.
 
Consider another vision of the not-so-distant future, with autonomous heavy goods vehicles driving up and down major arterial roadways: if control were stripped from the control centre or embedded AI and given to those wishing to cause harm, the safety issue would be magnified. A larger body and greater mass equal more destructive power.
 
If this were to occur, would a vehicle manufacturer have the capability to roll out a software fix to close the loophole? This risk must surely have been considered and mitigated against. With the transfer of liability from the driver to the manufacturer, all risks need to be discussed, assessed and managed.
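One plausible building block for such a fix is a signed over-the-air (OTA) update, in which the vehicle refuses any software image that lacks a valid manufacturer signature. The sketch below illustrates the idea using Ed25519 signing from the third-party cryptography package; it is an assumption for illustration, not any manufacturer’s actual update mechanism:

```python
# Illustrative OTA-update signature check – not any manufacturer's real scheme.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In practice the private key stays with the manufacturer and only the
# public key is baked into the vehicle's firmware.
manufacturer_key = Ed25519PrivateKey.generate()
vehicle_trusted_key = manufacturer_key.public_key()

firmware_image = b"new software fix closing the loophole"
signature = manufacturer_key.sign(firmware_image)

def install_update(image: bytes, sig: bytes) -> bool:
    """Install the update only if the manufacturer's signature verifies."""
    try:
        vehicle_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False  # reject tampered or unsigned images
    return True       # safe to flash

print(install_update(firmware_image, signature))      # -> True
print(install_update(b"malicious image", signature))  # -> False
```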
 
Progress of technology
 
The International Risk Governance Council (IRGC) has been discussing autonomous cars as part of its wide-ranging programme of work. In Zurich in July 2016, in conjunction with the EPFL (École Polytechnique Fédérale de Lausanne) Transport Center, it organised a workshop to help improve understanding of the risks and opportunities associated with driverless technology. Interestingly, one of the main points taken away from the event was that, from a safety perspective, the technology required for fully autonomous driving is not yet ready.
 
Jumping back to the Roborace cars and the fact that one of them did crash: society as a whole seems more willing to accept accidents caused by human error, so one point to consider is how many accidents would be deemed acceptable for autonomous vehicles. With risk management being key here, one accident is one too many. Performing meaningful investigation and analysis after any safety incident should result in mitigating or completely preventative measures being introduced – in effect, a lessons-learned environment.
 
The Roborace cars learn from their driving and even from their mistakes. It may be possible to download the AI from a crashed car and plug it back in as many times as required, but the same cannot be done with human passengers. Perhaps more time is needed for the cars to crash in a safe environment and learn to drive.
 
Considering all of this, who could ever sit and decide that these types of risks have been assessed and mitigated as much as possible, or that the principle of ALARP has been properly applied? An organisation would need to be extremely confident in its software and vehicles from a safety perspective. Accidents will occur, and as the saying goes, ‘if it can happen, it will’.
 
Heavy investment
 
A huge shift is coming, with heavy research expenditure in this area, because the expected benefits of driverless cars far outweigh the associated risks. For instance, transport could be opened up to those with limited mobility, and driving in dangerous areas or conditions could be handed to a trusty AI. In Great Britain in 2014, 240 deaths and 1,070 serious injuries were caused by drink driving. Driverless vehicles could dramatically reduce these figures; if driverless cars were mandated by law, such casualties could be eliminated entirely.
 
So is the world ready yet for entirely driverless cars? Even those regions that are going to introduce this technology soon are maintaining a seat and a driver to spot things that the inbuilt AI is unable to deal with. As with anything in life, practice makes perfect – the more this technology is applied and studied, the safer the industry should become. 
 
Were I a risk manager or analyst within an automotive company at this time, I would have serious reservations about saying that what has been done so far is enough. However, I am reasonably confident that in the near future it will be not only our cars that are autonomous. Perhaps the automotive industry should take a leaf out of the aviation industry’s book, where all incidents are shared globally within the industry so that all can learn from each other.
 
George Hall is a safety, quality and risk management consultant at Ideagen.
