What self-driving cars can teach us about software testing
By Natalie Mendes
This is a guest post written by Hamesh Chawla, VP of Engineering at Zephyr. Hamesh is responsible for the worldwide engineering function at Zephyr and brings over 18 years of engineering, mobile, and networking technology experience to his role. Outside of work, Hamesh enjoys golfing, playing tennis, and spending time with his two daughters.
Autonomous vehicles, also known as self-driving cars, are not that far away. In fact, you should expect to see fleets of driverless vehicles on the streets in less than a decade. According to one report, there’ll be 21 million self-driving cars on the world’s roads by 2035. Fully autonomous vehicles that can operate without any human control or even monitoring are expected to make up the majority of new cars sold by 2050.
It will take extraordinary computational power to deploy safe autonomous vehicles at this scale. Intel CEO Brian Krzanich recently said he expects each autonomous vehicle will generate approximately 4,000 GB – or 4 terabytes – of data a day. By comparison, the average person, through the use of PCs, phones, smartwatches, and the like, currently generates 650 MB of data a day.
Since self-driving cars need to react to road changes within a split second, safety is critical for both the software and the hardware deployed in autonomous vehicles. This is why nearly every player in the automotive supply chain, from chip manufacturers to real-time operating system (RTOS) vendors, tool companies, and app developers, is offering products compliant with the functional safety standard ISO 26262, which is currently being revised to incorporate autonomous vehicle safety concerns.
ISO 26262 defines functional safety as the absence of unreasonable risk, including injury and even death, due to hazards caused by malfunctioning electrical and electronic (E/E) systems. To be compliant with ISO 26262, software must be integrated into a given hardware platform and within a given system (such as steering or braking) before it can be approved.
Because software testing for ISO 26262-compliant software is much more rigorous than the test-fail-patch bug hunt found on many software projects, it's useful to look more closely at ISO 26262's methodical approach to safety analysis.
A key concept in ISO 26262 is the automotive safety integrity level (ASIL), a measurement of the risk imposed by a specific system component. As risk increases, more stringent methods must be employed to ensure safety.
The ASIL for each component in a system is determined by three factors: severity, probability of exposure, and controllability. The ASIL then guides the choice of methods needed to reach the required level of integrity for each component in the vehicle. For example, if the brakes fail to engage when the brake pedal is pressed, the driver or vehicle controller must be able to fall back on an emergency brake.
The ASIL designation is made at the beginning of the development process and helps determine how thorough the testing must be for each component. In general, the higher the ASIL score, the more rigorous the testing must be. This is further spelled out in Section 6 of ISO 26262, which has recommendations on software design, unit testing, integration testing, and verification of software safety requirements. To be fully compliant with ISO 26262, software organizations need to demonstrate a functional safety management plan, a quality management plan, as well as evidence of a safety culture and people who are trained and responsible for enforcing that culture in both the development and production phases.
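To make the three factors concrete, here is a simplified sketch of ASIL classification. It assumes the commonly cited pattern of ISO 26262's risk table, in which severity (S1–S3), probability of exposure (E1–E4), and controllability (C1–C3) classes combine into one of four ASILs or a quality-management (QM) rating; the function and example hazards are illustrative, not a substitute for the standard itself.

```python
# Simplified sketch of ASIL determination. Assumption: the commonly cited
# ISO 26262 pattern where the severity (S1-S3), exposure (E1-E4), and
# controllability (C1-C3) classes of a hazardous event combine into an
# integrity level; a 0 in any class means no ASIL is assigned (QM).

ASIL_BY_SUM = {7: "A", 8: "B", 9: "C", 10: "D"}

def asil(severity: int, exposure: int, controllability: int) -> str:
    """Return the ASIL (or 'QM') for one hazardous event."""
    if not (0 <= severity <= 3 and 0 <= exposure <= 4
            and 0 <= controllability <= 3):
        raise ValueError("classification out of range")
    if 0 in (severity, exposure, controllability):
        return "QM"  # no safety requirement assigned
    return ASIL_BY_SUM.get(severity + exposure + controllability, "QM")

# Total brake failure at highway speed: life-threatening (S3), high
# probability of exposure (E4), hard to control (C3) -> the top level.
print(asil(3, 4, 3))  # -> D
# The same hazard, if most drivers could control it (C1), drops two levels:
print(asil(3, 4, 1))  # -> B
```

As the second call shows, improving any one factor (here, controllability) lowers the required integrity level and, with it, how rigorous the mandated testing methods must be.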
ISO 26262 covers the entire automotive product development process including such activities as requirements specification, design, implementation, integration, verification, validation, and configuration. ISO 26262 uses a V-model framework that ties each type of testing to a corresponding design or requirement document.
The V-model reference framework
An essential characteristic of the V-model is that the right side of the V provides a traceable way to check how the left side turned out (verification and validation). The V-model reference framework should not be confused with the V-model development process, a variant of the traditional sequential and linear Waterfall methodology in which verification happens during the development phases of the software lifecycle (Safety Requirements, Architectural Design, and Unit Design and Coding in the V-model above) and validation happens during the testing phases (Unit Testing, Integration Testing, and Safety Acceptance Testing).
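The left/right traceability described above can be sketched as a simple coverage check: each left-leg artifact is paired with a test level on the right leg, and an auditor asks whether every artifact has a test at its paired level. The phase names follow the V-model described here; the artifact IDs and data structure are hypothetical.

```python
# Minimal sketch of V-model traceability: every left-leg artifact must be
# verified by a test at the paired level on the right leg, so an audit can
# walk from any test back to what it checks. Data is illustrative.

LEVEL_PAIRS = {
    "Safety Requirements": "Safety Acceptance Testing",
    "Architectural Design": "Integration Testing",
    "Unit Design and Coding": "Unit Testing",
}

# Hypothetical left-leg artifacts and the test levels that cover them.
artifacts = [
    {"id": "SR-12", "level": "Safety Requirements",
     "covered_by": ["Safety Acceptance Testing"]},
    {"id": "AD-7", "level": "Architectural Design", "covered_by": []},
]

def untraced(items):
    """Return ids of artifacts lacking a test at their paired level."""
    return [a["id"] for a in items
            if LEVEL_PAIRS[a["level"]] not in a["covered_by"]]

print(untraced(artifacts))  # -> ['AD-7']
```

A gap in this report (here, the uncovered architectural design `AD-7`) is exactly the kind of finding a compliance audit is designed to surface.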
While ISO 26262 poses stringent requirements for the development of safety-critical applications and, in particular, in documenting testing activities, the standard does not specify any particular methodology, only that development organizations must specify and document all methodologies, best practices, and guidelines used. In other words, there are many ways to traverse the V-model reference framework, including using V-model, Waterfall and Agile development processes.
In fact, any sequential or linear development methodology which assumes requirements are known, correct, complete, and unambiguously specified, presents challenges for autonomous vehicles. This is because the software that runs self-driving cars is constantly learning from real-world driving data, getting better at recognizing things like road hazards or other vehicles and using sophisticated machine-learning algorithms to decide, for example, whether to brake or steer around an obstacle. All of this information is collected, controlled and analyzed in enormous multi-terabyte, cloud-based data sets and is constantly being updated in order to make the entire fleet smarter.
A modern high-end car runs on roughly 100 million lines of code, a number expected to grow to 200-300 million in the near future. Intel, which recently bought the self-driving car firm Mobileye for $15.3 billion, has identified three key pillars needed to support an automated vehicle future: the car (including in-vehicle computing and human-machine interfaces, or HMIs), the cloud and data center, and the communications that connect them, specifically 5th-generation (5G) mobile networks.
Craig Hurst, the strategy head of Intel’s Transportation Solutions Division, recently blogged that bug-free, secure software is paramount to each of these pillars, as well as:
Agile product life cycle methodologies (that) allow for the collective learned enhancements (like maps, traffic data, parking and enhanced visual and cognitive acuity via deep learning updates), security patches and service upgrades with secure software over the air (SOTA) updates.
Developing and testing complex autonomous vehicle applications in a dynamic environment can hardly be done using a linear and sequential development approach, which is why Intel is relying on agile processes to cut development time and improve the efficiency of its software lifecycle. Agile processes, which value working software over comprehensive documentation, can be tricky to implement in the highly regulated automotive industry, which requires documentation to verify ISO 26262 compliance.
One way to ensure thorough documentation throughout the agile lifecycle is to use advanced test management software together with an issue tracking system such as Jira. This type of combined platform can be used to help create tests, plan execution cycles, link to defects and track quality metrics, with reports that can be easily exported to help with compliance audits. The integrated platform has the added advantage of helping developers and testers on agile teams in regulated domains collaborate better.
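The kind of export such a platform produces for an audit can be sketched as follows: each execution record links a test case to the requirement it verifies and to any defect it raised, and the whole set serializes to a report. The field names and the Jira-style issue key (e.g. "BRAKE-101") are assumptions for illustration, not any specific tool's schema.

```python
# Sketch of an audit export: each test execution links a test case to the
# requirement it verifies and any defect raised. Field names and the
# Jira-style issue key are hypothetical, not a real tool's schema.

import csv
import io

executions = [
    {"test": "TC-BRK-01", "requirement": "SR-12",
     "result": "PASS", "defect": ""},
    {"test": "TC-BRK-02", "requirement": "SR-12",
     "result": "FAIL", "defect": "BRAKE-101"},
]

def export_audit_csv(rows) -> str:
    """Serialize execution records to CSV for a compliance audit pack."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["test", "requirement", "result", "defect"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

report = export_audit_csv(executions)
print(report.splitlines()[2])  # -> TC-BRK-02,SR-12,FAIL,BRAKE-101
```

Because every row ties a result back to a requirement and forward to a defect, an auditor can follow the same trace the V-model demands without digging through individual tools.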
As modern society comes to depend more and more on software in everyday life, testing and validation will only grow in importance. And what better place to learn about the value of testing than from self-driving cars, where every line of code affects the safety and security of drivers and passengers.