As the auto industry strives to improve safety and edges toward high-level automated driving, the complexity of proving that electronic vehicle controls will perform safely is exploding. Simulation's role in validating safety is thus expanding, prompting many tool providers to move to scalable, cloud-based architectures that run operations in parallel to shorten analysis times.
Automakers and Tier 1s have acknowledged the value of simulation as an integral tool for getting the myriad software and hardware elements of automated driving to work in concert. High-level automated driving requires vehicle controls to “understand” many complex external elements, which have far more variables than conventional on-vehicle systems such as engine controls. Cameras, radar and lidar must monitor pedestrians, vehicles and highway markings.
One word: parallelize
As a ballooning number of sensors provide input to electronic controls that make critical driving decisions, managing the immense volume of data involved in simulating and validating operations is increasingly time-consuming. Verifying the operation of sensors and control functions is becoming as important as designing the systems being validated.
“People talk about better lidar or artificial intelligence, saying they will fix the problems of autonomous vehicles,” said Jeff Blackburn, head of business development at Metamoto. “We feel the key technology for getting a well-behaved autonomous vehicle to market will be simulation. The best way to handle simulations is with cloud-based, massively-scalable tools that can run lots of simulations in parallel.”
Startups like Metamoto and Cognata are gaining adherents for this scalable, cloud-based approach, which they say is a fundamentally different architecture than that used by traditional design-software suppliers. Many Tier 1s and OEMs are using clouds for simulation, though some major players are using proprietary clouds. Using computer servers that can be located anywhere lets engineering teams conduct multiple virtual tests simultaneously.
“When you look at the number of functional requirements and the number of things an automated-driving system has to do, it’s outgrown the capabilities of traditional techniques,” said Heikki Lane, director of strategy & policy at Cognata. “The cloud provides access to near-infinite scaling. When customers move to the cloud, they can parallelize operations instead of running them serially.”
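The serial-versus-parallel distinction Lane describes can be sketched in a few lines. This is an illustrative toy, not any vendor's actual API: `run_scenario` is a hypothetical stand-in for a full driving simulation, and a real cloud deployment would distribute work across many machines rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor

def run_scenario(scenario: str) -> dict:
    # Hypothetical stand-in for simulating one driving scenario end to end.
    return {"scenario": scenario, "passed": True}

def run_all_parallel(scenarios: list[str]) -> list[dict]:
    # Serial execution takes roughly the sum of the scenarios' runtimes;
    # with enough workers, parallel execution takes roughly the longest
    # single runtime -- the core appeal of cloud-scale simulation.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_scenario, scenarios))
```

In practice the pool would be a cluster scheduler or managed cloud service, and each scenario a heavyweight physics and sensor simulation.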
Traditional development-software suppliers are not standing still, of course. They’re enhancing existing tools while also following the trend to parallel architectures. Conventional workstations and on-premise data centers will remain a central tool for modeling, simulation and validation, but they’re being augmented with more parallelized architectures.
“We’re doing a lot with perception algorithm development, looking at ways to speed up calculations,” said Matthieu Worm, program lead for autonomous driving at Siemens Digital Industries Software. “We’re also developing more software for cluster environments and cloud environments.”
All the activity in simulation underscores the complexity of determining whether self-driving technologies provide safety in even the most complex driving situations. Though design tools can generate highly realistic tests that precisely mimic real-world operations, even simulation-software suppliers don’t think virtual testing will eliminate physical driving tests.
“I don’t think physical tests will be reduced, they may well grow,” Lane said. “The amount of simulation on top of that will continue to grow.”
Those simulations now go well beyond the vehicle. Traditional suppliers such as Dassault and Siemens address the broad spectrum of mobility—from silicon to smart-city planning. That broad reach can even extend to roadway systems such as stoplights.
“We go from chip to city, starting with simulating the chips used in design, taking those chips and building systems and going to the city level,” Worm said. “At the city level, we’re a big supplier of infrastructure equipment.”
Test commonality is crucial
The complexity of testing many vehicle types in multiple driving scenarios poses a major challenge for those tasked with ensuring autonomous vehicles are safe. Many of those involved with development and validation believe that some common tests will be needed as a foundation to determine whether vehicle systems perform safely.
“It’s important to do repeatable tests,” Worm said. “Everyone needs to do the same tests with different vehicles.”
The need for common tests extends to the critical regulatory aspect of vehicle certification. The National Highway Traffic Safety Administration (NHTSA) and industry are expected to work together to ensure that all vehicles are tested using the same criteria. Given the complexity of ensuring that vehicles respond well in myriad situations, no set of tests can guarantee complete safety—but a few standard tests may be needed to demonstrate a base level of competence.
“If simulations are going to be used as part of the vehicle validation process, I feel you have to have an agreed-upon library of scenarios,” Blackburn said. “I don’t think NHTSA can develop a testing technique for autonomous driving and it’s not like NHTSA will let automotive companies self-verify their vehicles.”
Not all sensor outputs are equal
While regulators grapple with the certification side of vehicle safety, engineering teams are wrestling with the creation of the safety systems themselves. Simulating sensors highlights the challenges that arise.
Today, most sensor modules are essentially black boxes: they send the electronic controls information about surrounding objects that's critical for decision making, but reveal little about how that output is produced. That opacity makes it difficult for simulation suppliers to know whether their models accurately reproduce sensor outputs. Most sensor suppliers have been hesitant about divulging the code they use—but that may be changing.
“We’re interested in taking a given number of scenes and seeing if our model gives us the same results on the output side,” Blackburn said. “We’ve got formal and informal relationships with radar, camera and lidar providers, we asked them to provide a scenario, then we shipped them our raw data from that scenario and asked them to compare it to their actual data. It’s an iterative process.”
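The iterative comparison Blackburn describes can be pictured as a simple agreement score between a simulated sensor's output and the recorded output of the real device. The function and tolerance below are illustrative assumptions, not Metamoto's actual method.

```python
def agreement_score(simulated, actual, rel_tol=0.1):
    # Fraction of paired detections (e.g., measured ranges in meters)
    # where the simulated value falls within rel_tol of the real
    # sensor's value. Each iteration of model tuning aims to push
    # this score higher.
    matches = sum(1 for s, a in zip(simulated, actual)
                  if abs(s - a) <= rel_tol * abs(a))
    return matches / max(len(actual), 1)
```

A production comparison would operate on full point clouds or detection lists rather than scalar ranges, but the feedback loop is the same: simulate the scene, score the match against real data, refine the model, repeat.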
As with most developmental aspects of autonomous technologies, there are many subtleties that can alter both virtual and physical sensor outputs. Weather conditions are a factor for cameras, for example. Lidar, which relies on light bouncing off the target, brings a number of extra parameters.
“With lidar, something that’s sometimes missed is that you need to know whether the beam is hitting a wet or dry surface, whether it’s a pedestrian or metal,” Blackburn said. “The software needs to account for material properties.”
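Blackburn's point about material properties can be illustrated with a toy return-intensity model. Every number here is invented for the sketch; a production sensor model would use measured reflectance data for each material and surface condition.

```python
# Assumed diffuse reflectivities -- illustrative values only.
REFLECTIVITY = {"pedestrian": 0.4, "metal": 0.8, "asphalt": 0.2}

def lidar_return(material: str, distance_m: float, wet: bool) -> float:
    # Return intensity falls off with the square of range; a wet surface
    # reflects more of the beam specularly, weakening the diffuse return
    # seen by the receiver (0.5 is an assumed attenuation, not measured).
    rho = REFLECTIVITY[material]
    if wet:
        rho *= 0.5
    return rho / (distance_m ** 2)
```

The sketch shows why a simulator that ignores material and wetness can report a strong return where the real sensor would see a weak one, or vice versa.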
Artificial intelligence will be an important element, helping tool providers develop tests and helping system developers improve their data-gathering and analysis techniques. AI can assist companies building scenarios for testing, helping them vary conditions and create tests that would be difficult to run in the real world.
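The scenario variation described above can be approximated even without AI by sweeping and randomizing condition parameters; an AI-driven generator would go further, biasing such sampling toward likely failure cases. Field names and value ranges here are illustrative assumptions, not any tool's schema.

```python
import itertools
import random

WEATHER = ["clear", "rain", "fog"]
TIME_OF_DAY = ["day", "dusk", "night"]

def grid_scenarios() -> list[dict]:
    # Exhaustive grid over discrete conditions.
    return [{"weather": w, "time": t}
            for w, t in itertools.product(WEATHER, TIME_OF_DAY)]

def randomized_scenarios(n: int, seed: int = 0) -> list[dict]:
    # Random variants perturbing a continuous parameter (pedestrian
    # speed) -- the kind of variation that is cheap to run virtually
    # but hard to stage repeatably on a physical test track.
    rng = random.Random(seed)
    return [{"weather": rng.choice(WEATHER),
             "time": rng.choice(TIME_OF_DAY),
             "pedestrian_speed_mps": round(rng.uniform(0.5, 3.0), 2)}
            for _ in range(n)]
```

Seeding the random generator keeps even the randomized variants repeatable, echoing Worm's point that tests must be reproducible across vehicles.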
When it’s used in vehicle systems, AI will pose many challenges for those tasked with validating safety, since its response to stimuli can change depending on conditions—meaning more potential variables. Once more, it’s likely that companies will come together to create some standard scenarios that will test the technology’s output to determine that it’s safe.