GUIDELINES FOR AUTONOMOUS AND CONNECTED VEHICLE TESTING
Fully Evaluating Performance and Safety Considerations is Critical to Successful Deployment
As wireless communications technology opens the door to limitless possibilities within the transportation sector, manufacturers must consider how validation testing can help ensure the safety and performance of connected and automated vehicles. The technology behind these vehicles and their components evolves quickly—more quickly than standards can change—which means manufacturers and developers won’t always have firm standards or requirements in place. Even so, ensuring the safety and performance of automated and connected vehicles and their components remains critical.
In the absence of firm, established standards, manufacturers and designers are left to navigate this new industry on their own. However, even without regulatory requirements, there are several considerations, guiding principles and certifications that can be applied to this new frontier of the automotive industry.
Automated Vehicle Overview
An automated vehicle is one that is able to monitor and react to its environment without input or direction from a driver or passenger. SAE International (formerly the Society of Automotive Engineers) has separated vehicle automation into six levels, ranging from 0 to 5, as shown in Table 1.
Level 0: Zero autonomy; the driver performs all driving tasks.
Level 1: Vehicle is controlled by the driver, but some driving assist features may be included in the vehicle design.
Level 2: Vehicle has combined automated functions, such as acceleration and steering, but the driver must remain engaged with the task of driving and monitor the environment at all times.
Level 3: Driver is necessary, but not required to monitor the environment. Driver must be ready to take control of the vehicle, with notice, at all times.
Level 4: Vehicle is capable of performing all driving functions under certain conditions. Driver may have the option to control the vehicle.
Level 5: Vehicle is capable of performing all driving functions under all conditions. Driver may have the option to control the vehicle.
Table 1: Vehicle automation levels, per SAE International
According to the National Conference of State Legislatures, nearly half of all U.S. states and the District of Columbia have adopted some form of self-driving legislation. Additionally, five state governors have issued executive orders to encourage testing of automated vehicles on their roads. The National Highway Traffic Safety Administration (NHTSA) of the U.S. Department of Transportation developed nine separate draft test procedures that were released for public comment in 2019, with comments on the draft due in March 2020.
Absent federal regulatory requirements for automated and connected vehicles, manufacturers need to be able to develop their own plans in order to ensure the safety and performance of automated technologies. This includes ensuring that any and all testing requirements at the state level are fulfilled, and doing general checks to ensure functionality of the vehicle and its components as they interact with other cars and with wireless technologies present in daily life (such as mobile phones, Bluetooth, WiFi, etc.). There are several considerations to address when it comes to developing and implementing testing.
When testing automated vehicles, there are several options for testing environments that OEMs should consider. Each environment type offers its own benefits as well as drawbacks. The most thorough testing plan will integrate multiple environments for robust data and results.
In-laboratory testing allows for components and full vehicles to be evaluated for electrical safety, wireless interoperability, functionality, connectivity, environmental conditions and overall performance. These settings allow experts to subject equipment to rigorous testing in a highly-controlled manner.
On-road testing allows for vehicles and various parts to be tested in real-world conditions, subjecting them to elements like weather, geography, light or darkness, road conditions, infrastructure and more. These real-world analyses can be valuable in assessing safety and performance over an extended period, providing a realistic view of a product’s lifespan and functionality.
Proving ground analysis can also be used to evaluate an automobile and/or its components on the road, but in a predictable, controlled setting. This type of evaluation allows on-road testing to be conducted in a controlled manner but also allows for specific variables that may be encountered under actual conditions, such as direct sunlight, weather conditions, tunnels, on-ramps, speed changes, traffic lights and other potential obstacles.
By combining these methods, evaluations can provide a more complete picture of performance and safety. For example, after initial proving ground testing, a vehicle or component can be brought to a lab and exposed to a harsh durability test. Then subsequent proving ground testing can be done to fully understand how the durability test affected the performance of the product. Especially valuable are accelerated stress tests and failure mode verification tests.
Lidar and Radar
When it comes to testing automated systems and overall performance of the automobile, manufacturers must evaluate various sensing technologies, two of the most common of which are radar and lidar. Lidar systems can produce data that is highly beneficial to the automated vehicle’s algorithms, but durability challenges may result in decreased resolution, poor performance or possibly sensor failure under certain harsh conditions. Accelerated stress testing provides a repeatable method of improving design robustness and eliminating these issues.
Accelerated stress testing determines failure modes by subjecting samples to different levels of multiple stress sources, applied simultaneously. It simulates real-life conditions as well as additional and elevated conditions to gather data that helps ensure product life, functionality and reliability. A form of accelerated stress testing, failure mode verification exposes samples or prototypes to a set of amplified environments and/or stresses to produce multiple unique failure modes, their sequence and distribution. A sensor can easily be tested in the field, sent to a lab for accelerated stressing, and returned to the field for performance evaluation. This can produce valuable information regarding the durability and accuracy of the lidar system over time and can include third-party failure analysis to help improve sensor design.
The ability of radar to detect an object depends on the item’s size, shape and material composition; detection relies on radio waves reflecting off the object and returning to the sensor. Most objects are visible to a radar system unless steps have been taken to reduce that visibility (think of stealth technology), though that is typically not something an autonomous car will have to deal with. However, if objects are too thin, transparent to the radar frequency or non-conductive, they may go undetected or be incorrectly identified. Radar systems may also have difficulty detecting objects when there are elevation changes, such as entrance and exit ramps, when the road has a sharp curve, or when an object has unusual geometry (for example, a flat trailer with its axle set far forward). These scenarios, or “edge cases,” can be recreated on the test track for repeatable system evaluation.
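The dependence of detection on size, shape and material can be illustrated with the classic monostatic radar range equation. The sketch below is illustrative only; the parameter values are hypothetical and do not describe any real automotive sensor.

```python
import math

# Radar range equation (monostatic, free space):
#   P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
# where sigma is the target's radar cross-section (RCS) in m^2.

def received_power(p_t, gain, wavelength, rcs, r):
    """Received power (watts) reflected back from a target at range r (m)."""
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi)**3 * r**4)

# A small-RCS object (e.g. a thin pole) returns far less power than a car
# at the same range, so it may fall below the detection threshold.
# All values below are assumed for illustration (77 GHz -> ~3.9 mm wavelength).
car = received_power(p_t=1.0, gain=100.0, wavelength=0.0039, rcs=10.0, r=50.0)
pole = received_power(p_t=1.0, gain=100.0, wavelength=0.0039, rcs=0.01, r=50.0)
print(car / pole)  # the power ratio equals the RCS ratio: 1000x
```

Because received power falls off with the fourth power of range, even modest reductions in cross-section can push an object below a system's noise floor at highway distances.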
Some lower-priced lidar systems can exhibit poorer range and/or resolution, giving the system less time and information with which to interpret the surrounding environment. It takes about 100 meters for a vehicle to come to a full stop from 70 mph when decelerating at 0.5 g, so a lidar system needs to accurately detect objects at nearly twice that distance to react safely. If a vehicle is designed for a lower-speed environment, improved range may not be necessary, as the safe stopping distance is much shorter. However, lower-speed environments are often more difficult to navigate, with complex intersections, pedestrians and an increased number of objects to detect and track, many of which are significantly smaller than vehicles. Benchmark testing of different lidar systems with various targets of differing sizes and reflectivity can help in the lidar selection process, as can on-road field testing.
Most automated vehicles employ cameras as they can offer a robust solution for object detection. However, cameras function best under good lighting conditions and are less reliable when there is significant lighting contrast or when a vehicle transitions from one lighting condition to another. Like the human eye, cameras cannot instantly adjust from very bright to very dark (or vice versa) and may have difficulty differentiating objects of similar contrast. So, for example, when the vehicle emerges from a tunnel during the daytime, thus moving from dark to light, the camera may not work as well.
It is important to consider the changes in lighting conditions that an automated vehicle will encounter during regular operations. Extensive track testing will be needed in these different edge cases in order to ensure optimal camera functionality. Additional over-the-road testing can be used to validate an algorithm’s ability to recognize objects. Infrared cameras used to detect pedestrians in the dark can also be evaluated using test dummies with heated mesh clothing that radiates a heat signature. Cameras should also undergo extensive performance testing in the lab to ensure their quality, both as a standalone device and as an integrated component within the vehicle’s vision system.
As a side note, considerations should also be made for weather conditions, such as how a camera will operate with rain drops or snow over the lens.
Most automated and connected vehicles will employ GPS systems; however, solar activity, as well as geographical elements such as tunnels, buildings and canyons, can impact the connection and reception of the GPS signal. This means the vehicle needs to be able to rely on dead reckoning navigation from the last known GPS position as a backup. The simplest dead reckoning algorithms may use only wheel speed and steering angle, while others may also incorporate compasses, inertial measurement units (IMUs) or other advanced sensors to reduce positional error. It is important to test these sensors and algorithms, and to use data acquisition hardware and real-time kinematic (RTK) systems to track true vehicle position versus assumed position in these situations.
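A wheel-speed-and-steering-angle dead reckoning scheme of the kind described above can be sketched with a kinematic bicycle model. This is a minimal illustration, not any vendor's algorithm; the wheelbase and inputs are assumed values.

```python
import math

WHEELBASE = 2.7  # meters (assumed, typical passenger car)

def dead_reckon(x, y, heading, speed, steer_angle, dt):
    """Advance the assumed pose one time step using a kinematic bicycle model.

    x, y in meters; heading and steer_angle in radians; speed in m/s.
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / WHEELBASE) * math.tan(steer_angle) * dt
    return x, y, heading

# Driving straight at 10 m/s from the last known GPS fix at the origin:
pose = (0.0, 0.0, 0.0)
for _ in range(10):  # 10 steps of 0.1 s = 1 second of travel
    pose = dead_reckon(*pose, speed=10.0, steer_angle=0.0, dt=0.1)
print(pose)  # about (10.0, 0.0, 0.0): 10 m traveled along x
```

In practice, small errors in each step accumulate, which is why RTK ground truth is used to measure how quickly the assumed position drifts from the true one.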
With a variety of connected components and automated features, it is critical to ensure their interoperability within the vehicle as well as with components outside the car, both for security and performance needs. The connected car relies on devices and components that exchange, share and interpret data. Interoperability ensures that these devices and components can form an integrated ecosystem within the vehicle, seamlessly communicating with one another.
Within a connected car, interoperability testing should consider other devices, access control, default and/or hard-coded credentials, legacy firmware updates, clear-text data transmission/storage and unneeded open ports. Testing components for interoperability ensures they will work together in a secure manner, without sacrificing performance. Information security management systems utilizing the four-stage “plan, do, check, act” (PDCA) system can be employed to test interoperability, as follows:
Plan: This phase involves identifying improvement opportunities with the product and systems. Evaluating the current process and pinpointing causes of failures will allow you to develop an action plan that can be implemented when the need arises.
Do: In this phase of the process, it is time to implement the identified improvements, collect analytics and data, and document issues and failures. It is important to keep all the information on hand for future use.
Check: During the “check” phase, any results from the previous stages must be reviewed and analyzed. After the analysis is complete, it is time to identify whether the necessary improvements were made. If they were not, return to the “plan” and “do” phases until the required improvements have been implemented. As with other steps, documentation is important.
Act: Based on the previous stages’ observations and failures, this is the time to implement changes to whatever did not work and continue practices that did. It is important to continue to reiterate the PDCA process, starting at this phase.
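The four stages above amount to a loop that repeats until verification passes. The sketch below models that loop; the function and item names are placeholders for illustration, not part of any standard or real framework.

```python
def pdca(improvements, implement, verify, max_cycles=10):
    """Repeat plan-do-check-act until every improvement passes verification.

    Returns (cycles_of_work, log), where log documents each implemented item.
    """
    log = []                                               # document everything
    for cycle in range(1, max_cycles + 1):
        plan = [i for i in improvements if not verify(i)]  # Plan: what still fails
        if not plan:                                       # Act: all verified; stop
            return cycle - 1, log
        for item in plan:                                  # Do: implement and record
            implement(item)
            log.append((cycle, item))
        # Check: verification happens at the top of the next iteration
    return max_cycles, log

# Toy usage: items pass verification once they have been "implemented".
done = set()
cycles, log = pdca(
    ["encrypt-telemetry", "patch-firmware"],   # hypothetical improvement items
    implement=done.add,
    verify=lambda item: item in done,
)
print(cycles, len(log))  # 1 cycle of work, 2 items implemented
```

The key property the loop captures is that unresolved failures feed back into the next planning phase rather than being dropped.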
One of the most effective ways to test for interoperability is to place products in a simulated environment to check for issues as well as how other devices or components in a system impact each other. On-road or proving ground testing can be an effective tool in determining interoperability. Additionally, software testing is a vital component of interoperability as it uses manual and simulated test processes to verify that the product meets all functional, performance, security and quality requirements. This will include test plans, simulations and analysis.
All connected devices must be evaluated for cybersecurity concerns, and connected vehicles are no exception. Components must be evaluated for security vulnerabilities, flaw remediation and patching, non-secured (unencrypted) communications and malware infection. There are several ways that manufacturers can mitigate cybersecurity risk in connected devices:
Develop and/or use certified products that are correctly configured. Doing so can help address issues with malicious insiders, bad code, botnets and potential ransomware. This will also aid in the identification of security flaws early in the product development cycle, thereby reducing mitigation costs later and building product reputation.
Obtain certification of information security management systems. Use the four-stage “plan, do, check, act” process outlined in ISO/IEC 27001 to reduce risks related to stolen or hacked information and malicious insiders. An information security management system can preserve the confidentiality, integrity and availability of information by applying a risk management process to the design of processes, information systems and controls.
Employ threat risk assessments, vulnerability assessments and penetration testing. A threat risk assessment can indicate weak points and security controls that can then be used to establish fixes. Vulnerability assessments can identify latent vulnerabilities and provide recommendations for improvement. Penetration testing can illustrate how vulnerabilities can be exploited. All three can be conducted together or separately, depending upon the individual component’s need.
Conduct supply chain assurance assessments and certification. One compromised link within a supply chain can affect many organizations, and these attacks are becoming more common and are expected to continue. Running assurance assessments and certification processes within the supply chain can help ensure the safety of a product and protect against bad code and software issues.
For vehicles in motion, communication will be a constant concern. Not only do automobiles need to interact with each other, they will increasingly need to interact with infrastructure and other smart devices. Dedicated short-range communications (DSRC) is a wireless technology that allows automobiles to conduct this communication.
DSRC regulatory requirements come from a variety of sources, including the Federal Communications Commission (FCC), the U.S. Department of Transportation (DOT), the Federal Information Processing Standards (FIPS), the Institute of Electrical and Electronics Engineers (IEEE), and validation tests specified by OEMs. These specifications address many different aspects of wireless communication, including (but not limited to):
Wireless access in vehicular environments
V2X safety communications
Radio reception range
On board unit evaluations for:
Connector test and validation
Electronics and electrical components
General automotive electronic equipment
DSRC testing includes laboratory testing on components to ensure their interoperability, connectivity, security, performance and electrical safety. Real-world testing over the road or on proving grounds can also allow for the evaluation of components within a vehicle and test their interaction while operating in their intended environments and with communication established with roadside units (RSUs).
As the automotive industry faces a truly pioneering time of change, it can be daunting to imagine not only the possibilities of automated and connected vehicles, but also the risks. Without clear standards and regulatory requirements, navigating the testing of these automobiles and their components can be confusing. However, employing testing in the lab and in the field can serve as a good starting point. Additionally, staying informed of developments within the industry is essential. As the technology evolves, the standards will begin to follow, making it important to remain aware of proposed and approved regulations and changes. With education and preparation, manufacturers can ensure the safety and performance of automated and connected vehicles and help tap into the future of transportation.
Ralph Buckingham is Intertek’s director of connected and autonomous technologies and also manages the daily operations requirements of The American Center for Mobility proving grounds. He participates on various standard development committees, contributing to the evolution of telematics, connected vehicle, and autonomous vehicle testing procedures.