How Aircraft Are Informing the Work on Self-Driving Cars
UA researchers are using aviation's high standards in an effort to increase our confidence in the safety of robotic cars.

By Emily Dieckman, UA College of Engineering
Dec. 11, 2017


NavLab manager Nick Morris and undergraduate researcher Nicholas Patzke set up a test vehicle that is tracked by a Vicon camera and a lidar system, seen on the monitors in the foreground.


Passengers climbing into self-driving cars — also known as highly automated vehicles, or HAVs — need to believe that their vehicles can avoid potential hazards. So Mathieu Joerger, a University of Arizona assistant professor of aerospace and mechanical engineering, and researchers at the Illinois Institute of Technology are building on a knowledge of aircraft navigation standards to improve and guarantee HAV safety.

"We want to predict safety risk, but to aviation standards," said Joerger, who joined the UA College of Engineering in August 2016.

A three-year National Science Foundation grant of nearly $900,000, awarded in September 2016, is funding the work of Joerger and his co-investigator, Matthew Spenko, an associate professor of mechanical engineering at IIT.

The researchers are evaluating the integrity of HAV position, heading and velocity estimates that self-driving vehicles use to stay in their lane and avoid hazards. Integrity is a measure of how much trust humans can place in the information provided by sensors. Thanks to decades of research, high integrity levels already have been established in commercial aircraft.

Like airplanes, autonomous vehicles rely on the Global Positioning System, or GPS, which provides information about a vehicle's position on Earth. But unlike aircraft in open sky, HAVs can lose GPS tracking when they pass under bridges, through tunnels or near tall buildings. And HAVs need to know their positions relative to nearby cars, structures and pedestrians. There is also less wiggle room: The margin of error for safely determining the position of an aircraft is about 10 meters, whereas cars have a safe buffer zone of less than a meter.
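Aviation handles that margin with an integrity check: the navigation system computes a statistical bound on its own position error, often called a protection level, and compares it with the largest error the operation can tolerate, the alert limit. The sketch below illustrates that comparison only; the function name and the numbers are chosen for illustration, with the sub-meter figure taken from the paragraph above and the 10-meter figure from aviation practice.

    # Illustrative sketch, not the researchers' software: an aviation-style
    # integrity check compares a "protection level" (a statistical bound on
    # the position error) against an "alert limit" the application tolerates.

    AIRCRAFT_ALERT_LIMIT_M = 10.0   # roughly the margin cited for aircraft
    CAR_ALERT_LIMIT_M = 0.5         # assumed sub-meter buffer for lane keeping

    def navigation_is_usable(protection_level_m, alert_limit_m):
        """The position estimate can be trusted for the task only if its
        error bound stays inside the margin the vehicle can tolerate."""
        return protection_level_m <= alert_limit_m

    # A 2-meter error bound is acceptable for an aircraft on approach, but
    # not for a car trying to hold its lane.
    print(navigation_is_usable(2.0, AIRCRAFT_ALERT_LIMIT_M))  # True
    print(navigation_is_usable(2.0, CAR_ALERT_LIMIT_M))       # False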

Needed: Multiple Sensors

This calls for other sensors. Optical sensors and inertial measurement units, or IMUs, measure velocity and acceleration. Radio and light waves — in the form of radar and lidar systems — determine the size, speed and distance of other objects.

"It's pretty complicated to just get the desired information, and evaluating safety on top of that — which is the risk of making a mistake in the process — is a level of complication above," Joerger said.

When learning to drive, humans develop a sense of how much time they need to avoid potential hazards: drifting out of their lane or off the road, pedestrians stepping into crosswalks, objects blocking the road. These avoidance maneuvers become intuitive over time, but getting machines to operate the same way in dynamic environments is no easy task.

Joerger and Spenko are researching a new navigation safety method, called receding horizon integrity, which involves continually predicting integrity levels so that HAVs are aware not only of their current surroundings and safety levels but also of the maneuvers they may need to make to remain safe throughout a trip.
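The receding-horizon idea can be pictured as a rolling check over a short look-ahead window. The sketch below is only an illustration of that concept, not the team's algorithm; the predicted error bounds and the alert limit are made-up numbers.

    # Illustrative only: before committing to the next maneuver, predict an
    # integrity bound (here, a position-error bound in meters) for each step
    # of a short look-ahead horizon and confirm the whole window stays safe.

    ALERT_LIMIT_M = 0.5  # assumed sub-meter margin for staying in a lane

    def horizon_is_safe(predicted_error_bounds_m):
        """True only if every step of the look-ahead keeps the predicted
        position-error bound inside the alert limit."""
        return all(bound <= ALERT_LIMIT_M for bound in predicted_error_bounds_m)

    # Predicted bounds grow as the car approaches, say, a GPS-denied underpass.
    predicted = [0.2, 0.25, 0.35, 0.6, 0.7]
    if not horizon_is_safe(predicted):
        # The vehicle still has time to slow down, change lanes or replan
        # before integrity degrades, instead of reacting after the fact.
        print("Predicted integrity shortfall ahead: replan the maneuver.")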

While systems to measure the integrity of GPS already exist, this study will be the first to quantify high integrity measurements for non-GPS sensors.

"Bringing safety to lidar-based navigation is something that nobody has done," Joerger said.

The methods the team is developing to determine safety levels of lidar will be applicable to any range-based sensor — even ones that don't exist yet.

In Joerger's lab, a miniature train topped with a lidar sensor travels along a figure-eight track surrounded by motion-capture Vicon cameras. The cameras are the same type of system used to create computer-generated characters from real actors' movements. Think "Avatar," or Gollum from "The Lord of the Rings."

As the train goes around, researchers compare its location as reported by the Vicon cameras, which they know to be accurate, with the location estimated by the lidar sensor. The fewer the discrepancies, the higher the integrity of the lidar-based navigation. Of primary concern for safety-critical applications are tough-to-observe discrepancies occurring as rarely as once in a billion samples. Therefore, the testbed is designed to run automatically for days at a time.
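The bookkeeping on the testbed can be thought of as comparing each lidar position estimate with the Vicon reference and counting how often the two disagree by more than a small tolerance. The snippet below is a hedged sketch of that idea; the function name, data format and 5-centimeter tolerance are assumptions, not the lab's actual code.

    import math

    def count_discrepancies(vicon_positions, lidar_positions, tolerance_m=0.05):
        """Count samples where the lidar estimate differs from the Vicon
        reference position (both given as (x, y) pairs in meters) by more
        than the tolerance."""
        failures = 0
        for (xv, yv), (xl, yl) in zip(vicon_positions, lidar_positions):
            if math.hypot(xl - xv, yl - yv) > tolerance_m:
                failures += 1
        return failures

    # Observing a 1-in-a-billion event directly would take on the order of
    # 10**9 samples, which is why the testbed runs unattended for days and
    # why the counts are combined with analytical error models rather than
    # relied on alone.
    print(count_discrepancies([(0.0, 0.0), (1.0, 1.0)],
                              [(0.01, 0.0), (1.2, 1.0)]))  # 1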

Predicting Integrity Levels

The research team will integrate multiple sensors into the system and, over the next two years, begin using the data it collects to predict future integrity levels. The team also plans to use the UA Cognitive and Autonomous Test car, or CAT vehicle, to assess the lidar system. The IIT robotics lab ultimately will take a test vehicle on the road to validate the research.

"To be able to contribute to something that will be around in our lifetime is really cool," said Nick Morris, who spent a year in the UA lidar lab as an undergraduate and was hired on as lab manager after earning his aerospace engineering degree last May.

The National Highway Traffic Safety Administration reports that in 2015 there were approximately 3,000 billion miles driven in the United States. In the same year, just over 35,000 people were killed in crashes on U.S. roadways. That's only one fatality for every 100 million miles driven.
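Spelled out with those rounded figures, the rate is a single division:

    35,000 fatalities ÷ 3,000 billion miles ≈ 1.2 × 10⁻⁸ fatalities per mile,

or on the order of one fatality per 100 million miles driven.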

Joerger believes HAVs have the potential to improve safety levels, save fuel, and benefit populations such as the elderly and disabled — but he doesn't think they will be replacing human drivers any time soon.

"It's still a long way from matching the human driver performance," he said.

Unlike companies that already are road-testing self-driving vehicles with human passengers, these researchers want to establish the safety of HAVs before they hit the road with people in them.

"My approach is, before we start doing this, why don't we use the best we can learn —from aviation, for example," Joerger said. "Testing will never give you anywhere close to 3,000 billion miles. So we need analytical methods. We need math."


Resources for the media

Mathieu Joerger

UA College of Engineering

520-621-2235

joerger@email.arizona.edu