Understanding the Decision-Making Process of Autonomous Vehicles


By Tina Reynolds

Shahin Atakishiyev is a deep learning researcher at the University of Alberta in Canada. One of his most significant contributions has been shedding light on how autonomous vehicles decide what to do. The research paper behind that work appears in this month’s IEEE Transactions on Intelligent Transportation Systems and examines the role that explainable artificial intelligence (AI) can play in making self-driving cars safer and more reliable.

Atakishiyev and his team ran simulations to get a better sense of how AVs should react on the road. Using a deep learning model, they analyzed the driving behavior of an autonomous vehicle across various scenarios. According to the team’s research, the most important factor is giving real-time rationales for a car’s decisions. This transparency empowers passengers to intervene when needed.
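The paper’s model code is not reproduced here, but the core idea of pairing each driving action with a machine-generated rationale can be sketched in a few lines. Everything in this sketch, from the `ExplainablePolicy` class to the feature dimension and action list, is an illustrative assumption rather than the team’s actual architecture:

```python
# Minimal sketch: a driving policy that outputs an action plus a rationale.
# The rationale here is a softmax weighting over input features, so the
# highest-weighted inputs "explain" the chosen action.
import torch
import torch.nn as nn

ACTIONS = ["maintain_speed", "brake", "steer_left", "steer_right"]
FEATURE_DIM = 16  # assumed size of the perception feature vector

class ExplainablePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(FEATURE_DIM, 32), nn.ReLU())
        self.action_head = nn.Linear(32, len(ACTIONS))  # what to do
        self.attention = nn.Linear(32, FEATURE_DIM)     # which inputs mattered

    def forward(self, features):
        hidden = self.encoder(features)
        action_logits = self.action_head(hidden)
        weights = torch.softmax(self.attention(hidden), dim=-1)
        return action_logits, weights

policy = ExplainablePolicy()
features = torch.randn(1, FEATURE_DIM)  # stand-in for one perception frame
logits, weights = policy(features)
action = ACTIONS[logits.argmax(dim=-1).item()]
top_feature = weights.argmax(dim=-1).item()
print(f"action={action}, most influential input feature index={top_feature}")
```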

In one case study, the researchers examined a Tesla Model S, testing how the vehicle would react to an altered speed limit sign. The findings showed that real-time feedback would allow riders to spot bad decision-making by these vehicles in advance. Atakishiyev emphasized the importance of delivering clear explanations to users, stating, “Ordinary people, such as passengers and bystanders, do not know how an autonomous vehicle makes real-time driving decisions.”
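One plausible way real-time feedback of this kind could surface to a rider is a plausibility check against mapped data. This is a hypothetical sketch; the threshold, the map prior, and the alert wording are invented for illustration and are not drawn from the Tesla tests:

```python
# Hedged sketch: flag a perceived speed-limit change that disagrees sharply
# with the mapped limit, so the rider can question the vehicle's decision.

def check_speed_limit(detected_kmh: float, map_prior_kmh: float,
                      max_jump_kmh: float = 30.0) -> str:
    """Flag detections that disagree sharply with the mapped limit."""
    if abs(detected_kmh - map_prior_kmh) > max_jump_kmh:
        return (f"ALERT: sign read as {detected_kmh:.0f} km/h but the map "
                f"says {map_prior_kmh:.0f} km/h; holding current speed, "
                f"please confirm.")
    return f"Speed limit updated to {detected_kmh:.0f} km/h."

# A tampered sign read as 85 km/h on a mapped 35 km/h road trips the alert:
print(check_speed_limit(detected_kmh=85, map_prior_kmh=35))
```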

As AVs continue to integrate into our society, it is vital that we educate passengers and others on their operation and safety. Atakishiyev’s research shows that driving behavior can be communicated through a variety of channels, ranging from audio to visualization, text, and even vibration. He noted, “Explanations can be delivered via audio, visualization, text, or vibration, and people may choose different modes depending on their technical knowledge, cognitive abilities, and age.” This variety allows an adaptable experience that adjusts to the preferences of each user.
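A delivery layer like the one Atakishiyev describes might route a single explanation to whichever mode a user prefers. The following sketch assumes a hypothetical `UserProfile` and simple string renderers; it is not an interface from the paper:

```python
# Illustrative sketch: routing one explanation to the delivery mode a user
# chose, following the audio / visual / text / vibration options quoted above.
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    preferred_mode: str  # "audio", "visual", "text", or "haptic"

def deliver(explanation: str, user: UserProfile) -> str:
    renderers = {
        "audio": lambda msg: f"[speaker] {msg}",
        "visual": lambda msg: f"[dashboard icon + caption] {msg}",
        "text": lambda msg: f"[screen] {msg}",
        "haptic": lambda _msg: "[seat vibration] (short alert pulse)",
    }
    render = renderers.get(user.preferred_mode, renderers["text"])
    return render(explanation)

print(deliver("Braking: pedestrian detected ahead.",
              UserProfile(name="rider", preferred_mode="audio")))
```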

The biggest challenge, as Atakishiyev outlined, is deciding what passengers should and should not be told, since each person has a preferred level of detail, complexity, and pace of information. “I would say explanations are becoming an integral component of AV technology,” he remarked, indicating a growing recognition of the need for transparency in autonomous systems.
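One way such personalization could look in practice is a verbosity setting that controls how much of the rationale a passenger sees. The three tiers below are an assumption for illustration, not categories proposed by the researchers:

```python
# Sketch of tailoring explanation detail to a per-user setting.

def explain(event: dict, detail: str = "brief") -> str:
    if detail == "minimal":
        return event["action"]                        # e.g. "Braking"
    if detail == "brief":
        return f"{event['action']}: {event['cause']}"
    # "full": include confidence and sensor source for technical users
    return (f"{event['action']}: {event['cause']} "
            f"(confidence {event['confidence']:.0%}, via {event['sensor']})")

event = {"action": "Braking", "cause": "cyclist entering lane",
         "confidence": 0.93, "sensor": "front camera"}
for level in ("minimal", "brief", "full"):
    print(f"{level:>7}: {explain(event, level)}")
```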

The research also highlights the need to evaluate an autonomous vehicle’s decision-making process after an error occurs. Atakishiyev hopes that this kind of analysis can help create safer vehicles in the future. By understanding how and why an autonomous vehicle makes certain choices, researchers can refine algorithms and improve overall safety measures.
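Post-error analysis of this kind presupposes that decisions are logged together with their inputs and rationales. A minimal sketch of such a trace, with invented record fields, might look like this:

```python
# Minimal sketch of a decision log for post-error review: every decision is
# recorded with its inputs and rationale so engineers can replay what the
# vehicle "thought" before a fault.
import json
import time

class DecisionLog:
    def __init__(self):
        self.records = []

    def record(self, action: str, rationale: str, inputs: dict) -> None:
        self.records.append({
            "t": time.time(),
            "action": action,
            "rationale": rationale,
            "inputs": inputs,
        })

    def dump(self) -> str:
        """Serialize the trace for offline analysis after an incident."""
        return json.dumps(self.records, indent=2)

log = DecisionLog()
log.record("brake", "sign read as 85 km/h, map says 35 km/h",
           {"detected_limit": 85, "map_limit": 35})
print(log.dump())
```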