Robert Stewart Distinguished Lecture: "Why is it so hard to make self-driving cars? Trustworthy autonomous systems," with Joseph Sifakis

Friday, April 1, 2022 - 1:00pm to 2:00pm

Why is it so hard to make self-driving cars? Trustworthy autonomous systems

Joseph Sifakis, Verimag Laboratory

Bio: Professor Joseph Sifakis is Emeritus Research Director at Verimag Laboratory. His current area of interest is the design of trustworthy autonomous systems, with a focus on self-driving cars. In 2007, he received the Turing Award for his contributions to the theory and application of model checking. He is a member of the French Academy of Sciences, the French National Academy of Engineering, Academia Europaea, the American Academy of Arts and Sciences, the US National Academy of Engineering, and the Chinese Academy of Sciences. He is a Grand Officer of the French National Order of Merit and a Commander of the French Legion of Honor. He received the Leonardo da Vinci Medal in 2012.

Abstract: Why is self-driving so hard? Despite the enthusiastic involvement of big technology companies and massive investment of many billions of dollars, the optimistic predictions that self-driving cars were "just around the corner" have gone utterly wrong. I argue that these difficulties emblematically illustrate the challenges raised by the vision of trustworthy autonomous systems. These are critical systems intended to replace human operators in complex organizations, very different from other intelligent systems such as game-playing robots or intelligent personal assistants. I discuss complexity limitations inherent in autonomous behavior, as well as in integration within complex cyber-physical and human environments. I argue that existing critical systems engineering techniques fall short of meeting this complexity challenge. I also argue that the emerging end-to-end AI-enabled solutions currently developed by industry fail to provide the required strong trustworthiness guarantees.

I advocate a hybrid design approach that combines model-based and data-based techniques and seeks tradeoffs between performance and trustworthiness. I also discuss the validation problem, emphasizing the need for rigorous simulation and testing techniques that allow technically sound safety evaluation.

I conclude that building trustworthy autonomous systems goes far beyond the current AI vision. To realize this vision, we need a new scientific foundation that enriches and extends traditional systems engineering with data-based techniques.