Rolf Ernst, Technical University Braunschweig, Germany
Selma Saidi, Technische Universität Dortmund, Germany
Dirk Ziegenbein, Robert Bosch GmbH, Germany
Sebastian Steinhorst, Technical University of Munich, Germany
Jyotirmoy Deshmukh, University of Southern California, United States
Christian Laugier, INRIA Grenoble, France
Two-Day Special Initiative
Fueled by the progress of artificial intelligence, autonomous systems are becoming integral parts of many Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) applications, such as automated driving, robotics, avionics and industrial automation. Autonomous systems are self-governed and self-adaptive systems that are designed to operate in an open and evolving environment that has not been completely defined at design time. This poses a unique challenge to the design and verification of dependable autonomous systems. The DATE Special Initiative on Autonomous Systems Design (ASD) on Thursday and Friday will include high-profile keynotes and panel discussions, as well as peer-reviewed papers, invited contributions and interactive sessions addressing these challenges.
The Thursday of the DATE Special Initiative on Autonomous Systems Design (ASD) will start with an opening session where industry leaders from Airbus, Porsche and Robert Bosch will talk about their visions of autonomous systems, the challenges they see in the development of autonomous systems, and how autonomous systems will impact the business in their industries. This input will be discussed in an open-floor panel with eminent speakers from academia. After the opening session, three sessions will present peer-reviewed papers on “Reliable Autonomous Systems: Dealing with Failure & Anomalies”, “Safety Assurance of Autonomous Vehicles” and “Designing Autonomous Systems: Experiences, Technology and Processes”. Furthermore, a special session will discuss the latest research on “Predictable Perception for Autonomous Systems”.
The Friday Interactive Day of the DATE Special Initiative on Autonomous Systems Design (ASD) features keynotes from industry leaders as well as interactive discussions initiated by short presentations on several hot topics. Presentations from General Motors and BMW on predictable perception, as well as a session on dynamic risk assessment will fuel the discussion on how to maximize safety in a technically feasible manner. Speakers from TTTech and APEX.AI will present insights into Motionwise and ROS2 as platforms for automated vehicles. Further sessions will highlight topics such as explainable machine learning, self-adaptation for robustness and self-awareness for autonomy, as well as cybersecurity for connected vehicles.
Organizers: Rolf Ernst, TU Braunschweig, DE Selma Saidi, TU Dortmund, DE
Fueled by the progress of artificial intelligence, autonomous systems are becoming integral parts of many Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) applications, such as automated driving, robotics, avionics and industrial automation. Autonomous systems are self-governed and self-adaptive systems that are designed to operate in an open and evolving environment that has not been completely defined at design time. This poses a unique challenge to the design and verification of dependable autonomous systems. In this opening session of the DATE Special Initiative on Autonomous Systems Design, industry leaders will talk about their visions of autonomous systems, the challenges they see in the development of autonomous systems, and how autonomous systems will impact the business in their industries. These inputs will be discussed in an open-floor panel.
Panelists:
Thomas Kropf (Robert Bosch GmbH)
Pascal Traverse (Airbus)
Juergen Bortolazzi (Porsche AG)
Peter Liggesmeyer (Fraunhofer IESE)
Joseph Sifakis (University of Grenoble/VERIMAG)
Sandeep Neema (DARPA)
10.1 Reliable Autonomous Systems: Dealing with Failure & Anomalies
Session co-chair: Rasmus Adler, Fraunhofer IESE, DE
Organizers: Rolf Ernst, TU Braunschweig, DE Selma Saidi, TU Dortmund, DE
Autonomous Systems need novel approaches to detect and handle failures and anomalies. The first paper introduces an approach that adapts the placement of applications on a vehicle platform via adjustment of optimization criteria under safety goal restrictions. The second paper presents a formal worst-case failover timing analysis for online verification to assure safe vehicle operation under failover safety constraints. The third paper proposes an explanation component that observes and analyses an autonomous system and tries to derive explanations for anomalous behavior.
Time
Label
Presentation Title Authors
08:00 CET
10.1.1
C-PO: A CONTEXT-BASED APPLICATION-PLACEMENT OPTIMIZATION FOR AUTONOMOUS VEHICLES Speaker: Tobias Kain, Volkswagen AG, Wolfsburg, Germany, DE Authors: Tobias Kain1, Hans Tompits2, Timo Frederik Horeis3, Johannes Heinrich3, Julian-Steffen Müller4, Fabian Plinke3, Hendrik Decke5 and Marcel Aguirre Mehlhorn4 1Volkswagen AG, AT; 2Technische Universität Wien, AT; 3Institut für Qualitäts- und Zuverlässigkeitsmanagement GmbH, DE; 4Volkswagen AG, DE; 5Volkswagen AG, Wolfsburg, DE Abstract Autonomous vehicles are complex distributed systems consisting of multiple software applications and computing nodes. Determining the assignment between these software applications and computing nodes is known as the application-placement problem. The input of this problem is a set of applications, their requirements, a set of computing nodes, and their provided resources. Due to the potentially large solution space of the problem, an optimization goal defines which solution is desired the most. However, the optimization goal used for the application-placement problem is not static but has to be adapted according to the current context the vehicle is experiencing. Therefore, an approach for a context-based determination of the optimization goal for a given instance of an application-placement problem is required. In this paper, we introduce C-PO, an approach to address this issue. C-PO ensures that if the safety level of a system drops due to an occurring failure, the optimization goal for the successively executed application-placement determination aims to restore the safety level. Once the highest safety level is reached, C-PO optimizes the application placement according to the current driving situation. Furthermore, we introduce two methods for dynamically determining the required level of safety.
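To make the context-dependent optimization goal concrete, the following Python sketch switches the placement objective with the current safety level and solves a miniature placement instance by exhaustive search. It is purely illustrative: the application names, resource numbers and scoring rules are invented here and are not taken from C-PO.

```python
# Minimal sketch of context-based application placement (illustrative only;
# names, weights and the exhaustive solver are assumptions, not the C-PO algorithm).
from itertools import product

APPS = {"perception": 4, "planning": 3, "logging": 1}   # resource demand (hypothetical units)
NODES = {"ecu0": 6, "ecu1": 6}                           # provided resources per node

def optimization_goal(safety_level: int) -> str:
    """Pick the goal from the current context: degraded safety -> restore safety,
    otherwise optimize for the current driving situation."""
    return "restore_safety" if safety_level < 3 else "minimize_latency"

def score(placement, goal):
    load = {n: 0 for n in NODES}
    for app, node in placement.items():
        load[node] += APPS[app]
    if any(load[n] > NODES[n] for n in NODES):
        return float("-inf")                             # infeasible placement
    if goal == "restore_safety":
        return -max(load.values())                       # balance load to keep failover headroom
    return -load["ecu0"]                                 # toy latency proxy: keep ecu0 lightly loaded

def place(goal):
    best, best_score = None, float("-inf")
    for assignment in product(NODES, repeat=len(APPS)):  # exhaustive search (small example)
        placement = dict(zip(APPS, assignment))
        s = score(placement, goal)
        if s > best_score:
            best, best_score = placement, s
    return best

print(place(optimization_goal(safety_level=2)))          # failure occurred -> restore safety
print(place(optimization_goal(safety_level=3)))          # nominal -> situation-specific goal
```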
08:15 CET
10.1.2
WORST-CASE FAILOVER TIMING ANALYSIS OF DISTRIBUTED FAIL-OPERATIONAL AUTOMOTIVE APPLICATIONS Speaker: Philipp Weiss, TU Munich, DE Authors: Philipp Weiss1, Sherif Elsabbahy1, Andreas Weichslgartner2 and Sebastian Steinhorst1 1TU Munich, DE; 2AUDI AG, DE Abstract Enabling fail-operational behavior of safety-critical software is essential to achieve autonomous driving. At the same time, automotive vendors have to regularly deliver over-the-air software updates. Here, the challenge is to enable a flexible and dynamic system behavior while offering, at the same time, a predictable and deterministic behavior of time-critical software. Thus, it is necessary to verify that timing constraints can be met even during failover scenarios. For this purpose, we present a formal analysis to derive the worst-case application failover time. Without such an automated worst-case failover timing analysis, it would not be possible to enable a dynamic behavior of safety-critical software within safe bounds. We support our formal analysis by conducting experiments on a hardware platform using a distributed fail-operational neural network. Our randomly generated worst-case results are as close as 6.0% below our analytically derived exact bound. Overall, our presented worst-case failover timing analysis allows conducting an automated analysis at run-time to verify that the system operates within the bounds of the failover timing constraint such that a dynamic and safe behavior of autonomous systems can be ensured.
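As a simple illustration of what a failover timing bound can look like, the sketch below sums assumed detection, state-staleness and activation delays and checks the result against a failover constraint at run time. The decomposition and all numbers are assumptions for illustration; they are not the formal analysis presented in the paper.

```python
# Illustrative worst-case failover time bound (a simple sum-of-delays model,
# assumed for this sketch; not the paper's analysis).
from dataclasses import dataclass

@dataclass
class FailoverParams:
    heartbeat_period_ms: float   # period of liveness messages from the primary
    detection_timeouts: int      # missed heartbeats before a failure is assumed
    checkpoint_period_ms: float  # state synchronisation period to the backup
    activation_delay_ms: float   # time to start/promote the backup replica

def worst_case_failover_ms(p: FailoverParams) -> float:
    detect = p.detection_timeouts * p.heartbeat_period_ms  # failure just after a heartbeat
    stale_state = p.checkpoint_period_ms                   # backup may miss one full checkpoint
    return detect + stale_state + p.activation_delay_ms

params = FailoverParams(heartbeat_period_ms=10, detection_timeouts=3,
                        checkpoint_period_ms=50, activation_delay_ms=20)
bound = worst_case_failover_ms(params)
assert bound <= 150, "failover timing constraint violated"  # run-time admission check
print(f"worst-case failover time: {bound} ms")
```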
08:31 CET
10.1.3
ANOMALY DETECTION AND CLASSIFICATION TO ENABLE SELF-EXPLAINABILITY OF AUTONOMOUS SYSTEMS Speaker: Verena Klös, TU Berlin, DE Authors: Florian Ziesche, Verena Klös and Sabine Glesner, TU Berlin, DE Abstract While the importance of autonomous systems in our daily lives and in the industry increases, we have to ensure that this development is accepted by their users. A crucial factor for a successful cooperation between humans and autonomous systems is a basic understanding that allows users to anticipate the behavior of the systems. Due to their complexity, complete understanding is neither achievable, nor desirable. Instead, we propose self-explainability as a solution. A self-explainable system autonomously explains behavior that differs from anticipated behavior. As a first step towards this vision, we present an approach for detecting anomalous behavior that requires an explanation and for reducing the huge search space of possible reasons for this behavior by classifying it into classes with similar reasons. We envision our approach to be part of an explanation component that can be added to any autonomous system.
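The two steps described in the abstract, detecting behavior that deviates from anticipated behavior and narrowing the search for reasons by classification, can be illustrated with the following toy sketch. Thresholds, feature vectors and anomaly classes are invented for the example and are not part of the proposed explanation component.

```python
# Simplified illustration: (1) flag behaviour deviating from a model of anticipated
# behaviour, (2) assign the anomaly to a class of cases with similar reasons.
import numpy as np

ANOMALY_CLASSES = {                      # centroids of known anomaly classes in a feature space
    "sensor_dropout": np.array([1.0, 0.0]),
    "actuator_degradation": np.array([0.0, 1.0]),
}

def detect_anomaly(predicted: np.ndarray, observed: np.ndarray, threshold: float = 0.5) -> bool:
    return float(np.linalg.norm(predicted - observed)) > threshold

def classify_anomaly(features: np.ndarray) -> str:
    return min(ANOMALY_CLASSES, key=lambda c: float(np.linalg.norm(features - ANOMALY_CLASSES[c])))

predicted, observed = np.array([0.0, 0.0]), np.array([0.9, 0.1])
if detect_anomaly(predicted, observed):
    cls = classify_anomaly(observed - predicted)
    print(f"anomalous behaviour, candidate explanation class: {cls}")  # -> sensor_dropout
```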
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session
Time
Label
IP.ASD_1.1
DECENTRALIZED AUTONOMOUS ARCHITECTURE FOR RESILIENT CYBER-PHYSICAL PRODUCTION SYSTEMS Speaker: Laurin Prenzel, TU Munich, DE Authors: Laurin Prenzel and Sebastian Steinhorst, TU Munich, DE Abstract Real-time decision-making is a key element in the transition from Reconfigurable Manufacturing Systems to Autonomous Manufacturing Systems. In Cyber-Physical Production Systems (CPPS) and Cloud Manufacturing, most decision-making algorithms are either centralized, creating vulnerabilities to failures, or decentralized, struggling to reach the performance of the centralized counterparts. In this paper, we combine the performance of centralized optimization algorithms with the resilience of a decentralized consensus. We propose a novel autonomous system architecture for CPPS featuring an automatic production plan generation, a functional validation, and a two-stage consensus algorithm, combining a majority vote on safety and optimality, and a unanimous vote on feasibility and authenticity. The architecture is implemented in a simulation framework. In a case study, we exhibit the timing behavior of the configuration procedure and subsequent reconfiguration following a device failure, showing the feasibility of a consensus-based decision-making process.
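The two-stage vote described in the abstract can be illustrated roughly as follows; field names, the voting order and the acceptance rule details are assumptions made for this sketch.

```python
# Sketch of a two-stage consensus: a unanimous vote on feasibility/authenticity,
# then a majority vote on safety/optimality (details assumed for illustration).
def two_stage_consensus(votes):
    """votes: list of dicts, one per node, e.g.
    {"feasible": True, "authentic": True, "safe": True, "optimal": False}"""
    # Stage 1: every node must confirm the production plan is feasible and authentic.
    if not all(v["feasible"] and v["authentic"] for v in votes):
        return "rejected (feasibility/authenticity)"
    # Stage 2: a majority must consider it safe and sufficiently optimal.
    approvals = sum(1 for v in votes if v["safe"] and v["optimal"])
    if approvals * 2 > len(votes):
        return "accepted"
    return "rejected (safety/optimality)"

votes = [
    {"feasible": True, "authentic": True, "safe": True,  "optimal": True},
    {"feasible": True, "authentic": True, "safe": True,  "optimal": True},
    {"feasible": True, "authentic": True, "safe": False, "optimal": True},
]
print(two_stage_consensus(votes))   # -> accepted (2 of 3 approve in stage 2)
```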
IP.ASD_1.2
PROVABLY ROBUST MONITORING OF NEURON ACTIVATION PATTERNS Speaker and Author: Chih-Hong Cheng, DENSO AUTOMOTIVE Deutschland GmbH, DE Abstract For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor in operation time if the input for the DNN is similar to the data used in DNN training. While recent results in monitoring DNN activation patterns provide a sound guarantee due to building an abstraction out of the training data set, reducing false positives due to slight input perturbation has been an issue towards successfully adapting the techniques. We address this challenge by integrating formal symbolic reasoning inside the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values with inputs (or features) subject to perturbation, before the abstraction function is applied to build the monitor. The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit, implying that one can record activation patterns with a fine-grained decision on the neuron value interval.
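A plain interval-based activation monitor, without the symbolic worst-case reasoning contributed by the paper, can be sketched as follows to show the underlying idea of recording neuron value intervals from training data and flagging out-of-interval activations at run time.

```python
# Minimal interval-based neuron activation monitor (simplified illustration only).
import numpy as np

class ActivationMonitor:
    def __init__(self, margin: float = 0.0):
        self.low = None
        self.high = None
        self.margin = margin            # optional widening to absorb small perturbations

    def fit(self, activations: np.ndarray):
        """activations: (num_training_samples, num_neurons) array."""
        self.low = activations.min(axis=0) - self.margin
        self.high = activations.max(axis=0) + self.margin

    def is_out_of_distribution(self, activation: np.ndarray) -> bool:
        return bool(np.any(activation < self.low) or np.any(activation > self.high))

rng = np.random.default_rng(0)
train_acts = rng.normal(size=(1000, 8))          # stand-in for penultimate-layer activations
monitor = ActivationMonitor(margin=0.1)
monitor.fit(train_acts)
print(monitor.is_out_of_distribution(train_acts[0]))        # False: seen during training
print(monitor.is_out_of_distribution(np.full(8, 10.0)))     # True: far outside recorded intervals
```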
11.1 Safety Assurance of Autonomous Vehicles
Session chair: Sebastian Steinhorst, TU Munich, DE
Session co-chair: Simon Schliecker, Volkswagen AG, DE
Organizers: Rolf Ernst, TU Braunschweig, DE Selma Saidi, TU Dortmund, DE
Safety of autonomous vehicles is a core requirement for their social acceptance. Hence, this session introduces three technical perspectives on this important field. The first paper presents a statistics-based view on risk taking when passing by pedestrians such that automated decisions can be taken with probabilistic reasoning. The second paper proposes a hardening of image classification for highly automated driving scenarios by identifying the similarity between target classes. The third paper improves the computational efficiency of safety verification of deep neural networks by reusing existing proof artifacts.
Time
Label
Presentation Title Authors
09:30 CET
11.1.1
AUTOMATED DRIVING SAFETY – THE ART OF CONSCIOUS RISK TAKING – MINIMUM LATERAL DISTANCES TO PEDESTRIANS Speaker: Bert Böddeker, private, DE Authors: Bert Böddeker1, Wilhard von Wendorff2, Nam Nguyen3, Peter Diehl4, Roland Meertens5 and Rolf Johannson6 1private, DE; 2SGS-TÜV Saar GmbH, DE; 3Hochschule München für angewandte Wissenschaften, DE; 4private, DE; 5private, NL; 6private, SE Abstract The announced release dates for Automated Driving Systems (ADS) with conditional (SAE-L3) and high (SAE-L4) levels of automation according to SAE J3016 are getting closer. Still, there is no established state of the art for proving the safety of these systems. The ISO 26262 for automotive functional safety is still valid for these systems but only covers risks from malfunctions of electric and electronic (E/E) systems. A framework for considering issues caused by weaknesses of the intended functionality itself is standardized in the upcoming release of the ISO 21448 – Safety of the Intended Functionality (SOTIF). Rich experience regarding limitations of safety performance of complex sensors can be found in this standard. In this paper, we highlight another aspect of SOTIF that becomes important for higher levels of automation, especially in urban areas: ‘conscious risk taking’. In traditional automotive systems, conflicting goal resolutions are generally left to the car driver. With SAE-L3, and at the latest with SAE-L4 ADS, the driver is no longer available for such decisions. Even ‘safe drivers’ do not use the safest possible driving behavior. In the example of occlusions next to the street, a driver balances the risk of occluded pedestrians against the speed of the traffic flow. Our aim is to make such decisions explicit and sufficiently safe. Using the example of crossing pedestrians, we show how to use statistics to derive a conscious quantitative risk-based decision from a previously defined acceptance criterion. The acceptance criterion is derived from accident statistics involving pedestrians.
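As a purely illustrative calculation of how an acceptance criterion can translate into a minimum lateral distance, the sketch below assumes a Gaussian model of lateral position error and searches for the smallest distance whose encroachment probability stays below a tolerated value. The error model and all numbers are assumptions and do not reproduce the paper's statistics.

```python
# Illustrative only: acceptance criterion (tolerated encroachment probability per
# pass-by) plus an assumed Gaussian lateral error model -> minimum lateral distance.
from math import erf, sqrt

def encroachment_probability(distance_m: float, sigma_m: float) -> float:
    """P(lateral error > distance) for a zero-mean Gaussian error (one-sided tail)."""
    return 0.5 * (1.0 - erf(distance_m / (sigma_m * sqrt(2.0))))

def minimum_lateral_distance(accepted_prob: float, sigma_m: float, step_m: float = 0.01) -> float:
    d = 0.0
    while encroachment_probability(d, sigma_m) > accepted_prob:
        d += step_m
    return d

accepted_prob = 1e-6   # hypothetical acceptance criterion per pass-by
sigma_m = 0.2          # assumed std. dev. of lateral tracking/perception error in metres
print(f"minimum lateral distance: {minimum_lateral_distance(accepted_prob, sigma_m):.2f} m")
```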
09:45 CET
11.1.2
ON SAFETY ASSURANCE CASE FOR DEEP LEARNING BASED IMAGE CLASSIFICATION IN HIGHLY AUTOMATED DRIVING Speaker: Himanshu Agarwal, HELLA GmbH & Co. KGaA, Lippstadt, Germany and Carl von Ossietzky University Oldenburg, Germany, DE Authors: Himanshu Agarwal1, Rafal Dorociak2 and Achim Rettberg3 1HELLA GmbH & Co. KGaA and Carl von Ossietzky University Oldenburg, DE; 2HELLA GmbH & Co. KGaA, DE; 3University of Applied Science Hamm-Lippstadt & University Oldenburg, DE Abstract Assessing the overall accuracy of deep learning classifier is not a sufficient criterion to argue for safety of classification based functions in highly automated driving. The causes of deviation from the intended functionality must also be rigorously assessed. In context of functions related to image classification, one of the causes can be the failure to take into account during implementation the classifier’s vulnerability to misclassification due to high similarity between the target classes. In this paper, we emphasize that while developing the safety assurance case for such functions, the argumentation over the appropriate implementation of the functionality must also address the vulnerability to misclassification due to class similarities. Using the traffic sign classification function as our case study, we propose to aid the development of its argumentation by: (a) conducting a systematic investigation of the similarity between the target classes, (b) assigning a corresponding classifier vulnerability rating to every possible misclassification, and (c) ensuring that the claims against the misclassifications that induce higher risk (scored on the basis of vulnerability and severity) are supported with more compelling sub-goals and evidences as compared to the claims against misclassifications that induce lower risk.
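The proposed argumentation aid can be pictured with a small sketch that rates the classifier's vulnerability to each misclassification from inter-class similarity, combines it with a severity rating and ranks the resulting risks; the rating scales and example classes are assumptions for illustration.

```python
# Toy risk ranking of misclassifications: vulnerability (from class similarity) x severity.
similarity = {            # similarity between target classes, e.g. from a similarity analysis (0..1)
    ("speed_30", "speed_80"): 0.8,
    ("speed_30", "stop"): 0.1,
    ("yield", "stop"): 0.4,
}
severity = {              # severity of acting on the wrong class (1 = minor .. 4 = severe)
    ("speed_30", "speed_80"): 4,
    ("speed_30", "stop"): 2,
    ("yield", "stop"): 1,
}

def vulnerability_rating(sim: float) -> int:
    return 3 if sim >= 0.7 else 2 if sim >= 0.3 else 1   # coarse three-level rating

risks = {pair: vulnerability_rating(similarity[pair]) * severity[pair] for pair in similarity}
for pair, risk in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{pair[0]:>8} -> {pair[1]:<8} risk score {risk}")
# Higher-risk misclassifications would need stronger sub-goals and evidence in the assurance case.
```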
10:00 CET
11.1.3
CONTINUOUS SAFETY VERIFICATION OF NEURAL NETWORKS Speaker: Rongjie Yan, Institute of Software, Chinese Academy of Sciences, CN Authors: Chih-Hong Cheng1 and Rongjie Yan2 1DENSO AUTOMOTIVE Deutschland GmbH, Eching, Germany, DE; 2Institute of Software, Chinese Academy of Sciences, CN Abstract Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually perfecting a DNN-based perception can make the previously established result of safety verification no longer valid. This can occur either due to the newly encountered examples (i.e., input domain enlargement) inside the Operational Design Domain or due to the subsequent parameter fine-tuning activities of a DNN. This paper considers approaches to transfer results established in the previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that only require formally analyzing a small part of the DNN in the new problem. The overall concept is evaluated in a 1/10-scale vehicle equipped with a DNN controller to determine the visual waypoint from the perceived image.
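One of the reuse ideas, the Lipschitz-constant argument, can be illustrated with a generic sufficient check: if the classification margin at a verified input is m and the margin function of the fine-tuned network is L-Lipschitz, every input within radius m / L keeps the same decision, so the old robustness claim transfers whenever that radius still covers the previously verified perturbation bound. The sketch below uses this generic argument with made-up numbers; it is not the paper's specific set of conditions.

```python
# Generic Lipschitz-based reuse check (simplified illustration, not the paper's method).
def certified_radius(margin: float, lipschitz_constant: float) -> float:
    return margin / lipschitz_constant

def previous_result_still_valid(required_radius: float,
                                new_margin: float,
                                new_lipschitz_constant: float) -> bool:
    """Cheap sufficient check after fine-tuning; if it fails, fall back to full re-verification."""
    return certified_radius(new_margin, new_lipschitz_constant) >= required_radius

required_radius = 0.03   # perturbation bound established in the previous proof
print(previous_result_still_valid(required_radius, new_margin=0.5, new_lipschitz_constant=12.0))  # True
print(previous_result_still_valid(required_radius, new_margin=0.2, new_lipschitz_constant=15.0))  # False -> re-verify
```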
Session co-chair: Rolf Ernst, TU Braunschweig, DE
Autonomy is in the air: on the one hand, automation is clearly a lever to improve safety margins; on the other hand, technologies are maturing, pulled by the automotive market. In this context, Airbus is building a concept airplane from a blank sheet with the objective of improving human-machine teaming for better overall performance. The foundation of this new concept is that, when they are made aware of the “big picture” with enough time to analyze it, humans are still the best at making strategic decisions. Autonomy technologies are the main enabler of this concept. Benefits are expected both in a two-crew cockpit and eventually in Single Pilot Operations. Bio: Pascal Traverse is General Manager for the Autonomy “fast track” at Airbus. Autonomy is a top technical focus area for Airbus. As General Manager, he creates a vision and coordinates R&T activities with the objective of accelerating the increase of knowledge in Airbus. Before his nomination last year, Pascal was coordinating Airbus Commercial R&T activities related to the cockpit and flight operations. Earlier in his career, Pascal participated in the A320/A330/A340/A380 fly-by-wire developments, certification harmonization with FAA and EASA, management of Airbus safety activities and even of quality activities in the A380 Final Assembly Line. Pascal holds Master's and Doctorate degrees in embedded systems from N7, conducted research at LAAS and UCLA, and is a 3AF Fellow.
Time
Label
Presentation Title Authors
15:00 CET
K.5.1
AUTONOMY: ONE STEP BEYOND ON COMMERCIAL AVIATION Speaker and Author: Pascal Traverse, Airbus, FR Abstract Autonomy is in the air: on the one hand, automation is clearly a lever to improve safety margins; on the other hand, technologies are maturing, pulled by the automotive market. In this context, Airbus is building a concept airplane from a blank sheet with the objective of improving human-machine teaming for better overall performance. The foundation of this new concept is that, when they are made aware of the “big picture” with enough time to analyze it, humans are still the best at making strategic decisions. Autonomy technologies are the main enabler of this concept. Benefits are expected both in a two-crew cockpit and eventually in Single Pilot Operations.
12.1 Designing Autonomous Systems: Experiences, Technology and Processes
Session co-chair: Philipp Mundhenk, Robert Bosch GmbH, DE
Organizers: Rolf Ernst, TU Braunschweig, DE Selma Saidi, TU Dortmund, DE
This session discusses technology innovation, experiences and processes in building autonomous systems. The first paper presents Fünfliber, a nano-sized Unmanned Aerial Vehicle (UAV) built on a modular open-hardware robotic platform controlled by a parallel ultra-low-power system-on-chip (PULP) capable of running sophisticated autonomous DNN-based navigation workloads. The second paper presents an abstracted runtime for managing adaptation and integrating FPGA accelerators into autonomous software frameworks; a case-study integration into ROS is demonstrated. The third paper discusses current processes in engineering dependable collaborative autonomous systems and new business models based on agile approaches to innovation management.
Time
Label
Presentation Title Authors
16:00 CET
12.1.1
FÜNFLIBER-DRONE: A MODULAR OPEN-PLATFORM 18-GRAMS AUTONOMOUS NANO-DRONE Speaker: Hanna Müller, Integrated Systems Laboratory, CH Authors: Hanna Mueller1, Daniele Palossi2, Stefan Mach1, Francesco Conti3 and Luca Benini4 1Integrated Systems Laboratory – ETH Zurich, CH; 2Integrated Systems Laboratory – ETH Zurich, Switzerland, Dalle Molle Institute for Artificial Intelligence – University of Lugano and SUPSI, CH; 3Department of Electrical, Electronic and Information Engineering – University of Bologna, Italy, IT; 4Integrated Systems Laboratory – ETH Zurich, Department of Electrical, Electronic and Information Engineering – University of Bologna, CH Abstract Miniaturizing an autonomous robot is a challenging task – not only the mechanical but also the electrical components have to operate within limited space, payload, and power. Furthermore, the algorithms for autonomous navigation, such as state-of-the-art (SoA) visual navigation deep neural networks (DNNs), are becoming increasingly complex, striving for more flexibility and agility. In this work, we present a sensor-rich, modular, nano-sized Unmanned Aerial Vehicle (UAV), almost as small as a five Swiss Franc coin – called Fünfliber – with a total weight of 18g and 7.2cm in diameter. We conceived our UAV as an open-source hardware robotic platform, controlled by a parallel ultra-low power (PULP) system-on-chip (SoC) with a wide set of onboard sensors, including three cameras (i.e., infrared, optical flow, and standard QVGA), multiple Time-of-Flight (ToF) sensors, a barometer, and an inertial measurement unit. Our system runs the tasks necessary for a flight controller (sensor acquisition, state estimation, and low-level control), requiring only 10% of the computational resources available aboard, consuming only 9mW – 13x less than an equivalent Cortex M4-based system. Pushing our system at its limit, we can use the remaining onboard computational power for sophisticated autonomous navigation workloads, as we showcase with an SoA DNN running at up to 18Hz, with a total electronics’ power consumption of 271mW.
16:15 CET
12.1.2
RUNTIME ABSTRACTION FOR AUTONOMOUS ADAPTIVE SYSTEMS ON RECONFIGURABLE HARDWARE Speaker: Alex R Bucknall, University of Warwick, GB Authors: Alex R. Bucknall1 and Suhaib A. Fahmy2 1University of Warwick, GB; 2KAUST, SA Abstract Autonomous systems increasingly rely on on-board computation to avoid the latency overheads of offloading to more powerful remote computing. This requires the integration of hardware accelerators to handle the complex computations demanded by data-intensive sensors. FPGAs offer hardware acceleration with ample flexibility and interfacing capabilities when paired with general purpose processors, with the ability to reconfigure at runtime using partial reconfiguration (PR). Managing dynamic hardware is complex and has been left to designers to address in an ad-hoc manner, without first-class integration in autonomous software frameworks. This paper presents an abstracted runtime for managing adaptation of FPGA accelerators, including PR and parametric changes, that presents as a typical interface used in autonomous software systems. We present a demonstration using the Robot Operating System (ROS), showing negligible latency overhead as a result of the abstraction.
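The kind of abstraction argued for can be pictured as a service-style interface behind which the runtime decides whether a request needs partial reconfiguration or only a parameter write. All class and method names in the sketch are invented for illustration; they are not the paper's API and do not use the actual ROS interfaces.

```python
# Hypothetical illustration of an abstracted FPGA runtime: the application asks for a
# function; the runtime hides whether this triggers partial reconfiguration (PR) or
# only a parametric change. All names are invented for this sketch.
class FpgaRuntime:
    def __init__(self):
        self.loaded_bitstream = None

    def request(self, function: str, params: dict):
        """Service-style entry point, comparable in spirit to a ROS service callback."""
        if function != self.loaded_bitstream:
            self._partial_reconfigure(function)   # swap the accelerator region
        self._write_parameters(params)            # cheap update, no reconfiguration
        return {"status": "ready", "function": function}

    def _partial_reconfigure(self, function: str):
        print(f"loading partial bitstream for '{function}'")
        self.loaded_bitstream = function

    def _write_parameters(self, params: dict):
        print(f"updating parameters: {params}")

runtime = FpgaRuntime()
runtime.request("stereo_matching", {"max_disparity": 64})
runtime.request("stereo_matching", {"max_disparity": 128})   # parametric change only
```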
16:30 CET
IP.ASD_2.1
SYSTEMS ENGINEERING ROADMAP FOR DEPENDABLE AUTONOMOUS CYBER-PHYSICAL SYSTEMS Speaker and Author: Rasmus Adler, Fraunhofer IESE, DE Abstract Autonomous cyber-physical systems have enormous potential to make our lives more sustainable, more comfortable, and more economical. Artificial Intelligence and connectivity enable autonomous behavior, but often stand in the way of market launch. Traditional engineering techniques are no longer sufficient to achieve the desired dependability; current legal and normative regulations are inappropriate or insufficient. This paper discusses these issues, proposes advanced systems engineering to overcome these issues, and provides a roadmap by structuring fields of action.
16:31 CET
12.1.3
DDI: A NOVEL TECHNOLOGY AND INNOVATION MODEL FOR DEPENDABLE, COLLABORATIVE AND AUTONOMOUS SYSTEMS Speaker: Eric Armengaud, Armengaud Innovate GmbH, AT Authors: Eric Armengaud1, Daniel Schneider2, Jan Reich2, Ioannis Sorokos2, Yiannis Papadopoulos3, Marc Zeller4, Gilbert Regan5, Georg Macher6, Omar Veledar7, Stefan Thalmann8 and Sohag Kabir9 1Armengaud Innovate GmbH, AT; 2Fraunhofer IESE, DE; 3University of Hull, GB; 4Siemens AG, DE; 5Lero @DKIT, IE; 6Graz University of Technology, AT; 7AVL List GmbH, AT; 8University of Graz, AT; 9University of Bradford, GB Abstract Digital transformation fundamentally changes established practices in the public and private sectors. Hence, it represents an opportunity to improve the value creation processes (e.g., “industry 4.0”) and to rethink how to address customers’ needs such as “data-driven business models” and “Mobility-as-a-Service”. Dependable, collaborative and autonomous systems are playing a central role in this transformation process. Furthermore, the emergence of data-driven approaches combined with autonomous systems will lead to new business models and market dynamics. Innovative approaches to reorganise the value creation ecosystem, to enable distributed engineering of dependable systems and to answer urgent questions such as liability will be required. Consequently, digital transformation requires a comprehensive multi-stakeholder approach which properly balances technology, ecosystem and business innovation. The targets of this paper are (a) to introduce digital transformation and the role of / opportunities provided by autonomous systems, (b) to introduce Digital Dependability Identities (DDI) – a technology for dependability engineering of collaborative, autonomous CPS, and (c) to propose an appropriate agile approach for innovation management based on business model innovation and co-entrepreneurship.
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session
Time
Label
Presentation Title Authors
17:00 CET
IP.ASD_2.1
SYSTEMS ENGINEERING ROADMAP FOR DEPENDABLE AUTONOMOUS CYBER-PHYSICAL SYSTEMS Speaker and Author: Rasmus Adler, Fraunhofer IESE, DE Abstract Autonomous cyber-physical systems have enormous potential to make our lives more sustainable, more comfortable, and more economical. Artificial Intelligence and connectivity enable autonomous behavior, but often stand in the way of market launch. Traditional engineering techniques are no longer sufficient to achieve the desired dependability; current legal and normative regulations are inappropriate or insufficient. This paper discusses these issues, proposes advanced systems engineering to overcome these issues, and provides a roadmap by structuring fields of action.
13.1 Predictable Perception for Autonomous Systems
Session chair: Soheil Samii, General Motors R&D, US
Session co-chair: Qing Rao, BMW AG, DE
Organizers: Selma Saidi, TU Dortmund, DE Rolf Ernst, TU Braunschweig, DE
Modern autonomous systems – such as autonomous vehicles or robots – consist of two major components: (a) the decision making unit, which is often made up of one or more feedback control loops, and (b) a perception unit that feeds the environmental state to the control unit and is made up of camera, radar and lidar sensors and their associated processing algorithms and infrastructure. While there has been a lot of work on the formal verification of the decision making (or the control) unit, the ultimate correctness of the autonomous system also heavily relies on the behavior of the perception unit. The verification of the correctness of the perception unit is however significantly more challenging and not much progress has been made here. This is because the algorithms used by perception units now increasingly rely on machine learning techniques (like deep neural networks) that run on complex hardware made up of CPU+accelerator platforms. The accelerators are made up of GPUs, TPUs and FPGAs. This combination of algorithmic and implementation-platform complexity and heterogeneity currently makes it very difficult to provide either functional or timing correctness guarantees of the perception unit, while both of these guarantees are needed to ensure the correct functioning of the control loop and the overall autonomous system. This is a part of the overall challenge of verifying the correctness of autonomous systems. This session will feature four invited talks, with each of them focusing on different aspects of this problem – some on the timing predictability of the perception unit, others on the functional correctness of the processing algorithms used in the perception units, and the remaining on the reliability and the performance/cost tradeoffs involved in designing perception units for autonomous systems.
Time
Label
Presentation Title Authors
17:30 CET
13.1.1
TIMING-PREDICTABLE VISION PROCESSING FOR AUTONOMOUS SYSTEMS Speaker: Tanya Amert, UNC Chapel Hill, US Authors: Tanya Amert1, Michael Balszun2, Martin Geier3, F. Donelson Smith4, Jim Anderson4 and Samarjit Chakraborty5 1University of North Carolina at Chapel Hill, US; 2TU München, DE; 3TU Munich, DE; 4University of North Carolina at Chapel Hill, US; 5UNC Chapel Hill, US Abstract Vision processing for autonomous systems today involves implementing machine learning algorithms and vision processing libraries on embedded platforms consisting of CPUs, GPUs and FPGAs. Because many of these use closed-source proprietary components, it is very difficult to perform any timing analysis on them. Even measuring or tracing their timing behavior is challenging, although it is the first step towards reasoning about the impact of different algorithmic and implementation choices on the end-to-end timing of the vision processing pipeline. In this paper we discuss some recent progress in developing tracing, measurement and analysis infrastructure for determining the timing behavior of vision processing pipelines implemented on state-of-the-art FPGA and GPU platforms.
17:45 CET
13.1.2
BOUNDING PERCEPTION NEURAL NETWORK UNCERTAINTY FOR SAFE CONTROL OF AUTONOMOUS SYSTEMS Speaker: Qi Zhu, Northwestern University, US Authors: Zhilu Wang1, Chao Huang2, Yixuan Wang1, Clara Hobbs3, Samarjit Chakraborty4 and Qi Zhu1 1Northwestern University, US; 2Department of Electrical and Computer Engineering, Northwestern University, US; 3University of North Carolina at Chapel Hill, US; 4UNC Chapel Hill, US Abstract Future autonomous systems will rely on advanced sensors and deep neural networks for perceiving the environment, and then utilize the perceived information for system planning, control, adaptation, and general decision making. However, due to the inherent uncertainties from the dynamic environment and the lack of methodologies for predicting neural network behavior, the perception modules in autonomous systems often cannot provide deterministic guarantees and may sometimes lead the system into unsafe states (e.g., as evidenced by a number of high-profile accidents with experimental autonomous vehicles). This has significantly impeded the broader application of machine learning techniques, particularly those based on deep neural networks, in safety-critical systems. In this paper, we will discuss these challenges, define open research problems, and introduce our recent work in developing formal methods for quantitatively bounding the output uncertainty of perception neural networks with respect to input perturbations, and leveraging such bounds to formally ensure the safety of system control. Unlike most existing works that only focus on either the perception module or the control module, our approach provides a holistic end-to-end framework that bounds the perception uncertainty and addresses its impact on control.
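A minimal example of bounding a network's output under input perturbation is interval bound propagation through a small ReLU network, as sketched below with randomly chosen weights; this generic technique stands in for, but is not, the formal methods developed by the authors.

```python
# Toy interval bound propagation: compute an output interval under an input
# perturbation bound and check it against a safety margin before handing it to
# the controller. Weights, thresholds and sizes are assumed for illustration.
import numpy as np

def interval_affine(low, high, W, b):
    """Propagate an input box [low, high] through x -> W @ x + b."""
    centre, radius = (low + high) / 2.0, (high - low) / 2.0
    new_centre = W @ centre + b
    new_radius = np.abs(W) @ radius
    return new_centre - new_radius, new_centre + new_radius

def interval_relu(low, high):
    return np.maximum(low, 0.0), np.maximum(high, 0.0)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x = np.array([0.2, -0.1, 0.4])   # nominal perception input (e.g. extracted features)
eps = 0.05                       # assumed input perturbation bound
low, high = interval_affine(x - eps, x + eps, W1, b1)
low, high = interval_relu(low, high)
low, high = interval_affine(low, high, W2, b2)

print(f"output interval under perturbation: [{low[0]:.3f}, {high[0]:.3f}]")
safe = high[0] - low[0] < 1.0    # pass the bound to the control layer only if tight enough
print("uncertainty acceptable for control:", safe)
```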
18:00 CET
13.1.3
HARDWARE- AND SITUATION-AWARE SENSING FOR ROBUST CLOSED-LOOP CONTROL SYSTEMS Speaker: Dip Goswami, Eindhoven University of Technology, NL Authors: Sayandip De1, Yingkai Huang2, Sajid Mohamed1, Dip Goswami1 and Henk Corporaal3 1Eindhoven University of Technology, NL; 2Electronic Systems Group, Eindhoven University of Technology, NL; 3TU/e (Eindhoven University of Technology), NL Abstract While vision is an attractive alternative to many sensors targeting closed-loop controllers, it comes with high time-varying workload and robustness issues when targeted to edge devices with limited energy, memory and computing resources. Replacing classical vision processing pipelines, e.g., lane detection using Sobel filter, with deep learning algorithms is a way to deal with the robustness issues while hardware-efficient implementation is crucial for their adaptation for safe closed-loop systems. However, while implemented on an embedded edge device, the performance of these algorithms highly depends on their mapping on the target hardware and the situation encountered by the system. That is, first, the timing performance numbers (e.g., latency, throughput) depend on the algorithm schedule, i.e., what part of the AI workload runs where (e.g., GPU, CPU) and their invocation frequency (e.g., how frequently we run a classifier). Second, the perception performance (e.g., detection accuracy) is heavily influenced by the situation – e.g., snowy and sunny weather conditions provide very different lane detection accuracy. These factors directly influence the closed-loop performance, for example, the lane-following accuracy in a lane-keep assist system (LKAS). We propose a hardware- and situation-aware design of AI perception where the idea is to define the situations by a set of relevant environmental factors (e.g., weather, road etc. in an LKAS). We design the learning algorithms and parameters, overall hardware mapping and its schedule taking the situation into account. We show the effectiveness of our approach considering a realistic LKAS case study on a heterogeneous NVIDIA AGX Xavier platform in a hardware-in-the-loop framework. Our approach provides robust LKAS designs with 32% better performance compared to traditional approaches.
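The situation-aware selection of perception variant, hardware mapping and invocation rate can be pictured as a simple configuration table consulted at run time, as in the sketch below; situations, latencies and mappings are invented values, not results from the paper.

```python
# Sketch of a situation-aware configuration table: the perception variant, its mapping
# and invocation period are chosen from the detected situation, subject to a deadline.
CONFIGS = {
    # situation: (model variant, device, invocation period in ms, expected latency in ms)
    "sunny_highway": ("lane_net_small", "GPU", 33, 12),
    "snow_urban":    ("lane_net_large", "GPU", 50, 35),
    "low_battery":   ("lane_net_small", "CPU", 100, 60),
}

def select_configuration(situation: str, deadline_ms: float):
    model, device, period, latency = CONFIGS[situation]
    if latency > deadline_ms:
        raise RuntimeError(f"no feasible configuration for {situation}")
    return {"model": model, "device": device, "period_ms": period}

print(select_configuration("snow_urban", deadline_ms=40))
```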
18:15 CET
13.1.4
ORCHESTRATION OF PERCEPTION SYSTEMS FOR RELIABLE PERFORMANCE IN HETEROGENEOUS PLATFORMS Speaker: Soumyajit Dey, Indian Institute of Technology Kharagpur, IN Authors: Anirban Ghose1, Srijeeta Maity2, Arijit Kar1 and Soumyajit Dey3 1Indian Institute of Technology, Kharagpur, IN; 2student, IN; 3IIT Kharagpur, IN Abstract Delivering driving comfort in this age of connected mobility is one of the primary goals of semi-autonomous perception systems increasingly being used in modern automobiles. The performance of such perception systems is a function of execution rate, which demands on-board platform-level support. With the advent of GPGPU compute support in automobiles, there exists an opportunity to adaptively enable higher execution rates for such Advanced Driver Assistance System (ADAS) tasks subject to different vehicular driving contexts. This can be achieved through a combination of program-level locality optimizations such as kernel fusion and thread coarsening and core-level DVFS techniques, while keeping in mind their effects on task-level deadline requirements and platform-level thermal reliability. In this communication, we present a future-proof, learning-based adaptive scheduling framework that strives to deliver reliable and predictable performance of ADAS tasks while accommodating increased task-level throughput requirements.
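A tiny example of the platform-level knob involved is deadline-aware frequency selection: pick the lowest operating point whose scaled execution time still meets the ADAS task deadline, leaving thermal headroom. Cycle counts and operating points in the sketch are assumed values; the actual framework combines such choices with kernel fusion, thread coarsening and learning-based scheduling.

```python
# Toy DVFS admission check for an ADAS task under a deadline (assumed numbers).
OPERATING_POINTS_MHZ = [600, 900, 1200, 1500]   # available accelerator frequencies
TASK_CYCLES = 9e8                               # assumed worst-case cycles of the perception kernels

def execution_time_ms(freq_mhz: float) -> float:
    return TASK_CYCLES / (freq_mhz * 1e6) * 1e3

def lowest_feasible_frequency(deadline_ms: float) -> int:
    for f in OPERATING_POINTS_MHZ:              # ascending: prefer the coolest feasible point
        if execution_time_ms(f) <= deadline_ms:
            return f
    raise RuntimeError("deadline cannot be met at any operating point")

print(lowest_feasible_frequency(deadline_ms=1000))   # -> 900 MHz for a 1.0 s deadline
```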
Organizers: Rolf Ernst, TU Braunschweig, DE Selma Saidi, TU Dortmund, DE
In the virtual reception session of the DATE Special Initiative on Autonomous Systems Design, we will (1) enjoy the talk by Dr. Gabor Karsai (Vanderbilt University) on “Towards Assurance-based Learning-enabled Cyber-Physical Systems”, (2) introduce the Friday topics, (3) exchange thoughts on autonomous systems design, and (4) collect feedback regarding the special initiative.
Time
Label
Presentation Title Authors
18:30 CET
ASD.REC.1
TOWARDS ASSURANCE-BASED LEARNING-ENABLED CYBER-PHYSICAL SYSTEMS Speaker and Author: Gabor Karsai, Vanderbilt University, US Abstract This talk will provide an overview of the DARPA Assured Autonomy program and give project examples.
The Friday Interactive Day of the DATE Special Initiative on Autonomous Systems Design (ASD) features keynotes from industry leaders as well as interactive discussions initiated by short presentations on several hot topics. Presentations from General Motors and BMW on predictable perception, as well as a session on dynamic risk assessment will fuel the discussion on how to maximize safety in a technically feasible manner. Speakers from TTTech and APEX.AI will present insights into Motionwise and ROS2 as platforms for automated vehicles. Further sessions will highlight topics such as explainable machine learning, self-adaptation for robustness and self-awareness for autonomy, as well as cybersecurity for connected vehicles.
“Argo AI’s mission and technology at a glance” by Alexandre Haag, Managing Director, Argo AI GmbH
ASDW05-02 Dynamic Risk Assessment in Autonomous Systems
Session Start Fri, 09:00 Session End Fri, 09:55
Organizers / Chairs:
Peter Liggesmeyer, Fraunhofer IESE
Rasmus Adler, Fraunhofer IESE
Richard Hawkins, University of York
Session Abstract: An autonomous system is capable of independently achieving a predefined goal in accordance with the demands of the current situation. In safety-critical applications, the operational situations may demand some actions from the system in order to keep risks at an acceptable level. This motivates the implementation of algorithms that estimate, assess and control risks during operation. In particular, the risk assessment at runtime is challenging as it implies moral decision making about the acceptability of risks: "How safe is safe enough?". However, it is also challenging to find a suitable notion of risk. ISO and IEC standards define the term "risk" differently, following two "root" definitions: "combination of the probability of occurrence of harm, and the severity of that harm" and "effect of uncertainty on objectives". The first definition is related to the way integrity levels like SIL and ASIL are determined at design time. In the session, we will discuss to what extent existing design-time approaches can be adopted to implement autonomous risk management at runtime. For instance, is it reasonable to implement algorithms that determine integrity levels at runtime?
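Following the first of the two quoted root definitions, a minimal runtime risk check could look like the sketch below, where risk is the product of an estimated probability of harm and a severity rating and is compared against an acceptance threshold; all numbers and situations are illustrative assumptions.

```python
# Minimal runtime risk check (risk = probability of harm x severity); the numbers,
# situations and acceptance threshold are illustrative assumptions only.
def runtime_risk(p_harm: float, severity: float) -> float:
    return p_harm * severity

ACCEPTABLE_RISK = 1e-3   # application-specific acceptance criterion ("how safe is safe enough?")

situations = {
    "clear_road":        (1e-6, 10.0),
    "occluded_crossing": (5e-4, 10.0),
}

for name, (p_harm, severity) in situations.items():
    risk = runtime_risk(p_harm, severity)
    action = "continue" if risk <= ACCEPTABLE_RISK else "mitigate (e.g. reduce speed)"
    print(f"{name}: risk={risk:.1e} -> {action}")
```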
Speakers:
Detlev Richter, TÜV SÜD: Digital twin-based hazard analysis at runtime for resilient production
Simon Burton, Fraunhofer IKS: Prerequisites for dynamic risk management
Patrik Feth, Sick AG: Sensors for Dynamic Risk Assessment
Michael Woon, retrospect: Being Certain of Uncertainty in Risk
ASDW05-03 Cybersecurity for Connected Autonomous Vehicles
Session Start Fri, 10:00 Session End Fri, 10:55
Organizers / Chairs:
Sebastian Steinhorst, Technical University of Munich, Germany
Mohammad Hamad, Technical University of Munich, Germany
Session Abstract: Today’s vehicles are increasingly connected and tomorrow’s vehicles will be automated, autonomous, capable of sensing their environment and navigating through cities without human input. This comes at the cost of a new set of threats and cyber-attacks that can yield high recall costs, property loss, and even jeopardize human safety. In this session, three partners of the nIoVe H2020 EU project will present security challenges and solutions to improve future autonomous vehicles’ security. The session starts with a short presentation about insights on the challenges and limitations of providing cyberthreat protection for public-transport autonomous shuttles. The second talk discusses the need to make autonomous vehicles proactively able to react to intrusions and the challenges of achieving such a capability. The last talk addresses further cyber-security challenges that nIoVe aims to solve for autonomous vehicles. The session will conclude with an open discussion of the introduced challenges and other related aspects.
Talks:
Niels Nijdam, University of Geneva (UNIGE), Switzerland The Perils of Cybersecurity in Connected Automated Vehicles
Mohammad Hamad, Technical University of Munich (TUM), Germany Toward a Multi-layer Intrusion Response System for Autonomous Vehicles
Konstantinos Votis, Centre for Research and Technology Hellas (CERTH/ITI), Greece Cyber-security Solutions for Autonomous Vehicles
ASDW05-04 Self-adaptive safety- and mission-critical CPS: wishful thinking or absolute necessity?
Session Start Fri, 11:00 Session End Fri, 11:55
Organizers / Chairs:
Andy Pimentel (University of Amsterdam, Netherlands)
Martina Maggio (University of Saarland, Germany)
Session Abstract: Due to the increasing performance demands of mission- and safety-critical Cyber-Physical Systems (CPS), these systems exhibit a rapidly growing complexity, manifested by an increasing number of (distributed) computational cores and application components connected via complex networks. However, with the growing complexity and interconnectivity of these systems, the chances of hardware failures as well as disruptions due to cyber-attacks will also quickly increase. System adaptivity, for example in the form of dynamic remapping of application components to processing cores, represents a promising technique to handle this challenging scenario. In this session, we address the (consequences of the) idea of deploying runtime adaptivity to mission- and safety-critical CPS, yielding dynamically morphing systems, to establish robustness against computational hurdles, component failures, and cyber-attacks.
Speakers:
Clemens Grelck (University of Amsterdam, Netherlands) The TeamPlay Coordination Language for Dependable Systems
Sasa Misailovic (University of Illinois at Urbana-Champaign, USA) Programming Systems for Helping Developers Cope with Uncertainty
Stefanos Skalistis (Raytheon Technologies, Ireland) Certification challenges of adaptive avionics systems
ASDW05-05 Predictable Perception
Session Start Fri, 14:00 Session End Fri, 14:55
Organizers / Chairs:
Samarjit Chakraborty (U North Carolina, Chapel Hill, USA)
Petru Eles (Linköping University, SE)
Session Abstract: Modern autonomous systems – such as autonomous vehicles or robots – consist of two major components: (a) the decision making unit, which is often made up of one or more feedback control loops, and (b) a perception unit that feeds the environmental state to the control unit and is made up of camera, radar and lidar sensors and their associated processing algorithms and infrastructure. While there has been a lot of work on the formal verification of the decision making (or the control) unit, the ultimate correctness of the autonomous system also heavily relies on the behavior of the perception unit. The verification of the correctness of the perception unit is however significantly more challenging and not much progress has been made here. This is because the algorithms used by perception units now increasingly rely on machine learning techniques (like deep neural networks) that run on complex hardware made up of CPU+accelerator platforms. The accelerators are made up of GPUs, TPUs and FPGAs. This combination of algorithmic and implementation-platform complexity and heterogeneity currently makes it very difficult to provide either functional or timing correctness guarantees of the perception unit, while both of these guarantees are needed to ensure the correct functioning of the control loop and the overall autonomous system. This is a part of the overall challenge of verifying the correctness of autonomous systems.
Speakers:
Qing Rao (BMW, Munich, Germany) New Era in Autonomous Driving and the Role of IT – Will Traditional Carmakers Keep Pace?
Soheil Samii (General Motors R&D, USA) Dependable sensing system architecture for predictable perception in autonomous vehicles
Deepak Shankar (Mirabilis Design, USA) Design Tools for Predictable Hw/Sw Architectures for Autonomous Vehicles
Cong Liu (UT Dallas, USA) Towards Timing-Predictable & Robust Autonomy in Autonomous Embedded Systems
Hamed Tabkhi (University of North Carolina at Charlotte, USA) Toward AI-in-the-Loop Autonomous Safety System – Algorithmic and Timing Challenges
ASDW05-06 Perspicuous Computing
Session Start Fri, 15:00 Session End Fri, 15:55
Organizers:
M. Christakis (MPI SWS)
H. Hermanns (U Saarland, Germany)
Session Abstract: From autonomous vehicles to Industry 4.0, from smart homes to smart cities – cyber-physical technology increasingly participates in actions and decisions that affect humans. However, our understanding of how these applications interact and of what causes a specific automated decision is lagging far behind. This comes with a gradual loss in understanding. The root cause of this problem is that contemporary systems do not have any built-in concepts to explicate their behaviour. They calculate and propagate outcomes of computations, but are not designed to provide explanations. They are not perspicuous. The key to enabling comprehension in a cyber-physical world is a science of perspicuous computing. This session will discuss the foundational, the industrial and the societal dimensions of the perspicuous computing challenge. It is organized by the Center for Perspicuous Computing – TRR 248 – a Collaborative Research Center funded by the German Research Foundation DFG.
Session Structure:
Introduction: “Enabling comprehension in a cyber-physical world with the human in the loop” Holger Hermanns, Universität des Saarlandes
Panel: “Is industry or society in need for perspicuous computing? Both? Or neither?”
Panel Moderator: Christel Baier, Technische Universität Dresden
Panelists:
Bernd Finkbeiner CISPA Helmholtz Center for Information Security
Christof Fetzer Technische Universität Dresden
Raimund Dachselt Technische Universität Dresden
Prof. Rupak Majumdar Max Planck Institute for Software Systems
Dr. Lena Kästner Universität des Saarlandes, representing EIS
ASDW05-07 Production Architectures & Platforms for Automated Vehicles
Session Start Fri, 16:00 Session End Fri, 16:55
Organizer / Chair: Rolf Ernst, TU Braunschweig, Germany
Session Abstract: Highly automated vehicles need high performance HW/SW platforms to execute complex software systems for safety critical functions. This is a usually underestimated challenge when automation comes to production vehicles. The session starts with two short presentations of platform architectures that approach the resulting design quality and safety challenge with different methods. The session will continue with an open discussion of these and possibly other approaches.
Speakers:
W. Steiner, TTTech, Austria MotionWise – A Brief Introduction and Outlook
D. Pangercic, APEX.AI, USA Open-source and Developer Centric SW Platform for the New Breed of Vehicles
ASDW05-08 Self-Awareness for Autonomy
Session Start Fri, 17:00 Session End Fri, 17:55
Organizer / Chair: Nikil Dutt (UC Irvine, USA)
Session Abstract: Self-awareness principles promise to endow autonomous systems with high degrees of adaptivity and resilience, borrowing from an abundance of examples in biology and nature. However, the engineering of dependable and predictable autonomous systems poses significant challenges for explainability, testing, and bounding safe behaviors. This session begins with short presentations by academic and industry speakers on these topics, and is followed by an interactive discussion with the audience. The first presentation by Prof. Andreas Herkersdorf (TU Munich) discusses how transparent machine learning techniques can be coupled with self-awareness to improve dependability in autonomous systems. The second presentation by Dr. Ahmed Nassar (Nvidia) addresses issues in training and testing of self-aware autonomous agents. The third presentation by Dr. Prakash Sarathy (Northrop Grumman) describes how to bound the emergent behavior of autonomous systems using a self-aware dataflow computing paradigm. The session is then followed by an open discussion between the audience and the speakers on research challenges and future directions at the intersection of self-awareness and autonomy.
Speakers:
A. Herkersdorf, TU Munich, Germany. Transparent ML as a means of enhancing dependable autonomy
A. Nassar, Nvidia, USA. Continuous Training and Testing of Autonomous Agents: The Road to Self-Awareness
P. Sarathy, Northrop Grumman, USA. Self-aware dataflow computing for Bounded Behavior Assurance