9.1 Autonomous Systems Design: Opening Panel
Date: Thursday, 04 February 2021
Time: 07:00 – 08:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/ZjsTrqcyv7ocgh8y9
Session chair:
Rolf Ernst, TU Braunschweig, DE
Session co-chair:
Selma Saidi, TU Dortmund, DE
Organizers:
Rolf Ernst, TU Braunschweig, DE
Selma Saidi, TU Dortmund, DE
Fueled by the progress of artificial intelligence, autonomous systems are becoming integral parts of many Internet-of-Things (IoT) and Cyber-Physical Systems (CPS) applications, such as automated driving, robotics, avionics and industrial automation. Autonomous systems are self-governed and self-adaptive systems designed to operate in an open and evolving environment that has not been completely defined at design time. This poses unique challenges for the design and verification of dependable autonomous systems. In this opening session of the DATE Special Initiative on Autonomous Systems Design, industry leaders will present their visions of autonomous systems, the challenges they see in developing them, and how autonomous systems will impact business in their industries. These inputs will be discussed in an open-floor panel.
Panelists:
– Thomas Kropf (Robert Bosch GmbH)
– Pascal Traverse (Airbus)
– Juergen Bortolazzi (Porsche AG)
– Peter Liggesmeyer (Fraunhofer IESE)
– Joseph Sifakis (University of Grenoble/VERIMAG)
– Sandeep Neema (DARPA)
10.1 Reliable Autonomous Systems: Dealing with Failure & Anomalies
Date: Thursday, 04 February 2021
Time: 08:00 – 09:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/Gz6kTsdb9rNGixqMQ
Session chair:
Rolf Ernst, TU Braunschweig, DE
Session co-chair:
Rasmus Adler, Fraunhofer IESE, DE
Organizers:
Rolf Ernst, TU Braunschweig, DE
Selma Saidi, TU Dortmund, DE
Autonomous systems need novel approaches to detect and handle failures and anomalies. The first paper introduces an approach that adapts the placement of applications on a vehicle platform by adjusting optimization criteria under safety-goal restrictions. The second paper presents a formal worst-case failover timing analysis for online verification to assure safe vehicle operation under failover safety constraints. The third paper proposes an explanation component that observes and analyses an autonomous system and tries to derive explanations for anomalous behavior.
Time | Label | Presentation Title / Authors |
---|---|---|
08:00 CET | 10.1.1 | C-PO: A CONTEXT-BASED APPLICATION-PLACEMENT OPTIMIZATION FOR AUTONOMOUS VEHICLES Speaker: Tobias Kain, Volkswagen AG, Wolfsburg, Germany, DE Authors: Tobias Kain1, Hans Tompits2, Timo Frederik Horeis3, Johannes Heinrich3, Julian-Steffen Müller4, Fabian Plinke3, Hendrik Decke5 and Marcel Aguirre Mehlhorn4 1Volkswagen AG, AT; 2Technische Universität Wien, AT; 3Institut für Qualitäts- und Zuverlässigkeitsmanagement GmbH, DE; 4Volkswagen AG, DE; 5Volkswagen AG, Wolfsburg, DE Abstract Autonomous vehicles are complex distributed systems consisting of multiple software applications and computing nodes. Determining the assignment between these software applications and computing nodes is known as the application-placement problem. The input of this problem is a set of applications, their requirements, a set of computing nodes, and their provided resources. Due to the potentially large solution space of the problem, an optimization goal defines which solution is desired the most. However, the optimization goal used for the application-placement problem is not static but has to be adapted according to the current context the vehicle is experiencing. Therefore, an approach for a context-based determination of the optimization goal for a given instance of an application-placement problem is required. In this paper, we introduce C-PO, an approach to address this issue. C-PO ensures that if the safety level of a system drops due to an occurring failure, the optimization goal for the successively executed application-placement determination aims to restore the safety level. Once the highest safety level is reached, C-PO optimizes the application placement according to the current driving situation. Furthermore, we introduce two methods for dynamically determining the required level of safety. |
08:15 CET | 10.1.2 | WORST-CASE FAILOVER TIMING ANALYSIS OF DISTRIBUTED FAIL-OPERATIONAL AUTOMOTIVE APPLICATIONS Speaker: Philipp Weiss, TU Munich, DE Authors: Philipp Weiss1, Sherif Elsabbahy1, Andreas Weichslgartner2 and Sebastian Steinhorst1 1TU Munich, DE; 2AUDI AG, DE Abstract Enabling fail-operational behavior of safety-critical software is essential to achieve autonomous driving. At the same time, automotive vendors have to regularly deliver over-the-air software updates. Here, the challenge is to enable a flexible and dynamic system behavior while offering, at the same time, a predictable and deterministic behavior of time-critical software. Thus, it is necessary to verify that timing constraints can be met even during failover scenarios. For this purpose, we present a formal analysis to derive the worst-case application failover time. Without such an automated worst-case failover timing analysis, it would not be possible to enable a dynamic behavior of safety-critical software within safe bounds. We support our formal analysis by conducting experiments on a hardware platform using a distributed fail-operational neural network. Our randomly generated worst-case results are as close as 6.0% below our analytically derived exact bound. Overall, our presented worst-case failover timing analysis allows to conduct an automated analysis at run-time to verify that the system operates within the bounds of the failover timing constraint such that a dynamic and safe behavior of autonomous systems can be ensured. |
08:30 CET | IP.ASD_1.1 | DECENTRALIZED AUTONOMOUS ARCHITECTURE FOR RESILIENT CYBER-PHYSICAL PRODUCTION SYSTEMS Speaker: Laurin Prenzel, TU Munich, DE Authors: Laurin Prenzel and Sebastian Steinhorst, TU Munich, DE Abstract Real-time decision-making is a key element in the transition from Reconfigurable Manufacturing Systems to Autonomous Manufacturing Systems. In Cyber-Physical Production Systems (CPPS) and Cloud Manufacturing, most decision-making algorithms are either centralized, creating vulnerabilities to failures, or decentralized, struggling to reach the performance of the centralized counterparts. In this paper, we combine the performance of centralized optimization algorithms with the resilience of a decentralized consensus. We propose a novel autonomous system architecture for CPPS featuring an automatic production plan generation, a functional validation, and a two-stage consensus algorithm, combining a majority vote on safety and optimality, and a unanimous vote on feasibility and authenticity. The architecture is implemented in a simulation framework. In a case study, we exhibit the timing behavior of the configuration procedure and subsequent reconfiguration following a device failure, showing the feasibility of a consensus-based decision-making process. |
08:31 CET | 10.1.3 | ANOMALY DETECTION AND CLASSIFICATION TO ENABLE SELF-EXPLAINABILITY OF AUTONOMOUS SYSTEMS Speaker: Verena Klös, TU Berlin, DE Authors: Florian Ziesche, Verena Klös and Sabine Glesner, TU Berlin, DE Abstract While the importance of autonomous systems in our daily lives and in the industry increases, we have to ensure that this development is accepted by their users. A crucial factor for a successful cooperation between humans and autonomous systems is a basic understanding that allows users to anticipate the behavior of the systems. Due to their complexity, complete understanding is neither achievable, nor desirable. Instead, we propose self-explainability as a solution. A self-explainable system autonomously explains behavior that differs from anticipated behavior. As a first step towards this vision, we present an approach for detecting anomalous behavior that requires an explanation and for reducing the huge search space of possible reasons for this behavior by classifying it into classes with similar reasons. We envision our approach to be part of an explanation component that can be added to any autonomous system. |
08:46 CET | IP.ASD_1.2 | PROVABLY ROBUST MONITORING OF NEURON ACTIVATION PATTERNS Speaker and Author: Chih-Hong Cheng, DENSO AUTOMOTIVE Deutschland GmbH, DE Abstract For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor in operation time if the input for the DNN is similar to the data used in DNN training. While recent results in monitoring DNN activation patterns provide a sound guarantee due to building an abstraction out of the training data set, reducing false positives due to slight input perturbation has been an issue towards successfully adapting the techniques. We address this challenge by integrating formal symbolic reasoning inside the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values with inputs (or features) subject to perturbation, before the abstraction function is applied to build the monitor. The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit, implying that one can record activation patterns with a fine-grained decision on the neuron value interval. |
IP.ASD_1 Interactive Presentations
Date: Thursday, 04 February 2021
Time: 09:00 – 09:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/DoZTZhX3vc3YiysKh
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.
Label | Presentation Title / Authors |
---|---|
IP.ASD_1.1 | DECENTRALIZED AUTONOMOUS ARCHITECTURE FOR RESILIENT CYBER-PHYSICAL PRODUCTION SYSTEMS Speaker: Laurin Prenzel, TU Munich, DE Authors: Laurin Prenzel and Sebastian Steinhorst, TU Munich, DE Abstract Real-time decision-making is a key element in the transition from Reconfigurable Manufacturing Systems to Autonomous Manufacturing Systems. In Cyber-Physical Production Systems (CPPS) and Cloud Manufacturing, most decision-making algorithms are either centralized, creating vulnerabilities to failures, or decentralized, struggling to reach the performance of the centralized counterparts. In this paper, we combine the performance of centralized optimization algorithms with the resilience of a decentralized consensus. We propose a novel autonomous system architecture for CPPS featuring an automatic production plan generation, a functional validation, and a two-stage consensus algorithm, combining a majority vote on safety and optimality, and a unanimous vote on feasibility and authenticity. The architecture is implemented in a simulation framework. In a case study, we exhibit the timing behavior of the configuration procedure and subsequent reconfiguration following a device failure, showing the feasibility of a consensus-based decision-making process. |
IP.ASD_1.2 | PROVABLY ROBUST MONITORING OF NEURON ACTIVATION PATTERNS Speaker and Author: Chih-Hong Cheng, DENSO AUTOMOTIVE Deutschland GmbH, DE Abstract For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor in operation time if the input for the DNN is similar to the data used in DNN training. While recent results in monitoring DNN activation patterns provide a sound guarantee due to building an abstraction out of the training data set, reducing false positives due to slight input perturbation has been an issue towards successfully adapting the techniques. We address this challenge by integrating formal symbolic reasoning inside the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values with inputs (or features) subject to perturbation, before the abstraction function is applied to build the monitor. The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit, implying that one can record activation patterns with a fine-grained decision on the neuron value interval. |
11.1 Safety Assurance of Autonomous Vehicles
Date: Thursday, 04 February 2021
Time: 09:30 – 10:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/WymRmxKbnn387f8Di
Session chair:
Sebastian Steinhorst, TU Munich, DE
Session co-chair:
Simon Schliecker, Volkswagen AG, DE
Organizers:
Rolf Ernst, TU Braunschweig, DE
Selma Saidi, TU Dortmund, DE
Safety of autonomous vehicles is a core requirement for their social acceptance. Hence, this session introduces three technical perspectives on this important field. The first paper presents a statistics-based view on risk taking when passing by pedestrians such that automated decisions can be taken with probabilistic reasoning. The second paper proposes a hardening of image classification for highly automated driving scenarios by identifying the similarity between target classes. The third paper improves the computational efficiency of safety verification of deep neural networks by reusing existing proof artifacts.
Time | Label | Presentation Title / Authors |
---|---|---|
09:30 CET | 11.1.1 | AUTOMATED DRIVING SAFETY – THE ART OF CONSCIOUS RISK TAKING – MINIMUM LATERAL DISTANCES TO PEDESTRIANS Speaker: Bert Böddeker, private, DE Authors: Bert Böddeker1, Wilhard von Wendorff2, Nam Nguyen3, Peter Diehl4, Roland Meertens5 and Rolf Johannson6 1private, DE; 2SGS-TÜV Saar GmbH, DE; 3Hochschule München für angewandte Wissenschaften, DE; 4Private, DE; 5private, NL; 6Private, SE Abstract The announced release dates for Automated Driving Systems (ADS) with conditional (SAE-L3) and high (SAE-L4) levels of automation according to SAE J3016 are getting closer. Still, there is no established state of the art for proving the safety of these systems. The ISO 26262 for automotive functional safety is still valid for these systems but only covers risks from malfunctions of electric and electronic (E/E) systems. A framework for considering issues caused by weaknesses of the intended functionality itself is standardized in the upcoming release of the ISO 21448 – Safety of the Intended Functionality (SOTIF). Rich experience regarding limitations of safety performance of complex sensors can be found in this standard. In this paper, we highlight another aspect of SOTIF that becomes important for higher levels of automation, especially in urban areas: ‘conscious risk taking’. In traditional automotive systems, conflicting goal resolutions are generally left to the car driver. With SAE-level 3 or at latest SAE-level 4 ADS, the driver is not available for decisions anymore. Even ‘safe drivers’ do not use the safest possible driving behavior. In the example of occlusions next to the street, a driver balances the risk of occluded pedestrians against the speed of the traffic flow. Our aim is to make such decisions explicit and sufficiently safe. On the example of crossing pedestrians, we show how to use statistics to derive a conscious quantitative risk-based decision from a previously defined acceptance criterion. The acceptance criterion is derived from accident statistics involving pedestrians. |
09:45 CET | 11.1.2 | ON SAFETY ASSURANCE CASE FOR DEEP LEARNING BASED IMAGE CLASSIFICATION IN HIGHLY AUTOMATED DRIVING Speaker: Himanshu Agarwal, HELLA GmbH & Co. KGaA, Lippstadt, Germany and Carl von Ossietzky University Oldenburg, Germany, DE Authors: Himanshu Agarwal1, Rafal Dorociak2 and Achim Rettberg3 1HELLA GmbH & Co. KGaA and Carl von Ossietzky University Oldenburg, DE; 2HELLA GmbH & Co. KGaA, DE; 3University of Applied Science Hamm-Lippstadt & University Oldenburg, DE Abstract Assessing the overall accuracy of a deep learning classifier is not a sufficient criterion to argue for the safety of classification-based functions in highly automated driving. The causes of deviation from the intended functionality must also be rigorously assessed. In the context of functions related to image classification, one of the causes can be the failure to take into account during implementation the classifier’s vulnerability to misclassification due to high similarity between the target classes. In this paper, we emphasize that while developing the safety assurance case for such functions, the argumentation over the appropriate implementation of the functionality must also address the vulnerability to misclassification due to class similarities. Using the traffic sign classification function as our case study, we propose to aid the development of its argumentation by: (a) conducting a systematic investigation of the similarity between the target classes, (b) assigning a corresponding classifier vulnerability rating to every possible misclassification, and (c) ensuring that the claims against the misclassifications that induce higher risk (scored on the basis of vulnerability and severity) are supported with more compelling sub-goals and evidence as compared to the claims against misclassifications that induce lower risk. |
10:00 CET | 11.1.3 | CONTINUOUS SAFETY VERIFICATION OF NEURAL NETWORKS Speaker: Rongjie Yan, Institute of Software, Chinese Academy of Sciences, CN Authors: Chih-Hong Cheng1 and Rongjie Yan2 1DENSO AUTOMOTIVE Deutschland GmbH, Eching, Germany, DE; 2Institute of Software, Chinese Academy of Sciences, CN Abstract Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges. In particular, the continuous engineering paradigm of gradually perfecting a DNN-based perception can make the previously established result of safety verification no longer valid. This can occur either due to the newly encountered examples (i.e., input domain enlargement) inside the Operational Design Domain or due to the subsequent parameter fine-tuning activities of a DNN. This paper considers approaches to transfer results established in the previous DNN safety verification problem to the modified problem setting. By considering the reuse of state abstractions, network abstractions, and Lipschitz constants, we develop several sufficient conditions that only require formally analyzing a small part of the DNN in the new problem. The overall concept is evaluated in a 1/10-scaled vehicle that equips a DNN controller to determine the visual waypoint from the perceived image. |
K.5 Keynote – Special day on ASD
Date: Thursday, 04 February 2021
Time: 15:00 – 15:50 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/Rv6DcWWGXHvRDApRS
Session chair:
Selma Saidi, TU Dortmund, DE
Session co-chair:
Rolf Ernst, TU Braunschweig, DE
Autonomy is in the air: on the one hand, automation is clearly a lever to improve safety margins; on the other hand, technologies are maturing, pulled by the automotive market. In this context, Airbus is building a concept airplane from a blank sheet with the objective of improving human-machine teaming for better overall performance. The foundation of this new concept is that, when made aware of the “big picture” with enough time to analyze it, humans are still best placed to make strategic decisions. Autonomy technologies are the main enabler of this concept. Benefits are expected both in a two-crew cockpit and, eventually, in Single Pilot Operations.
Bio: Pascal Traverse is General Manager for the Autonomy “fast track” at Airbus. Autonomy is a top technical focus area for Airbus. As General Manager, Pascal creates a vision and coordinates R&T activities with the objective of accelerating the build-up of knowledge within Airbus. Before his nomination last year, he coordinated Airbus Commercial R&T activities related to the cockpit and flight operations. Earlier in his career, Pascal participated in the A320/A330/A340/A380 fly-by-wire developments, certification harmonization with the FAA and EASA, management of Airbus safety activities and even of quality activities in the A380 Final Assembly Line. Pascal holds Master’s and Doctorate degrees in embedded systems from N7, conducted research at LAAS and UCLA, and is a 3AF Fellow.
Time | Label | Presentation Title / Authors |
---|---|---|
15:00 CET | K.5.1 | AUTONOMY: ONE STEP BEYOND ON COMMERCIAL AVIATION Speaker and Author: Pascal Traverse, Airbus, FR Abstract Autonomy is in the air: on the one hand, automation is clearly a lever to improve safety margins; on the other hand, technologies are maturing, pulled by the automotive market. In this context, Airbus is building a concept airplane from a blank sheet with the objective of improving human-machine teaming for better overall performance. The foundation of this new concept is that, when made aware of the “big picture” with enough time to analyze it, humans are still best placed to make strategic decisions. Autonomy technologies are the main enabler of this concept. Benefits are expected both in a two-crew cockpit and, eventually, in Single Pilot Operations. |
12.1 Designing Autonomous Systems: Experiences, Technology and Processes
Date: Thursday, 04 February 2021
Time: 16:00 – 17:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/J65mDxoTBDp4Cn857
Session chair:
Selma Saidi, TU Dortmund, DE
Session co-chair:
Philipp Mundhenk, Robert Bosch GmbH, DE
Organizers:
Rolf Ernst, TU Braunschweig, DE
Selma Saidi, TU Dortmund, DE
This session discusses technology innovation, experiences and processes in building autonomous systems. The first paper presents Fünfliber, a nano-sized Unmanned Aerial Vehicle (UAV) built on a modular open-hardware robotic platform controlled by a parallel ultra-low-power system-on-chip (PULP) capable of running sophisticated autonomous DNN-based navigation workloads. The second paper presents an abstracted runtime for managing adaptation and integrating FPGA accelerators into autonomous software frameworks; a case-study integration into the Robot Operating System (ROS) is demonstrated. The third paper discusses current processes in engineering dependable collaborative autonomous systems and new business models based on agile approaches for innovation management.
Time | Label | Presentation Title / Authors |
---|---|---|
16:00 CET | 12.1.1 | FÜNFLIBER-DRONE: A MODULAR OPEN-PLATFORM 18-GRAM AUTONOMOUS NANO-DRONE Speaker: Hanna Müller, Integrated Systems Laboratory, CH Authors: Hanna Mueller1, Daniele Palossi2, Stefan Mach1, Francesco Conti3 and Luca Benini4 1Integrated Systems Laboratory – ETH Zurich, CH; 2Integrated Systems Laboratory – ETH Zurich, Switzerland, Dalle Molle Institute for Artificial Intelligence – University of Lugano and SUPSI, CH; 3Department of Electrical, Electronic and Information Engineering – University of Bologna, Italy, IT; 4Integrated Systems Laboratory – ETH Zurich, Department of Electrical, Electronic and Information Engineering – University of Bologna, CH Abstract Miniaturizing an autonomous robot is a challenging task – not only the mechanical but also the electrical components have to operate within limited space, payload, and power. Furthermore, the algorithms for autonomous navigation, such as state-of-the-art (SoA) visual navigation deep neural networks (DNNs), are becoming increasingly complex, striving for more flexibility and agility. In this work, we present a sensor-rich, modular, nano-sized Unmanned Aerial Vehicle (UAV), almost as small as a five Swiss Franc coin – called Fünfliber – with a total weight of 18g and 7.2cm in diameter. We conceived our UAV as an open-source hardware robotic platform, controlled by a parallel ultra-low power (PULP) system-on-chip (SoC) with a wide set of onboard sensors, including three cameras (i.e., infrared, optical flow, and standard QVGA), multiple Time-of-Flight (ToF) sensors, a barometer, and an inertial measurement unit. Our system runs the tasks necessary for a flight controller (sensor acquisition, state estimation, and low-level control), requiring only 10% of the computational resources available aboard, consuming only 9mW – 13x less than an equivalent Cortex M4-based system. Pushing our system at its limit, we can use the remaining onboard computational power for sophisticated autonomous navigation workloads, as we showcase with an SoA DNN running at up to 18Hz, with a total electronics’ power consumption of 271mW. |
16:15 CET | 12.1.2 | RUNTIME ABSTRACTION FOR AUTONOMOUS ADAPTIVE SYSTEMS ON RECONFIGURABLE HARDWARE Speaker: Alex R Bucknall, University of Warwick, GB Authors: Alex R. Bucknall1 and Suhaib A. Fahmy2 1University of Warwick, GB; 2KAUST, SA Abstract Autonomous systems increasingly rely on on-board computation to avoid the latency overheads of offloading to more powerful remote computing. This requires the integration of hardware accelerators to handle the complex computations demanded by data-intensive sensors. FPGAs offer hardware acceleration with ample flexibility and interfacing capabilities when paired with general purpose processors, with the ability to reconfigure at runtime using partial reconfiguration (PR). Managing dynamic hardware is complex and has been left to designers to address in an ad-hoc manner, without first-class integration in autonomous software frameworks. This paper presents an abstracted runtime for managing adaptation of FPGA accelerators, including PR and parametric changes, that presents as a typical interface used in autonomous software systems. We present a demonstration using the Robot Operating System (ROS), showing negligible latency overhead as a result of the abstraction. |
16:30 CET | IP.ASD_2.1 | SYSTEMS ENGINEERING ROADMAP FOR DEPENDABLE AUTONOMOUS CYBER-PHYSICAL SYSTEMS Speaker and Author: Rasmus Adler, Fraunhofer IESE, DE Abstract Autonomous cyber-physical systems have enormous potential to make our lives more sustainable, more comfortable, and more economical. Artificial Intelligence and connectivity enable autonomous behavior, but often stand in the way of market launch. Traditional engineering techniques are no longer sufficient to achieve the desired dependability; current legal and normative regulations are inappropriate or insufficient. This paper discusses these issues, proposes advanced systems engineering to overcome these issues, and provides a roadmap by structuring fields of action. |
16:31 CET | 12.1.3 | DDI: A NOVEL TECHNOLOGY AND INNOVATION MODEL FOR DEPENDABLE, COLLABORATIVE AND AUTONOMOUS SYSTEMS Speaker: Eric Armengaud, Armengaud Innovate GmbH, AT Authors: Eric Armengaud1, Daniel Schneider2, Jan Reich2, Ioannis Sorokos2, Yiannis Papadopoulos3, Marc Zeller4, Gilbert Regan5, Georg Macher6, Omar Veledar7, Stefan Thalmann8 and Sohag Kabir9 1Armengaud Innovate GmbH, AT; 2Fraunhofer IESE, DE; 3University of Hull, GB; 4Siemens AG, DE; 5Lero @DKIT, IE; 6Graz University of Technology, AT; 7AVL List GmbH, AT; 8University of Graz, AT; 9University of Bradford, GB Abstract Digital transformation fundamentally changes established practices in the public and private sectors. Hence, it represents an opportunity to improve the value creation processes (e.g., “industry 4.0”) and to rethink how to address customers’ needs such as “data-driven business models” and “Mobility-as-a-Service”. Dependable, collaborative and autonomous systems are playing a central role in this transformation process. Furthermore, the emergence of data-driven approaches combined with autonomous systems will lead to new business models and market dynamics. Innovative approaches to reorganise the value creation ecosystem, to enable distributed engineering of dependable systems and to answer urgent questions such as liability will be required. Consequently, digital transformation requires a comprehensive multi-stakeholder approach which properly balances technology, ecosystem and business innovation. Targets of this paper are (a) to introduce digital transformation and the role of / opportunities provided by autonomous systems, (b) to introduce Digital Dependability Identities (DDI) – a technology for dependability engineering of collaborative, autonomous CPS, and (c) to propose an appropriate agile approach for innovation management based on business model innovation and co-entrepreneurship. |
IP.ASD_2 Interactive Presentations
Date: Thursday, 04 February 2021
Time: 17:00 – 17:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/nM6kYLXg8nwB5C4un
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.
Time | Label | Presentation Title / Authors |
---|---|---|
17:00 CET | IP.ASD_2.1 | SYSTEMS ENGINEERING ROADMAP FOR DEPENDABLE AUTONOMOUS CYBER-PHYSICAL SYSTEMS Speaker and Author: Rasmus Adler, Fraunhofer IESE, DE Abstract Autonomous cyber-physical systems have enormous potential to make our lives more sustainable, more comfortable, and more economical. Artificial Intelligence and connectivity enable autonomous behavior, but often stand in the way of market launch. Traditional engineering techniques are no longer sufficient to achieve the desired dependability; current legal and normative regulations are inappropriate or insufficient. This paper discusses these issues, proposes advanced systems engineering to overcome these issues, and provides a roadmap by structuring fields of action. |
13.1 Predictable Perception for Autonomous Systems
Date: Thursday, 04 February 2021
Time: 17:30 – 18:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/HEQdxwdJ4stmTYZMC
Session chair:
Soheil Samii, General Motors R&D, US
Session co-chair:
Qing Rao, BMW AG, DE
Organizers:
Selma Saidi, TU Dortmund, DE
Rolf Ernst, TU Braunschweig, DE
Modern autonomous systems – such as autonomous vehicles or robots – consist of two major components: (a) the decision-making unit, which is often made up of one or more feedback control loops, and (b) a perception unit that feeds the environmental state to the control unit and is made up of camera, radar and lidar sensors and their associated processing algorithms and infrastructure. While there has been a lot of work on the formal verification of the decision-making (or control) unit, the ultimate correctness of the autonomous system also heavily relies on the behavior of the perception unit. Verifying the correctness of the perception unit is, however, significantly more challenging, and not much progress has been made here. This is because the algorithms used by perception units increasingly rely on machine learning techniques (like deep neural networks) that run on complex hardware made up of CPU+accelerator platforms, where the accelerators are GPUs, TPUs and FPGAs. This combination of algorithmic and implementation-platform complexity and heterogeneity currently makes it very difficult to provide either functional or timing correctness guarantees for the perception unit, while both guarantees are needed to ensure the correct functioning of the control loop and the overall autonomous system. This is part of the overall challenge of verifying the correctness of autonomous systems. This session features four invited talks, each focusing on a different aspect of this problem – some on the timing predictability of the perception unit, others on the functional correctness of the processing algorithms used in perception units, and the remainder on the reliability and performance/cost trade-offs involved in designing perception units for autonomous systems.
Time | Label | Presentation Title / Authors |
---|---|---|
17:30 CET | 13.1.1 | TIMING-PREDICTABLE VISION PROCESSING FOR AUTONOMOUS SYSTEMS Speaker: Tanya Amert, UNC Chapel Hill, US Authors: Tanya Amert1, Michael Balszun2, Martin Geier3, F. Donelson Smith4, Jim Anderson4 and Samarjit Chakraborty5 1University of North Carolina at Chapel Hill, US; 2TU-München, DE; 3TU Munich, DE; 4University of North Carolina at Chapel Hill, US; 5UNC Chapel Hill, US Abstract Vision processing for autonomous systems today involves implementing machine learning algorithms and vision processing libraries on embedded platforms consisting of CPUs, GPUs and FPGAs. Because many of these use closed-source proprietary components, it is very difficult to perform any timing analysis on them. Even measuring or tracing their timing behavior is challenging, although it is the first step towards reasoning about the impact of different algorithmic and implementation choices on the end-to-end timing of the vision processing pipeline. In this paper we discuss some recent progress in developing tracing, measurement and analysis infrastructure for determining the timing behavior of vision processing pipelines implemented on state-of-the-art FPGA and GPU platforms. |
17:45 CET | 13.1.2 | BOUNDING PERCEPTION NEURAL NETWORK UNCERTAINTY FOR SAFE CONTROL OF AUTONOMOUS SYSTEMS Speaker: Qi Zhu, Northwestern University, US Authors: Zhilu Wang1, Chao Huang2, Yixuan Wang1, Clara Hobbs3, Samarjit Chakraborty4 and Qi Zhu1 1Northwestern University, US; 2Department of Electrical and Computer Engineering, Northwestern University, US; 3University of North Carolina at Chapel Hill, US; 4UNC Chapel Hill, US Abstract Future autonomous systems will rely on advanced sensors and deep neural networks for perceiving the environment, and then utilize the perceived information for system planning, control, adaptation, and general decision making. However, due to the inherent uncertainties from the dynamic environment and the lack of methodologies for predicting neural network behavior, the perception modules in autonomous systems often could not provide deterministic guarantees and may sometimes lead the system into unsafe states (e.g., as evident by a number of high-profile accidents with experimental autonomous vehicles). This has significantly impeded the broader application of machine learning techniques, particularly those based on deep neural networks, in safety-critical systems. In this paper, we will discuss these challenges, define open research problems, and introduce our recent work in developing formal methods for quantitatively bounding the output uncertainty of perception neural networks with respect to input perturbations, and leveraging such bounds to formally ensure the safety of system control. Unlike most existing works that only focus on either the perception module or the control module, our approach provides a holistic end-to-end framework that bounds the perception uncertainty and addresses its impact on control. |
18:00 CET | 13.1.3 | HARDWARE- AND SITUATION-AWARE SENSING FOR ROBUST CLOSED-LOOP CONTROL SYSTEMS Speaker: Dip Goswami, Eindhoven University of Technology, NL Authors: Sayandip De1, Yingkai Huang2, Sajid Mohamed1, Dip Goswami1 and Henk Corporaal3 1Eindhoven University of Technology, NL; 2Electronic Systems Group, Eindhoven University of Technology, NL; 3TU/e (Eindhoven University of Technology), NL Abstract While vision is an attractive alternative to many sensors targeting closed-loop controllers, it comes with a high, time-varying workload and robustness issues when targeted to edge devices with limited energy, memory and computing resources. Replacing classical vision processing pipelines, e.g., lane detection using a Sobel filter, with deep learning algorithms is a way to deal with the robustness issues, while hardware-efficient implementation is crucial for their adaptation for safe closed-loop systems. However, when implemented on an embedded edge device, the performance of these algorithms highly depends on their mapping on the target hardware and the situation encountered by the system. That is, first, the timing performance numbers (e.g., latency, throughput) depend on the algorithm schedule, i.e., what part of the AI workload runs where (e.g., GPU, CPU) and their invocation frequency (e.g., how frequently we run a classifier). Second, the perception performance (e.g., detection accuracy) is heavily influenced by the situation – e.g., snowy and sunny weather conditions provide very different lane detection accuracy. These factors directly influence the closed-loop performance, for example, the lane-following accuracy in a lane-keep assist system (LKAS). We propose a hardware- and situation-aware design of AI perception where the idea is to define the situations by a set of relevant environmental factors (e.g., weather, road etc. in an LKAS). We design the learning algorithms and parameters, the overall hardware mapping and its schedule taking the situation into account. We show the effectiveness of our approach considering a realistic LKAS case study on the heterogeneous NVIDIA AGX Xavier platform in a hardware-in-the-loop framework. Our approach provides robust LKAS designs with 32% better performance compared to traditional approaches. |
18:15 CET | 13.1.4 | ORCHESTRATION OF PERCEPTION SYSTEMS FOR RELIABLE PERFORMANCE IN HETEROGENEOUS PLATFORMS Speaker: Soumyajit Dey, Indian Institute of Technology Kharagpur, IN Authors: Anirban Ghose1, Srijeeta Maity2, Arijit Kar1 and Soumyajit Dey3 1Indian Institute of Technology, Kharagpur, IN; 2student, IN; 3IIT Kharagpur, IN Abstract Delivering driving comfort in this age of connected mobility is one of the primary goals of semi-autonomous perception systems increasingly being used in modern automotives. The performance of such perception systems is a function of execution rate which demands on-board platform-level support. With the advent of GPGPU compute support in automobiles, there exists an opportunity to adaptively enable higher execution rates for such Advanced Driver Assistant System tasks (ADAS tasks) subject to different vehicular driving contexts. This can be achieved through a combination of program level locality optimizations such as kernel fusion, thread coarsening and core level DVFS techniques while keeping in mind their effects on task level deadline requirements and platform-level thermal reliability. In this communication, we present a future-proof, learning-based adaptive scheduling framework that strives to deliver reliable and predictable performance of ADAS tasks while accommodating for increased task-level throughput requirements. |
ASD.REC ASD Reception
Date: Thursday, 04 February 2021
Time: 18:30 – 19:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/jPSN2q5euq43mmf2G
Session chair:
Rolf Ernst, TU Braunschweig, DE
Session co-chair:
Selma Saidi, TU Dortmund, DE
Organizers:
Rolf Ernst, TU Braunschweig, DE
Selma Saidi, TU Dortmund, DE
In the virtual reception session of the DATE Special Initiative on Autonomous Systems Design, we will (1) enjoy a talk by Dr. Gabor Karsai (Vanderbilt University) on “Towards Assurance-based Learning-enabled Cyber-Physical Systems”, (2) introduce the Friday topics, (3) exchange thoughts on autonomous systems design, and (4) collect feedback regarding the special initiative.
Time | Label | Presentation Title / Authors |
---|---|---|
18:30 CET | ASD.REC.1 | TOWARDS ASSURANCE-BASED LEARNING-ENABLED CYBER-PHYSICAL SYSTEMS Speaker and Author: Gabor Karsai, Vanderbilt University, US Abstract This talk will provide an overview of the DARPA Assured Autonomy program and give project examples. |