Safe ad-hoc cooperation between humans and autonomous machines
Research Field 1: Engineering and safeguarding of heterogeneous human-machine-teams
PhD project 1.1: Safe ad-hoc cooperation between humans and autonomous machines
Supervisors: Prof. A. Rausch and Prof. M. Prilla
State of the art: Autonomous systems must cooperate with other autonomous systems or even with humans, and humans with machines, to cope with complex, unplanned tasks. Especially in ad-hoc cooperations, i.e. cooperations that were not planned in advance, the human-machine team must (a) agree on a common procedure, usually informally and intuitively, (b) determine and hand over control of the activities to be carried out accordingly, and (c) permanently monitor the entire execution and question whether the intuitive common understanding is actually still the basis for action. A smooth transfer of control and monitoring of execution is only possible with interaction platforms that explicitly take into account the attentional and cognitive state of the person who is to take control. A safe transfer of control from the system after an incident must be planned in terms of time and content. The handover plan must be adapted to the situation and include an explanation of the reason for the handover. Unexplained, extremely short-notice takeover requests that humans cannot comprehend undermine the trust of the human user or operator in the autonomy capabilities of the system and ultimately prevent broad acceptance of such systems. Likewise, humans must be able to intervene in a (partially) autonomous system controlled by complex technology if, for example, their intended use deviates from the plan pursued by the machine. However, studies with pilots, for example, show that it is difficult for humans to monitor autonomous systems permanently and intensively.
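The three phases (a)-(c) of an ad-hoc cooperation can be read as a life cycle with a fallback edge: if execution monitoring reveals that the common understanding no longer holds, the team must renegotiate the procedure. The following is a minimal, purely illustrative sketch of that life cycle; all class and method names are assumptions introduced here, not part of any existing framework.

```python
from enum import Enum, auto


class CooperationPhase(Enum):
    """Phases of an ad-hoc human-machine cooperation as outlined above."""
    AGREE_PROCEDURE = auto()    # (a) agree on a common, often informal procedure
    TRANSFER_CONTROL = auto()   # (b) determine and hand over control of activities
    MONITOR_EXECUTION = auto()  # (c) continuously check the shared understanding


class AdHocCooperation:
    """Minimal state machine for the cooperation life cycle (illustrative only)."""

    def __init__(self):
        self.phase = CooperationPhase.AGREE_PROCEDURE

    def procedure_agreed(self):
        assert self.phase is CooperationPhase.AGREE_PROCEDURE
        self.phase = CooperationPhase.TRANSFER_CONTROL

    def control_transferred(self):
        assert self.phase is CooperationPhase.TRANSFER_CONTROL
        self.phase = CooperationPhase.MONITOR_EXECUTION

    def shared_understanding_lost(self):
        # (c): if monitoring shows the common understanding no longer holds,
        # the cooperation falls back to renegotiating the procedure.
        assert self.phase is CooperationPhase.MONITOR_EXECUTION
        self.phase = CooperationPhase.AGREE_PROCEDURE
```

The fallback from (c) back to (a) is the sketch's key design point: execution monitoring is not a terminal phase but a guard that can reopen the negotiation at any time.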
An overview of the current situation that is limited to the essential statements and easy to understand is essential for a smooth transfer of control. Here the automatic generation of multimodal presentations with linguistic and visual elements plays an outstanding role. If, for example, an autonomous robot determines that certain actions by a human in the human-machine team are necessary to achieve the goal, it must make this clear to the human and then initiate the necessary control transfer in good time. When transferring control between (semi-)autonomous machines and humans, the machine must be certain that the human has actually taken over the task; when the task is returned, the human must be certain that the machine has actually taken it over. If human and machine rely on each other in a human-machine team but nobody takes control, serious problems can arise. For this, the machine must "learn" a model of the human and recognize whether the human has actually taken control. Conversely, this model can be used to signal to the human that the machine has taken control again, in a way that actually reaches the human. This requires a suitable, easily and quickly accessible presentation of the often complex information from control systems, as well as awareness of the condition of a machine, as the basis for the decision to take control. Two core tasks are relevant here: (a) a human model must be learned or interpreted ad hoc during operation (in particular taking into account the limited predictability of human actions), and (b) a concept for an adaptive human-machine interface must be developed so that the interface can adapt at runtime to the learned model or the recognized actions, in order to communicate with the human as effectively as possible.
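The requirement that nobody must be left without control can be sketched as a confirmed-handover protocol: the machine announces the takeover with its reason, waits for confirmation that the human has actually taken control (e.g. from a learned human model or driver monitoring), and triggers a safe fallback if no confirmation arrives in time. The names, the timeout, and the injected `confirm`/`fallback` callables below are illustrative assumptions, not an existing API.

```python
import time


class HandoverController:
    """Illustrative handover protocol: control is only released once the
    receiving party has confirmed takeover; otherwise a fallback (e.g. a
    minimal-risk maneuver) is triggered so that nobody is left without
    control. All names and the timeout value are assumptions."""

    def __init__(self, confirm, fallback, timeout_s=10.0, clock=time.monotonic):
        self.confirm = confirm        # callable: has the receiver taken control?
        self.fallback = fallback      # callable: safe fallback behavior
        self.timeout_s = timeout_s
        self.clock = clock            # injectable clock, eases testing

    def request_takeover(self, reason):
        """Announce the handover with an explanation, then wait for confirmation."""
        print(f"Takeover requested: {reason}")
        deadline = self.clock() + self.timeout_s
        while self.clock() < deadline:
            if self.confirm():        # e.g. human model / monitoring signal
                return True           # control verifiably handed over
        self.fallback()               # nobody confirmed -> safe fallback
        return False
```

Passing `confirm` and `fallback` in as callables keeps the protocol logic separate from how takeover is actually detected, which is exactly the open research question (core task (a) above).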
Further relevant work exists in the field of dynamic risk management: autonomous systems act (ideally) autonomously ("intelligently") in situations that were not explicitly considered during development. The limits of their behavior are not precisely known; there is therefore no basis for reasoning about misconduct or deviations from a specified behavior assumed to be safe, and for deriving concrete safety targets from it. Much work in the safety research community addresses this problem. All solutions have in common that they try to determine and control the risk of general types of accidents at runtime. Such dynamic risk management has been proposed across domains, in avionics, robotics and automotive. Like the approaches above, existing approaches to the dynamic risk management of autonomous systems are based on the assumption that dynamic risk management is implemented solely with the help of the system's own sensors and actuators. While there is work that proposes dynamic risk management at the "system of systems" level, it does not take into account that the task of risk management must itself be performed with integrity due to its inherent safety relevance.
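The common core of these approaches, determining and controlling risk at runtime, can be illustrated with a toy estimator. The choice of factors loosely mirrors the severity/exposure/controllability scheme familiar from automotive safety standards, but the formula, scales and threshold below are illustrative assumptions, not any standardized risk metric.

```python
def dynamic_risk(severity, exposure, controllability):
    """Toy runtime risk estimate: severity and exposure of the current
    situation increase risk, controllability by the human reduces it.
    All factors lie in [0, 1]; the product form is an assumption made
    here for illustration, not a standardized metric."""
    for value in (severity, exposure, controllability):
        if not 0.0 <= value <= 1.0:
            raise ValueError("all risk factors must lie in [0, 1]")
    return severity * exposure * (1.0 - controllability)


def risk_supervisor(risk, limit=0.2):
    """Control the estimated risk at runtime: continue normal operation
    below the (illustrative) limit, otherwise degrade to safer behavior."""
    return "continue" if risk <= limit else "degrade"
```

The point of the sketch is the loop structure the surveyed work shares: estimate risk continuously from the current situation, then gate the system's behavior on that estimate rather than on assumptions fixed at design time.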
Research gap: New transfer and validation concepts at the human-machine interface have to be researched in order to enable safe ad-hoc cooperation between (semi-)autonomous machines and humans. At the same time, the correctness of these transfer and validation concepts themselves must be verified and continuously monitored and ensured. In addition, possibilities of human-machine communication need to be developed that facilitate the transfer of control. The aim of the project is to make takeover processes transparent and to enable people to (co-)decide on them. Appropriate methods and technologies must be developed for this purpose.
Own preliminary work: Prof. Rausch researches the validation and testing of software for autonomous driving functions and autonomous robots as well as the dynamic networking of components in the mobile environment. He has developed an approach for the runtime monitoring of safety-relevant properties and tested it in several research projects, such as iserveU (robotics and control stations, runtime monitoring and intelligent transport system for logistics tasks in hospitals) and DADAS (validation and certification methods for autonomous driving functions). Prof. Rausch also deals with methods of machine learning, e.g. prediction mechanisms for driver behavior.
Prof. Prilla investigates the design of human-machine interfaces in various areas, such as the visualization of information in augmented reality and the activation of users through prompts.