First International Workshop on Requirements Engineering for Explainable Systems (RE4ES)

RE4ES is co-located with the 29th IEEE International Requirements Engineering Conference (RE 2021) in Notre Dame, South Bend, USA.

About RE4ES

Explainability has become a hot topic, and communities from different areas of knowledge have been researching it actively. The primary purposes of this workshop are to advance RE for explainable systems, build a community, and foster interdisciplinary exchange. We believe that the methods and techniques of the RE community would add considerable value to explainability research and would also ensure that such techniques are developed in line with the needs of other communities. This workshop should serve as a starting point for exploring synergies between the RE community and other communities that are already actively researching explainability. To achieve that, our agenda will be based on a mix of paper presentations, keynotes, and interactive activities designed to stimulate lively discussions.

Advance RE4ES

We foresee explainability as one of the key quality attributes of future systems. This workshop shall foster research by offering a platform that attracts work on the topic and increases its visibility. We hope to create momentum and provide opportunities to combine research and practice towards explainability.

Community Building

This workshop aims to establish and strengthen links within the community and to foster communication between researchers from the different fields that work on explainability. Together, we can interactively compare the state of the art, identify research gaps, and inspire new work.

Interdisciplinary Exchange

Our vision is to bring researchers from different disciplines together, to learn from one another, and to propose solutions that fit the reality of research and practice. We are convinced that, with the help of other research disciplines, we can make a major contribution towards explainable software and explainable systems.

Featured Keynotes

Liu Ren, Ph.D.

Bosch HMI

About Liu Ren

Dr. Liu Ren is the VP and Chief Scientist for integrated human-machine intelligence at the Bosch Research and Technology Center in North America. He is the global head responsible for shaping strategic directions and developing cutting-edge AI technologies for the human-machine collaboration program in corporate research, with a focus on big data visual analytics, explainable AI, mixed reality/AR, audio analytics, conversational AI, natural language processing (NLP), and cloud-based robotics for industrial AI applications. He oversees research activities conducted by several research teams and departments in Sunnyvale, Pittsburgh, Renningen, and Bangalore. Liu also serves on the technical program committees of several top-tier computer science conferences. He has won the Bosch North America Inventor of the Year Award for 3D maps (2016), as well as Best Paper Awards (2018, 2020) and a Best Paper Honorable Mention Award (2016) for big data visual analytics at the IEEE Visualization Conference (VAST).

Liu received his PhD and MSc degrees in computer science from Carnegie Mellon University. He also holds a BSc degree in computer science from Zhejiang University in Hangzhou, China.

Keynote: Human-Assisted AI – A Visual Analytics Approach to Addressing Industrial AI Challenges

Domain knowledge offers enormous USP (unique selling point) opportunities for industrial AI products and services. However, leveraging domain know-how to enable trustworthy industrial AI products and services with minimal human effort remains a major challenge in both academia and industry. Visual analytics is a promising approach to addressing this problem: it leverages a human-assisted AI framework that combines explainable AI (e.g., semantic representation learning), data visualization, and user interaction. In this talk, I will demonstrate the effectiveness of this approach using Bosch Research’s recent innovations, which have been successfully applied to several key industrial AI domains such as Smart Manufacturing (I4.0), Autonomous Driving, Driver Assistance, and IoT. Some of the highlighted innovations will also be featured at upcoming academic venues. In particular, I will share some of the key insights, from a requirements and systems perspective, gained when our award-winning research was transferred to or deployed in real-world industrial AI products and services.

Markus Langer, Ph.D.

Saarland University

About Markus Langer

Dr. Markus Langer is a postdoctoral researcher and research associate at the Department of Work and Organizational Psychology at Saarland University. In his research, he integrates theories from work and organizational psychology and human factors to address research questions in the realm of algorithmic decision-making. His research interests cover trust in artificial intelligence, psychological dimensions of explainable AI, and human-system collaboration in decision-making.

Keynote: Psychological Dimensions of Explainability

Although there are already several decades of research on explainable artificial intelligence (XAI) in computer science, the need for multi-disciplinary perspectives on this topic has only recently received increasing attention. In this talk, I will introduce a psychological perspective on XAI. Specifically, I will provide an overview of psychological theories that – applied to human-computer interaction – can be used to derive hypotheses about the possible effects of explanatory information on psychological variables (e.g., trust, perceived justice).

Accepted Papers

Explainability auditing for intelligent systems: A rationale for multi-disciplinary perspectives

Authors: Markus Langer, Kevin Baum, Kathrin Hartmann, Stefan Hessel, Timo Speith and Jonas Wahl

Abstract: National and international guidelines for trustworthy artificial intelligence (AI) consider explainability to be a central facet of trustworthy systems. This paper outlines a multi-disciplinary rationale for explainability auditing. Specifically, we propose that explainability auditing can ensure the quality of explainability of systems in applied contexts and can be the basis for certification as a means to communicate whether systems meet certain explainability standards and requirements. Moreover, we emphasize that explainability auditing needs to take a multi-disciplinary perspective, and we provide an overview of four perspectives (technical, psychological, ethical, legal) and their respective benefits with respect to explainability auditing.

Cases for Explainable Software Systems: Characteristics and Examples

Authors: Mersedeh Sadeghi, Verena Klös and Andreas Vogelsang

Abstract: The need for systems to explain behavior to users has become more evident with the rise of complex technology like machine learning or self-adaptation. In general, the need for an explanation arises when the behavior of a system does not match the user’s expectation. However, there may be several reasons for a mismatch, including errors, goal conflicts, or multi-agent interference. Given the various situations, we need precise and agreed descriptions of explanation needs as well as benchmarks to align research on explainable systems. In this paper, we present a taxonomy that structures needs for an explanation according to different reasons. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the requirements elicitation for explanation capabilities of interactive intelligent systems, and our explanation cases form the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community to align future research on explainable systems.

Can Explanations Support Privacy Awareness? A Research Roadmap

Authors: Wasja Brunotte, Larissa Chazette and Kai Korte

Abstract: Using systems as support tools for decision-making is a common part of a citizen’s daily life. Systems support users in various tasks, collecting and processing data to learn about a user and provide more tailor-made services. This data collection, however, means that users’ privacy sphere is increasingly at stake. Informing the user about what data is collected and how it is processed is key to reaching transparency, trustworthiness, and ethics in modern systems. While laws and regulations have come into existence to inform the user about privacy terms, this information is still conveyed to the user in a complex and verbose way, making it unintelligible to them. Meanwhile, explainability is seen as a way to disclose information about a system or its behavior in an intelligible manner. In this work, we propose explanations as a means to enhance users’ privacy awareness. As a long-term goal, we want to understand how to achieve more privacy awareness with respect to systems and develop heuristics that support it, helping end-users to protect their privacy. We present preliminary results on private sphere explanations and outline our research agenda towards this long-term goal.

Holistic Explainability Requirements for End-to-End Machine Learning in IoT Cloud Systems

Authors: My Linh Nguyen, Thao Phung, Duong-Hai Ly and Hong-Linh Truong

Abstract: End-to-end machine learning (ML) in Internet of Things (IoT) Cloud systems consists of multiple processes, covering data, model, and service engineering, and involves multiple stakeholders. Therefore, to be able to explain ML to relevant stakeholders, it is important to identify explainability requirements in a holistic manner. In this paper, we present our methodology for identifying explainability requirements for end-to-end ML when developing ML services to be deployed within IoT Cloud systems. We identify and classify explainability requirements through (i) the involvement of relevant stakeholders, (ii) end-to-end data, model, and service engineering processes, and (iii) multiple explainability aspects. We illustrate our work with a case of predictive maintenance for Base Transceiver Stations (BTS) in the telco domain.

On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness

Authors: Lena Kästner, Markus Langer, Veronika Lazar, Astrid Schomaecker, Timo Speith and Sarah Sterz

Abstract: Recently, requirements for the explainability of software systems have gained prominence. One of the primary motivators for such requirements is that explainability is expected to facilitate stakeholders' trust in a system. Although this seems intuitively appealing, recent psychological studies indicate that explanations do not necessarily facilitate trust. Thus, explainability requirements might not be suitable for promoting trust.

One way to accommodate this finding is, we suggest, to focus on trustworthiness instead of trust. While these two may come apart, we ideally want both: a trustworthy system and the stakeholder's trust. In this paper, we argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness -- and that a system's explainability can crucially contribute to its trustworthiness.

Towards Perspicuity Requirements

Authors: Sarah Sterz, Kevin Baum, Anne Lauber-Rönsberg and Holger Hermanns

Abstract: System quality attributes like explainability, transparency, traceability, explicability, interpretability, understandability, and the like are being given increasing weight, both in research and in industry. All of these attributes can be subsumed under the term "perspicuity". We argue in this vision paper that perspicuity is to be regarded as a meaningful and distinct class of quality attributes from which new requirements along with new challenges arise, and that perspicuity as a requirement is needed for legal, societal, and moral reasons, as well as for reasons of consistency within requirements engineering.

A Quest of Self-Explainability: When Causal Diagrams meet Autonomous Urban Traffic Manoeuvres

Authors: Maike Schwammberger

Abstract: While autonomous systems are increasingly capturing the market, they are also becoming more and more complex. Thus, the (self-)explainability of these complex and adaptive systems becomes ever more important. We introduce explainability to our previous work on formally proving properties of autonomous urban traffic manoeuvres. We build causal diagrams by connecting the actions of a crossing protocol with their reasons and derive explanation paths from these diagrams. We strive to bring our formal methods approach together with requirements engineering approaches by suggesting the use of run-time requirements engineering to update our causal diagrams at run-time.

Our Sponsors