Call for Papers
Please find below the Call for Papers for the First International Workshop on Requirements Engineering for Explainable Systems (RE4ES).
We live in the age of autonomous and complex software systems. Research advances the state of the art at great speed in pursuit of higher degrees of system autonomy. With the number and impact of such systems rising, it is increasingly important to discuss the qualities required to meet high software quality standards.
Quality aspects such as ethics, fairness, and transparency have been discussed as essential for trustworthy systems. Explainability has been identified not only as a means to achieve all three of these in systems, but also as a way to actively foster users' sense of trust. Like other quality aspects, explainability must be discovered and treated during the design of those systems. As requirements engineers, we "translate" these aspects into requirements in a project context. As simple as that sounds, reality proves otherwise: quality requirements are often a challenge in practice, and explainability is no different.
Explainability has become a hot topic, and communities from different areas of knowledge (e.g., machine learning, HCI, philosophy, psychology, cyber-physical systems, and recommender systems) have been researching it actively. Yet the requirements engineering (RE) research community seems to be less concerned with the issue. At the same time, our community is extremely rich in methods and techniques that facilitate software development. These methods and techniques could add great value to explainability research and also ensure that we develop them in step with the needs of other communities.
Based on this motivation, the workshop has three objectives: advancing RE for explainable systems, community building, and interdisciplinary exchange. The workshop is intended as a starting point for exploring synergies between the RE community and other communities that are already actively researching explainability.
Topics of Interest
- Explanation visualization and interfaces
- Elicitation and analysis techniques for explainable systems
- Architectures for explainable systems
- Challenges for explainability in practice
- Specification and validation of explainability requirements
- Perspectives and views of different stakeholders
- Trade-offs between explainability and antagonistic quality aspects
The workshop solicits two contribution types:
- Research papers (max. 6 pages w/o references) describe ongoing research or preliminary results aligned with the workshop’s topics of interest. They should already include a proof of concept or a clear vision of what the research aims to achieve. They may present improvements of existing approaches, empirical studies, or experience reports (e.g., of applied work in industry).
- Position or vision papers (max. 4 pages w/o references) serve to foster discussion of hot, relevant topics in the field. They may outline rough but plausible visions of the future of explainability, as well as problem statements describing challenges in industrial settings.
Submissions should clearly describe how they relate to the workshop scope and objectives. It would be beneficial to elaborate on the following points:
- the novelty of the approach,
- the authors’ understanding of the RE activities they wish to improve or support,
- the challenges to be solved in the development of explainable systems, and
- how the approach is or can be applied in practice, or, if the paper presents no applicable approach, how industry can benefit from the knowledge it contributes.
Papers will be peer-reviewed, and accepted papers will appear in the IEEE digital library.
Important Dates – AoE (UTC-12h)
- Deadline for submission: Thursday, July 01, 2021 (extended from Thursday, June 24, 2021)
- Notification of accepted papers: Friday, July 23, 2021
- Camera-ready paper due: Thursday, August 12, 2021
Submit via EasyChair: https://easychair.org/conferences/?conf=re4es21