TRUST-AI 2026: The European Workshop on Trustworthy AI
IJCAI/ECAI 2026, Bremen, Germany, August 15-17, 2026

| Conference website | https://sites.google.com/view/trust-ai-2026 |
| Submission link | https://easychair.org/conferences/?conf=trustai2026 |
| Submission deadline | May 10, 2026 |
CALL FOR PAPERS: TRUST-AI 2026 – THE SECOND EUROPEAN WORKSHOP ON TRUSTWORTHY AI
Researchers and practitioners working on trustworthy AI are invited to submit papers to TRUST-AI 2026, the Second European Workshop on Trustworthy AI, organized as part of the International Joint Conference on Artificial Intelligence (IJCAI/ECAI 2026) in Bremen, Germany.
Trustworthy AI is increasingly important as AI is integrated into the services and practices of an ever-broadening range of application areas. We understand trustworthy AI from a human-centred perspective, where trustworthiness is considered over the lifetime of the AI system, spanning the system's technical attributes, business and societal implications, and adherence to ethical and legal requirements. A broad set of methodological and technical approaches is needed to adequately assess and optimize AI trustworthiness. Trustworthy AI may be achieved and managed within an AI risk management approach, in line with existing frameworks and approaches to trustworthy AI.
The knowledge base and frameworks for trustworthy AI are developing rapidly. Furthermore, trustworthy AI is increasingly required through evolving policies and legislation, as seen for example in the European AI Act. It is therefore important to develop a common understanding of how to realize trustworthy AI in business and societal contexts. TRUST-AI aims to be an arena for sharing and discussing trustworthy AI, grounded in academic research and industrial practice.
- LOCATION: Bremen, Germany, as part of IJCAI/ECAI 2026
- DATE: August 15-17, 2026 (exact dates to be decided)
- WORKSHOP FORMAT: On-site attendance, as part of IJCAI/ECAI 2026
- SUBMISSION DEADLINE: May 10, 2026
SUBMISSION CATEGORIES
Participants are invited to submit position or short papers to be presented at the workshop. Papers should address one of the workshop topic areas.
- POSITION PAPERS (2-4 pages): Papers presenting a specific position or open questions in need of reflection or discussion. Could also include case experiences or planned research.
- SHORT PAPERS (5-9 pages): Papers presenting theoretical contributions, case experiences, or findings from empirical studies. Could also include work in progress.
Concerning paper length: note that the workshop paper template has room for approximately 500 words per page.
PROCESSING OF SUBMISSIONS
Submissions will be reviewed by three independent reviewers. The review process is single-blind, meaning that author information is included in the submissions and visible to reviewers.
Accepted position papers will be published on the workshop website. Accepted short papers will be submitted for publication in CEUR Workshop Proceedings (https://ceur-ws.org/).
IMPORTANT DATES
- May 10: Submission deadline
- June 10: Author notification
- July 10: Final version of papers
- August 15-17: Workshop (final dates to be decided)
KEY TOPICS OF INTEREST
TRUST-AI aims to foster reflection and discussion of trustworthy AI as it is manifested in research projects and case studies. Specifically, we encourage participants to contribute on the following topic areas:
- Human-Centered Trustworthy AI: How to incorporate user perspectives and values in the assessment and optimization of trustworthy AI. How to manage conflicting priorities between different stakeholders.
- Technological Advancements for Trustworthy AI: Emerging technologies to support the development, deployment, and verification of trustworthy AI.
- Risk Management for Trustworthy AI: Frameworks for identifying, assessing, and mitigating risks associated with AI trustworthiness.
- The Ethical Basis of Trustworthy AI: Reflections on the ethical foundations of trustworthy AI, its principles and values.
- Assessment of Trustworthy AI: Methods, tools, and best practices for trustworthiness assessment.
- Ethical and Legal Considerations for Trustworthy AI: Ethical and legal requirements for trustworthy AI and means to assess and support compliance.
- Trustworthiness Optimization throughout the AI Lifecycle: Ensuring and maintaining trustworthiness throughout AI development and deployment.
ORGANIZERS AND PROGRAM COMMITTEE
The organizers of TRUST-AI are:
- Asbjørn Følstad, SINTEF, Norway
- Gregoris Mentzas, National Technical University of Athens, Greece
Program committee members are:
- Dimitris Apostolou, ICCS & University of Piraeus, Greece
- Steve Taylor, University of Southampton, UK
- Eleni Tsalapati, ATC, Greece
- Giannis Stamatellos, Institute of Philosophy & Technology, Greece
- Andrea Palumbo, KU Leuven, Belgium
- Gabriel González Castañé, BDVA, Belgium
- Henrik Junklewitz, Fraunhofer, Germany
