Authors: Bellas, Francisco; Ooge, Jeroen; Roddeck, Lezel; Rashheed, Hasan Abu; Skenduli, Marjana Prifti; Masdoum, Florent; Zainuddin, Nurkhamimi bin; Gori, Jessica Niewint; Costello, Eamon; Kralj, Lidija; Dcosta, Deepti Teresa; Katsamori, Dora; Neethling, Darren; Maat, Sarah ter; Saurabh, Roy; Alasgarova, Rena; Radaelli, Elena; Stamatescu, Ana; Blazic, Arjana; Attwell, Graham; Tamoliūnė, Giedrė; Tziampazi, Theodora; Kreinsen, Moritz; Alves Lopes, António; Dieguez, Jose Vinas; Obae, Cristina

Date: 2025-04-30

URI: http://hdl.handle.net/10400.26/57757

Abstract: Explainable artificial intelligence (XAI) is a sub-field of artificial intelligence (AI) that aims to explain why an AI-based system takes a decision or produces a given output (TechDispatch, 2023). The search for meaningful explanations is not new in AI, but it was long mainly a technical concern for developers seeking reliable results from their AI systems so that end users in specific domains would accept them (Ali et al., 2023). The rapid advance of AI technology in recent years has turned these systems into general-purpose digital tools, and new considerations have emerged in this realm. In terms of ethical AI, the Ethics Guidelines for Trustworthy AI, published in 2019 by the European Commission's High-Level Expert Group on AI, established seven key requirements for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability.

Language: eng

Subjects: Artificial intelligence; Education; Digital education; European Union

Title: Explainable AI in education: fostering human oversight and shared responsibility

Type: text