ESSA - DF - Reports
Recent Submissions
- Explainable AI in education: fostering human oversight and shared responsibility
  Publication . Bellas, Francisco; Ooege, Jeroen; Roddeck, Lezel; Rashheed, Hasan Abu; Skenduli, Marjana Prifti; Masdoum, Florent; Zainuddin, Nurkhamimi bin; Gori, Jessica Niewint; Costello, Eamon; Kralj, Lidija; Dcosta, Deepti Teresa; Katsamori, Dora; Neethling, Darren; Maat, Sarah ter; Saurabh, Roy; Alasgarova, Rena; Radaelli, Elena; Stamatescu, Ana; Blazic, Arjana; Attwell, Graham; Tamoliūnė, Giedrė; Tziampazi, Theodora; Kreinsen, Moritz; Alves Lopes, António; Dieguez, Jose Vinas; Obae, Cristina
  Explainable artificial intelligence (XAI) is a sub-field of artificial intelligence (AI) that aims to explain why an AI-based system makes a particular decision or produces a particular output (TechDispatch, 2023). The search for meaningful explanations is not new in the field of AI, but it was long a primarily technical concern for developers seeking reliable results from their AI systems, so that those results could be accepted by end users in specific domains (Ali et al., 2023). The rapid advance of AI technology in recent years has turned these systems into general-purpose digital tools, and new considerations have arisen as a result. In terms of ethical AI, the Ethics Guidelines for Trustworthy AI, published in 2019 by the European Commission's High-Level Expert Group on AI, established seven key requirements for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability.