Publication

Explainable AI in education: fostering human oversight and shared responsibility

File: XAI report.pdf (5.96 MB, Adobe PDF)

Abstract(s)

Explainable artificial intelligence (XAI) is a sub-field of artificial intelligence (AI) that aims to explain why an AI-based system takes a decision or produces a given output (TechDispatch, 2023). The search for meaningful explanations is not new in AI, but it has mainly been a technical concern for developers seeking reliable results from their AI systems so that those results could be accepted by end users in specific domains (Ali et al., 2023). Rapid advances in AI technology in recent years have turned these systems into general-purpose digital tools, and new considerations have arisen as a result. In terms of ethical AI, the Ethics guidelines for trustworthy AI, published in 2019 by the High-Level Expert Group on AI of the European Commission, established seven key requirements for trustworthy AI: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability.

Keywords

Artificial intelligence; Education; Digital education; European Union
