Abstract:
Providing personalised, timely feedback that addresses learners’ confusion can enhance
engagement and foster deeper understanding in large-scale online courses, particularly Massive Open
Online Courses (MOOCs). This goal aligns with a key objective within the Learning Analytics (LA)
community. The advent of Generative Artificial Intelligence (GenAI) tools offers the potential to
identify learners’ confusion across vast numbers of discussion posts and to provide learners with
rapid, automatically generated, adaptive feedback. However, a lack of trust in AI-generated content among
educators and learners is an obstacle to building effective GenAI-based LA solutions. This paper
discusses the potential of enhancing trust in GenAI tools by improving the transparency and
explainability of large language models (LLMs), a foundation of GenAI. We illustrate this approach
through a pilot study in which we apply an explainable AI (XAI) method, Integrated Gradients, to
decipher LLM-based predictions regarding learners’ confusion in MOOC discussions. The findings
suggest that the XAI method can identify word-level indicators of confusion in MOOC messages with
promising reliability. The paper concludes by advocating for the integration of XAI methods into GenAI
applications, with the aim of fostering wider acceptance and greater efficacy of future GenAI-based LA solutions.
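
To make the abstract's core idea concrete: Integrated Gradients attributes a classifier's prediction to its input features by accumulating gradients along a path from a baseline input to the actual input. Below is a minimal sketch of how word-level attributions for a "confused" prediction could be computed; the paper does not specify its implementation, so the Captum library, the `bert-base-uncased` model, the [PAD]-token baseline, and the label mapping (class 1 = "confused") are all illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from captum.attr import LayerIntegratedGradients

MODEL_NAME = "bert-base-uncased"  # stand-in; the paper does not name its LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def forward_logits(input_ids, attention_mask):
    # Return class logits so Captum can attribute a specific output class.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

text = "I am totally lost on how gradient descent converges."
enc = tokenizer(text, return_tensors="pt")
# Baseline: the same sequence with every token replaced by [PAD].
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

# Attribute through the embedding layer, since discrete token ids
# are not differentiable inputs.
lig = LayerIntegratedGradients(forward_logits, model.bert.embeddings)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
    target=1,    # assumed label mapping: class 1 = "confused"
    n_steps=50,  # interpolation steps along the integration path
)

# Collapse the embedding dimension to one attribution score per token;
# high-scoring tokens are candidate word-level indicators of confusion.
scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for token, score in zip(tokens, scores):
    print(f"{token:>12}  {score.item():+.4f}")
```

In this sketch, inspecting the per-token scores is what yields the word-level indicators the abstract refers to; a fine-tuned confusion classifier would replace the untrained `num_labels=2` head used here for illustration.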