News


CATALPA Lecture Series: Tanja Käser on educational technology and models of learning

[22.01.2026]

Generalizability versus interpretability of learning models — that was the topic of Tanja Käser's presentation as part of the CATALPA Lecture Series on January 21. Modeling learners' knowledge and behavior is at the heart of educational technology, yet current models often lack either generalizability or interpretability. Whether and how AI can help here remains to be seen.


Tanja Käser and Marcus Specht in front of the lecture screen. Photo: CATALPA
Prof. Dr. Marcus Specht welcomed Prof. Dr. Tanja Käser to her lecture.

Insights into science from international colleagues and opportunities for exchange—that's the goal of the CATALPA Lecture Series. The latest guest: Prof. Dr. Tanja Käser, a new member of the CATALPA advisory board and tenure-track assistant professor at the EPFL School of Computer and Communication Sciences (IC) in Lausanne, where she heads the ML4ED Laboratory (Machine Learning for Education Laboratory). Her research lies at the intersection of machine learning, data mining, and education. She is particularly interested in creating accurate models of human behavior and learning.

Abstract

Modeling learners’ knowledge, behavior, and strategies is at the heart of educational technology. Learner models serve as a basis for adapting the learning experience to students’ needs and supporting teachers in classroom orchestration. Consequently, a large body of research has focused on creating accurate models of student knowledge and behaviors. However, current modeling approaches are still limited: they are either defined for specific and well-structured domains (e.g., algebra, vocabulary learning), requiring substantial work from experts and limiting generalizability, or they lack interpretability. Recent advances in generative AI, in particular large language models (LLMs), have the potential to address these constraints. However, LLMs lack alignment with educational goals and grounded knowledge.

In this talk, I will discuss the key challenges in developing generalizable and explainable models, and our solutions to address them, including models tracking learning in open-ended environments and generalizing between different environments and populations. I will present our work on explainable AI, including a rigorous evaluation of existing approaches, the development of inherently interpretable models, as well as studies on effectively communicating model explanations. Finally, I will show some of our recent results combining “traditional” modeling approaches and LLMs to provide interpretable feedback and explanations while not compromising on model trustworthiness.

Three key takeaways from the lecture and discussion

  1. Tanja Käser in front of the lecture screen. Photo: CATALPA
    The presentation sparked a lively discussion afterwards.
    The catch with AI-based personalization:
    As much potential as large language models offer for personalized learning support, their output always reflects an average over their training data. This conflicts with individualization – a point that must be taken into account when developing AI-supported learning systems.
  2. Interpretability ≠ Explainable AI (or is it?)
    In practice, interpretability and explainable AI are often used synonymously. The discussion made it clear that a more precise distinction is necessary here – especially when it comes to not only making models explainable, but also designing them to be inherently interpretable from the ground up.
  3. Trust is built through good communication
    Studies by Tanja Käser show that learners' trust in AI-based feedback still has room for improvement. A key lever is clear explanation and transparent communication: explanations are not an add-on, but a central building block for trustworthy AI in education.
Sandra Kirschbaum | 23.01.2026