Computer Graphics Laboratory ETH Zurich


Computational Education using Latent Structured Prediction

T. Käser, A. G. Schwing, T. Hazan, M. Gross

Proceedings of Artificial Intelligence and Statistics (AISTATS) (Reykjavik, Iceland, April 22-25, 2014), pp. 540-548

Abstract

Computational education offers an important add-on to conventional teaching. To provide optimal learning conditions, accurate representation of students' current skills and adaptation to newly acquired knowledge are essential. To obtain sufficient representational power, we investigate the suitability of general graphical models and discuss adaptation by learning the parameters of a log-linear distribution. For interpretability, we propose to constrain the parameter space a priori by leveraging domain knowledge. We show the benefits of general graphical models and of regularizing the parameter space by evaluating our models on data collected from computational education software for children having difficulties in learning mathematics.

@inproceedings{kae14a,
author = {T. K{\"a}ser and A. G. Schwing and T. Hazan and M. Gross},
title = {{Computational Education using Latent Structured Prediction}},
booktitle = {Proceedings of Artificial Intelligence and Statistics (AISTATS)},
year = {2014},
pages = {540--548},
}
[Download BibTeX]

Overview

Arithmetic skills are essential in modern society, but many children experience difficulties in learning mathematics. Computer-based learning systems have the potential to offer an inexpensive extension to conventional education by providing a fear-free learning environment. Effective teaching requires adaptation to the user's knowledge, and a variety of methods are currently employed to model user knowledge and behavior; a popular approach is probabilistic models such as Hidden Markov Models (HMMs). In this work, we introduce a framework that copes with more complex models and show how to obtain interpretable results. In contrast to previous work on parameter learning with tree-structured models like HMMs, we opt for more complex loopy parameterizations, while noting that learning and inference then become challenging. We incorporate a priori domain expert knowledge via regularization with constraints to naturally enforce interpretability.
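The core idea of constrained parameter learning for a log-linear model can be illustrated with a small sketch. This is not the paper's actual model or code; it is a minimal, hypothetical example in which a binary answer outcome depends on a latent skill state, and a domain-knowledge constraint (a learnt student is at least as likely to answer correctly as an unlearnt one) is enforced by projecting the parameters after each gradient step:

```python
import numpy as np

# Illustrative sketch (not the authors' code): fit a log-linear model
# p(y | s; theta) proportional to exp(theta[s, y]) for a binary outcome
# y (0 = wrong, 1 = correct) given a latent skill state s (0 = unlearnt,
# 1 = learnt). Domain knowledge is enforced by projection onto the
# constraint set after every gradient step.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_constrained(states, outcomes, lr=0.5, steps=200):
    theta = np.zeros((2, 2))            # theta[s, y]
    for _ in range(steps):
        probs = softmax(theta)          # current p(y | s) for both states
        grad = np.zeros_like(theta)
        for s, y in zip(states, outcomes):
            grad[s, y] += 1.0           # empirical feature count
            grad[s] -= probs[s]         # expected feature count
        theta += lr * grad / len(states)
        # project onto the constraint P(correct | learnt) >= P(correct |
        # unlearnt), i.e. the logit margin of the learnt state must be at
        # least that of the unlearnt state; if violated, average margins.
        d0 = theta[0, 1] - theta[0, 0]
        d1 = theta[1, 1] - theta[1, 0]
        if d1 < d0:
            m = 0.5 * (d0 + d1)
            theta[0, 1] = theta[0, 0] + m
            theta[1, 1] = theta[1, 0] + m
    return softmax(theta)

# toy data: unlearnt students answer correctly 30% of the time,
# learnt students 90% of the time
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=500)
outcomes = (rng.random(500) < np.where(states == 1, 0.9, 0.3)).astype(int)
p = fit_constrained(states, outcomes)
```

In the paper's setting the parameterization is loopier and the constraints come from pedagogical domain experts, but the mechanism is the same: the projection keeps every intermediate solution interpretable, at the cost of a (here trivial) projection step per update.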



Results

We evaluate our approach on real data. The input logs stem from a computer-based adaptive training environment for children with difficulties in learning mathematics. The data was collected in a multi-center user study in Germany and Switzerland. Of the 126 participating children (87 females, 39 males), 57 were diagnosed with developmental dyscalculia and 69 were control children.

We compare the prediction accuracy of our learning method, with and without constraints, to previous work employing expert-chosen parameters and to work applying HMMs. Our results demonstrate that introducing domain knowledge in the form of parameter constraints has a two-fold benefit. On one hand, the parameter constraints guarantee an interpretable model. On the other hand, the proposed restrictions improve the error metrics. Restricting the parameter space is particularly beneficial for more complex models as well as for more difficult skills. For difficult skills, where children change from the unlearnt to the learnt state only after some training time, the unconstrained optimization converges to a solution close to a uniform distribution over correct and wrong outcomes, while the introduced domain knowledge enables more precise modeling of learning.
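For context, the HMM baseline referenced above predicts a student's next answer by filtering a latent skill state over practice opportunities. The following is a hedged, knowledge-tracing-style sketch with made-up parameters, not the paper's model or its learned values:

```python
import numpy as np

# Hedged sketch of an HMM-style baseline (knowledge-tracing flavour):
# a latent skill state evolves over practice opportunities, and the
# forward algorithm predicts the probability of the next correct answer.
# All numbers below are illustrative, not learned from the study data.

def predict_next_correct(obs, p_init, trans, emit):
    """P(next answer correct | observed answer sequence).

    obs   : sequence of 0/1 outcomes (1 = correct)
    p_init: initial distribution over (unlearnt, learnt)
    trans : trans[i, j] = P(next state j | state i)
    emit  : emit[s, y]  = P(outcome y | state s)
    """
    belief = p_init.copy()
    for y in obs:
        belief = belief * emit[:, y]   # condition on observed outcome
        belief /= belief.sum()
        belief = trans.T @ belief      # advance the latent state
    return float(belief @ emit[:, 1])

p_init = np.array([0.8, 0.2])          # most students start unlearnt
trans  = np.array([[0.7, 0.3],         # unlearnt may become learnt
                   [0.0, 1.0]])        # no forgetting, for simplicity
emit   = np.array([[0.7, 0.3],         # unlearnt: 30% correct
                   [0.1, 0.9]])        # learnt:   90% correct

after_successes = predict_next_correct([1, 1, 1], p_init, trans, emit)
after_failures  = predict_next_correct([0, 0, 0], p_init, trans, emit)
```

A tree-structured model like this stays easy to train but cannot express dependencies between skills; the loopy parameterizations advocated in the paper trade that tractability for representational power.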

Downloads

Download Paper
[PDF]