Cristina Conati

Professor


Graduate Student Supervision

Doctoral Student Supervision

Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.

User characteristics and eye tracking to inform the design of user-adaptive information visualizations (2019)

Amidst an ever-increasing amount of digital information, information visualizations have become a fundamental tool to support tasks for discovering, presenting, and understanding the many underlying trends in this data. Ongoing efforts to improve the effectiveness of visualizations, however, have typically been limited to their design and evaluation following a one-size-fits-all model, meaning that they do not take into account the individual differences of their users. There is mounting evidence, though, that user differences such as cognitive abilities, personality traits, learning abilities, and preferences can significantly influence user performance and satisfaction during information visualization tasks, thus motivating a need for personalization. In this thesis, our primary goal is to inform the design of user-adaptive visualizations, namely, visualizations that aim to recognize and adapt to each user’s specific needs. We conducted three different user studies to address several key questions for designing user-adaptive visualizations: i) What characteristics of the user should be considered to drive adaptation? ii) How can a visualization system adequately adapt to these user characteristics? and iii) When should adaptations be delivered in order to maximize effectiveness and reduce intrusiveness?

In our first study, we tested the effectiveness of highlighting interventions on bar chart visualizations and examined the role that several cognitive abilities may have on visualization processing. Results from this study provide contributions showing that: highlighting relevant information in real time can be beneficial to bar chart processing; certain user characteristics may only warrant adaptation as task complexity increases; users with low Verbal Working Memory may need interventions that facilitate processing of the visualization’s legend; and adapting to users’ level of Evolving Skill with a visualization is possible using eye tracking to make real-time predictions of this user characteristic.

In our second and third studies, we investigated visualizations embedded in narrative text, referred to as Magazine Style Narrative Visualizations (MSNVs). Results from these two studies provide contributions showing that: Verbal Working Memory and English Reading Ability can impact users’ ability to effectively process MSNVs, supporting a need for adaptation; and, in particular, low Reading Ability users might benefit from adaptations helping them locate relevant information in the visualizations.

View record

A data mining approach for adding adaptive interventions to exploratory learning environments (2017)

Due to the open-ended nature of the interaction with Exploratory Learning Environments (ELEs), it is not trivial to add mechanisms for providing adaptive support to users. Our goal is to devise and evaluate a data mining approach for providing adaptive interventions that help users achieve better task performance during interaction with ELEs. The general idea of this thesis is as follows: in an exploratory and open-ended environment, we collect interaction data from users while they are working with the system, and then find representative patterns of behavior for different user groups that achieved various levels of task performance. We use these patterns to provide adaptive real-time interventions designed to suggest or enforce the effective interaction behaviors while discouraging or preventing the ineffective ones. We test and confirm the hypothesis that, as a result of these interventions, the average learning performance of new users who work with the adaptive version of this ELE is significantly higher than that of users working with the non-adaptive version.

We use an interactive simulation for learning Constraint Satisfaction Problems (CSP), the AIspace CSP applet, as the test-bed for our research and propose a framework, called the User Modeling and Adaptation (UMA) framework, which covers the entire process described above. The contributions of this thesis are two-fold:

i) It contributes to Educational Data Mining (EDM) research by devising, modifying, and testing different techniques and mechanisms for a complete data-mining-based approach to delivering adaptive interventions in ELEs, summarized in the UMA framework. The UMA framework consists of three phases: Behavior Discovery, User Classification, and Adaptive Support. We assessed each of the above phases in a series of user studies. This work is the first to fully evaluate and provide positive evidence for the use of a data mining approach for deriving and delivering adaptive interventions in ELEs with the goal of improving the user’s performance.

ii) It also contributes to the user modeling and user-adapted interaction community by providing new evidence for the usefulness of eye-gaze data for the purpose of predicting the learning performance of users while interacting with an ELE.
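The three UMA phases described above can be illustrated with a minimal, self-contained sketch. This is not the thesis's implementation: the feature names, the toy vectors, and the use of k-means with a nearest-centroid classifier are illustrative assumptions standing in for the actual Behavior Discovery and User Classification techniques.

```python
import math

# Phase 1 -- Behavior Discovery: cluster users' interaction feature
# vectors into behavior groups (minimal k-means, fixed seeds for determinism).
def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(pts):
    return [sum(xs) / len(pts) for xs in zip(*pts)]

def kmeans(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

# Phase 2 -- User Classification: assign a new user to the nearest behavior
# cluster, each labeled by the learning performance of its members.
def classify(user_vector, centroids, labels):
    idx = min(range(len(centroids)), key=lambda i: dist(user_vector, centroids[i]))
    return labels[idx]

# Phase 3 -- Adaptive Support: intervene only for users whose behavior
# matches the low-performing group.
def intervention(label):
    return "show hint" if label == "low" else None

# Hypothetical interaction vectors: [pause-rate, action-rate]
high = [[0.8, 0.2], [0.9, 0.3], [0.85, 0.25]]
low = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]
centroids, _ = kmeans(high + low, centroids=[[0.8, 0.2], [0.2, 0.8]])
labels = ["high", "low"]
print(intervention(classify([0.18, 0.82], centroids, labels)))  # prints "show hint"
```

In the real framework each phase is substantially richer (e.g., association-rule mining of distinguishing behaviors, real-time classification during interaction), but the cluster–classify–intervene pipeline is the same shape.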

View record

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

Classification of Alzheimer's using deep-learning methods on webcam-based gaze data (2023)

There has been increasing interest in non-invasive predictors of Alzheimer’s disease (AD) as an initial screening for this condition. Previous successful attempts leveraged eye-tracking and language data generated during picture narration and reading tasks. These results were obtained with high-end, expensive eye-trackers. We explore classification using eye-tracking data collected with a webcam, where our classifiers are built using a deep-learning approach. Our results show that the webcam gaze classifier is not as good as the classifier based on high-end eye-tracking data, meaning its AU-ROC, sensitivity, and specificity are significantly lower. However, the webcam-based classifier still outperforms a majority-class baseline classifier in terms of AU-ROC, indicating that predictive signals can be extracted from webcam gaze tracking. Our results provide an encouraging proof of concept that webcam gaze tracking should be further explored as an affordable alternative to high-end eye-trackers for the detection of AD.
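AU-ROC, the headline metric above, can be computed directly from classifier scores and labels as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A generic illustration (not the thesis's code; the toy labels and scores are made up):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of positive/negative pairs in which the positive case
    receives the higher score, counting ties as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A majority-class baseline scores every case identically, so it ties
# on every pair and lands at chance level.
print(auroc([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2]))  # 1.0 (perfect ranking)
print(auroc([1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5]))  # 0.5 (chance / majority baseline)
```

This is why "beats the majority-class baseline in AU-ROC" is a meaningful claim: any genuinely predictive signal pushes the score above 0.5.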

View record

Classifying long-term traits from action and eye-tracking data for personalized XAI in an intelligent tutoring system (2023)

There is increasing evidence that, when interacting with an AI system, users may benefit from having personalized explanations of the system’s behavior. Providing such personalized explanations requires that the AI system can assess user properties that are relevant for personalization. In the absence of prior information on such user properties, the system must rely on being able to predict them during the course of the interaction, in order to deliver personalized explanations as needed. In this thesis, we investigate the feasibility of predicting three user traits – conscientiousness, need for cognition, and reading proficiency – that have been shown to impact the effectiveness of explanations when users interact with an Intelligent Tutoring System (ITS). We discuss results on training machine learning models on eye-tracking data, action data, and a combination of both as users interact with the ITS. For the eye-tracking data, we test features generated from summative statistics of a user’s gaze, and features generated from patterns in sequences of areas of interest derived from a user’s gaze. We provide a detailed analysis of the relative efficacy of such models, and show that prediction above baseline classifiers is possible even during early stages of the interaction, which is crucial for the timely personalization of explanations. Lastly, we examine the feature permutation importance of our best models to gain insight into how they work and how they relate to the literature.

View record

Improving prediction of user cognitive abilities and performance for user-adaptive narrative visualizations by leveraging eye-tracking data from multiple user studies (2021)

Previous work leveraged eye-tracking to predict a user’s levels of cognitive abilities and performance while reading magazine style narrative visualizations (MSNVs), a common type of multimodal document which combines text and visualization to narrate a story. The eye-tracking data used for training the classifiers came from a user study, called the control study, where subjects simply read through MSNVs without receiving any type of adaptive guidance, otherwise known as the control condition. The goal was to capture the relationship between users’ normal MSNV processing and their levels of cognitive abilities and performance, and use that to drive personalization. In addition to the control study, two other user studies, known as adaptive studies, were also previously conducted to investigate the benefits of adaptive support. In these studies, subjects were provided with gaze-based interventions to facilitate their processing of the MSNVs.

In the control study, there was no intervention, and the MSNVs did not adapt to the users in any way, because the idea was to make the predictions based on users’ normal, unguided MSNV processing and then use those predictions to deliver the appropriate adaptations. In the adaptive studies, however, interventions influenced the way subjects processed the MSNVs. As a result, their gaze behavior did not represent how they would have behaved in the intended control condition, and a classifier merely trained on the eye-tracking data from these studies would not learn the proper relationship.

In this thesis, we propose different strategies for combining the additional eye-tracking data from the adaptive studies with our original data from the control study to mitigate the potential differences and to form more consistent combinations conducive to improved performance. Our results show that the additional eye-tracking data can significantly improve the accuracy of our classifiers.

View record

A neural architecture for detecting user confusion in eye-tracking data (2020)

Encouraged by the success of deep learning in a variety of domains, we investigate the effectiveness of a novel application of such methods for detecting user confusion with eye-tracking data. We introduce an architecture that uses RNN and CNN sub-models in parallel to take advantage of the temporal and visuospatial aspects of our data. Experiments with a dataset of user interactions with the ValueChart visualization tool show that our model outperforms an existing model based on a Random Forest classifier, resulting in a 22% improvement in combined sensitivity & specificity. This is a larger improvement in performance than that achieved by either the CNN or RNN when considered alone, though all three deep learning models outperform the Random Forest baseline. To investigate this effect and understand the performance increase achieved by the deep learning models, we carried out preliminary investigations using explainable AI methods, from which we derive future directions for exploring performance gains from combining deep learning models.

View record

Toward XAI for Intelligent Tutoring Systems: a case study (2020)

Our research is a step toward understanding when explanations of AI-driven hints and feedback are useful in Intelligent Tutoring Systems (ITS). We added an explanation functionality for the adaptive hints provided by the Adaptive CSP (ACSP) applet, an intelligent interactive simulation that helps students learn an algorithm for constraint satisfaction problems. We present the design of the explanation functionality and the results of an exploratory study to evaluate how students use it, including an analysis of how students’ experience with the explanation functionality is affected by several personality traits and abilities. Our results show a significant impact of a measure of curiosity and the Agreeableness personality trait and provide insight toward designing personalized Explainable AI (XAI) for ITS.

View record

Toward user-adaptive visualizations: further results on real-time prediction of user cognitive abilities from action and eye-tracking data (2017)

Previous work has shown that some user cognitive abilities relevant for processing information visualizations can be predicted from eye tracking data. Performing this type of user modeling is important for devising user-adaptive visualizations that can adapt to a user’s abilities as needed during the interaction. In this thesis, we contribute to previous work by extending the type of visualizations considered and the set of cognitive abilities that can be predicted from gaze data, thus providing evidence on the generality of these findings. We also evaluate how quality of gaze data impacts prediction. Finally, we further extend previous work by investigating interaction data as an alternative source to predict our target user characteristics. We present a formal comparison of predictors based solely on gaze data, on interaction data, or on a combination of the two.

View record

Trade-offs in data representations for learner models in interactive simulations (2016)

Interactive simulations can foster student-driven, exploratory learning. However, students may not always learn effectively in these unstructured environments, so it would be advantageous to provide adaptive support to those who are not using the learning environment effectively. To achieve this, it is helpful to build a user model that can estimate the learner’s trajectories and need for help during interaction. This is challenging, however, because it is hard to know a priori which behaviors are conducive to learning. It is particularly challenging in complex Exploratory Learning Environments (like PhET’s DC Circuit Construction Kit, which is used in this work) because of the large variety of ways to interact. To address this problem, we evaluate multiple representations of student interactions with the simulation that capture different amounts of granularity and feature engineering. We then apply the student modeling framework proposed in [1] to mine the student behaviors and classify learners. Our results indicate that the proposed framework is able to extend to a more complex environment, in that we are able to successfully classify students and identify behaviors intuitively associated with high and low learning. We also discuss the trade-offs between the differing levels of granularity and feature engineering in the tested interaction representations in terms of their ability to evaluate learning and inform feedback.

[1] Samad Kardan and Cristina Conati. 2011. A Framework for Capturing Distinguishing User Interaction Behaviours in Novel Interfaces. Proceedings of the 4th International Conference on Educational Data Mining, 159–168.

View record

Constructing user models from eye gaze data in the domain of information visualization (2015)

A user-adaptive information visualization system capable of learning models of users and the visualization tasks they perform could provide interventions optimized for helping specific users in specific task contexts. This thesis investigates the accuracy of predicting visualization tasks, user performance on tasks, and user traits from gaze data. It is shown that predictions made with a logistic regression model are significantly better than a baseline classifier, with particularly strong results for predicting task type and user performance. Furthermore, classifiers built with interface-independent features are compared with classifiers built with interface-dependent features; interface-independent features are shown to be comparable or superior to interface-dependent ones. Adding highlighting interventions to trials is shown to have an effect on the accuracy of predictive models trained on the data from those trials, and these effects are discussed. The applicability of all results to real-time classification is tested using datasets that limit the amount of observations that are processed into classification features. Finally, trends in features selected by classifiers and classifier accuracies over time are explored as a means to interpret the performance of the tested classification models.

View record

Inferring User Cognitive Abilities from Eye-Tracking Data (2015)

User-adaptive visualization can provide intelligent personalization to aid the user in information processing. The adaptations, in forms as simple as helpful highlighting, are applied based on the user’s characteristics and preferences inferred by the system. Previous work has shown that binary labels of a user’s cognitive abilities relevant for processing information visualizations could be predicted in real time by leveraging user gaze patterns collected via a non-intrusive eye-tracking device. The classification accuracies reported were in the 59–65% range, which is statistically more accurate than a majority-class classifier, but not of great practical significance. In this thesis, we expand on previous work by showing that significantly higher accuracies can be achieved by leveraging summative statistics on a user’s pupil size and head distance to the screen, also collected by an eye tracker. Our experiments show that these results hold for two datasets, providing evidence of the generality of our findings. We also explore the sequential nature of gaze movement by extracting common substring patterns and using the frequency of these patterns as features for classifying a user’s cognitive abilities. Our sequence features are able to classify more accurately than the majority-class baseline, but unable to outperform our best classification model with the summative eye-tracking features.
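The substring-pattern features mentioned above can be sketched as follows: a user's gaze is abstracted into a sequence of area-of-interest (AOI) labels, and the frequencies of short contiguous label patterns become that user's feature vector. The AOI names and normalization here are illustrative assumptions, not the thesis's exact feature definition.

```python
from collections import Counter

def substring_counts(aoi_sequence, length):
    """Count contiguous gaze-pattern substrings of a given length in a
    sequence of area-of-interest (AOI) labels."""
    seq = "".join(aoi_sequence)
    return Counter(seq[i:i + length] for i in range(len(seq) - length + 1))

def pattern_features(aoi_sequence, patterns, length):
    """Frequency of each candidate pattern, normalized by the total number
    of substrings of that length -- one feature vector per user."""
    counts = substring_counts(aoi_sequence, length)
    total = max(sum(counts.values()), 1)
    return [counts[p] / total for p in patterns]

# Hypothetical AOI labels: T = text, L = legend, D = data region.
gaze = list("TLDTLDTTLD")
print(substring_counts(gaze, 2).most_common(2))  # [('TL', 3), ('LD', 3)]
print(pattern_features(gaze, ["TL", "TT"], 2))
```

Vectors built this way can then be fed to any standard classifier, which is how sequence features can be compared head-to-head against summative statistics.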

View record

Eye-tracking as a source of information for automatically predicting user learning with MetaTutor, an intelligent tutoring system to support self-regulated learning (2014)

Student modeling has been gaining interest among researchers recently. A lot of work has been done on exploring the value of interface actions for predicting learning. The focus of this thesis is on using eye-tracking data and action logs for building classifiers to infer a student’s learning performance during interaction with MetaTutor, an Intelligent Tutoring System (ITS) that scaffolds self-regulated learning (SRL). Research has shown that eye tracking can be a valuable source for predicting learning in certain learning environments. In this thesis we extend these results by showing that modeling based on eye-tracking data is a valuable approach to predicting learning for another type of ITS, a hypermedia learning environment. We use data from 50 students (collected by a research team at McGill University, which also designed MetaTutor) to compare the performance of actions and eye-tracking data (1) after a complete interaction, and (2) during interaction, when different amounts of gaze and action data are available. We built several classifiers using common machine learning algorithms and techniques, with feature sets based on (1) eye-tracking data only, (2) action data only, and (3) eye-tracking and action data combined. Our results show that eye-tracking data brings important information to predicting a student’s performance for an ITS supporting SRL, in both overall and over-time analyses. The features used for training classifiers suggest that usage of the SRL tools available in MetaTutor can be a good predictor of learning.

View record

Predicting Affect in an Intelligent Tutoring System (2014)

In this thesis we investigate the usefulness of various data sources for predicting emotions relevant to learning, specifically boredom and curiosity. The data was collected during a study with MetaTutor, an intelligent tutoring system (ITS) designed to promote the use of self-regulated learning strategies. We used a variety of machine learning and feature selection techniques to predict students’ self-reported emotions from eye tracking data, distance from the screen, electrodermal activity, and an ensemble of all three sources. We also examine the optimal amount of interaction time needed to make predictions using each source, as well as which gaze features are most predictive of each emotion. The findings provide insight into how to detect when students disengage from MetaTutor.

View record

The impact of individual differences on visualization effectiveness and gaze behavior: Informing the design of user adaptive interventions (2013)

Research has shown that individual differences can play a role in information visualization effectiveness. Unfortunately, existing results cover only a small subset of the many individual differences that exist, and information visualizations are commonly designed without taking these user differences into account. The aim of this thesis is to investigate the impact of a specific set of individual differences (i.e., user characteristics) in order to identify which of these user differences have an impact on various aspects of information visualization performance. Eye tracking is also employed, in order to see if there is an impact of individual differences on user gaze behavior, both in general and for specific information visualization elements (i.e., legend, labels). In order to gather the necessary data, a user study is conducted in which users are required to complete a series of tasks on two common information visualizations: bar graphs and radar graphs. For each user, the following set of user characteristics is measured: perceptual speed, visual working memory, verbal working memory, and visualization expertise.

Using various statistical models, results indicate that user characteristics do have a significant impact on visualization performance in terms of task completion time, visualization preference, and visualization ease-of-use. Furthermore, it is also found that user characteristics have a significant impact on user gaze behavior, and these individual differences can also influence how a user processes specific elements within a given information visualization.

View record

User Modeling and Data Mining in Intelligent Educational Games: Prime Climb a Case Study (2013)

Educational games are designed to leverage students’ motivation and engagement in game play to deliver pedagogical concepts to the players. Adaptive educational games, in addition, utilize models of student learning to personalize the learning experience according to students’ educational needs. A student model needs to be capable of evaluating the student’s mastery level of the target skills and of providing a reliable basis for generating tailored interventions to meet the user’s needs. Prime Climb, an adaptive educational game for students in grades 5 or 6 to practice number factorization skills, provides a test-bed for research on user modeling and personalization in the domain of educational games. Prime Climb leverages a student model based on a Dynamic Bayesian Network to implement personalization that assists students in practicing number factorization while playing the game. This thesis presents research conducted to improve the student model in Prime Climb by detecting and resolving the issue of degeneracy in the model: a situation in which the model’s accuracy is at its global maximum yet it violates conceptual assumptions about the process being modeled. Several criteria to evaluate the student model are introduced. Furthermore, using educational data mining techniques, different patterns of students’ interactions with Prime Climb were investigated to understand how students with higher prior knowledge or higher learning gain behave differently compared to students with lower prior knowledge and lower learning gain.
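The kind of per-action belief update a Dynamic Bayesian Network student model performs can be sketched with a two-slice, knowledge-tracing-style update: condition the mastery belief on the observed action, then apply a learning transition. The parameter values (slip, guess, learn) and the single-skill simplification are illustrative assumptions, not Prime Climb's actual model, which tracks many interrelated factorization skills.

```python
def update_mastery(p_mastery, correct, slip=0.1, guess=0.2, learn=0.15):
    """One time-slice of a knowledge-tracing-style DBN update:
    apply Bayes' rule to condition the mastery belief on the observed
    action, then apply the transition probability of learning."""
    if correct:
        evidence = p_mastery * (1 - slip)             # mastered and didn't slip
        total = evidence + (1 - p_mastery) * guess    # or unmastered but guessed
    else:
        evidence = p_mastery * slip                   # mastered but slipped
        total = evidence + (1 - p_mastery) * (1 - guess)
    posterior = evidence / total
    # Transition: an unmastered skill may be learned on this step.
    return posterior + (1 - posterior) * learn

# Belief in mastery of one factorization skill after a sequence of moves.
p = 0.3
for outcome in [True, True, False, True]:
    p = update_mastery(p, outcome)
print(round(p, 3))
```

Degeneracy, as discussed above, arises when fitted parameters make such a model predict well while violating assumptions like these (e.g., a slip probability so high that mastery predicts incorrect answers).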

View record

Prime Climb: An Analysis of Attention to Student-Adaptive Hints in an Educational Game (2012)

Prime Climb is an educational game that provides individual support for learning number factorization skills in the form of hints based on a model of student learning. Previous studies with Prime Climb indicated that students may not always be paying attention to the hints, even when they are justified (i.e. based on a student model’s assessment). In this thesis we will discuss the test-bed game, Prime Climb, and our re-implementation of the game which allowed us to modify the game dynamically and will allow for more rapid prototyping in the future. To assist students as they play the game, Prime Climb includes a pedagogical agent which provides individualized support by providing user-adaptive hints. We then move into our work with the eye-tracker to better understand if and how students process the agent’s personalized hints. We will conclude with a user study in which we use eye-tracking data to capture user attention patterns as impacted by factors related to existing user knowledge, hint types, and attitude towards getting help in general. We plan to leverage these results in the future to make hint delivery more effective.

View record

 

Membership Status

Member of G+PS
