Date of Award
Spring 2025
Degree Type
Thesis
Degree Name
Bachelor of Science
Department
Computer Science
First Advisor
Andrew Forney, Ph.D.
Abstract
Modeling complex hierarchical decision systems makes it possible to predict the effects of policy changes before their enactment, such as how new laws might influence students within an educational system. However, distinguishing the effects of policies on individuals from their effects on populations requires a structured assertion of the system’s causal dynamics. We therefore propose a novel reinforcement learning framework that integrates causal modeling to optimize decision-making in multi-agent environments such as schools. The causal model captures relationships between the individual and population levels, providing agents with a structured understanding of how their actions propagate through the system. Compared to traditional reinforcement learning methods, our framework offers improved explainability by grounding decision-making in a transparent causal structure. This alignment with human reasoning not only enhances interpretability but also facilitates more effective policy development. The framework is validated through simulation studies and compared against traditional, model-free reinforcement learning approaches to assess its effectiveness in complex systems.
Recommended Citation
Dhingra, Vivek; Bazile, Brandon; and Forney, Andrew, "Causal Models for Realistic Cognitive Reinforcement" (2025). Computer Science Undergraduate Theses. 2.
https://digitalcommons.lmu.edu/cs_theses/2