Inferring Mental Representations from Gaze Using Vision-Language Models
Ongoing: Using CLIP embeddings to decode the semantic content of mental models from eye-tracking data during naturalistic viewing
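As a rough illustration of the approach (a minimal sketch, not the project's actual pipeline), a gaze-centered patch can be cropped from the current video frame, embedded with CLIP, and scored against candidate concept labels. The 224-pixel crop size, the concept list, and the Hugging Face checkpoint used below are illustrative assumptions.

```python
# Minimal sketch: embed gaze-centered patches with CLIP and score them
# against candidate concept labels. Crop size, concepts, and checkpoint
# are illustrative assumptions, not the project's actual settings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def gaze_patch(frame, x, y, size=224):
    """Crop a square patch centered on the fixation point (x, y), in pixels."""
    half = size // 2
    x, y = int(round(x)), int(round(y))
    return frame.crop((x - half, y - half, x + half, y + half))

@torch.no_grad()
def concept_scores(frame, fixation_xy, concepts):
    """Cosine similarity between the gaze-centered patch and each concept label."""
    patch = gaze_patch(frame, *fixation_xy)
    inputs = processor(text=concepts, images=patch, return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).squeeze(0)   # one similarity score per concept

# Example: which concept best matches what the viewer fixated?
# frame = Image.open("frame_0421.png")
# scores = concept_scores(frame, fixation_xy=(512, 300),
#                         concepts=["a person's face", "a doorway", "a coffee cup"])
```

Aggregating such per-fixation similarity profiles over time is one plausible way to track the semantic content of a viewer's evolving event model.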
My research investigates how people perceive, segment, and remember ongoing events using eye-tracking, fMRI, and computational modeling. I combine behavioral experiments with machine learning approaches (vision-language models, neural networks) to understand the cognitive and neural mechanisms underlying event cognition.
Interactive visualization platform for pose estimation and pantomimed action recognition research
Investigating how gaze anticipation errors reveal event boundaries during movie watching — Published in Journal of Experimental Psychology: General
Gaze entropy increases predictively before event boundaries, revealing proactive cognitive control during naturalistic viewing
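One standard way to operationalize gaze entropy is the Shannon entropy of gaze positions binned on a screen grid. The sketch below assumes an 8x8 grid and a 2-second sliding window; these are illustrative settings, not necessarily the published analysis parameters.

```python
# Minimal sketch of a common gaze-entropy measure: Shannon entropy of gaze
# samples binned on a spatial grid, computed in sliding time windows.
# Grid size and window length are illustrative assumptions.
import numpy as np

def gaze_entropy(x, y, screen_w, screen_h, n_bins=8):
    """Shannon entropy (bits) of gaze samples binned into an n_bins x n_bins grid."""
    hist, _, _ = np.histogram2d(x, y,
                                bins=n_bins,
                                range=[[0, screen_w], [0, screen_h]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty cells to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def windowed_entropy(x, y, t, screen_w, screen_h, win=2.0, step=0.5):
    """Entropy in sliding windows of `win` seconds, stepped every `step` seconds."""
    times, values = [], []
    for start in np.arange(t.min(), t.max() - win, step):
        m = (t >= start) & (t < start + win)
        if m.sum() > 1:
            times.append(start + win / 2)
            values.append(gaze_entropy(x[m], y[m], screen_w, screen_h))
    return np.array(times), np.array(values)
```

On a measure like this, the finding described above corresponds to the entropy time course rising in the seconds preceding annotated event boundaries.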
Investigating whether event model updating occurs gradually or suddenly during narrative comprehension — Dissertation Project
Investigating the role of mimicry and action execution in event perception and memory encoding
Python 3 + TensorFlow 2+ implementation of the SEM cognitive model for event memory research
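To give a sense of what the model does, the sketch below illustrates SEM-style segmentation in a deliberately simplified form: each incoming scene vector is assigned to the latent event whose dynamics model predicts it best, balanced against a sticky prior that favors staying in the current event. This is not the repository's API; the running-mean dynamics model and the hyperparameters are stand-ins for the full neural-network implementation.

```python
# Deliberately simplified sketch of SEM-style event segmentation (NOT the
# repository's API). Each event type keeps a cheap dynamics model (a running
# mean of scene-to-scene deltas); each new scene vector is assigned to the
# event that best combines prediction accuracy with a sticky, CRP-like prior.
# alpha (new-event mass), lmbda (stickiness), and sigma are illustrative.
import numpy as np

def segment(scenes, alpha=1.0, lmbda=10.0, sigma=1.0):
    """Assign each scene vector to an event type; return the label sequence."""
    deltas, counts = {}, {}              # per-event mean delta and visit count
    labels, current = [], None
    prev = scenes[0]
    for t, s in enumerate(scenes):
        if t == 0:
            deltas[0], counts[0], current = np.zeros_like(s), 1, 0
            labels.append(0)
            continue
        scores = {}
        for k in list(deltas) + [max(deltas) + 1]:    # existing events + one new
            prior = counts.get(k, alpha) + (lmbda if k == current else 0.0)
            pred = prev + deltas.get(k, np.zeros_like(s))
            loglik = -np.sum((s - pred) ** 2) / (2 * sigma ** 2)
            scores[k] = np.log(prior) + loglik
        k = max(scores, key=scores.get)               # MAP event assignment
        if k not in deltas:
            deltas[k], counts[k] = np.zeros_like(s), 0
        counts[k] += 1
        deltas[k] += (s - prev - deltas[k]) / counts[k]   # update running mean delta
        labels.append(k)
        current, prev = k, s
    return labels
```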
Applied and translational work demonstrating how cognitive science findings can be turned into practical tools and applications.
A UX demo applying EMRC theory from cognitive neuroscience to help patients remember medication changes