Language data from people thinking aloud suggests that observing behavior alone often leads to the wrong conclusions about how people actually think.
Cognitive models built purely on what people do fail to capture the actual internal logic used during a task. Traditional research relies on behavioral data to map the mind, assuming actions are a direct mirror of thought processes. Integrating verbalized thoughts into AI-driven models reveals that people often reach the same result using completely different mental mechanisms. This finding suggests that our current understanding of human decision-making is heavily distorted by a focus on outcomes. Real-world training or therapy might be targeting the wrong mental gears because it only sees the final action.
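The under-determination problem is easy to see in a toy example. The sketch below (hypothetical gambles and model names, not from the paper) defines two different decision mechanisms for risky choice: an expected-value maximizer and a "pick the option with the best possible outcome" heuristic. On some choice sets the two produce identical behavior, so choice data alone cannot tell them apart; a verbal report like "I just went for the biggest payoff" could.

```python
# Sketch: why behavioral data alone under-determines cognitive models.
# Two different mechanisms can yield identical choices on some gamble sets.

def expected_value_choice(gamble_a, gamble_b):
    """Normative model: pick the gamble with the higher expected value."""
    ev = lambda g: sum(p * x for p, x in g)
    return "A" if ev(gamble_a) >= ev(gamble_b) else "B"

def maximax_choice(gamble_a, gamble_b):
    """Heuristic model: pick the gamble whose best possible payoff is larger."""
    best = lambda g: max(x for _, x in g)
    return "A" if best(gamble_a) >= best(gamble_b) else "B"

# Gambles as lists of (probability, payoff) pairs.
gambles = [
    ([(0.5, 100), (0.5, 0)], [(1.0, 40)]),  # EV 50 vs 40; best payoff 100 vs 40
    ([(0.2, 10), (0.8, 0)],  [(1.0, 1)]),   # EV 2 vs 1;   best payoff 10 vs 1
]

# Both models agree on every trial here, so the observed choices
# cannot distinguish the two internal mechanisms.
for a, b in gambles:
    assert expected_value_choice(a, b) == maximax_choice(a, b) == "A"
```

Distinguishing the two would require either choice sets where the models disagree or, as the paper argues, an additional data stream such as think-aloud traces.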
Think-Aloud Reshapes Automated Cognitive Model Discovery Beyond Behavior
arXiv · 2605.05091
Computational cognitive models discovered using large language models have so far relied solely on behavioral data. However, it is well known that models produced from the behavioral trajectory alone are typically under-determined. In this work, we explore the use of think-aloud traces as an additional data constraint during automated model discovery. When applied to the domain of risky decision-making, we find that the models discovered with think-aloud achieve significantly improved pr