Models of human probability judgment errors

Humans have remarkable abilities to reason under uncertainty: we can make reasonable predictions about whether it will rain tomorrow or decide whether to buy a stock based on market analysis. The predominant framework for reasoning under uncertainty is classical probability theory, which is grounded in the Kolmogorov axioms (Kolmogorov, 1933). Classical probability theory provides the normative standard for rational behavior through Dutch book arguments (Halpern, 2017; Ramsey, 1926), and has also served as the foundation for Bayesian cognitive theory, which has been applied across various domains in cognitive science (Hemmer & Steyvers, 2009; Knill & Richards, 1996; Sanborn & Chater, 2016; Tenenbaum & Griffiths, 2002).

Despite the success of models based on classical probability theory in explaining human cognition, substantial empirical evidence challenges the idea that cognition strictly follows this framework. For example, Tversky and Kahneman (1983) found that people sometimes judge the probability of a conjunction A and B, P(A∩B), to be greater than the probability of one of its individual components, P(A) — a reasoning error known as the conjunction fallacy. Other notable empirical deviations from classical probability theory include unpacking effects (Tversky & Koehler, 1994), probability identity violations (Costello & Watts, 2014), and normality identity violations (Huang, Busemeyer, Ebelt, & Pothos, 2024a). These probability judgment errors are systematic: they occur not merely as outliers in a few individuals but consistently across a substantial portion of individuals, as well as in the mean of aggregated judgments. Consequently, they cannot be attributed to Gaussian random noise in probability judgments or to random effects, as these would cancel out in the aggregated data.
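To make the violation concrete, the conjunction rule of classical probability theory requires P(A∩B) ≤ min(P(A), P(B)) for any events A and B. The sketch below (with hypothetical judgment values, not empirical data) shows how a set of elicited probability judgments can be checked for this direct violation:

```python
def violates_conjunction_rule(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """Return True if the judgments commit the conjunction fallacy,
    i.e. P(A and B) is judged greater than P(A) or P(B)."""
    return p_a_and_b > min(p_a, p_b)

# Linda-problem-style judgments (illustrative values only):
# A = "Linda is a bank teller", B = "Linda is active in the feminist movement"
p_a, p_b, p_a_and_b = 0.10, 0.80, 0.25

print(violates_conjunction_rule(p_a, p_b, p_a_and_b))  # True: P(A∩B) > P(A)
```

Note that the check needs no information beyond the three judged values themselves, which is what makes the conjunction fallacy a violation of the axioms rather than of any particular probability assignment.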

The mixed findings create an important gap that needs resolution: how can probabilistic accounts of human cognition be so effective, despite our consistent tendency to commit fallacies in probabilistic reasoning? Research on this question has motivated two Nobel prizes (Herbert Simon in 1978 and Daniel Kahneman in 2002) and has inspired a range of important computational and mathematical models in the field. The goal of this tutorial is to provide an introduction to these models and to the specific reasoning fallacies each model seeks to address. A key strength of this tutorial is its focus on precise and concise mathematical formulations of the models, giving readers the tools to reimplement them accurately.

We aim this tutorial at cognitive scientists and psychologists interested in computational and mathematical modeling of cognition, as well as those exploring human probabilistic reasoning. The sections on quantum models and on models that integrate probability judgment with choice and response time require some basic knowledge of linear algebra, set theory, and differential equations. However, we strive to minimize the required prior knowledge and to present these topics as accessibly as possible. We believe that studying human probabilistic reasoning is valuable for cognitive science and decision-making research, given the broad applications of probabilistic models in cognition and their relevance to human rationality. Practically, probabilistic reasoning is crucial in fields such as military and medical decision-making, as well as in everyday contexts such as weather forecasting and stock market prediction. A mathematical understanding of the biases we exhibit in probabilistic reasoning can significantly improve how we manage these situations under uncertainty.

It is important to acknowledge that this tutorial does not encompass all fallacies associated with probabilistic reasoning. Notably, phenomena like base-rate neglect, stemming from individuals' attempts at approximate Bayesian inference (Bar-Hillel, 1980), are of significant interest in probabilistic reasoning. However, we do not address base-rate neglect in this tutorial, as evaluating this fallacy requires additional information about prior probabilities, beyond the axioms of probability theory. For instance, to determine that someone judging P(A|e)>P(B|e) is committing base-rate neglect given the same evidence e, we must know the normative base rates P(A) and P(B) a priori. In contrast, the conjunction fallacy, in which P(A∩B) is judged to be greater than P(A), is always a fallacy because it directly violates the axioms of classical probability theory. We refer to these as "direct probability judgment errors", as they challenge the axioms of probability theory itself, independent of specific probability values. In this tutorial, we emphasize these direct probability judgment errors rather than those that require additional information to classify as fallacies.
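The role of the base rates can be seen directly from Bayes' rule: the posterior odds P(A|e)/P(B|e) equal the likelihood ratio times the prior odds, so the very same evidence can make either hypothesis more probable depending on the priors. A small sketch, with purely illustrative numbers:

```python
def posterior_ratio(lik_a: float, prior_a: float,
                    lik_b: float, prior_b: float) -> float:
    """Posterior odds P(A|e) : P(B|e) via Bayes' rule.
    The normalizing constant P(e) cancels in the ratio."""
    return (lik_a * prior_a) / (lik_b * prior_b)

lik_a, lik_b = 0.9, 0.3  # P(e|A), P(e|B): the evidence favors A

# With equal base rates, judging A more probable is correct:
print(posterior_ratio(lik_a, 0.50, lik_b, 0.50))  # 3.0 -> P(A|e) > P(B|e)

# With a rare A and a common B, the same judgment is an error:
print(posterior_ratio(lik_a, 0.05, lik_b, 0.95))  # ~0.16 -> P(A|e) < P(B|e)
```

This is why classifying a judgment as base-rate neglect requires knowing the normative priors, whereas a conjunction-rule violation can be identified from the judged values alone.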

The tutorial will be structured as follows. We will begin with an overview, outlining the empirical and theoretical scope of the study. Next, we will explore each of the models mentioned in the overview in greater detail, dedicating a chapter to each model. Finally, we will conclude with a summary of the models and discuss potential future directions for research in the field.
