Behavioral Economics


In 1981, the psychologists Daniel Kahneman and Amos Tversky published a simple experiment. They told participants that a disease was expected to kill 600 people and asked them to choose between two public health programs. Program A would save exactly 200 people. Program B had a one-in-three chance of saving all 600 and a two-in-three chance of saving none. Most people chose A: they preferred the certain outcome.

Then Kahneman and Tversky rephrased the choice. Program C would result in exactly 400 deaths. Program D had a one-in-three chance that nobody would die and a two-in-three chance that all 600 would die. Statistically, the two programs are identical, but this time, most people chose D. Nothing changed except how the outcomes were described.
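The equivalence is easy to verify. Here is a minimal sketch that computes the expected number of lives saved under each program, using the numbers from the experiment (exact fractions avoid floating-point noise):

```python
from fractions import Fraction

def expected_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs."""
    return sum(p * saved for p, saved in outcomes)

third = Fraction(1, 3)

program_a = [(1, 200)]                    # 200 saved for certain
program_b = [(third, 600), (1 - third, 0)]  # gamble on saving everyone
program_c = [(1, 600 - 400)]              # 400 die for certain, i.e. 200 saved
program_d = [(third, 600), (1 - third, 0)]  # gamble on nobody dying

# All four programs save 200 people in expectation.
assert expected_saved(program_a) == expected_saved(program_b) == 200
assert expected_saved(program_c) == expected_saved(program_d) == 200
```

The four options are mathematically interchangeable; only the frame (lives saved versus deaths) differs.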

Classical economics assumes that people are rational agents who consistently maximize their own utility. Given a choice, they weigh expected outcomes, discount the future at a consistent rate, and select whatever serves them best. This is, in a word, bullshit. Behavioral economics looks at how people actually make decisions, and has repeatedly shown that they deviate from “rational” in predictable ways.

The first problem with the rational-agent model is computational. Optimizing requires evaluating all possible options against all possible outcomes under all possible conditions. No one can actually do this, so instead, people use a strategy that Herbert Simon called satisficing: they search through available options until they find one that is good enough and then stop. Simon called this bounded rationality: people are rational within the limits of the information, time, and cognitive capacity they actually have, which makes the heuristics people use to make decisions worth studying.
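Satisficing is simple enough to sketch in a few lines. The scenario below (apartment hunting with random scores and an aspiration threshold of 8) is hypothetical, chosen only to show the stopping rule:

```python
import random

def satisfice(options, is_good_enough):
    """Simon-style satisficing: scan options in the order encountered
    and stop at the first one that clears the aspiration level."""
    for option in options:
        if is_good_enough(option):
            return option
    return None  # no option met the threshold

random.seed(0)
apartments = [random.uniform(0, 10) for _ in range(1000)]
choice = satisfice(apartments, lambda score: score >= 8)

# The satisficer stops as soon as one option clears the bar; an
# optimizer would have to score all 1000 before taking max(apartments).
assert choice is not None and choice >= 8
```

The contrast with optimization is the point: `max(apartments)` must examine every option, while the satisficer's cost depends only on how soon a good-enough option turns up.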

Kahneman and Tversky spent decades cataloguing people’s heuristics and the cognitive biases they embody. Anchoring is one of the most reliably reproduced findings in all of psychology. When people estimate an unknown quantity, their estimates are heavily influenced by numbers they have recently encountered, even ones they know to be irrelevant. In one study, participants spun a wheel rigged to land on either 10 or 65, then estimated the percentage of African countries in the United Nations. Those who had seen 65 gave a median estimate of 45 percent; those who had seen 10, a median of 25 percent. They knew the wheel was random, but the number shaped their thinking anyway.

This isn’t stupidity or laziness; it is the brain doing something that is sensible in most contexts. Nearby numbers are usually informative, so most of the time, it makes sense to rely on them. This is why prosecutors set high anchor charges: juries’ verdicts cluster around the opening number. It is also why retailers display high “original” prices: customers anchor to whatever is crossed out. And research on salary negotiation consistently shows that the person who names the first number has the advantage, which is why negotiating advice boils down to the same instruction: speak first.

The availability heuristic says that people estimate how likely something is by how easily they can think of examples. After a plane crash receives extensive media coverage, people overestimate the risk of flying and underestimate the risk of driving, even though the underlying statistics have not changed. The availability heuristic is why catastrophic but rare events dominate public attention while slow, diffuse harms are systematically underestimated, which in turn is why it took decades to build public pressure around tobacco, lead paint, and vehicle safety.

Prospect theory describes how people actually evaluate outcomes. The key finding is loss aversion: a loss of a given size produces roughly twice the emotional impact of an equivalent gain. This asymmetry has practical consequences wherever people have a reference point they are trying to protect. Studies of taxi drivers in New York, Singapore, and other cities show that drivers work longer hours on bad days when earnings are below their daily target, and knock off early on good days. A rational agent who cares about total earnings would do the opposite, working more hours when conditions are favorable and fewer when they are not. Instead, drivers are managing losses relative to a reference point, not maximizing total income.
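The asymmetry can be written down directly. A minimal sketch of the prospect-theory value function follows, using the parameter values commonly reported in the literature (curvature 0.88 and a loss-aversion coefficient of about 2.25, from Tversky and Kahneman's later estimates); the exact numbers are illustrative:

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point.

    Gains are concave (diminishing sensitivity); losses are convex and
    scaled up by the loss-aversion coefficient lam.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain = value(100)    # the pleasure of gaining $100
loss = value(-100)   # the pain of losing $100

# With these parameters, the loss looms more than twice as large.
assert abs(loss) > 2 * gain
```

Plugging in a reference-point-protecting agent like the taxi drivers above, a below-target day sits on the steep loss side of this curve, which is exactly where people work hardest to climb back to zero.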

The same dynamic governs financial markets. Investors hold losing stocks far longer than winning ones, not because it is a good strategy but because the emotional cost of a loss exceeds the rational benefit of reinvesting the capital. This is called the disposition effect.

Similarly, the standard model predicts consistent discounting: a reward next month should be worth a fixed percentage less than the same reward today, and the same percentage should apply to any two adjacent future periods. What people actually show is hyperbolic discounting: an extremely steep preference for the present relative to any future point, combined with much flatter preferences among future periods. This is why someone can genuinely plan to quit smoking next year while lighting a cigarette. It is why gym memberships are purchased with full intention and then rarely used. Our future selves are strangers, and we are generous to ourselves and stingy with strangers.
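The difference between the two discounting models is easiest to see in a preference reversal. The sketch below contrasts exponential discounting with the simple hyperbolic form D(t) = 1/(1 + kt); the parameter values (k = 1 and a daily discount factor of 0.95) are illustrative, not estimates from any particular study:

```python
def exponential(amount, days, delta=0.95):
    """Consistent discounting: each day shaves off the same percentage."""
    return amount * delta ** days

def hyperbolic(amount, days, k=1.0):
    """Steep near the present, nearly flat between distant future dates."""
    return amount / (1 + k * days)

# Choice 1: $100 today vs $110 tomorrow.
# Choice 2: the same pair, pushed 30 days into the future.
prefers_100_now   = hyperbolic(100, 0)  > hyperbolic(110, 1)
prefers_100_later = hyperbolic(100, 30) > hyperbolic(110, 31)
assert prefers_100_now and not prefers_100_later  # preference reversal

# An exponential discounter is consistent: the same option wins both times.
assert (exponential(100, 0) > exponential(110, 1)) == \
       (exponential(100, 30) > exponential(110, 31))
```

The hyperbolic agent grabs the smaller reward when it is immediate but happily waits one extra day when both rewards are a month away, which is precisely the smoker-who-plans-to-quit pattern.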

If small changes in how choices are presented can have large effects on behavior, then the design of decision environments is itself a policy tool. Richard Thaler and Cass Sunstein called the deliberate design of choice environments nudging. The canonical example is pension enrollment. When workers must actively opt in to a pension plan, participation rates are typically around 50 to 60 percent. When workers are enrolled unless they actively opt out, participation rises to 80 to 90 percent, without any change to the financial terms. The UK government introduced automatic pension enrollment in 2012; by 2019, over ten million additional workers had joined workplace pensions as a direct result.

The UK’s Behavioural Insights Team, established in 2010, found that adding a single sentence, “Nine out of ten people in your area pay their taxes on time,” to letters sent to late taxpayers increased on-time payment rates by several percentage points. The intervention cost essentially nothing and recovered tens of millions of pounds in additional revenue.

Nudges like this are not manipulation in the obvious sense: nothing is hidden and no options are removed. But the line between a nudge and a shove depends entirely on whose interests the design serves. Automatic enrollment in a pension plan serves the worker. Automatic enrollment in a subscription that is difficult to cancel serves the company. Variable reward schedules designed to maximize platform engagement are also nudges, built on the same science, serving a different master. Every infinite scroll, every notification badge, every “people who liked this also liked” recommendation is a behavioral economics intervention. The field that began by documenting human irrationality has become the primary toolkit for industrializing it.


Kahneman2011
Daniel Kahneman: Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011. ISBN 9780374533557.
Thaler2009
Richard H. Thaler and Cass R. Sunstein: Nudge: Improving Decisions About Health, Wealth, and Happiness. Penguin, 2009. ISBN 9780143115267.