Cognitive Pollution Revisited


Engineers learn to reason about direct, traceable failures: a faulty valve leads to a boiler explosion, or a bug crashes a program. This model frames harm as rare, dramatic, and attributable, but the most serious damage industry has caused hasn’t actually worked this way. Instead, it has been diffuse, cumulative, slow to emerge, and difficult to attribute to any single decision or actor.

When leaded gasoline lowered the IQs of an entire generation of children, no single tank of fuel caused a measurable injury. Similarly, no particular cigarette is responsible for any particular cancer death. The harm is real and massive, but it was distributed across millions of exposures, tens of millions of people, and decades, so no individual injury could satisfy the legal requirement of direct and proximate cause.

This pattern is no longer confined to physical toxins. Social media platforms optimized for engagement produce radicalization and depression as byproducts. The harm is diffuse: no single recommendation causes a school shooting or an act of genocide. The long, probabilistic causal chain makes it difficult to assign fault and therefore difficult to regulate.

The term cognitive pollution is increasingly used to describe this. As with other forms of pollution, it is proving difficult to regulate, and tech companies have every incentive to maintain that difficulty. After all, as long as harm cannot be attributed to them, they can externalize its cost onto the people who absorb it.

The tobacco industry did not accidentally produce uncertainty about the link between smoking and cancer. It funded research specifically intended to produce uncertainty, identified scientists willing to dispute the consensus, and maintained that effort for decades after the science was settled. The same pattern appears in the history of leaded gasoline, asbestos, OxyContin, and now social media and AI.

Civil and chemical engineers are now taught about pollution, not because the profession had a crisis of conscience, but because society decided over the course of many decades and through many court cases to hold polluters liable for harm. Noise pollution, and now light pollution, are retracing that history, and I think that if we take mental health as seriously as physical health, it’s inevitable that we will start to hold companies accountable for the mental suffering they cause.

If we frame harm as rare, dramatic, and attributable, responsible engineering means avoiding the specific decisions that produce attributable failures. Under the pollution model, on the other hand, responsible engineers are accountable for long-term cumulative effects. A few recent court judgments in the United States may show that this shift is finally happening, but they will undoubtedly be contested, and as the overturning of Roe v. Wade shows, precedent isn’t enough of a guarantee. Legislation that holds tech companies responsible for the damage their products do to users’ mental health could come sooner, would be far more robust, and would have more impact than pious gestures like banning young people from using social media.

