Cognitive Pollution

Update: I have posted a recording of the talk.

I am grateful to Prof. Alberto Bacchelli for inviting me to give a colloquium at the University of Zurich a couple of days ago. My talk was titled Cocaine and Conway’s Law, but it was really about how to teach young software developers about cognitive pollution.

When most existing software engineering courses talk about risk, they talk about single-point incidents like the Therac-25 or Ariane 5 flight V88, where there is a direct cause-and-effect relationship between a mistake in software and something bad happening. I think we should instead talk about things like tetraethyl lead and asbestos, or about Purdue Pharma’s role in the opioid epidemic and the fossil fuel lobby’s campaign to promote climate change denial.

In each of these cases, deliberate choices increased the general level of risk to hundreds of millions of people, but the statistical nature of that risk allowed those responsible to avoid personal accountability. I think that case studies like these will help learners understand things like the role that Meta’s algorithms played in the Rohingya genocide and think more clearly about scenarios like the one below:

It is 2035. Most men under 20 have learned what they “know” about what women like from AI chatbots trained on porn and tuned to maximize engagement. As a result, many of them believe that a frightened “no” actually means “yes please”. The people who have earned billions from these systems cannot legally be held accountable for their users’ actions.

How does that make you feel?

If you are currently teaching an undergraduate course that covers cases like these, please get in touch: I’d be grateful for a chance to learn from you.