Software Taboos


Every professional culture places certain topics outside the scope of legitimate concern, and those who choose to discuss them publicly anyway risk being ostracized as radicals. For example, medicine was slow to discuss error and malpractice, while law was slow to discuss access and cost. A similar pattern is visible in the literature written for software engineers: thousands of pages on management, leadership, and technical decision-making, but almost nothing on workers’ rights, collective action, or alternative ownership structures. As with medicine and law, the topics excluded are the ones that threaten the interests of the profession’s most powerful practitioners.

In Purity and Danger, Mary Douglas argued that what a group treats as taboo is not random. Taboos cluster around the boundaries of the group itself: what is forbidden defines who belongs and who does not, what counts as legitimate thought, and what marks the thinker as dangerous or confused. The things placed out of bounds are therefore not those that threaten individual members. They are frequently the things that, if examined, would reveal tensions in the group’s self-understanding or power structure. The placement of a boundary is a form of power, exercised by those with enough standing to enforce it.

Medicine’s encounter with its own error rate is a case study in professional taboo and its eventual failure. For most of the twentieth century, the dominant framework for medical error was individual blame: a bad outcome was the fault of a bad physician, and the appropriate response was to identify and punish that individual. This framework served the profession’s interest in self-regulation and resisted external oversight. The work of researchers like Lucian Leape, and later the publication of To Err is Human by the Institute of Medicine in 1999, reframed medical error as a systems problem. The finding that between 44,000 and 98,000 Americans died annually from preventable medical errors was not new to researchers; it was taboo because it endangered the profession as a whole.

Atul Gawande’s Checklist Manifesto documents how aviation-style checklists, which treat error as a predictable product of complex systems rather than individual failure, were resisted for decades by physicians who experienced them as an affront to professional judgment. The evidence that checklists reduced deaths in surgical settings was not in dispute. The resistance was to the implication that a physician’s individual competence and experience were insufficient, and that standardized protocols designed by committees should constrain clinical practice.

The legal profession’s relationship with access to justice follows a similar pattern. Legal ethics in the United States has historically focused on conflicts of interest, attorney-client confidentiality, and competence, but not on cost. The profession’s code of conduct treats affordability as a matter of individual attorney discretion rather than a structural obligation. This is not because access to legal representation is unimportant; it is because the profession’s wealthiest members profit from the current pricing structure, and because defining access as a professional ethics problem would require the profession to regulate itself in ways that would reduce their incomes. The explosion in legal costs over recent decades has made the civil court system effectively inaccessible to most people in most disputes, a situation the profession has addressed primarily by wringing its hands.

I own over a dozen books on software engineering. They cover project management, software architecture, team dynamics, technical leadership, career development, and the business of startups. What they do not cover is workers’ rights in the tech industry, the legal framework governing noncompete agreements and ownership of employee inventions, alternative ownership structures like cooperatives and worker-owned firms, or the systematic harassment and exclusion of women. While it’s common to hear people in Silicon Valley say that we should keep politics out of programming, these decisions about what to write (and read) are themselves political: they put the interests of one set of stakeholders over another.

See the whole series

Douglas2002
Mary Douglas: Purity and Danger: An Analysis of Concepts of Pollution and Taboo. Routledge, 2002, ISBN 978-0415289955.
Gawande2009
Atul Gawande: The Checklist Manifesto: How to Get Things Right. Metropolitan Books, 2009, ISBN 978-0805091748.
Kohn1999
Linda T. Kohn, Janet M. Corrigan, and Molla S. Donaldson (eds.): To Err is Human: Building a Safer Health System. National Academies Press, 2000, ISBN 978-0309068376.