Jerod W. Wilkerson, Jay F. Nunamaker, Jr., and Rick Mercer: "Comparing the Defect Reduction Benefits of Code Inspection and Test-Driven Development." Pre-print, IEEE Trans. Software Engineering, April 2011.
This study is a quasi-experiment comparing the software defect rates and implementation costs of two methods of software defect reduction: code inspection and test-driven development. We divided participants, consisting of junior and senior computer science students at a large Southwestern university, into four groups using a two-by-two, between-subjects, factorial design and asked them to complete the same programming assignment using either test-driven development, code inspection, both, or neither. We compared resulting defect counts and implementation costs across groups. We found that code inspection is more effective than test-driven development at reducing defects, but that code inspection is also more expensive. We also found that test-driven development was no more effective at reducing defects than traditional programming methods.
I'm still not sure what to think about test-driven development. On the one hand, I feel that it helps me program better—and feel that strongly enough that I teach TDD in courses. On the other hand, studies like this one, and the others summarized in Erdogmus et al.'s chapter in Making Software, seem to show that the benefits are illusory. That might mean that we're measuring the wrong thing, but I'm still waiting for one of TDD's advocates to say how we'd measure the right thing.
One thing I am sure of is the importance of studying students. It's easy to dismiss the results of such studies by saying that they don't necessarily apply to experienced full-time programmers in industry, but that's missing the point. They definitely do apply to students, and if a tool or practice isn't compelling to a 20-year-old who is exposed to it for three weeks, it's going to spread slowly (if at all).
Originally posted at Never Work in Theory.