I’ve been saying for years that programmers ought to pay more attention to empirical studies of software engineering and base their practices on evidence rather than strong opinion. I was challenged on this three years ago when someone asked me to cite studies showing that bug trackers are a better way to manage a backlog than shared spreadsheets. I couldn’t, and still can’t.

I also can’t find any studies showing that version control is a better way to manage software projects than mailing files around or dumping them in a shared folder. I “know” it’s true—I wouldn’t work on a project that didn’t use version control—but then again, my aunt “knew” that putting colored crystals on her chi points would relieve her arthritis. As far as I can tell from outside the Great Paywall of Academia, nobody’s ever actually done the study.

That’s kind of embarrassing, but it’s also an opportunity. The biggest open problem in empirical software engineering research is measuring productivity: lines of code per hour and story points per sprint are easy but meaningless, and we haven’t agreed on anything more sophisticated.

So here’s my proposal: let’s have a bunch of graduate students design and publish the experiments they would run to determine whether version control actually is better than the alternatives. I’m not asking them to actually run the experiments; I’m asking them to make their ideas about measuring productivity explicit and public so that we can compare them.

And yes, once their ideas have been debated, I would like to see a few of the experiments run and their results compared. That will help us figure out which of their metrics actually capture what we intuitively think of as “productivity”: if a metric doesn’t show that groups using version control outperform groups that don’t, I think we can safely discard it.
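
To make that discard test concrete, here is a minimal sketch of what it might look like once you have per-team scores for some candidate metric. Everything in it is an assumption for illustration: the made-up scores, the 0.05 threshold, and the choice of a one-sided Mann-Whitney U test are mine, not part of any actual experiment.

```python
# Hypothetical sanity check for a candidate productivity metric:
# if teams using version control don't score higher, the metric fails.
from scipy.stats import mannwhitneyu

# Illustrative, made-up metric scores, one value per team.
with_vc = [4.2, 5.1, 3.8, 4.9, 5.5, 4.4]
without_vc = [3.1, 2.8, 4.0, 3.3, 2.9, 3.6]

# One-sided test: do version-control teams score higher on this metric?
stat, p_value = mannwhitneyu(with_vc, without_vc, alternative="greater")

if p_value < 0.05:
    print(f"Metric passes the sanity check (p = {p_value:.3f}).")
else:
    print(f"Metric fails: no detectable advantage (p = {p_value:.3f}); discard it.")
```

The details of the test don’t matter much; the point is that any proposed metric comes with a falsifiable check we can all inspect.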