Measuring Software Tools

I think there are four interesting ways to measure a software tool. To make them seem more deserving of serious attention, I’m going to use mathematical notation (subscripts!) to label them:

  1. The median discovery time, T_find, is how long it takes half of the people who ought to be using the tool to find it.

  2. The median time to proficiency, T_learn, is how long it takes half of the people who stick with the tool to reach the point where they can use it without constantly consulting the manual or feeling frustrated.

  3. The average improvement in efficiency, E_gain, measures the benefit of the tool on tasks the user was already doing. This can be either how much time it saves (if the amount of work stays constant) or how much more work they can do (if the time stays constant).

  4. The increase in reach, R_new, measures how many new tasks the user can tackle, i.e., how many things they can do with the tool that they couldn’t do before. (All four measures are sketched in code right after this list.)
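
To make the definitions concrete, here’s a minimal sketch in Python. Every number, task name, and variable in it is invented for illustration, not drawn from any real survey:

```python
from statistics import median

# Hypothetical survey data: days until each user found the tool,
# and days each user who stuck with it needed to become comfortable.
discovery_days = [2, 5, 9, 30, 90]
proficiency_days = [7, 14, 21, 60]

t_find = median(discovery_days)     # 1. median discovery time
t_learn = median(proficiency_days)  # 2. median time to proficiency

# 3. Average improvement in efficiency, framed both ways.
hours_before, hours_after = 10.0, 6.0  # same work, less time
e_gain_time = (hours_before - hours_after) / hours_before

tasks_before, tasks_after = 5, 8       # same time, more work
e_gain_work = (tasks_after - tasks_before) / tasks_before

# 4. Increase in reach: tasks that were impossible without the tool.
old_tasks = {"clean data", "plot results"}
new_tasks = {"clean data", "plot results", "run parameter sweeps"}
r_new = len(new_tasks - old_tasks)

print(t_find, t_learn, e_gain_time, e_gain_work, r_new)
```

The sketch also shows why the measures have to be collected differently: the first two are medians over people, while the last two compare what gets done before and after adopting the tool.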

Funders care most about #4 because that’s what gets grant recipients on the cover of Nature. Developers mostly focus on #3, though they often describe what they’re doing in terms of #4: for example, they might call an online text formatting tool “revolutionary” when what it actually does is make a task everyone is already familiar with less demanding.

The world looks very different from the average user’s point of view, though. I’m belatedly starting to appreciate how important #1 is (Sarah Lin recently posted ten quick tips for making stuff findable that I wish I’d thought about ten years ago), but as an educator, I think #2 is the most important factor for most people. If something is so hard to learn that most people give up, or never reach the point where they can do things without a lot of online searches and cursing, then #3 and #4 are irrelevant. Good lessons are necessary but not sufficient: you can’t document your way out of a usability hole (cough Git cough), but not having decent tutorials pretty much guarantees that only the most obsessive and/or privileged of potential users will last long enough to see a tool’s benefits.

This is why I’m so weary of seeing funding announcements that don’t split money 50/50 between building new things and teaching people how to use them. Academia and the tech industry look down on training as second-class work; both congratulate themselves on how many people accomplish X without asking how many gave up along the way, or how many aren’t blogging or tweeting about it because X makes them feel stupid every time they use it and they blame themselves instead of its creators. A shift in funding priorities won’t change that overnight, and might not get funders as many Nature covers to brag about, but it will help a lot of people and make the world a better place.
