I got mail from a colleague at a prominent US university yesterday saying (in part, and elided to protect the guilty):
...the graduate student representative to the curriculum committee reported that the students did not want a scientific computing course, that they would instead figure it out themselves.... How does one respond to statements like this...that have...basically frozen skill levels? The options I see are formal ("in curriculum") training, bootcamps and workshops, and letting them "figure it out themselves". Are there arguments about the successes of each?
There are certainly arguments: the problem is, there's practically no data. After 14 years, the conclusion I've reached is that we will be ignored until we do empirical field studies to show people just how many potential research hours are being wasted due to inadequate computational skills. Surveys won't tell us: we need to get someone out in the field to shadow grad students for a few weeks, watching what they actually do and how they do it, so that we can compare the median with the 90th percentile (or 75th, or whatever). I estimate it would take one person 4-5 months to do a preliminary version, and then another 15-20 researcher-months to collect enough data to show senior faculty just how bad things are. Of course, many would ignore the results (just look at how many doctors smoke), but I'd like to think it would change at least a few minds, and I frankly don't know what else will.
We know that such studies are possible, but I haven't found anyone willing to fund one in this particular area: I asked NSERC—Canada's equivalent of the NSF—twice in the three and a half years I was a professor; they said "no" both times, and I've had no more success elsewhere. As scientists, shouldn't we study the effectiveness of training just as rigorously as we'd study the effectiveness of a new treatment for diabetes? And if we're not going to do that, shouldn't we stop calling ourselves scientists?
Originally posted at Software Carpentry.