We had our third post-workshop debriefing of the year yesterday, in which we discussed several recent workshops. The most important point was probably that workshops are running more smoothly today than they did a year ago, even for first-time instructors.
Our Tulane workshop was taught in R; about 50 people registered, and about 40 showed up. The crowd was mostly biomedical, and the instructors used the Gapminder dataset throughout. R was taught in two half-day sessions: an introduction to R and advanced R.
The biggest innovation was echoing all commands via Dropbox: the R scripts being edited were all saved to the Dropbox folder, so learners could follow along in real time. More importantly, the helpers could follow along as well, and catch people up when they fell behind. The workshop also got as far as showing people ggplot2 and knitr in the second session, though this was more a case of "look at the cool things you'll be able to do" than a lesson.
The workshop in Illinois was slightly smaller—about 30 people. Most were aerospace, chemical, and materials engineers, with a few biologists and geologists along to liven things up. As one of the instructors described in an earlier post, they collected feedback at every break, and took 5-10 minutes at the start of each session to address issues raised before pressing on. All instructors took part in sorting the feedback (which was handed in on sticky notes) so that instructors who were going too fast or into too much detail could see that for themselves.
The workshop at Sick Kids Hospital in Toronto used R again. The lead instructor felt it didn't meet everybody's expectations, mainly because there were so many different expectations: some people had a lot of prior programming experience and were there to learn the statistical packages in R, while others needed help with basic ideas like loops. One thing that went really well was the introduction to R factors and the different ways of addressing data in a data set: by the end of the workshop, some learners were using very complex techniques for manipulating their data.
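The Sick Kids workshop taught this material in R. As a rough illustration only (this is not the workshop's actual material, and the data values are made up), pandas in Python offers a similar categorical type and several ways of addressing data in a data frame:

```python
import pandas as pd

# A small data frame standing in for a workshop dataset (hypothetical values).
df = pd.DataFrame({
    "patient": ["a", "b", "c", "d"],
    # pandas Categorical plays roughly the role of an R factor.
    "group": pd.Categorical(["control", "treated", "control", "treated"]),
    "score": [4.2, 5.1, 3.9, 6.0],
})

# Several ways of addressing the same data, echoing the R lesson:
by_condition = df.loc[df["group"] == "treated", "score"]  # label + condition
by_position = df.iloc[0:2, 2]                             # row/column position
levels = df["group"].cat.categories                       # the "factor levels"
```

In both languages the payoff is the same: once learners see that rows and columns can be selected by name, position, or condition, fairly complex data manipulation follows naturally.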
Thomas Guignard taught at both SUNY Albany and the University of Toronto. He commented that he's not comfortable ad libbing in front of learners, so he deferred difficult questions, figured out the answers during the break, and then came back to them once students were back in their seats. He also commented that several of the attendees at the Toronto workshop had installed Python 3.4 instead of Python 2.7, which led to more than a little confusion. (We have since added notes to the installation instructions to clarify which version people should get.)
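One defensive habit that can catch the 3.4-instead-of-2.7 mix-up early (a suggestion here, not something the workshop used) is to check the interpreter version at the top of a lesson script, so learners see a plain message rather than a baffling `SyntaxError`:

```python
import sys

def check_python_version(required=(2, 7), maximum=(3, 0)):
    """Return a warning if the interpreter is outside the expected range.

    The default bounds (2.7 <= version < 3.0) are illustrative, matching
    a lesson written for Python 2.7.
    """
    actual = sys.version_info[:2]
    if not (required <= actual < maximum):
        return "This lesson expects Python %d.%d; you are running %d.%d." % (
            required[0], required[1], actual[0], actual[1])
    return "Python version OK."
```

A learner running Python 3.4 against a 2.7 lesson would then get an explicit message naming both versions before anything else goes wrong.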
A couple of points were made about teaching with the IPython Notebook. First, it doesn't show as much history on screen at one time as (for example) the Bash shell, so it's harder for learners to follow what's going on. Second, Neal Davis said that he found it much better to start with an empty notebook and grow it cell by cell rather than opening up one that already had code in it and replaying it.
Finally, we had a brief discussion (again) of whether we ought to follow Data Carpentry's lead and have a single data set or example run through all of the lessons. I've resisted doing this in the past, since it makes it harder for instructors to mix and match material, but I now believe the benefits outweigh the drawbacks. We will discuss this at an upcoming lab meeting, along with ways of getting notes like these into the instructor's guides for the lessons.
This post originally appeared in the Software Carpentry blog.