The latest issue of SIGCSE Bulletin (Vol 39, #3, Sept 2007) has the proceedings from the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE’07). The papers aren’t available (directly) on the web, though students can get them through the U of T library, and a little creative Googling can usually turn up authors’ copies of their contributions. I read about a quarter of the papers, and skimmed another quarter; notes are below.
Sue Jane Jones and Gary E. Burnett: “Spatial skills and navigation of source code.” This was one of the most interesting papers in the whole collection, and I hope it’s followed up. The authors point out that being able to find your way around a program is (a) an important part of programming, and (b) a spatial skill. Their study shows that people with high spatial ability (as measured by standard tests) were able to complete programming tasks in less time than people with weak spatial ability. (See also my review of Why Aren’t More Women in Science?)
William L. Honig and Tejasvini Prasad: “A classroom outsourcing experience for software engineering learning.” I’ve been toying with the idea of doing this ever since we started work on DrProject; it’s cool to see someone else got there first. Groups of students in two classes—one in Chicago, the other at the University of Wisconsin—worked on the same project at the same time, outsourcing work to one another. This is a great way to help students see why documentation, testing, and all “that stuff” is necessary; it’s also great prep for the real world.
Zachary Dodds, Christine Alvarado, Geoff Kuenning, and Ran Libeskind-Hadas: “Breadth-first CS 1 for scientists.” Describes an introductory course designed to provide future scientists with a one-semester overview of Computer Science. This isn’t quite what Software Carpentry or U of T’s new CSC120 are trying to do—they aren’t trying to give students a feel for the whole of CS—but it’s a very impressive course.
Guy Tremblay, Bruno Malenfant, Aziz Salah, and Pablo Zentilli: “Introducing students to professional software construction: a ‘software construction and maintenance’ course and its maintenance corpus.” Describes a new course at UQAM in Montréal in which students have to improve, extend, and test a pre-existing application. It’s obviously a useful experience; students at U of T should get some of this in the new CSC302 course.
Tamar Vilner, Ela Zur, and Judith Gal-Ezer: “Fundamental concepts of CS1: procedural vs. object oriented paradigm - a case study.” Gal-Ezer’s empirical studies of how students learn to program are always rigorous and thoughtful. Here, she and her collaborators show that, “…there is no significant difference in the overall achievements between the students who took the CS1 course with the traditional procedural approach and those who studied the object oriented paradigm.”
David Ginat: “Hasty design, futile patching and the elaboration of rigor.” A five-page gripe about how students throw code together, then try to make it work by patching it repeatedly, rather than thinking the problem through rigorously from the start. Hard to argue with the thesis, but unlike Gal-Ezer et al., the author doesn’t back up his claims or gauge the efficacy of his proposed solution with any kind of field study. File under “not proven”.
Orna Muller, David Ginat, and Bruria Haberman: “Pattern-oriented instruction and its influence on problem decomposition and solution construction.” Describes an approach to teaching programming in which common small patterns are explicitly named and composed. The patterns themselves seem pretty small—“check each element of a list”, for example—but unlike the previous paper, this one is backed up with field data. I couldn’t see a link to a catalog of patterns anywhere in the paper, unfortunately.
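Since the paper doesn’t include a catalog, here’s my own sketch (not the authors’) of what two such small named patterns, and a problem decomposed into them, might look like in Python:

```python
# A sketch (mine, not the paper's) of small named patterns in the
# spirit of pattern-oriented instruction, plus a composition of them.

def count_matching(items, predicate):
    """Pattern: 'check each element of a list', counting the hits."""
    count = 0
    for item in items:
        if predicate(item):
            count += 1
    return count

def all_match(items, predicate):
    """Pattern: 'check each element of a list', failing fast on a miss."""
    for item in items:
        if not predicate(item):
            return False
    return True

# Composition: a larger problem decomposed into a named pattern.
def mostly_positive(numbers):
    """True if more than half the numbers are positive."""
    return count_matching(numbers, lambda n: n > 0) > len(numbers) / 2
```

The pedagogical point, as I read it, is that students name and rehearse the small patterns until decomposing a new problem into them becomes routine.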
Christopher James Martin: “Scribbles: an exploratory study of sketch based support for early collaborative object oriented design.” Give two programmers a whiteboard and a design problem, and they’re more likely to draw pictures than they are to write code longhand. This paper describes a lightweight tool to support that kind of shared brainstorming. Tablets and other better-than-keyboard devices are fast becoming the norm; it’ll be interesting to see if it takes the software engineering community as long to respond to that as it took them to notice the web…
Richard Kheir and Thomas Way: “Inclusion of deaf students in computer science classes using real-time speech transcription.” One of many papers on ways to level the playing field in CS classrooms for disabled students. This is one I’d like to explore in its own right: if automatic speech recognition (ASR) is really 75-85% accurate in classroom lecture settings, it’d be a valuable aid for everyone.
Blaise W. Liffick and Gary Zoppetti: “You can take it with you: profile transportability.” Describes the Personal Portable Profile (P3) system that copies a user’s profile onto a USB drive or similar device, which they can then plug into another machine to replicate their settings. “Kinda nice” if your physical characteristics put you near the middle of the bell curve; can save hours if you have special needs.
Michael H. Goldwasser and David Letscher: “Introducing network programming into a CS1 course.” Tempting—very tempting—but the authors actually convinced me that this should still be left to second year, as too many things can go wrong in building simple client/server systems that most first-year students won’t have the mental repertoire to debug.
Joseph Distasio and Thomas Way: “Inclusive computer science education using a ready-made computer game framework.” Now that computer gaming is bigger than Hollywood, there’s a lot of interest in building CS curriculum around games. Unfortunately, that would tend to reinforce the already-awful gender imbalance in computing. Here, the authors describe a course that emphasizes the broad range of skills needed in real game development teams. The Labyrinth game framework they describe is moderately interesting, but they haven’t convinced me that game-based instruction isn’t going to make a bad situation worse. I had a similar reaction to:
Laura Korte, Stuart Anderson, Helen Pain, Judith Good: “Learning by game-building: a novel approach to theoretical computer science education.” From the intro: “This paper describes an innovative method for teaching modeling skills in theoretical computer science (e.g., finite state automata, Turing machines). Students acquire a new modeling skill by completing a game-building assignment in which there is a direct and transparent mapping between the game that the student is building and the model…they are trying to master.” Nice, and I liked the examples, but it’s still game-based.
Catherine Lang, Judy McKay, and Sue Lewis: “Seven factors that influence ICT student achievement.” The prose is densely academic, but worth wading through, as the authors have identified seven factors that affect student success in information and communication technology courses. Quoting from the abstract, “These seven factors had minimal effect when they occurred in isolation within a unit of study, but certain combinations of factors created a learning environment that was detrimental to all students, and in other instances a learning environment that was particularly unfavourable for female students.” Among their conclusions:
- When a low female critical mass (less than 25%) was combined with no female teaching staff, the resulting "clubhouse" atmosphere accelerated female withdrawal rates.
- The three aspects of pedagogy that mattered most were a real-world curriculum (no games or abstract string sorting problems, please), varied assessment techniques (e.g., mixing solo and group work, oral exams and written, etc.), and having teachers who actually had some training in teaching.
Mary Anne L. Egan: “Teaching a ‘women in computer science’ course.” Want to fix what’s broken, but aren’t sure where to start? You’d be hard pressed to do better than the course outlined in this paper. I particularly liked the “Lego” exercise, and will try to incorporate it into my courses this fall.
Michael E. Caspersen, Kasper Dalgaard Larsen, and Jens Bennedsen: “Mental models and programming aptitude.” Describes a re-run of Dehnadi and Bornat’s Camel study that was unable to replicate the original results. It’s disappointing in a way—like most instructors, I’d welcome a programming aptitude test that actually worked—but stepping back a bit, it’s encouraging to see software engineering researchers checking each other’s work this way.
Michael T. Helmick: “Interface-based programming assignments and automatic grading of Java programs.” Describes a system that combines the course-management features of Blackboard or Moodle with version control, style checking (via PMD), and the like. An interesting hybrid, but I have to wonder about its long-term viability—the effort to maintain something like this is considerable. (This is one of the reasons we’re so pleased that local companies are starting to adopt DrProject.)
Rainer Oechsle and Kay Barzen: “Checking automatically the output of concurrent threads.” Teaching concurrency is hard; marking concurrent programs is harder. Here, the authors describe an approach based on replacing standard synchronization library primitives with variants that attach a vector timestamp to each operation, so that a marking tool can decide whether any two operations are causally related.
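Vector timestamps work roughly like this (my own minimal sketch, not the authors’ instrumented library): each thread keeps a vector of counters, ticks its own entry on every operation, and merges vectors whenever threads interact; comparing two stamps then tells you whether one operation causally precedes the other.

```python
# Minimal vector-clock sketch (mine, not the paper's tool). In the
# authors' setting, the instrumented synchronization primitives would
# do the tick/merge calls; a marking tool then uses happens_before()
# to decide which output orderings are legitimate.

class VectorClock:
    def __init__(self, n_threads, my_id):
        self.clock = [0] * n_threads
        self.my_id = my_id

    def tick(self):
        """Record a local operation; return its timestamp."""
        self.clock[self.my_id] += 1
        return tuple(self.clock)

    def merge(self, other_stamp):
        """Absorb a timestamp received from another thread."""
        self.clock = [max(a, b) for a, b in zip(self.clock, other_stamp)]
        self.tick()

def happens_before(s1, s2):
    """True if the operation stamped s1 causally precedes s2."""
    return all(a <= b for a, b in zip(s1, s2)) and s1 != s2
```

Two stamps where neither happens-before the other are concurrent, so a marker can accept either order of their outputs.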
Christian Brown and Chris McDonald: “Visualizing Berkeley socket calls in students’ programs”. Another useful tool for dealing with 21st Century programming exercises; this one draws pictures of what students’ programs are doing with sockets. (Code has to be compiled specially to provide the necessary information, but that seems like a small price to pay.)
Andrew Solomon: “Linuxgym: software to automate formative assessment of unix command-line and scripting skills.” Linuxgym is a VMware appliance that can automatically assess students’ mastery of Unix command-line and scripting skills. There isn’t much in the paper—it just outlines the author’s tutorial—but I’ve bookmarked the software…
Mordechai Ben-Ari: “Teaching concurrency and nondeterminism with Spin.” Outlines another tutorial, this one introducing instructors to a modern concurrency model checker called Spin. Yes, creating the model is the hard part, but tools like this are going to bring modeling into the mainstream, just as PMD and FindBugs have done for program analysis.
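Spin proper works on Promela models, but the core idea—exhaustively exploring every interleaving of concurrent steps and checking each outcome—can be illustrated in a few lines of Python (a toy of my own, not Spin itself):

```python
# Toy exhaustive-interleaving explorer (mine, not Spin): run every
# interleaving of two "threads" (lists of functions mutating shared
# state) and collect the final states, exposing races like lost updates.

def interleavings(a, b):
    """Yield all interleavings of a and b, keeping each in program order."""
    if not a:
        yield list(b)
    elif not b:
        yield list(a)
    else:
        for rest in interleavings(a[1:], b):
            yield [a[0]] + rest
        for rest in interleavings(a, b[1:]):
            yield [b[0]] + rest

def explore(thread_a, thread_b, init):
    """Run every interleaving; return the set of final shared states."""
    outcomes = set()
    for schedule in interleavings(thread_a, thread_b):
        state = dict(init)
        for step in schedule:
            step(state)
        outcomes.add(tuple(sorted(state.items())))
    return outcomes

# Classic lost-update race: each thread does an unsynchronized
# read-then-increment of the shared counter x.
def read_a(s):  s['ta'] = s['x']
def write_a(s): s['x'] = s['ta'] + 1
def read_b(s):  s['tb'] = s['x']
def write_b(s): s['x'] = s['tb'] + 1
```

Running `explore([read_a, write_a], [read_b, write_b], ...)` turns up final states with x = 1 as well as x = 2, which is exactly the kind of counterexample a model checker hands students; Spin does the same search vastly more efficiently, over real Promela models.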