I have sometimes quipped that we’ll know our work is done when the weekend papers run lesson reviews beside their book and movie reviews. (Yes, I quip. I can in fact be extremely quippy, but that’s a digression.) But reviews aren’t the only sign I’m waiting for, and neither is collaborative lesson development. I also hope I live long enough to see competitions like Kaggle’s in which people or teams vie with each other to build the best possible lessons about particular topics and for particular audiences.

The best model I know for this would be the competitions Ned Gulley used to run at the MathWorks. As he described in this paper, everything is done in the open: the winner isn’t the person who submits the top-performing entry, but the person who contributed most to the overall result. I think this mix of openness and competitiveness would be wonderful for lesson construction…

…except I have no idea how to grade submissions. Grading code is easy: does it run, and if so, is it faster or more accurate than something else? But there’s no way to robo-grade the effectiveness of a lesson, and relying on a panel of human experts would neither scale nor provide sufficiently timely feedback.
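To make the contrast concrete, here is a minimal sketch of why robo-grading code is tractable: run the submission, check its output, and time it. The file name, test input, and expected output are all hypothetical, and a real grader would of course sandbox the execution.

```python
# Minimal illustrative auto-grader: run a submitted script, check correctness,
# and measure how long it took. All names here are made up for the example.
import subprocess
import time


def grade(submission, test_input, expected_output, timeout=10):
    """Run a submitted script on one test case; report correctness and speed."""
    start = time.perf_counter()
    try:
        result = subprocess.run(
            ["python", submission],
            input=test_input,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return {"correct": False, "seconds": timeout}
    elapsed = time.perf_counter() - start
    return {
        "correct": result.stdout.strip() == expected_output.strip(),
        "seconds": elapsed,
    }


# Example: grade("entry.py", "3 4\n", "7")
```

Nothing of the sort exists for “did this lesson actually teach anyone anything?”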

All of which brings me back to Mike Caulfield’s choral explanations, and the idea that our notion of “lesson” may be obsolete. At their best, Stack Overflow and Quora provide a (loosely) curated chorus of answers to each question; those answers aren’t connected to form a narrative by something like Storify or DebateGraph, but they could be. Would it be possible to award points for adding those narrative links, and to use the frequency with which people traverse them as a measure of efficacy? Or is the very notion of automating assessment of lesson efficacy yet another category error on my part?
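For what it’s worth, the traversal idea is simple to state in code. This is a back-of-the-envelope sketch, not a real system: assume each time a reader follows a narrative link between two answers we log a (from, to) pair, and then surface the most-traversed links as a crude proxy for which connections readers find useful. The answer IDs and the log are invented for the example.

```python
# Hypothetical sketch: rank narrative links between answers by how often
# readers actually follow them.
from collections import Counter


def rank_links(traversal_log):
    """Count how often each (from_answer, to_answer) link was followed."""
    return Counter(traversal_log).most_common()


# Made-up log of link traversals between answer IDs.
log = [("a1", "a7"), ("a1", "a7"), ("a7", "a3"), ("a1", "a2")]
for (src, dst), count in rank_links(log):
    print(f"{src} -> {dst}: followed {count} times")
```

Whether “often traversed” actually means “effective at teaching” is, of course, exactly the question I can’t answer.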