Further Thoughts from a Not-So-Influential Educator

After receiving ACM SIGSOFT’s Influential Educator Award, I wrote that I haven’t actually had much influence on software engineering education. One sign is that I’ve had no luck at all persuading faculty to create a class on data science for software engineers that uses software engineering data sets and results in its examples.

I now have a tentative explanation for that failure: the dearth of actionable results. It’s easy to compare the lengths of functions in JavaScript and Python (the former are much shorter on average), but what is an undergraduate supposed to do once they know that? Knowing how bugs cluster in large projects doesn’t connect with a student’s next assignment in operating systems the way that analyzing data on the transmission of bacterial infections connects with a nursing student’s lab practice.
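To be fair to the exercise, the measurement itself is easy: for Python, a few lines of standard-library code will do. Here is a minimal sketch (the `sample` source is invented for illustration, and a real study would also need to handle comments, blank lines, and a separate parser for JavaScript):

```python
import ast
from statistics import mean

def function_lengths(source: str) -> list[int]:
    """Return the length in lines of every function defined in `source`."""
    tree = ast.parse(source)
    return [
        node.end_lineno - node.lineno + 1
        for node in ast.walk(tree)
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]

# A tiny made-up module to measure.
sample = '''\
def short():
    return 1

def longer(x):
    y = x + 1
    z = y * 2
    return z
'''

lengths = function_lengths(sample)
print(lengths)        # [2, 4]
print(mean(lengths))  # 3
```

That sketch is exactly my point: producing the number is a one-lecture exercise, but nothing in it tells the student what to *do* differently in their next program.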

So here’s my question: what results do we have from empirical software engineering that undergraduates could reasonably be expected to act on? Marian Petre’s analysis of why most practitioners don’t use UML comes to mind, though since courses outside software engineering already ignore UML, the practical impact would be quite small. Several decades’ worth of findings on the benefits of code review might be a better example; what others can you think of?