How It Will Be Used

I was a big proponent of MOOCs when they first emerged. Why have thousands of teachers deliver mediocre lessons when we could instead make the best teachers, presenting the best content, available to everyone? It didn’t play out that way: building and maintaining a good online course is difficult and expensive, so almost everything we actually got was less effective than what we’d had.

However, MOOCs did work well for three groups:

  1. A minority of people are autodidacts capable of learning from just about anything. (I’ve heard their number put at one in seven, but can’t back that up.) I strongly suspect that most professional academics are in this group, which leads to a lot of survivorship bias in judging MOOCs’ success or failure.

  2. Right-wing political groups have long viewed teachers with hostility, particularly if those teachers are unionized. They won’t say “we can gut our opponents by deskilling their profession” in public, but “we have to embrace the future and it will save us money” serves the same end.

  3. And speaking of money, I wrote in 2014 that “learning at scale” is the same as “ubiquitous surveillance in the classroom”. I can’t talk about my direct experience of this, but LinkedIn didn’t buy Lynda because education is a profitable business; they spent 1.5 billion dollars in order to give employers even more information about job applicants.

During COVID, what had once been a witty quote turned into reality: online education is what everyone wants for everyone else’s kids. As public schools pushed more content online, private after-school tutoring exploded. Some of it is conspicuous consumption: today’s middle-class parents humble-brag to each other about how much they’re spending to get their kids into the right university in the same way that they casually mention how much their latest home renovations cost. But some of it, I think, stems from a growing sense of social insecurity. More and more middle-class parents believe (rightly) that their jobs and their children’s futures are at risk. If poor kids can watch the same videos as yours, then you’re going to start looking for some secret sauce that they can’t afford.

Which brings us to AI in the classroom. I’ve been using LLMs to help me with a bit of programming at work, and I understand why people like Jon Udell are so excited by them: they really do make it possible for non-experts to solve difficult problems in a fraction of the time they would otherwise need. My daughter isn’t exactly excited by them, but frequently consults ChatGPT when she’s struggling through math homework, in part because a third of the teacher’s solutions to practice problems are wrong.

So yes, I think AI has a lot of potential, but I’d like to learn from my mistake with MOOCs. In practice, I think AI will be used to hollow out the teaching profession. The Japanese call this kūdōka, and it inevitably widens the gap between haves and have-nots. In a few years—ten, maybe, but not twenty—I expect that most low-income children will be “learning” from bots while what’s left of the middle class spend an ever-larger fraction of their income on actual human teachers for their kids, both because it produces better results and to prove that they are, in fact, middle class. Those who are profiting from this politically and financially will point at the poor kids who succeed as proof that everyone could, and those who saw the possibility but discounted the risk will still be telling themselves that with just a bit more tinkering and a bit more funding we might still reach the promised land.

I hope I’m wrong.

If poor inner-city children consistently outscored children from wealthy suburban homes on standardized tests, is anyone naive enough to believe that we would still insist on using these tests as indicators of success?

– Kenneth Wesson, in Littky and Grabelle’s The Big Picture