Human Connection
I was an enthusiastic proponent of MOOCs (self-paced online video courses) when they first appeared. Like many people, I believed that recording the best teachers in the world and putting those recordings online with auto-graded exercises would democratize education on a global scale. I eventually realized that I was guilty of a category error: the real divide in teaching and learning isn’t in-person versus online, but interactive versus directive. If you and I are riffing off each other’s ideas in real time, it doesn’t matter (much) if we’re sitting side by side or on a video call: we have a human connection that isn’t present if you record a video for me to watch or write a book for me to read.
I’ve been thinking about this as I’ve been doing reviews on Scribophile, teaching myself Gleam on Exercism, supervising some undergraduate students at the University of Toronto, and trying to figure out how I feel about AI coding tools:
- “Doing reviews” means commenting on other people’s work as well as posting my own fiction, because doing the former is the only way to earn the points needed to do the latter. It’s a simple, transparent, and remarkably effective mechanism.
- I’m not actually “teaching myself” on Exercism. Instead, I’m being taught indirectly by the people who have contributed exercises for me to do and by those other learners who have posted their solutions for me to compare mine against. I don’t have to post mine or take part in communal code reviews in the way that I do on Scribophile, but I learn a lot faster (and enjoy learning more) if I do so.
- All the students I’m working with use AI tools as frequently and un-self-consciously as the students I had 15 years ago used Stack Overflow. One of them said it’s like pair programming with someone who knows a lot but thinks he knows everything; another student I spoke to at CUSEC a couple of weeks ago said that she mostly uses AI for “why not?” questions, as in, “Why doesn’t this piece of code work?”
- I’m really intrigued by Unblocked, which uses LLM technology to answer questions about legacy code bases. Why do we have to test these three conditions in this precise order? Why do we have to repeat this configuration value in two places? Unblocked can answer these questions as well as someone who worked on the code two years ago and hasn’t touched it since, but only if it’s fed the whole conversation: code review comments, chat, docs, and everything else.
What ties all these together in my mind (this morning at least) is that “interactive vs. directive” continuum. Are LLMs just (or “just”) a way to interact with other people indirectly? When I ask an LLM a question, its answer is a statistical amalgam of answers that people I’ve never met might have given; should I put that in the same mental bucket as viewing other people’s exercise solutions on Exercism, reading other people’s answers to questions on Stack Overflow, or looking through old chat logs and code review comments? It certainly feels more like that to me than watching a video or reading a blog post like this one.
I realize that LLMs only provide the illusion of a human connection, but does it have to stay that way? Can an LLM be an extra voice when a group of students debate each other rather than a substitute for person-to-person interaction? Can the AI in my IDE tell me who on the team I should talk to rather than trying to be spicy auto-complete? I think that investigating those questions will help us avoid making the same kind of category error I made all those years ago.