A Modest Proposal
Everyone in academia is trying to figure out what to do about AI. I don’t have a general answer, but I do have a proposal for dealing with the flood of AI slop in research publishing. My idea is built on several observations:
- Credit cards became popular in the US twenty years before they took off in Europe. The reason is that Senator William Proxmire pushed through legislation in 1975 that put 100% of the cost of fraud on the credit card companies themselves. His argument was that the technology was moving too quickly for regulation to keep up. Instead, this legislation put the pain on the people with the power to make change. Credit card companies could respond with whatever mix of prevention, detection, and insurance made sense in the moment. And yes, they passed the cost on to consumers in their bills, but since there was competition between the big providers, it was a reasonably efficient way to achieve the desired goal.
- Some microlending services work in a similar way. Instead of lending money to just one person, these services lend to small groups (typically half a dozen people). The people in the group aren't using the money together, but they are collectively responsible for its repayment, so that if one defaults, the others bear the cost. This model works best when the people in the group have pre-existing social ties: they don't have to be friends, but if they are all from the same neighborhood, their community will help put pressure on potential defaulters.
- I think the big five academic publishers have done a lot of harm to research, but in the short and medium term we have to accept that they have a lock on the publication process that is central to recognition and promotion. Right now, those publishers need a solution to the twin problems of machine-authored submissions and machine-generated peer reviews just as much as we do.
- Finally, the basic unit of organization and accountability in academia is the department: not the lab, and not the university (economics barely knows physics exists).
So here's my proposal: if Person A from Department B submits a paper or a review to Journal C, owned by Publisher D, and that submission is found to contain AI slop that materially affects the results, then Publisher D will not accept new submissions from anyone in Department B for twelve months. Here's why it works:
- It focuses on desired outcomes rather than on mechanisms (which I think is essential right now because the technology is moving so quickly).
- It puts the pain at a level where people know each other and have leverage over each other.
- It brings the publishers on board.
There are lots of practical problems, such as what to do about multi-author papers and how to handle appeals. There will also undoubtedly be howls of outrage the first couple of times the penalty is imposed, because yes, innocent parties are going to pay a price for someone else's misdeeds. But as with parking laws, the penalties only need to be enforced enough to have the desired effect. I'm sure there are a hundred problems I haven't thought of yet, but this feels more practical and actionable than most of what I'm hearing. Feedback is always welcome.