
The AI Layoff Trap

A Note from the Authors of “The AI Layoff Trap”

There seems to be a lot of commentary flying around on X about our paper “The AI Layoff Trap,” so here’s a note from the actual authors. We’re researchers who use ML and AI in our own workflows every day. We’re neither boosters nor anti-AI doomers.

The paper is a theoretical model, not a forecast. It does not predict AI will “collapse the economy,” no matter how many people are insisting otherwise on social media.

The very first word of our abstract is “If.” As in, “If AI displaces human workers faster than the economy can reabsorb them…” That “if” is carrying a lot of the weight in that sentence. People are conveniently dropping it to make the paper say something it doesn’t.


The Idea in Plain English

When a company replaces workers with AI, it keeps all the savings. But those laid-off workers stop spending, and that lost spending hits every firm in the market, not just the one doing the firing. Each company captures the full reward for automating (lower costs, higher productivity) while the damage gets spread across everyone.

So automating is the rational move for any single firm. But if everyone does it, everyone ends up worse off — firms included — because their customers are the ones who just lost their paychecks.

It’s comparable to a Prisoner’s Dilemma: replace “ratting” with “automating” and you can recover part of our setup. Textbook game theory; the result shouldn’t surprise most economists. What our paper adds is the math: when the trap kicks in, how bad it gets, and what actually fixes it.
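The analogy can be sketched with a toy two-firm game. The numbers below are illustrative, ours rather than the paper’s calibration; the only point is that automating is dominant for each firm while mutual automation leaves both worse off.

```python
# Toy two-firm automation game (hypothetical payoffs, not the paper's
# calibration). Each firm chooses whether to automate. Automating saves
# labor costs privately, but every round of layoffs shrinks consumer
# demand, and that loss is borne by BOTH firms.

LABOR_SAVINGS = 4.0   # private gain to the automating firm
DEMAND_HIT = 3.0      # demand lost per automating firm, hitting both firms

def payoff(i_automate: bool, j_automate: bool) -> float:
    """Payoff to firm i, given both firms' choices."""
    gain = LABOR_SAVINGS if i_automate else 0.0
    shared_loss = DEMAND_HIT * (int(i_automate) + int(j_automate))
    return gain - shared_loss

# Automating is dominant: whatever the rival does, you pocket
# LABOR_SAVINGS and absorb only one extra DEMAND_HIT...
assert payoff(True, False) > payoff(False, False)   # 1.0 > 0.0
assert payoff(True, True) > payoff(False, True)     # -2.0 > -3.0
# ...yet mutual automation is worse for both than mutual restraint:
assert payoff(True, True) < payoff(False, False)    # -2.0 < 0.0
```

Any payoffs with DEMAND_HIT < LABOR_SAVINGS < 2 × DEMAND_HIT reproduce the same dilemma structure.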


Back to That “If”

The trap only bites if laid-off workers can’t find new income fast enough. If they can, the layoff problem stays small and might even reverse (see parameter $\eta$ in the model, and think through what could happen when it goes above 1). We’re flagging a structural risk that should be addressed now, not announcing an imminent crisis.
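A toy reading of that threshold (our own simplification, not the paper’s equations): treat η loosely as the ratio of reabsorption capacity to the displacement inflow each period. Below 1, the pool of displaced workers grows without bound; above 1, it stays pinned near zero.

```python
# Toy displaced-worker dynamics (our simplification, not the paper's
# model). Each period `inflow` workers are displaced; reabsorption
# removes up to eta * inflow of them. The pool can't go negative.

def displaced_pool(eta: float, inflow: float = 1.0, periods: int = 20) -> float:
    """Size of the displaced-worker pool after `periods` steps."""
    pool = 0.0
    for _ in range(periods):
        pool += inflow                         # newly displaced this period
        pool = max(0.0, pool - eta * inflow)   # reabsorption capacity
    return pool

print(displaced_pool(0.5))  # eta < 1: pool grows steadily -> 10.0
print(displaced_pool(1.5))  # eta > 1: pool stays at zero   -> 0.0
```

The knife-edge at η = 1 is the “fast enough” condition in plain numbers.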


We’re Not Anti-AI

AI is genuinely useful. We use it every day. Our point is narrower: when everyone automates at once, even the firms doing the automating end up worse off. This is a coordination problem, as some have rightly pointed out, not an “AI is inherently bad” problem.


On Fixes

Most of the usual ideas — UBI, retraining, profit taxes, worker equity, firms negotiating with each other — don’t solve the underlying trap in our model. The one thing that does is a tax on automation itself, in the spirit of a carbon tax. And as workers find new jobs, that tax naturally shrinks over time.
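The wind-down property can be sketched with a toy schedule (ours, not the paper’s): peg the per-period tax to the size of the displaced-worker pool, so that as workers are rehired the pool shrinks and the tax shrinks with it.

```python
# Toy automation-tax schedule (our illustration, not the paper's
# mechanism). The tax is proportional to the displaced-worker pool,
# which shrinks geometrically as a fixed fraction is rehired each period.

def tax_schedule(initial_displaced: float, reabsorb_rate: float,
                 tax_per_worker: float, periods: int):
    """Yield the per-period tax as the displaced pool shrinks."""
    pool = initial_displaced
    for _ in range(periods):
        yield tax_per_worker * pool
        pool *= (1.0 - reabsorb_rate)  # fraction of the pool rehired

taxes = list(tax_schedule(100.0, 0.25, 1.0, 4))
print(taxes)  # [100.0, 75.0, 56.25, 42.1875]
```

The carbon-tax analogy is the same idea: price the externality while it exists, and let the price fall away as the externality does.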

Are we advocating for a tax? No, we’re just pointing out that in this model, it does address the issue. Again, IN THE MODEL.


If you’re going to talk about the paper, please reference the paper. Better yet, drop the PDF into your favorite LLM and argue with it there. You’ll get a more honest, critical read than from social media. I’m not very active on X; if you want to follow my work, I occasionally post on LinkedIn.