Introducing the 2025 AI advent calendar

It’s been two years since I last created an AI advent calendar, a stretch of time that feels like a lifetime given the pace of progress in large language models and AI more broadly. Back then, I used the then-current version of ChatGPT and the first iteration of DALL·E—the image generation tool where you had to purchase tokens on the go.

This time, I’m using GPT-5.0. Rather than producing short stories based on images, I’ve asked the AI to generate 24 short essays on economic theories and concepts, each roughly 600 words. A typical prompt might be something like: “Write an essay about [topic] in 600 words.”

I quickly discovered the need to be specific. GPT loves bullet points and excessive bolding, so I had to explicitly instruct it to avoid both. I also standardized a referencing format to ensure consistency across essays. In some prompts, I provided detailed context to steer the AI in a particular direction; in others, I left it more open-ended.

Can the Slop Trap Be Avoided?

A project like this inevitably invites the question: should anyone spend time reading AI-generated content like this? Even before the rise of AI, the internet was already groaning under the weight of information overload and algorithmically curated realities. Now, with AI encroaching on every aspect of content creation and dissemination—at speed and with uncanny realism—we’re facing a new dilemma: the rise of AI slop.

GPT-5.0 itself defines AI slop as follows (ironically, throwing its own foundation under the bus):

“AI slop refers to the growing volume of low-effort, automatically generated content produced by artificial intelligence models—such as text, images, or videos—that lacks originality, accuracy, or aesthetic and informational value. AI slop is typically created at scale for engagement farming, ad revenue, search-engine manipulation, or content flooding, and often circulates without disclosure or human curation. It can include repetitive or incoherent writing, derivative art, clickbait news, synthetic videos (‘deepfakes’), or voice clones, and it contributes to information pollution by diluting authentic or verifiable human output.”

Perhaps the most immediately visible harm of AI slop is that it accelerates the production of low-quality, high-engagement content that was already dominating the digital landscape before AI. If you thought the internet was becoming a dopamine-maximizing hellscape before, AI now threatens to turn it into a doomsday machine, consuming all our attention, all the time.

Deepfakes are a particularly alarming consequence. Over time, they may erode the boundary between reality and fiction for people who rely on narrowly curated online experiences. Synthetic audio and video also open the door to sophisticated scams that neither users nor the public institutions meant to protect them are prepared for.

Can AI Content Be Valuable?

The notion of AI slop also raises a deeper question: Can AI-generated content be valuable at all, and under what conditions? Let’s return to the context of this project: an AI-generated advent calendar featuring 24 essays on economic ideas.

Would such a calendar be more worthwhile if it were written by a human? More genuine? More readable?

From a creator’s perspective, the difference is stark. Had I written the essays myself, I would have internalized the information far more deeply. Even though I selected the topics and tailored the prompts to reflect the messages I wanted the AI to convey, the process of researching and writing would have led to a much richer understanding. This speaks to a broader truth about learning: meaningful internalization takes time, repetition, and effort, none of which is replicated when outsourcing the task to AI. In other words: kids, use AI to write your school assignments at your own peril.

From the reader’s point of view, however, the trade-off is more nuanced. It’s too simple to dismiss all AI-generated content as slop. So let’s confront the central question: would a calendar written by me be better—more original, more insightful—than one generated by GPT-5.0?

I believe the answer is yes, but not decisively so in this case.

Measured purely in terms of output, I’m no match for the AI. It produced each essay in 15 to 20 seconds, orders of magnitude faster than I could have done. Even if my versions were higher quality overall, the AI wins hands down on output per hour, and for some writing tasks it is now naive to think that AI can’t significantly increase productivity. For readers, meanwhile, this creates a different kind of dilemma: not just whether to read AI content, but how to choose which content to read, a task made harder as more and more standardized AI (slop) content clusters around the mean.

If you’re interested in spending your December mornings reading short essays on quirky economic ideas, I definitely think you should read mine. But the bar to entry is low; others could do the same with equal ease. If dozens (or hundreds) of people create similar AI-generated calendars, the problem becomes one of attention allocation. With limited time, how does a reader choose the best one when they’re all written by AI? We face here, I think, a fairly simple economic trade-off: as the cost of generating content falls, the cost to the reader of allocating scarce time and attention efficiently rises. After all, much as we might like to, we can’t plug a cable into the backs of our heads and download content, Matrix-style, at least not yet.

Still, that doesn’t mean AI content is worthless. If the content is thoughtful, well-structured, and enjoyable, perhaps it is worth your while, especially if you’re curious about economics and open to seeing how AI interprets and conveys complex ideas.

Thanks for reading—and stay tuned. The first essay drops on December 1.