AI and the Evolution of Software
* ‘AI’ and ‘LLM’ are used interchangeably in this essay.
The conversation around AI has settled into a predictable cycle: the announcement of a reality-altering feature from a new model, followed by a scientific study reminding us that AI is neither truly intelligent nor capable of reasoning, and may, in fact, be making us dumber. I should be upfront: I think AI models are great. I use them as much as I can, I try to learn with them, and I believe they will fundamentally transform how we work. In this essay, I’ll explain why.
Don’t slay a straw man
Many criticisms of AI attempt to catch the technology off guard by highlighting tasks it cannot perform—be it reasoning, solving a simple logic puzzle, or providing consistent answers. More often than not, these critiques amount to little more than attacks on a straw man. AI models belong to an entirely new class of software and should be analyzed as such.
Let me illustrate this through my own domain. I routinely use tools like the Microsoft Office suite and other software for creating and manipulating text, data, images, and sound, as well as platforms that compile and run code. Sometimes, these tools draw on cloud-based data; other times, they collect information from the web for specific tasks. Even generic activities like online research rely on these systems. In all these cases, the interaction between user and software is governed by strict, predefined rules. To get Excel to perform a task, I must follow its specific syntax—click the right buttons, enter formulas correctly, and so on. But with a large language model (LLM), I interact with software using natural language, which is then translated into executable code. This represents a fundamental evolutionary step in the realm of software models, one that is too often overlooked in discussions about AI.
This evolution plays out along two dimensions. First, users transform ideas expressed via natural language into functional code, ready to be exported into other environments. For domain experts, this is enormously powerful. It enables them to move from idea to execution rapidly, with AI providing iterative assistance along the way. Second, AI allows users to complete tasks within the AI environment itself, using natural language instructions—provided the AI has access to the internet, can read and manipulate user-generated data, and possesses autonomous, agent-like capabilities.
Thus, a spectrum emerges: traditional, rules- and syntax-based software at one end, and generic, multipurpose AI/LLMs at the other.
Much of the frustration and disappointment with AI stems from misplaced expectations. People expect AI to behave like traditional software—producing consistent, predictable outputs from natural language prompts. But there is an inherent trade-off between a system capable of exploring creative solutions, generating original content, and suggesting improvements, and one that rigidly follows user instructions based on a fixed architecture. Consider so-called “hallucinations,” for which AI is frequently criticized. These are features, not bugs, of a system designed to be explorative and, at times, ambiguous in how it solves problems. Similarly, when AI is accused of reinforcing confirmation bias during research tasks, we should ask: are humans not equally prone to the same biases? Of course we are—and frequently so.
This doesn’t mean AI hallucinations or biases are harmless. They are real costs that must be minimized. Users are right to expect AI to translate natural language prompts into more predictable outputs—producing consistent results for similar prompts and avoiding fabricated information. But they also reasonably expect AI to help them explore problems creatively, offer alternative approaches, and deliver these outputs far faster than a human could using traditional software or code. The combination of these two is AI’s promise—even if the technology does not fully live up to it yet.
In my view, the future of AI lies in parametrization—giving users control over where their model operates along the spectrum described above. Whether through user-operated models or autonomous agent systems, AI will evolve into a flexible tool whose behavior can be tailored to individual needs. It is along this spectrum that the true AI “killer apps” will emerge and where disruption of existing tools and workflows will occur. Developers should focus their efforts accordingly, and software users must recognize that we are now entering a fundamentally different era—one where interactions between people and software will take on a new, more dynamic dimension.
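One parameter along this spectrum already exists in most LLM APIs: sampling temperature, which trades predictability for exploration when the model picks its next token. The sketch below is a toy illustration of that trade-off, not any vendor’s actual decoding code; the function name and the logit values are invented for the example. At near-zero temperature the choice is effectively deterministic, while at high temperature alternative options start to appear.

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick an index from `logits`, scaled by `temperature`.
    Low temperature -> near-deterministic (argmax-like);
    high temperature -> flatter distribution, more exploration."""
    if temperature <= 1e-6:
        # Degenerate case: always take the highest-scoring option.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting categorical distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]  # invented scores for three candidate outputs

# Near-zero temperature: the same option is chosen every time.
cold = {sample(logits, 0.0, rng) for _ in range(100)}

# High temperature: lower-scoring options also get selected.
hot = {sample(logits, 5.0, rng) for _ in range(100)}
```

A user who wants traditional-software consistency dials such parameters toward the deterministic end; one who wants brainstorming dials them the other way. That is parametrization in miniature.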
Nothing stops this train?
Data show that AI adoption—and the investment fueling it—is accelerating rapidly. The early boom-and-bust cycles of the internet offer a cautionary parallel. In its early days, internet hype far outpaced the technology's ability to deliver concrete returns. The bubble eventually burst, but the internet itself matured, becoming woven into nearly every aspect of daily life and economic activity.
AI may follow a similar path. It is possible that the massive investment currently pouring into AI models and their delivery at scale is at odds with the technology’s ability to produce returns on the timeline investors and users expect. If so, an AI bubble is forming—one that may eventually pop, leaving investors nursing large losses and the public wondering what all the fuss was about. Yet this reality could coexist with the technology itself maturing, growing, and becoming indispensable in both private and commercial spheres, much as the internet did.
A darker possibility emerges as we consider how AI will scale over time. Today, the internet is, for the most part, a public good, at least in the developed world. Jeff Bezos is many orders of magnitude wealthier than I am, but the phone and computer he uses to access the internet are functionally similar to mine, as is the internet he can access. AI, however, may follow a different trajectory. It is conceivable that the AI available to Bezos and other ultra-wealthy individuals and organizations will far surpass anything accessible to the average user. If that happens, the next evolution of the internet—centered around AI—will look very different from its early, more democratized stages of development.