I’ve spent over 25 years watching software systems accumulate entropy and drift into disrepair and failure. I’ve led several projects to bring such systems back on track and know the excruciating effort that takes. That experience makes me cautious about claims of unbounded productivity gains with AI. The early productivity gains are real, but we should discuss complexity and entropy before jumping to conclusions.
While there is no universal definition of complexity, when we say some software is complex, we usually mean a system with many parts that occasionally behaves in ways that go counter to our understanding. Entropy is a measure of uncertainty and unpredictability. It shows up as tangled dependencies, unpredictable states, cascading failures, etc.
Frederick Brooks explored these topics in the context of productivity nearly 40 years ago in his IEEE Computer article titled No Silver Bullet: Essence and Accidents of Software Engineering. Since that article is behind a paywall, see this 1995 reproduction. In that article, Brooks made a few observations: (a) software construction involves two kinds of complexity, essential and accidental, (b) essential complexity is irreducible, and (c) there is no silver bullet for productivity.
AI is reshaping both essential and accidental complexities. We can now automate several aspects of determining what to build and how to build it. It sounds like we have found the silver bullet that Brooks had given up on: more tokens → higher productivity → fewer people → more profits. But is that so? Let’s probe the origins of complexity and entropy before jumping to that conclusion.
Path dependence
The idea of path dependence is that early choices have irreversible consequences. As a system gets successfully adopted, early choices create a lock-in. We see it in every successful tech company: some early assumptions and designs usually dictate the rest of the software’s evolution. Those assumptions and designs ultimately influence not just code evolution but also team structures and even culture.
Paul David, the economist who introduced the idea of path dependence, used the QWERTY keyboard as an example: typewriter designs from the 1880s have locked us in ever since. Path dependence constrains future choices. By limiting choice, it increases the cost of change as requirements shift and business conditions evolve.
Once I read about path dependence, I could not unsee its impact in the companies I worked at in my career. At every place, path dependence was at play – large monoliths, custom frameworks that lock the data in particular databases, particular team structures because “it has always been that way”, etc. Changing such things takes enormous time and effort. Most of what we consider legacy, or technical debt, is often the result of path dependence.
As we add more features to such software, the path dependence constraint will lead people to work around it. Then it becomes more difficult to understand the implications of changes. Entropy accumulates over time as we try to work within the constraints of path dependence.
Will AI help you circumvent path dependence? One might argue so: you could direct an agentic coding tool to refactor and rewrite the code to tear path dependence apart. In practice, however, that can prove disastrous. Rewriting any complex system, including all its dependencies, while preserving data and existing user behavior, is a hard task. The risk is high.
The next three factors are based on two excellent books on systems thinking: Thinking in Systems by Donella Meadows and Drift into Failure by Sidney Dekker.
Competing feedback loops
In the first chapter, Meadows introduces the concepts of “stocks” and “flows” and two kinds of feedback loops: balancing (stability-seeking) and reinforcing (amplifying, growth-seeking). Stock is the material you work with. In the context of software development, stock refers to the amount of code, services, data stores, various components, teams, and people. Flows are activities we do to manage the stock. Flows change the stock.
The best way to think about stability-seeking and amplifying feedback loops is to ask whether your organization is changing the stock for stability or for growth. When focused on stability, you constrain the flow to reduce bugs and improve the system’s stability and performance. When focused on growth, you increase the flow to prioritize business growth. You can try to improve both at the same time, but you cannot ignore the tension between the two.
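To make the stock-and-flow picture concrete, here is a deliberately simplistic Python sketch. The rates and the “entropy” proxy are my own assumptions for illustration, not Meadows’; the point is only the shape of the tension between the two loops.

```python
# Toy stock-and-flow model of Meadows' two loop types.
# All numbers are invented for illustration.

def simulate(weeks, growth_share):
    """growth_share: fraction of weekly capacity spent on new features
    (the reinforcing loop); the rest goes to cleanup (the balancing loop)."""
    code, entropy = 1000.0, 0.0  # stocks: system size, accumulated disorder
    capacity = 100.0             # weekly flow the team can apply
    for _ in range(weeks):
        added = capacity * growth_share          # growth flow: new code
        cleaned = capacity * (1 - growth_share)  # balancing flow: repair
        code += added
        # each unit of new code adds some disorder; cleanup removes it
        entropy = max(0.0, entropy + 0.3 * added - 0.5 * cleaned)
    return code, entropy

# A growth-heavy team ships more code but accumulates disorder;
# a balanced team ships less but keeps entropy near zero.
print(simulate(52, growth_share=0.9))
print(simulate(52, growth_share=0.5))
```

Run it with different splits and the trade-off is stark: the growth-heavy configuration ends the year with far more code and a large entropy stock, while the balanced one ends with less code and none.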
In practice, no software organization does either alone. Some parts of an organization may be stability-seeking (such as your infrastructure or platform teams), while others may be growth-seeking (such as your feature teams). Similarly, your architects and senior engineers may prioritize the architecture’s stability and integrity, while the rest may prioritize speed. The tension between these two leads to conflicting choices and workarounds. The net result is an increase in the overall system’s entropy.
Now, bring AI into the picture. Unless constrained, AI will rapidly accelerate conflicting feedback loops. Empowered by a high-speed tool, each team could attempt to optimize the system in conflicting ways – some for stability, and many for growth. Most might get their outcomes in the short term, but very quickly, the competition between these factions will increase entropy faster than it would without AI.
Delayed feedback
Delayed feedback is like the slow drip you forgot to fix in the basement: by the time you notice, there is mold in the house. In software, the consequences of certain decisions take time to manifest. For example, you might delay some cleanup or scale-out activity because everyone is busy making other changes, unaware that entropy has been increasing and the system is approaching a critical state. Things seem fine for a while, and then one day you find yourself firefighting. The delay in feedback creates a false sense of safety, which in turn leads to delayed repairs. Per Dekker, delayed maintenance and repair contribute to systems drifting into failure.
Delays show up in other forms, too. In one case, a minor data-corruption issue remained undetected for several months, and correcting it became expensive and time-consuming. In any large software-powered enterprise, there are likely several such slow feedback loops at play. Such delays usually surface later as unplanned maintenance work and eat into whatever productivity gains you had counted on.
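The dynamic is easy to see in a toy simulation. Everything below is invented for illustration: disorder accrues a little each week, the team only acts on a signal that lags reality by a fixed number of weeks, and the repair cost compounds while the problem sits unnoticed.

```python
# Toy sketch of a delayed feedback loop. All numbers are assumptions
# chosen for illustration, not measurements.

def weeks_until_firefight(delay, threshold=50):
    """Return (week the problem is finally noticed, repair cost then),
    given a reporting delay in weeks."""
    disorder = 0
    history = []
    for week in range(1, 200):
        disorder += 2                      # slow drip: 2 units/week
        history.append(disorder)
        # the signal we act on lags reality by `delay` weeks
        observed = history[week - 1 - delay] if week > delay else 0
        if observed >= threshold:
            repair_cost = disorder ** 1.5  # cost compounds while we wait
            return week, round(repair_cost)
    return None

# With no delay we react right at the threshold; with a 20-week delay
# the real disorder is far past it by the time anyone notices.
print(weeks_until_firefight(delay=0))
print(weeks_until_firefight(delay=20))
```

The delayed case is noticed 20 weeks later, but its repair bill is more than double, which is the false sense of safety in numeric form.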
Will AI help detect such delayed feedback loops sooner? Will it prioritize such repair work over other kinds of work without our prompting? Or will we have more such delayed feedback loops as we rapidly change the system with AI? While we don’t have evidence for either, AI will likely introduce more delayed feedback loops, requiring more frequent unplanned maintenance.
Stale/incorrect models
Meadows also reminds us that whatever we think we know about the world is a model. Models are incomplete, and different people construct different models to deal with the world outside their minds. Meadows says in Chapter 4,
… our models fall short of representing the world fully.
What does this have to do with software? As software ages and multiple people touch it, our models drift apart. For example, the senior-most person in the company who wrote the original software (and thus the creator of path dependence) might have one model of the software. A junior engineer who joined the organization recently will have a very different model. As software ages, changes by different people with different models lead to even more divergent models of how the system is supposed to work. Eventually, nobody has the complete picture needed to reason about the system. Most technical debates I’ve witnessed are the result of people holding different models of how the system is supposed to work. People argue about how to add or modify something before checking whether they share the same assumptions about how the current system works.
As more changes get made, the coupling within the system changes in unexplainable ways. The result is increased entropy. Changes become time-consuming to make and difficult to validate.
AI will likely add more fuel to this situation unless we find ways to coerce everyone to use the same model of how the system is supposed to work. It will be difficult to construct such unified models for large monoliths, or even for monorepos.
Will AI have a better model of software (including systems’ runtime behavior and user behavior), carefully balance between stability-seeking and growth-seeking patterns, and manage entropy?
No. The four factors we discussed above – path dependence, competing feedback loops, delays in feedback, and, above all, incomplete models – will create a complexity ceiling for AI.
Just consider our models. Like us, AI builds a model of the system and uses that model to determine its actions. Like all models, an AI-built model will be imperfect, too. Further, multiple people working on the same system will likely see their AI tools generate slightly different models, each tailored to its user. AI thus magnifies the same model problem we humans have. It is like 100 teenage developers working on the same system, each with a different model of it.
Since we’re the ones setting the goals for AI, we will likely continue to favor growth-seeking feedback loops over stability-seeking ones, delay maintenance, and allow our systems to drift toward failure faster.
AI will require us to hold on to good software engineering principles even tighter. Those who understand this will build systems that grow and last. The ones chasing unbounded productivity gains won’t know why they failed.