Is AI-generated code tech debt?
Full disclosure: I used to be an AI coding agent skeptic. Trying out LLMs only helped me figure out bugs, and when using agents to assist while coding I would be underwhelmed by the results, overwhelmed by the amount of code produced, and ultimately uncomfortable going further until I understood what had been generated.
OpenClaw is the project that led me to change my mind. Its sole contributor built a project of more than 500K lines of code in less than 3 months, which is impressive to say the least. And then I had my answer: I had been trying to use AI agents within my existing coding workflow, which didn't give the model enough context on the problem to solve or guardrails around it, and of course led to generic, poorly organised code. It's all about adapting your workflow so an agent can work alongside you.
Here is how I approach problem solving nowadays:
- I design my interfaces: this is a process I take the appropriate time over, which means purposefully slowing down
- I provide the agent with the context of the problem to solve and the interface design, with extra guardrails such as a style guide (via an AGENTS.md)
- the agent generates tests: a collaborative and iterative approach works wonders
- the agent implements the various subclasses and the application logic
- I or the agent use the tests to validate that the code is working as expected
This approach has allowed me to deliver projects much faster. I am still working on improving my speed of implementation as I sit in the discomfort of not having the same in-depth knowledge of my codebase.
It is this discomfort that I wanted to write about. What happens when software engineering teams become less and less familiar with the codebases they maintain?
Exploding Complexity
One of the most important parts of software engineering is organising the codebase to solve problems via complex solutions while managing the cognitive load of maintaining those solutions. The process of writing code helped engineers stay familiar with their implementation, though imperfectly. We've all been there, right? Reading code we're unfamiliar with, asking what this developer was thinking, and realising we're the culprit. Even when you implement a function yourself, it's only a matter of time before you either forget your code, think it could be better, or both.
So how can we expect engineers to be familiar with code they spent less time with, because it was implemented faster and only a small part of it was written by them? This is the world we live in now.
I want to be pragmatic here: it's not about halting the march of progress, but about understanding how to adapt ways of working and processes to these new tools, which are very much challenging the parameters we use to make decisions about codebases. Sure, we can realistically repay tech debt in 3 months' time if we've only released 2 features by then and ship code at a low frequency. What happens when any of these variables doubles, triples, or is multiplied by 10? How do you keep up with the pace at which tech debt is introduced? Is it still manageable, or do we accept it as a new normal?
CircleCI's state of software delivery report for 2026 explains that the time to recover from a failed build increased by 33% from 2023 to 2025, as engineers take more time to identify root causes while navigating unfamiliar codebases.
One of the biggest challenges teams face in integrating AI-generated changes is debugging unfamiliar code when a build fails.
Managing Complexity
As teams are able to produce code at a much faster rate than ever before, managing the complexity of codebases that grow faster than any human can possibly adapt to becomes essential.
Modular and cohesive codebases are easier for humans to understand, as implementation complexity is abstracted away, but they also make it easier to manage context size for coding agents. AI coding agents merely highlight that software engineering was never about writing code; it was about implementing solutions and managing their complexity to empower organisations to learn and iterate.
As we are still very early in the adoption of AI, and in the growth of the proportion of AI-generated code in production codebases, the consequences will only start to be seen in the near future. But it's not difficult to extrapolate: just as poorly designed human-written codebases become difficult to maintain over time, AI-generated ones (even partially) will trend to chaos much faster. The answer is to embrace the software engineering best practices developed and battle-tested over the past 50 years.
Managing complexity is not just for humans. AI agents can get lost in context rot and become less efficient at completing tasks. It turns out that what makes code easier for humans to manage also makes it easier for coding agents to navigate: modular codebases with well-designed abstractions allow changes to be made without needing the full context of the codebase.
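As a hypothetical illustration of that point (the `quote` function below is invented for the example): when a module exposes one narrow, well-named contract, a human or an agent can change the rule inside it with only this file in context, because callers depend on the signature, not the internals.

```python
# pricing.py: the only file an agent needs in context to change pricing rules.
# Everything callers rely on is captured in this one signature and docstring.

def quote(base_price: float, quantity: int) -> float:
    """Return the total price, applying a 10% bulk discount above 10 units."""
    if quantity > 10:
        return base_price * quantity * 0.9  # discount rule is a local detail
    return base_price * quantity

# A caller elsewhere in the codebase only knows the contract:
total = quote(base_price=5.0, quantity=20)  # 90.0
```

Changing the discount threshold or rate touches one function body; nothing else in the codebase needs to enter the agent's context window, which is exactly the property that keeps both human cognitive load and agent context size bounded.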
SDLC disruption
AI is a new layer of abstraction: just as we started by managing memory manually (via low-level programming languages), we progressively handed that responsibility over to the programming language (e.g. high-level languages).
This new layer of abstraction is powerful, but it doesn't fundamentally change how we need to organise code; it actually reinforces the need for software engineering as a practice. Engineers designing high-level abstractions while AI implements the logic necessitates involving AI throughout the software development lifecycle. Producing more code faster challenges the cognitive limits of humans, so designing and staying familiar with interfaces gives a strong foundation for navigating the codebase without knowing every single implementation detail. Agentic solutions are (and will be) required to assist engineers in navigating codebases at every step of the software development lifecycle: design, implementation, testing & review, deployment and incident management. This will become vital as the percentage of AI-generated code grows.
Engineering rigour is more important than ever. Software best practices have allowed us to iterate faster while making successful implementations reproducible over time. AI has multiplied the speed of implementation. The right guardrails are necessary to organise the implementation work, but also to manage every step of the lifecycle. Software design can finally become the step requiring the most human input and feedback. We've created a two-tier system where humans and AI share the "responsibility" of managing a codebase, and this responsibility needs to be extended throughout the SDLC: Anthropic just released Claude Code Review, which is a critical step forward in this journey.
References