The Parts Don't Make The System: Why AI won't replace developers (yet)
By James Eastham
Welcome to the first AWSCQ of 2026!
We’re kicking things off with one of our favourite people - the brilliant James Eastham.
When James isn’t running ultra marathons, you’ll find him talking all things serverless as a Developer Advocate at Datadog.
James has spoken at Comsum so many times it doesn’t feel like a Community Summit without him!
Speaking of which we’ve got our first ever Comsum in Birmingham this June.
Hit the link and grab an early bird ticket now!
Ok over to you, James!
It’s a stressful time to be working in the software engineering industry, isn’t it? If you believe the hype, it’s only a matter of time before we all lose our jobs, the robot overlords take over all aspects of software engineering, and we all wish we’d trained to be tradesmen instead.
I’m hugely cynical of the AI hype, but equally I’ve seen Claude Code produce some top-quality code that’s as good as, if not better than, what I would have written myself.
Many of you who have tried these tools will, I’m sure, have seen similar progress.
But I’m still cynical. Selfishly, I’m writing here partly to stress test my own thinking on what our future might look like.
Reach out if you agree, shout at me if you don’t.
Software Isn’t Code
Building software isn’t just writing code; I don’t think that’s a controversial statement. It’s deployment. It’s security. It’s observability. It’s evolving the system over time.
It’s understanding and interfacing with the business. It’s dealing with overzealous product managers or C-suite executives with big dreams about delivering shareholder value by replacing all of us rather expensive software engineers with robots.
All of these things are part of the software lifecycle. And each of these individual components, taken on its own, can probably be automated away with an LLM - maybe apart from the understanding and interfacing with the business part. LLMs are pretty good at churning out millions of lines of code. I’ve used Claude Code extensively to refine GitHub Actions pipelines. I’ve pointed it at the Datadog MCP server to help improve the performance of my application, based on actual production telemetry.
This misses one big challenge though. All of these individual, nicely automatable things working together are what make up the entire system.
The actual properties of the system emerge from the individual technical components, plus the spaces between the components, plus a collection of fallible humans.
There is no way you could possibly predict what those emergent properties will be.
If you are using an LLM to automate away all those individual parts, at some point it’s all going to need to come together to solve a user’s problem.
You can see this in action if you take a peek outside of the software development industry.
Learning From The Real World
In a forest, a tree will grow from a seed. That seed itself has fallen from another tree, hit the ground, and been lucky enough to start to grow. The tree can’t possibly function on its own. It’s taking nutrients from the soil; when it’s small it might rely on other plants like brambles to prevent animals coming along and eating it. If the trees grow too dense, the forest floor won’t get much sunlight and new trees will struggle to grow. From that little seed, the possible outcomes can’t possibly be predicted.
Back in the 1950s, a cyberneticist and psychologist called Ross Ashby published his law of requisite variety. The law states: ‘When the variety or complexity of the environment exceeds the capacity of a system, the environment will dominate and ultimately destroy that system.’
Sticking with our nature analogy, we are seeing this play out in real time with climate change. As the climate changes rapidly, individual ecosystems don’t have time to evolve, and they ultimately die. The environment will always win. The same law applies to many different domains: if a system operates inside a fast-changing and complex environment, it needs the ability to deal with change.
Something operating in a fast-changing and complex environment. Sounds a lot like software development, doesn’t it?
You need to think about the entire system.
What Does It Mean For Software?
If I’m engineering a physical building, there’s a reasonably fixed set of constraints to deal with. The laws of physics don’t change all that often; gravity is pretty static at this point in history. If I’m a hands-on engineer on a production line, the characteristics of the sheet of metal I’m working with are constant. Machines or automations can deal with this consistency.
However, software engineering is different. One of the reasons software is as prevalent as it is, is that computers are really quite good at doing the same thing over and over again - a task that we humans aren’t so great at. But whilst the individual parts (writing code, building pipelines, wrangling Terraform scripts) are pretty consistent and deterministic, the environment we are operating within is anything but.
When we apply Ashby’s law to this challenge, we end up in an interesting place. We’ve got these fixed, deterministic components that on their own are relatively ‘simple’. Individual components might be low-variety (there are only so many ways to write a CRUD API), but the system they operate in is high-variety.
Just because you have an LLM blasting through the low-variety parts doesn’t mean you are also solving the high-variety parts. The system as a whole needs the ability to respond, to evolve and to deal with change over time.
One-shotting a new feature through an LLM might make you feel good, but how is that feature going to evolve over time? How is it going to deal with the messy, ever-changing emotional worlds that we humans bring to the party?
This is only part of the problem though. The individual ‘parts’ that make up the software development lifecycle are largely automatable. Modern software systems are anything but: they are made up of many technical components (hundreds of services, shared databases, event buses) and embedded in a social system (product managers, users, shareholders with competing incentives). Both contribute a high amount of variety that an LLM can’t absorb. All of these different components provide feedback loops, and it’s only when you consider the entire system that you start to see the emergent properties.
Whilst there are only so many ways you can write a handler for a Kafka topic in .NET, or write a GitHub Actions pipeline, or deploy a serverless function, there are a myriad of ways these things can be put together, and infinitely more ways they can be tipped out of balance.
You might be able to apply automation to some of the individual components, but there is no possible way you could begin to predict all the possible ways those components could work together.
Now, the counter-argument to all of that assumes humans are part of the loop. If we remove humans completely, then maybe we are on to something. But as Moltbook might have proved to us, agents are just as fallible and prone to doing crazy things. If you aren’t familiar with Moltbook, it’s a Reddit clone built purely for agents. Humans aren’t allowed to post, only agents are. Spend some time there and you’ll realise how weird a place it is. And whilst I certainly don’t believe the probability machines we call agents are sentient, the context they are given certainly impacts how they interact with the world.
Even if one day we only build systems for agents, and we humans never interact with a piece of software directly, each of those agents is going to have an element of unique context. Of unique guiding principles and unique ‘engineering’.
Hey, look at that - we’re almost building systems for humans again, if you squint hard enough.
What Does This All Mean For You?
I stuck my head in the sand for quite a while with AI-assisted development. I refused to engage out of principle. Now, I have a slightly different view.
If you categorize yourself as someone who writes code, and that’s it, you are probably in trouble. LLMs are very good at writing code, provided they are in the hands of someone who knows what they are doing. And right now, that’s a big caveat.
A phrase I’ve heard a lot is that LLMs are an amplifier. Note that there’s no guarantee the amplification is positive rather than negative.
If you have good engineering discipline, you understand how to structure evolvable software, you understand the caveats and trade-offs, and you practise things like test-driven development, you will probably live to fight another day. If you’re releasing to production in a completely automated way, numerous times a day, and doing that in a safe and reliable way - fantastic, AI is probably going to positively impact your software development process.
If you have dreadful engineering practices, you don’t test anything, and your deployments to production require a single person following a 25-step runbook to manually deploy a binary - honestly, you’ve got bigger problems, and no amount of AI is going to save you.
Most of you reading probably sit somewhere in the middle; I’ve certainly worked on very few perfect engineering teams. Most have some good practices, and some dreadful ones. Remembering that AI is a tool that amplifies is important when you’re looking at where it can add value in your organisation.
Try to shift your thinking, though, from someone who identifies as a ‘developer’ to someone who thinks about the entire system. To someone who imagines themselves more like a gardener and less like an engineer. Someone who cultivates a healthy system over the long term. Healthy, meaning it solves your users’ problems in a way that is simple for the people running the system.
Instead of getting caught up in an individual piece - an individual line of code or feature - focus on the entire system.
Think about bigger questions like:
You look at your observability backend and can see there is a service you thought was retired but is actually still receiving traffic. Is it adding value? Or should you dive down that rabbit hole and look at ripping it out?
Your team spends 4 hours every Friday rotating secrets, is there a better way to manage that?
You spend 10 hours a week monitoring queue depths and manually tweaking scaling behaviours for your workers running on Kubernetes; should you run a proof of concept using Lambda instead?
Any or all of the above.
As a gardener, your role is less about writing code and more about deciding what code needs to be written, or what code needs to be deleted. As the relative cost of writing a line of code falls close to zero, the idea of malleable software becomes more convincing. Software built not for scale, not for millions of users, but to solve the specific problem your specific use case has. If you see yourself as a gardener of the entire system, these problems become clear pretty quickly. But to do that, you need to think about the entire system.
You are a steward of the entire system, and if you consider yourself in that way then hopefully you feel a little bit better about the future. I for one seem to swing back and forth between full-on existential crisis and happiness about the potential power LLMs give us. Every single time, I come back to the idea of systems thinking, and that normally helps calm me down.
Thank you James!
A massive shout out to you for putting this issue together.
What a great way to get the 2026 AWSCQ train rolling!
And that’s almost a wrap on this AWSCQ - but before you go, we’ve got some interesting insights over in Sponsor Corner from Chainguard’s 2026 Engineering Reality Report:
Take it away Chainguard…
Software teams are burning out - not from innovation, but from maintenance.
72% of engineers say constant demands make it hard to build new features.
They spend just 16% of their week writing code - the work they find most rewarding.
And 66% of tech leaders are now worried about retaining talent as a result.
The takeaway?
If we want innovation, we need to give developers the time and space to build.
Explore more insights in the 2026 Engineering Reality Report.
And that’s all folks!
Be sure to give our sponsors a click before you go!