Essay
Rethinking Systems in the Agentic Age
Apr 20, 2026
Software engineering is in a middle period of AI integration. Four open questions define the next few years: which systems to evolve, which to leave behind, where engineers actually work, and how teams keep trust intact.
One of the central problems in software engineering for the next few years will be how we integrate AI into our workflows to increase productivity. The deeper question is not how AI will be integrated into our workflows, but whether those workflows will remain valuable in the age of AI at all.
I think it's likely that most human-machine interactions will be redefined, and software engineering will be no different. For now, we're in a middle period where this is by no means a solved problem — most systems don't support AI end-to-end yet. We're still dealing with regular manual interventions at the integration points between technologies, evolving best practices around permissions, immature mitigations for hallucination and nondeterministic output, codebase drift, and many other issues. Nevertheless, the massive productivity gains have been real, and there is no clear sign of a plateau yet.
The technologies are getting better, and we're getting better at working with them. Even so, there are already clear questions that need to be answered.
What systems should we evolve?
Most software teams currently use some form of project management software along with a git provider (usually GitHub) and somewhere to store documentation. Depending on the team, this could also include cloud services for observability, infrastructure, UI design, customer support tickets, and so on. I think it's too simplistic to say that all these technologies and ways of working will be thrown out completely. They were created and adopted to solve specific problems, and those problems are not just going to disappear. It will take time for our ways of working to evolve, and many of these technologies may be able to evolve along with us. Properly maintained connectors or MCP servers might be enough for us to continue using these services as-is. It's important to understand how to evaluate and integrate AI features as they become available. Some tools genuinely multiply output, others provide no boost at all, and a poorly designed tool can actively cost you time.
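One way to ground that evaluation, as a minimal sketch: time a representative class of task with and without the tool over several trials, and compare means against variance. All task timings and names below are hypothetical, chosen purely for illustration.

```python
import statistics

# Hypothetical timings (minutes) for the same class of task,
# collected over several trials with and without an AI tool.
# These numbers are illustrative, not real measurements.
baseline_minutes = [42, 38, 51, 45, 40]
with_tool_minutes = [30, 55, 28, 33, 31]  # one outlier: a session lost to bad output

def summarize(label, samples):
    """Print mean and spread for a set of trial timings; return the mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"{label}: mean={mean:.1f} min, stdev={stdev:.1f}")
    return mean

base = summarize("baseline", baseline_minutes)
tool = summarize("with tool", with_tool_minutes)

# A tool "multiplies output" only if the mean improves enough to
# outweigh its variance and integration cost.
speedup = base / tool
print(f"speedup: {speedup:.2f}x")
```

Even a rough harness like this beats intuition: the outlier session in the with-tool data is exactly the "actively costs you time" failure mode, and it shows up in the spread.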
What systems should we leave behind?
Some systems we will have to leave behind for good in favor of new paradigms. There's risk involved, which is why this should be measured and deliberate, but we also don't want to overcorrect toward pragmatism. Evaluating tooling effectively is essential to finding the sweet spot, and at this point nobody has settled heuristics for it. We will have to get comfortable with the uncertainty.
What level of abstraction should we work at?
It's still uncertain where we will spend most of our day. It no longer seems likely that we'll spend our focus hours writing code line by line in an editor. Many of us are already moving from the system architecture level to the service module level in the same session. It's likely that most software engineers will own a number of services in a system, agree on a contract with another service owner, and build, test, and release a feature in the same day. If we make this transition, what are the tradeoffs? These are the questions we'll have to keep in mind as we develop these processes as a profession.
How do we work together?
This may be the biggest growing pain for new teams integrating agentic processes into their workflows. Nobody wants to read an AI-generated document that the person who generated it hasn't even bothered to read. This is a quick way to erode trust across your teams. AI process implementation must be rooted in productivity, and productivity shouldn't be measured by the number or size of the artifacts you generate. A tool's effectiveness should be measured the way we've always measured it — does it help us get the job done? Our processes should be designed around outcomes, not around activity.