Essay
In Defense of Tickets and Pull Requests
Apr 20, 2026

Ticket writing and pull request workflows aren't valuable because they're convenient for humans; they're actually quite inconvenient upfront. Their value comes from something less visible: they determine what engineering teams are able to notice, catch, and correct.
The case against the ticket and the pull request
With agentic processes, you can now give a problem to an AI agent and let it work through the steps until it finds a solution. Some people claim to generate entire working applications from a single prompt. There's truth to this shift: some things have genuinely changed in fundamental ways, so it makes sense to question the old scaffolding. In a new paradigm, maybe we should invent new tools, too.
That makes sense to a point. There are real cases where ditching the ceremony is fine:
- A throwaway proof of concept you're going to delete once you've answered the question it was built to answer.
- A spike branch exploring whether an approach is even viable; nobody needs a ticket to explore.
- Hackathon or weekend projects where the constraint is time and the audience is yourself.
- Solo experiments and prototypes where the entire "team" is one person who already holds the full context in their head.
In those cases, wiring up tickets and formal reviews is pure overhead. But in most cases, abandoning these conventions means giving up protections forged over decades by engineers who discovered their value the hard way. Before we discard them, it's worth asking what they were actually protecting against, and whether the replacements can do the same work.
The case for the ticket
The primary purpose of the ticket is to document a requested unit of work and track it from beginning to end. Over the years, we've converged on a handful of things that make tickets work well:
- Linked documentation, so the reader can trace the "why" without interrupting someone.
- Ticket hierarchies that place a unit of work inside its larger objective, so scope creep becomes visible instead of drifting in unnoticed.
- Design mocks and specs that pin down intent before code gets written, so the wrong thing doesn't get built.
- Contextual breadcrumbs: related tickets, prior discussions, and decisions already made, which keep a new reader from flying blind.
The value isn't just saved time. Each of these artifacts is a surface for catching a specific kind of mistake: missing context, drifting scope, misread intent, isolated decisions.
The same artifacts are useful to an AI agent. Dispatching a fresh agent to pick up a ticket lets it start with a clean context and zero in on what matters for execution. With properly set up connectors or MCP servers, an agent can:
- Traverse linked documentation on its own.
- Interpret design mocks as part of its input.
- Ingest prior decisions before starting.
- Produce artifacts that downstream agents can pick up in parallel.
The case for the pull request
To get the full benefit of pull requests, we need multiple lenses. If the same agent that wrote the code also reviews it, the review will be colored by the reasoning that produced the code. Separate agents, each invoking its own skills and configuration, can work through different concerns in parallel:
- Is the change consistent with the system's current way of working?
- Does the code meet quality standards for readability, testability, and idiomatic use?
- Are infrastructure concerns (performance, cost, security, deployability) addressed?
- Does the change honor existing contracts and interfaces, keeping the promises the system makes to its users?
Review matters, whether it's performed by humans or agents, for the same reason tickets do: a system can only correct the errors its feedback mechanisms can detect. A single reviewer, human or agent, can only catch the failure modes it's equipped to see. Orthogonal reviewers, looking through independent lenses, widen the detection surface to match the variety of ways code can actually go wrong.
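The "orthogonal reviewers" idea can be sketched in a few lines. The lens functions below are hypothetical stand-ins; in practice each would be a separately configured agent prompted for one concern, but the structure — independent checks run in parallel, findings unioned — is the point.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical lenses: in practice each would dispatch a separately
# configured review agent. Here they are toy heuristics over a diff.
def consistency_lens(diff: str) -> list[str]:
    return ["raw SQL where the codebase uses a query builder"] if "SELECT" in diff else []

def quality_lens(diff: str) -> list[str]:
    return ["change is very large; consider splitting"] if diff.count("\n") > 50 else []

def security_lens(diff: str) -> list[str]:
    return ["possible hard-coded credential"] if "password=" in diff else []

def review(diff: str) -> list[str]:
    """Run independent lenses in parallel and union their findings."""
    lenses = [consistency_lens, quality_lens, security_lens]
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda lens: lens(diff), lenses)
    return [finding for findings in results for finding in findings]
```

Each lens can only catch what it's equipped to see; the detection surface is the union of the lenses, which is exactly why one reviewer, however capable, isn't enough.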
Conclusion
I'm not here to make the decision for anyone; I'm working through these problems like everyone else in the industry right now. I understand the pull to stay with what's proven, and the push to throw the old tools away and start fresh. But the question worth asking isn't whether to keep tickets and pull requests; it's whether whatever replaces them preserves the detection surface. A system can only correct what it can notice. That doesn't change just because an agent is doing the work.