Where Agentic Patterns Actually Live in Your Coding Workflow
May 15, 2026
Most public writing about agentic design patterns either jumps to deployed pipelines or stays inside the model harness. I take a look at the layer most software engineers are actually working at.


One problem with the current landscape of agentic software engineering education is that patterns and workflows are rarely put into context at the level of the working software engineer: someone sitting in Claude Code or Cursor, trying to get the most out of the tool. It's often not even made clear whether these tools are necessary for success.
These concepts manifest at different layers. Three come up regularly:
- Autonomous services
- Claude Code or Cursor at the user level
- The harness agentic loop
In this post I'll focus on Claude Code at the user level, since these patterns are most often discussed in terms of the other two cases. Many of the most valuable processes will eventually be built into the harness agentic loop, but there will likely always be a need for human nuance and judgement when it comes to execution.
Where the vocabulary comes from
The shared names mostly come from Anthropic's December 2024 "Building effective agents" post, which lays out five workflows: prompt chaining, routing, parallelization (with sectioning and voting), orchestrator-workers, and evaluator-optimizer, plus the open-ended "agents" category. Useful taxonomy. Reflection isn't on that list; it traces back to the Reflexion paper (Shinn et al., 2023) and Self-Refine (Madaan et al., 2023). I'll flag jargon where I use it.
The gap I want to name sits between those workflows and the Claude Code primitives. Anthropic's SDK docs show you how to build the patterns from scratch in code. The feature docs cover subagents, skills, hooks, and MCP servers individually. Nothing official maps "evaluator-optimizer" onto "a critic subagent dispatched after the implementer." That mapping is folk knowledge right now.
Chaining
The simplest implementation is a long prompt asking the agent to emit a small JSON object at each phase. A longer-term solution is encoding that prompt as a SKILL.md file with the phase definitions and schemas baked in.
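To make the phase gate concrete, here is a minimal Python sketch of the same idea driven from outside the model. The phase names, the schemas, and the `run_agent` stand-in are all illustrative assumptions, not anything from Claude Code itself; wire the call to whatever harness you actually use.

```python
# Prompt chaining with a JSON gate between phases. Nothing here is a real
# Claude Code API; run_agent is a placeholder for your actual invocation.
import json

PHASES = [
    ("survey", 'Reply only with JSON: {"phase": "survey", "files": [...]}'),
    ("plan", 'Reply only with JSON: {"phase": "plan", "steps": [...]}'),
    ("implement", 'Reply only with JSON: {"phase": "implement", "summary": "..."}'),
]

def run_agent(prompt: str) -> str:
    raise NotImplementedError("wire this to your harness of choice")

def run_chain(task: str) -> list[dict]:
    context, results = task, []
    for name, instructions in PHASES:
        raw = run_agent(f"{context}\n\nPhase: {name}. {instructions}")
        payload = json.loads(raw)            # fail loudly on malformed output
        if payload.get("phase") != name:     # cheap contract check per phase
            raise ValueError(f"expected phase {name!r}, got {payload.get('phase')!r}")
        results.append(payload)
        context += f"\n\n{name} result: {json.dumps(payload)}"
    return results
```

The SKILL.md version encodes the same phase list and schemas as instructions, so the agent walks the chain itself and you only check the gates.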
Reflection
After a process finishes, the agent looks at what it did and decides whether it went well, then writes the judgment down so it can be acted on later. Two places to put that step:
- Skill-level. The last step in the skill’s own checklist. After the skill does its work, it spends a final paragraph asking whether the path it took matched the path the skill described, what it had to invent, what failure modes it hit that aren’t documented. Output: a log note, a proposed edit to the skill, or both.
- Agent-level. Same idea, one layer up. A SubagentStop hook (or a convention inside the subagent file) appends a structured note every time the agent finishes; there's a sketch after this list. When the log fills, you read it back, condense it into a prompt update, and edit the agent file.
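For the agent-level variant, the hook command can be as small as a script that appends whatever the SubagentStop event hands it on stdin to a JSONL log. A sketch, with the log path, filename, and payload handling all assumed rather than taken from the docs:

```python
#!/usr/bin/env python3
# append_subagent_note.py -- meant to run as a SubagentStop hook command.
# Reads the hook payload from stdin and appends a timestamped record to a
# JSONL log for later condensation into a prompt update. The exact payload
# fields vary; check the hooks documentation rather than trusting this sketch.
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG = Path.home() / ".claude" / "subagent-reflections.jsonl"  # illustrative path

def main() -> None:
    raw = sys.stdin.read()
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        payload = {"raw": raw}     # keep malformed input rather than drop it
    note = {"ts": datetime.now(timezone.utc).isoformat(), "event": payload}
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a") as f:
        f.write(json.dumps(note) + "\n")

if __name__ == "__main__":
    main()
```

Registering it means adding a SubagentStop entry to your hooks settings that runs this script as a command; the periodic read-back and condensation step stays manual.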
Parallelization
Multiple parallel subagents in a single Agent batch.
Point one subagent at a detached-node.dev pattern entry, another at an arXiv paper, another at a set of repo files, another at the relevant docs. Now you have four subagents working on four differently-grounded versions of the problem, instead of four subagents all defaulting to whatever's freshest in the orchestrator's context. A subagent without grounding will confidently produce whatever pattern it half-remembers from training; a grounded one produces something less hallucinatory.
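If you wanted the same fan-out outside the Claude Code UI, it looks roughly like this. The `dispatch_subagent` placeholder, the grounding strings, and the thread-pool framing are all stand-ins for the single-batch subagent dispatch described above, not a real API:

```python
# Parallelization with sectioning: one task, four differently grounded
# prompts, fanned out concurrently. dispatch_subagent is a placeholder.
from concurrent.futures import ThreadPoolExecutor

GROUNDINGS = {
    "pattern_entry": "Ground yourself in the linked pattern write-up before answering.",
    "paper": "Ground yourself in the attached paper before answering.",
    "repo_files": "Ground yourself in the listed repository files before answering.",
    "docs": "Ground yourself in the relevant framework docs before answering.",
}

def dispatch_subagent(grounding: str, task: str) -> str:
    raise NotImplementedError("wire this to your harness of choice")

def fan_out(task: str) -> dict[str, str]:
    with ThreadPoolExecutor(max_workers=len(GROUNDINGS)) as pool:
        futures = {name: pool.submit(dispatch_subagent, grounding, task)
                   for name, grounding in GROUNDINGS.items()}
        return {name: future.result() for name, future in futures.items()}
```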
If you write this orchestration prompt more than twice, encode it as a skill. There are skill-writing skills on the plugin marketplace; writing-skills in the superpowers plugin is the one I reach for.
Critic
The argument from an earlier piece on subagent orchestration: an agent reviewing its own work is committed to the framing that produced it, and can't see past that framing. A critic in a fresh context catches what the implementer missed.
- Critic subagent. Focused review prompt, dispatched after every implementation pass.
- Marketplace reviewer (or several). Disagreement between them is signal.
- Separate machine-user identity posting comments on a pull request. Heavier setup; what I lean on for anything I’m shipping.
The critic doesn’t have to be more intelligent than the recipient of the critique, but it should be meaningfully different. If your implementer emits structured output, a thirty-line script that validates the schema and fails loudly when something’s off is a critic. The deterministic version is often what you need, and people skip it because it doesn’t feel like a pattern.
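That thirty-line script can be shorter still. A sketch of the deterministic critic follows; the required fields are invented for illustration, and the only contract is that it reads the implementer's structured output and exits non-zero when something's off:

```python
#!/usr/bin/env python3
# check_output.py -- a deterministic critic: validate the implementer's
# structured output and fail loudly when the contract is broken.
# The required fields below are invented for illustration; substitute
# whatever schema your implementer actually emits.
import json
import sys
from pathlib import Path

REQUIRED = {"phase": str, "files_changed": list, "tests_run": bool}

def main(path: str) -> None:
    data = json.loads(Path(path).read_text())
    errors = []
    for key, expected_type in REQUIRED.items():
        if key not in data:
            errors.append(f"missing field: {key}")
        elif not isinstance(data[key], expected_type):
            errors.append(f"wrong type for {key}: expected {expected_type.__name__}")
    if errors:
        print("\n".join(errors), file=sys.stderr)
        sys.exit(1)          # loud failure; the loop stops until it's fixed
    print("structured output OK")

if __name__ == "__main__":
    main(sys.argv[1])
```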
What actually matters
The named patterns are vocabulary. The work happens one level down, in the entities you wire together: subagents, skills, hooks, structured outputs, logs, and the prompts that glue them. The workflow names are a labeling layer over those primitives, not a constraint on what you can build with them.