Most advice about AI coding tools focuses on prompting tricks. My workflow is simpler: I treat Claude Code like a developer on my team. I write tickets, review plans, collect feedback, and iterate. The same loop I’d run with a human, just faster.

The Task Brief

I write task descriptions the same way I’d write a Jira ticket. What needs to happen. Acceptance criteria. Relevant context. Not a prompt, a specification.

A good brief includes what the feature does, where it fits in the existing system, and what constraints matter. “Add a filtering endpoint that supports the existing DSL, returns paginated results, and reuses the query builder from the search service.” Not “add filtering.”

The better the brief, the fewer rounds of back-and-forth later. This is true for human developers. It’s true for Claude.

Plan Review

I ask Claude to read the relevant code and create an implementation plan. Claude’s plan mode produces a step-by-step breakdown: which files to touch, what changes in each, and in what order.

I open my editor next to the terminal and start reading. Wrong assumptions go into the editor. Missing edge cases go into the editor. Architectural misalignments go into the editor. I read the whole plan before sending anything back.

Batch feedback beats trickle feedback. One message with ten corrections is clearer than ten messages with one correction each. The AI gets the full picture and can adjust coherently instead of ping-ponging between fixes.

This usually takes two or three rounds. Claude adjusts the plan, I review again, flag what’s still off, and repeat until it’s solid.

One thing worth calling out: I know how the task should be implemented. I have a clear picture of what the right solution looks like, what the wrong solution looks like, and where the tricky parts are. The plan review is verification, not discovery. If you don’t have that understanding, no amount of iteration will save you from a bad plan that looks plausible.

The Safety Net

Before I tell Claude to implement anything, I make sure everything is committed. Clean working tree.

If Claude produces something fundamentally wrong, git checkout . gets me back to a known state. If the implementation is partially right, I can inspect the diff, cherry-pick what I want, and discard the rest.

This takes ten seconds. It saves hours. Do it every time.
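A minimal sketch of that checkpoint discipline, assuming a POSIX shell and git. The `checkpoint` helper is mine, not part of the workflow above; the recovery commands are the ones the post relies on:

```shell
# Hypothetical pre-flight helper: refuse to start an implementation run
# unless the working tree is clean, so `git checkout .` can always
# restore a known state afterwards.
checkpoint() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "working tree dirty -- commit or stash first" >&2
    return 1
  fi
  echo "clean tree: safe to let the AI implement"
}

# Recovery after a bad attempt:
#   git checkout .       # discard everything, back to the checkpoint
#   git checkout -p .    # interactively keep good hunks, drop the rest
```

`git checkout -p .` is the cherry-pick half of the safety net: it walks the diff hunk by hunk so partially right work can be kept while the rest is discarded.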

Implementation and Review

I tell Claude to proceed with the plan. It writes the code.

When it’s done, I review the changes. Same process as the plan review: I open the editor, read through the diff, and collect everything that needs fixing. Wrong variable names, missing error handling, logic that doesn’t match the spec. All of it goes into one message.

Two or three rounds of this and the code is where I want it.

Migrations get special attention. Code changes are reversible. Schema changes might not be. I check every migration Claude generates: is it additive? Can I roll it back? Does the ORM produce sane SQL? With a good ORM like Django’s or Diesel’s, migrations are usually fine. But “usually” isn’t a word you want near your production database. Check every time.
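For a Django project, those checks can be run before anything touches the database. A sketch; the app name and migration number are placeholders:

```shell
# Print the SQL Django will actually execute for a migration:
python manage.py sqlmigrate myapp 0042

# Look for: purely additive statements (ADD COLUMN, CREATE TABLE),
# no DROP or destructive ALTER. Then check the rollback path:
python manage.py sqlmigrate myapp 0042 --backwards

# Verify models and migrations agree, without writing anything:
python manage.py makemigrations --check --dry-run
```

Reading the generated SQL is the point: the ORM's Python-level description can look harmless while the SQL it emits locks a table.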

The PR Loop

After a few tasks, I ask Claude to create a PR. It generates the title and description, then pushes the branch.
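The hand-off is a few commands. A sketch assuming the GitHub CLI (`gh`) is installed; the branch name and title are illustrative:

```shell
# Push the branch and open a PR against the default base:
git push -u origin feature/filtering-endpoint
gh pr create --title "Add filtering endpoint" \
             --body "Supports the existing DSL; paginated results."

# Pull the full changeset back for review, or jump to the web UI:
gh pr diff
gh pr view --web
```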

Then I go to GitHub and review the PR there. This catches things the terminal doesn’t.

Terminal diffs are good for line-by-line review. GitHub is better for seeing the shape of the change: the file tree, which modules were touched, how changes relate across files. I catch architectural drift on GitHub that I miss in the terminal. A function that ended up in the wrong module. A dependency that goes the wrong direction. These patterns are visible when you see the full file list, not when you’re reading one diff at a time.

Same feedback process. I go file by file, collect notes in the editor, and send them back to Claude in one batch. Sometimes it’s minor: naming, formatting, a redundant check. Sometimes it’s structural: “this service shouldn’t depend on that module, move the logic to X.” Claude handles both.

When I’m happy with the PR, I merge.

Why This Works

The workflow mirrors how you’d manage a human developer, because the failure modes are the same. Misunderstood requirements. Wrong assumptions about existing code. Missed edge cases. Over-engineering. Under-engineering.

Three things make it reliable:

Batch feedback. Collecting everything in an editor before sending means Claude gets complete context for each revision. No half-fixes that create new problems.

Multiple review stages. Plan review catches design mistakes before any code is written. Implementation review catches code-level issues. PR review catches architectural drift across the full changeset. Each stage filters different types of problems.

Git as a checkpoint system. Committing before each implementation attempt means you can always roll back. This gives you the confidence to let Claude take bigger swings. If it doesn’t work, you lose nothing.

No special prompting. No custom tooling. The same engineering discipline you’d apply to any code, whether a human or AI wrote it.