Scrum for One: Running Sprints with an AI Coding Agent

Can the right methodology consistently produce production-quality code from an AI coding agent? By production-quality I mean: well-decomposed architecture. Maintainable code that can be refactored without disproportionate rewrites. Clean abstractions that survive the next feature. It started with one planning skill. I was tired of Claude Code implementing features freestyle, producing code that worked but couldn’t be extended. So I wrote a /grooming skill that reads the codebase and produces a structured plan before any code is written. That helped. Then I added agent delegation to avoid context rot on long sessions. Then story points for tracking throughput. Then retrospectives and lessons to stop repeating the same mistakes across sprints. The result: four skills, a ~/Claude/ knowledge base, and a self-learning loop that makes the agent genuinely improve across sessions. ...

April 15, 2026 · 16 min · Anton Shuvalov

My Claude Code Workflow: Delegate, Review, Iterate

Most advice about AI coding tools focuses on prompting tricks. My workflow is simpler: I treat Claude Code like a developer on my team. I write tickets, review plans, collect feedback, and iterate. The same loop I’d run with a human, just faster. The task brief: I write task descriptions the same way I’d write a Jira ticket. What needs to happen. Acceptance criteria. Relevant context. Not a prompt, a specification. ...

March 1, 2026 · 4 min · Anton Shuvalov

Architecture Is What Makes Vibe-Coding Work

An academic study found that professional developers using AI agents don’t vibe. They control. After more than 300 Claude Code sessions building a full-stack app in a language I’d never used, I agree. But the control that matters isn’t what most people think. I wanted to find the limits. Not of Claude Code on a toy script, but on a complex project that might exceed both my skills and the AI’s. So I picked a stack I’d never used, Rust with Leptos and Axum, and started building North, a GTD task manager. Multiple views, a filtering DSL, drag-and-drop, recurring tasks, keyboard navigation. The kind of project where you can’t fake understanding. ...

February 28, 2026 · 10 min · Anton Shuvalov

Your Vibe-Coded App Works. Until It Doesn't.

You shipped 15 features in a week with Claude Code. Three of them are silently broken, and you won’t find out until you click through the app by hand. Or until a user does. This isn’t a feeling. CodeRabbit’s analysis of 470 GitHub PRs found AI-co-authored code has 1.7x more major issues than human-written code, with logic errors 75% more common. Tenzai tested five AI coding tools by building 15 identical web apps. Zero out of 15 implemented CSRF protection. Zero set security headers. All five tools introduced SSRF vulnerabilities. The bugs that slip through aren’t syntactic. They’re behavioral. AI produces code that runs, passes linting, and looks correct in a diff. The failures show up when a real user clicks through the actual flow. ...
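The missing-headers finding above is the kind of gap that's cheap to check mechanically. A minimal sketch of such a check, in Python; the function name, and the choice of which headers count as required, are illustrative assumptions, not something from the article or the Tenzai audit:

```python
# Header names below are real, standard HTTP security response headers;
# treating exactly these four as "required" is an illustrative choice.
REQUIRED_SECURITY_HEADERS = {
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "Strict-Transport-Security",
    "X-Frame-Options",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return which required security headers are absent from a response.

    HTTP header names are case-insensitive, so compare in a normalized form.
    """
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_SECURITY_HEADERS if h.title() not in present}

# Example: a response that sets only a content type fails all four checks.
# missing_security_headers({"content-type": "text/html"})
# -> all four required headers reported missing
```

A check like this catches the "zero set security headers" class of bug in CI rather than in production; it says nothing about behavioral logic errors, which still need a human or a test clicking through the real flow.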

February 27, 2026 · 9 min · Anton Shuvalov