Architecture Is What Makes Vibe-Coding Work

An academic study found that professional developers using AI agents don’t vibe. They control. After more than 300 Claude Code sessions building a full-stack app in a language I’d never used, I agree. But the control that matters isn’t what most people think. I wanted to find the limits. Not of Claude Code on a toy script, but on a complex project that might exceed both my skills and the AI’s. So I picked a stack I’d never used, Rust with Leptos and Axum, and started building North, a GTD task manager: multiple views, a filtering DSL, drag-and-drop, recurring tasks, keyboard navigation. The kind of project where you can’t fake understanding. ...

February 28, 2026 · 10 min · Anton Shuvalov

Your Vibe-Coded App Works. Until It Doesn't.

You shipped 15 features in a week with Claude Code. Three of them are silently broken, and you won’t find out until you click through the app by hand. Or until a user does. This isn’t a feeling. CodeRabbit’s analysis of 470 GitHub PRs found that AI-co-authored code has 1.7x more major issues than human-written code, with logic errors 75% more common. Tenzai tested five AI coding tools by building 15 identical web apps. Zero of the 15 implemented CSRF protection. Zero set security headers. All five tools introduced SSRF vulnerabilities. The bugs that slip through aren’t syntactic. They’re behavioral. AI produces code that runs, passes linting, and looks correct in a diff. The failures show up when a real user clicks through the actual flow. ...

February 27, 2026 · 9 min · Anton Shuvalov