Even in the Agentic Coding Era, Maintainability Beats Raw Performance

February 20, 2026 Rens Jaspers Short thoughts NestJS Architecture AI LLM

LLMs can generate code quickly and even port entire applications. Thanks to AI agents, I can now build everything in Rust, Zig, or C and ship tiny, high-performance binaries. Still, I default to NestJS for most of what I build. That means gigabytes of node_modules and Docker images packed with a heavy JS runtime.

Why do I limit myself despite my new agent superpowers? Well, it turns out LLM agents are great for quick results, but less so for coherence and long-term maintainability. Even the best model struggles once a poorly organized project reaches a certain size. By design, NestJS encourages a predictable structure. That makes the codebase easier to understand for both humans and AI agents. Since I have to review and own the code, it had better be a cleanly organized stack I know well. The larger the project, and the faster code is generated by LLMs, the more this matters.
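
That predictable structure is easy to show: every feature follows the same module/controller/service shape, so a reviewer or an agent always knows where to look. Here is a minimal sketch of that pattern, assuming a hypothetical `orders` feature (the names are illustrative, not from any real project):

```typescript
import { Controller, Get, Injectable, Module } from '@nestjs/common';

// Service: holds the business logic for the feature.
@Injectable()
export class OrdersService {
  findAll(): string[] {
    return ['order-1', 'order-2']; // placeholder data
  }
}

// Controller: maps HTTP routes to service calls, nothing more.
@Controller('orders')
export class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  @Get()
  findAll(): string[] {
    return this.ordersService.findAll();
  }
}

// Module: wires the feature together so it can be imported elsewhere.
@Module({
  controllers: [OrdersController],
  providers: [OrdersService],
})
export class OrdersModule {}
```

By convention this lives in one feature folder (`orders/orders.module.ts`, `orders/orders.controller.ts`, `orders/orders.service.ts`), and every other feature in the app repeats the same shape. That uniformity is exactly what keeps a growing codebase legible to humans and LLMs alike.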

Image size and raw speed, on the other hand, rarely matter. Storage is cheap. For personal projects, even the slowest code is usually fast enough. For customer-facing applications, NestJS can handle thousands of requests per second on modest hardware. In practice, the database and the network are the real bottlenecks.

The vast majority of applications will never need to scale to millions of users. It is tempting to architect for a hypothetical future where extreme performance is required, but that is rarely justified. You don’t save money by using less disk space or fewer CPU cycles. You save by spending fewer human hours and fewer agent tokens. Complexity is what drives cost, not raw throughput.

Premature optimization is bad, and having an army of LLM coding agents does not change that. I optimize for maintainability, and you should too. For serious work, forget about "blazing-fast binaries" and use the boring, opinionated framework you know well.