Vibe Coding vs. Zero-Trust Reviewing

AI coding assistants have shifted what it means to write software. But they’ve also created a new divide: between those who treat AI output as something to ship directly, and those who treat it as something to be verified first. The first has a name: vibe coding. For the second, I’ll borrow a term from network security and call it zero-trust reviewing.

Vibe Coding

The term was coined by Andrej Karpathy in February 2025. His description:

“You fully give in to the vibes, embrace exponentials, and forget that the code even exists.”

“I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

The pattern:

  1. Describe what you want to the AI
  2. Accept the output
  3. Run it — if it works, ship it
  4. If it errors, paste the error back to the AI and repeat

The human acts as a director, not a programmer. The defining characteristic: the human does not read the code. They trust the AI’s output as long as it appears to work.


Zero-Trust Reviewing

The name is borrowed by analogy from the network security principle of “never trust, always verify.” Applied to AI-generated code, the stance is the same: “works” is not the same as “correct.”

Every line of LLM-generated code is untrustworthy until a human has:

  1. Read it
  2. Understood what it does and why
  3. Verified that it does the right thing, including at the edge cases

The human stays a programmer, not just a director.
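A minimal sketch of what that verification looks like in practice. Everything here is invented for illustration: `days_between` stands in for AI-generated code that passes a single happy-path run but hides an edge-case decision a reviewer should question.

```python
# Hypothetical AI-generated function, used to illustrate zero-trust review.
from datetime import date

def days_between(start: str, end: str) -> int:
    """Number of days from start to end (ISO-format dates)."""
    s, e = date.fromisoformat(start), date.fromisoformat(end)
    return abs((e - s).days)  # abs() silently swallows reversed ranges

# Vibe check: run it once, it "works".
assert days_between("2025-01-01", "2025-01-08") == 7

# Zero-trust review: probe the boundaries before trusting it.
assert days_between("2025-01-01", "2025-01-01") == 0   # zero-length range
assert days_between("2024-02-28", "2024-03-01") == 2   # crosses a leap day
# Reversed inputs should arguably raise an error, but abs() hides them:
assert days_between("2025-01-08", "2025-01-01") == 7   # suspicious!
```

The function never errors, so a vibe coder would ship it. The reviewer’s last assertion surfaces a design decision (silently accepting reversed ranges) that the human, not the model, has to sign off on.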


Why the Gap Matters

The core problem with vibe coding is that LLMs produce code that is plausible, not necessarily correct. The failure modes are specific:

  Failure                         Why vibe coding misses it
  Hallucinated API                Passes syntax check; fails at runtime in production, not in the dev environment
  Wrong edge-case logic           Happy path works; boundary case is silently wrong
  Missing error handling          Functions fine until the network times out
  Security hole                   Works perfectly, and leaks data or accepts SQL or command injection
  Plausible-but-wrong algorithm   Produces output, just the wrong output for some inputs

Vibe coding optimizes for “does it run?” Zero-trust reviewing optimizes for “is it actually correct?” These are different questions.
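The “security hole” row is worth making concrete. The sketch below is illustrative only (hypothetical table and column names): a lookup built by string interpolation answers every normal query correctly, so it passes the “does it run?” test, while failing the “is it correct?” test badly.

```python
# Illustrative sketch: a query that "works" yet accepts SQL injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-token"), ("bob", "b-token")])

def get_secret(name: str) -> list:
    # Plausible AI output: string interpolation instead of a placeholder.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

# "Does it run?" -- yes, every happy-path input behaves:
assert get_secret("alice") == [("a-token",)]

# "Is it actually correct?" -- no. A crafted input dumps every row:
assert get_secret("' OR '1'='1") == [("a-token",), ("b-token",)]

# The fix a zero-trust reviewer would insist on: parameterized queries.
def get_secret_safe(name: str) -> list:
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

assert get_secret_safe("' OR '1'='1") == []
```

Nothing in normal testing distinguishes the two versions; only reading the code does.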


The Accountability Problem

In a professional context, “the AI wrote it” is not a defense.

The human who merged the code owns it. Zero-trust reviewing is how you own it responsibly.


When Vibe Coding Is Reasonable

Vibe coding is not inherently wrong. It’s appropriate when the stakes are low: throwaway prototypes, personal scripts, and experiments where failure is cheap and immediately visible.

It becomes irresponsible when applied to production systems, security-sensitive code, or anything that handles other people’s data.


The Default Should Be Zero-Trust

For most professional work, zero-trust reviewing should be the default. Vibe coding is a deliberate choice to accept risk — and it’s only reasonable when you can fully absorb the consequences of being wrong. When in doubt: read the code.