Vibe Coding vs. Zero-Trust Reviewing
AI coding assistants have shifted what it means to write software. But they’ve also created a new divide: between those who treat AI output as something to ship directly, and those who treat it as something to be verified first. The first approach has a name: vibe coding. For the second, I’ll borrow a term from network security and call it zero-trust reviewing.
Vibe Coding
The term was coined by Andrej Karpathy in February 2025. His description:
“You fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
“I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
The pattern:
- Describe what you want to the AI
- Accept the output
- Run it — if it works, ship it
- If it errors, paste the error back to the AI and repeat
The human acts as a director, not a programmer. The defining characteristic: the human does not read the code. They trust the AI’s output as long as it appears to work.
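To see why “appears to work” is a low bar, consider a hypothetical AI-generated function (this example is illustrative, not from any particular model):

```python
# Hypothetical AI output: looks right, and a quick manual check confirms it
def is_leap_year(year):
    return year % 4 == 0

# The correct rule: century years must also be divisible by 400
def is_leap_year_correct(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2024))   # True, so the vibe check passes and we move on
print(is_leap_year(1900))   # also True, but 1900 was not a leap year
```

A director who only runs the first function on a couple of recent years sees it “work” and ships it; a programmer who reads it notices the missing century rule.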
Zero-Trust Reviewing
The name is borrowed by analogy from the network security principle of “never trust, always verify.” Applied to AI-generated code, the stance is the same: “works” is not the same as “correct.”
Every line of LLM-generated code is untrustworthy until a human has:
- Read it
- Understood what it actually does (not what they think it does)
- Explicitly approved it
The human stays a programmer, not just a director.
Why the Gap Matters
The core problem with vibe coding is that LLMs produce code that is plausible, not necessarily correct. The failure modes are specific:
| Failure | Why vibe coding misses it |
|---|---|
| Hallucinated API | Passes syntax check, fails at runtime in production — not the dev environment |
| Wrong edge case logic | Happy path works; boundary case silently wrong |
| Missing error handling | Functions fine until the network times out |
| Security hole | Works perfectly — and leaks data or accepts SQL or command injection |
| Plausible-but-wrong algorithm | Produces output, just the wrong output for some inputs |
Vibe coding optimizes for “does it run?” Zero-trust reviewing optimizes for “is it actually correct?” These are different questions.
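The “plausible-but-wrong” row deserves a concrete sketch. Here is a hypothetical pagination helper of the kind an assistant might produce: it reads naturally, runs without error, and is correct whenever the item count divides evenly, which is exactly the case a quick test tends to hit:

```python
import math

def total_pages_plausible(item_count, page_size):
    # Reads naturally; right whenever item_count divides evenly by page_size
    return item_count // page_size

def total_pages_correct(item_count, page_size):
    # Ceiling division: a partial final page still counts as a page
    return math.ceil(item_count / page_size)

print(total_pages_plausible(25, 10))  # 2: the last 5 items silently vanish
print(total_pages_correct(25, 10))    # 3
```

Nothing crashes, no error is pasted back to the AI, and the loop terminates with a bug in place. That is the failure mode “does it run?” cannot catch.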
The Accountability Problem
In a professional context, “the AI wrote it” is not a defense:
- For a security breach caused by SQL injection the AI introduced
- For a production incident caused by a missing null check
- For a GDPR violation because the AI logged PII
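The SQL-injection case is worth making concrete. A minimal sketch using Python’s built-in sqlite3 module (the table and inputs are invented for illustration) shows how a string-interpolated query “works perfectly” on normal input while accepting injection, and how the parameterized form closes the hole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # String interpolation: user input becomes part of the SQL itself
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the input is treated as data, never as SQL
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_vulnerable("alice"))   # works fine on normal input
print(find_user_vulnerable(payload))   # dumps every row in the table
print(find_user_safe(payload))         # [] : the payload matches nothing
```

Both functions pass the happy-path test. Only a reviewer who reads the query construction catches the difference, and the reviewer, not the AI, is the one accountable for it.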
The human who merged the code owns it. Zero-trust reviewing is how you own it responsibly.
When Vibe Coding Is Reasonable
Vibe coding is not inherently wrong. It’s appropriate when:
- You own all the risk personally (throwaway script, personal project)
- Nothing is at stake if it’s wrong (prototype, demo, experiment)
- You’ll never put it in production
- You can fully test every path
It becomes irresponsible when applied to:
- Production code handling real users
- Anything touching auth, payments, PII
- Code that will be maintained by a team
- Regulated or safety-critical systems
The Default Should Be Zero-Trust
For most professional work, zero-trust reviewing should be the default. Vibe coding is a deliberate choice to accept risk — and it’s only reasonable when you can fully absorb the consequences of being wrong. When in doubt: read the code.