
The Reality of Vibe Coding -- 3 Things I Built Fast and Immediately Regretted

Vibe coding lets you ship in hours. But developers using AI assistants ship vulnerable code about 25% more often. Here are 3 real mistakes from building 6 products with Cursor and Claude Code.

March 3, 2026 · 8 min read

I built 6 products in 6 weeks using vibe coding. Cursor, Claude Code, and pure speed. Each product went from zero to deployed in about a week.

That speed felt like a superpower. Until I looked at what I'd actually shipped.

This isn't an anti-AI take. I still vibe code every day. But the gap between "it works" and "it's safe to deploy" is wider than most people think. Here are three real mistakes I made -- and what I do differently now.

What Is Vibe Coding, Really

The term caught on fast. You describe what you want in natural language, an AI writes the code, you tweak until it works, and you ship. No manual typing. No Stack Overflow. Just vibes.

It works remarkably well for getting to a working prototype. But there's a problem: AI optimizes for "runs without errors," not "runs safely."

When you tell Claude "add Stripe payments," it'll wire up a checkout flow in minutes. It'll compile. It'll process a test payment. You'll feel like a genius.

What it won't do is:

  • Validate webhook signatures properly
  • Add rate limiting to the checkout endpoint
  • Handle edge cases like double charges
  • Sanitize error messages that leak your Stripe configuration

It works. But "works" and "production-ready" are very different things.

Mistake 1: Trusting AI-Generated Auth Without Reading It

When I built b4uship.com, I asked Claude Code to set up GitHub OAuth with NextAuth.js. It did. In about 90 seconds.

The auth worked perfectly. Users could sign in with GitHub, sessions persisted, and the UI showed their avatar. Done. Next feature.

Except the session secret in my .env.local was literally breakmyvibe-dev-secret-change-me. A placeholder that Claude generated and I never changed. If anyone guessed that string -- and it's pretty guessable -- they could forge session tokens.

What I missed:

The AI generated a working auth flow but used a human-readable placeholder for the cryptographic secret. I was so focused on "does login work?" that I never checked the configuration.

This is the core trap of vibe coding: when something works on the first try, you don't review it. You move on. The code that "just works" gets the least scrutiny, and that's exactly where bugs hide.

What I do now:

After any AI-generated auth setup, I run a checklist:

  • Is the session secret a random 32+ character string?
  • Are tokens stored securely (httpOnly cookies, not localStorage)?
  • Is there a proper session expiry?
  • Are auth endpoints rate-limited?

Takes 5 minutes. Saves you from shipping an auth system held together with a string literal.

Mistake 2: CORS Set to "Allow Everything" and Forgetting About It

This one is embarrassing because it's so basic.

When I was developing locally, the frontend and backend ran on different ports. CORS errors everywhere. So I asked the AI to "fix CORS." It did -- by setting Access-Control-Allow-Origin to accept everything. Broadly. Generously.

The problem: this configuration was global. It applied to every route, including API endpoints that handle payment verification, scan results, and user data. And when I deployed to production, it went with it. Because I never went back to tighten it.
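The fix I should have shipped is an explicit allowlist: reflect the request's origin back only when it's one you recognize. A minimal sketch for a route handler (the origins listed are placeholders for your own domains):

```javascript
// Explicit allowlist instead of "Access-Control-Allow-Origin: *".
const ALLOWED_ORIGINS = new Set([
  "https://example.com",   // placeholder: your production frontend
  "http://localhost:3000", // local development only
]);

function corsHeaders(requestOrigin) {
  // Unknown origins get no CORS headers at all, so the browser
  // refuses to expose the cross-origin response to them.
  if (!ALLOWED_ORIGINS.has(requestOrigin)) return {};
  return {
    "Access-Control-Allow-Origin": requestOrigin,
    "Vary": "Origin", // stop caches serving one origin's header to another
  };
}
```

Merging these headers into each response keeps local development working while production only answers to origins you chose on purpose.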

The pattern:

This happens constantly with vibe coding. You hit a blocker, ask the AI to fix it, it applies the broadest possible fix, and you move on. The fix works, so it stays. But the scope was "make development work," not "make production secure."

Other examples of the same pattern:

  • eslint-disable comments that were supposed to be temporary
  • any types in TypeScript that were supposed to be replaced
  • console.log statements with sensitive data that were supposed to be removed

What I do now:

Before deploying, I grep for patterns that scream "development shortcut":

  • Access-Control-Allow-Origin: *
  • eslint-disable
  • TODO and FIXME
  • console.log in API routes
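That grep pass fits in a tiny shell function you can run before every deploy. This is a sketch; the patterns mirror the list above, and the nonzero return code makes it easy to wire into CI:

```shell
# Scan a directory for development-shortcut patterns.
# Prints every match; returns nonzero if anything is found.
scan_shortcuts() {
  dir="$1"
  found=0
  for pattern in 'Access-Control-Allow-Origin: \*' 'eslint-disable' 'TODO' 'FIXME' 'console\.log'; do
    if grep -rn -E "$pattern" "$dir" 2>/dev/null; then
      found=1
    fi
  done
  return $found
}
```

Run it as `scan_shortcuts src/ || exit 1` in your deploy script and the shortcuts stop surviving to production by accident.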

Or better yet, I run b4uship on the repo. It catches these automatically. Which is ironic, because I built it after making these exact mistakes.

Mistake 3: Error Messages That Tell Attackers Everything

When I integrated Stripe checkout, the AI generated this error handler:

```
catch (error) {
  return Response.json(
    { error: "Stripe error: " + error.message },
    { status: 500 }
  )
}
```

Looks fine, right? The problem is that Stripe error messages can include:

  • Your API endpoint configuration
  • Rate limit details
  • Internal error codes that reveal your account setup
  • Whether you're using test mode vs. live mode

I was returning all of this directly to the client. Any user who triggered an error could see internal system details.

This happened across multiple services. Every time the AI wrote an error handler, it followed the same pattern: catch the error, send its message to the client. Helpful for debugging. Terrible for production.

The fix is simple but boring:

```
catch (error) {
  console.error("Stripe checkout error:", error);
  return Response.json(
    { error: "Payment processing failed. Please try again." },
    { status: 500 }
  )
}
```

Log the details server-side. Send a generic message to the client. The AI won't do this by default because it optimizes for developer experience, not security.

The 25% Problem

Stanford researchers found that developers using AI assistants produce code with security vulnerabilities about 25% more often than those who don't. That number matches my experience.

It's not that AI writes bad code. It writes code that works. But "works" in the AI's context means "compiles and does what you asked." It doesn't mean "handles edge cases," "validates inputs," or "follows security best practices."

The issue is compounded by speed. When you can ship a feature in 20 minutes, you ship a lot of features. And each one is that much more likely to carry a security hole you didn't catch. Over 6 products, those odds stack up fast.

What I Actually Do Now

I still vibe code. Every day. It's too productive not to. But I've added friction in the right places:

1. Security scan before every deploy.

I built b4uship.com specifically for this. It scans the repo for hardcoded secrets, overly permissive CORS, exposed error details, missing input validation -- the exact things AI code tends to get wrong. It takes 30 seconds and catches the stuff I always miss.

2. "Would I let a junior ship this?" test.

Before deploying any AI-generated code, I ask myself: if a junior developer wrote this and submitted a PR, would I approve it? If I'd ask them to change something, I should change it now.

3. Never trust auth or payment code without reading it.

These two areas have the highest blast radius if something goes wrong. I read every line of AI-generated auth and payment code, even if it takes longer than writing it manually would have.

4. Development shortcuts have a 24-hour lifespan.

Any console.log, broad CORS config, or any type I add for debugging gets a // TODO: remove before deploy comment. And I actually search for those before deploying. Radical concept, I know.

The Bottom Line

Vibe coding is real. The speed is real. But so are the blind spots.

The developers who'll thrive with AI aren't the ones who generate code fastest. They're the ones who know what to check after the AI is done.

Build fast. Review what matters. Scan before you ship.

This post is part of the ContentsTailor build series. We build products with AI and document the process — including the mistakes. See all projects or apply to build with us.
