Gene Kim on Vibe Coding—and Why DevSecOps Must Be Ready for What’s Coming 

6 min.

June 24, 2025

One would expect an author in the final 72 hours before turning in a manuscript to be feverishly editing it. That is not what Gene Kim—WSJ bestselling author and DevOps thought leader—was doing with his Vibe Coding manuscript.

Instead, he was coding. Or, more accurately, vibe coding.

Over four days, he committed 4,000 lines of working code and tests, not as a developer, but as a founder trying to eliminate the pain of copy-pasting across Google Docs. The tool he built didn’t just help finish the book. It proved a deeper thesis: with the right support, GenAI puts powerful coding capabilities within reach, even for those who don’t consider themselves elite programmers. 

Vibe coding isn’t just a revolution in how we write code; it’s also a democratization of who can write it. As Gene Kim puts it:

“When you can commit 4,000 lines of working code in four days, that’s not just a productivity gain. It’s a redefinition of what’s possible.” 

Generative AI writes the code, and the developer supervises and provides feedback in a stream of prompts and responses. It’s no longer experimental. For enterprises, it’s becoming the new standard. 

The New AppSec Risks of Vibe Coding 

Historically, every technological breakthrough has introduced a new class of risks. With driving came car accidents. With the internet came cybercrime, data privacy threats, and new infrastructure vulnerabilities.

With vibe coding, those 4,000 working lines of code in four days don’t only represent a fundamental shift in how software is created: the same forces that unlock developer potential can expose organizations to serious security, quality, and reliability risks.

Gene shared an example where an AI-generated code structure rapidly became a “haunted codebase”: initially useful, then impossible to maintain:

“After just two weeks, the code was so opaque and brittle it required a full two-day stand-down to make it operable again.” 

Multiply that pattern across an enterprise, and you don’t just have messy code. You have an environment that is ungovernable and potentially unstable. At scale, this becomes a severe business risk. 

Productivity alone isn’t enough. It must come with maturity. 

Vibe Coding is Stress-Testing Your Architecture 

GenAI doesn’t just accelerate development. It accelerates complexity. Even small-scale vibe coding projects can result in opaque, tightly coupled systems. AI agents may bypass intended architectural boundaries, invoke undocumented interfaces, or create dependencies developers didn’t intend. 

As Gene Kim said: 

“The same instincts that served us at 5 miles per hour now fail us at 50.” 

For enterprises with legacy systems, slow pipelines, or low test coverage, GenAI won’t just boost delivery—it will amplify weaknesses.  

This shift cuts both ways: it surfaces risk, but also reveals opportunity. If your architecture is already straining under the weight of GenAI-fueled output, that’s not just a red flag—it’s a signal. The systems that struggle today are the ones most in need of rapid reinforcement to meet the scale of what’s next. 

This is the architectural stress test we’ve been waiting for.

Developers Are Getting Mandates. Are They Getting Guardrails? 

Across industries, top-down mandates to use GenAI are already in motion. 

At some leading labs, developers using GenAI tools report five- to ten-fold improvements in output. But those gains are uneven. Not every team is ready.

At Adidas, a 750-person GenAI pilot showed clear results. Teams with loose coupling and strong feedback loops saw significant productivity gains. Teams closer to the ERP system, with tight coupling and slow release cycles, saw little to no benefit. 

Vibe coding shines a spotlight on systemic friction. And without modern DevSecOps practices, it opens new vectors for risk: 

  • Insecure or hallucinated third-party packages 
  • Disabled or incomplete test coverage 
  • Loss of critical code due to deletions or missteps 
  • Drift from secure coding standards and policy 
  • Unmanaged token costs from excessive GenAI usage 
  • Shadow AI development bypassing security review 

Security and engineering leaders must align—because developers won’t slow down. The mandate isn’t to restrict GenAI use. It’s to ensure it happens with visibility, trust, and accountability. 

Without that alignment, two dangerous patterns are already emerging. 

Shadow Code 

Unvetted, undocumented, and ownerless code is quietly creeping into production. It often originates from GenAI outputs pasted directly into codebases without proper review or oversight—creating blind spots that compound over time. 

Speed Without Validation 

In the race to match GenAI’s velocity, traditional security gates are being bypassed. Critical checks like SAST, SCA, and peer reviews are skipped or deferred, leaving vulnerabilities unchecked and risk exposures unaddressed. 
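The antidote to skipped gates is making them non-optional. A toy sketch of that policy, with assumed check names (this is not a Checkmarx API): a merge gate that refuses a pull request until every required security check has reported success.

```python
# Illustrative merge-gate policy: merging is allowed only when every
# required check has passed, no matter how fast the code was generated.
# Check names below are assumptions, not any specific CI system's identifiers.

REQUIRED_CHECKS = {"sast", "sca", "peer_review"}

def can_merge(passed_checks: set[str]) -> bool:
    """True only if all required security gates are among the passed checks."""
    return REQUIRED_CHECKS.issubset(passed_checks)

if __name__ == "__main__":
    print(can_merge({"sast", "sca", "peer_review", "unit_tests"}))  # True
    print(can_merge({"unit_tests"}))  # False
```

Most CI platforms can express the same rule declaratively (e.g., required status checks on protected branches); the point is that "deferred" is not a valid state for a security gate.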

DevSecOps Is the Safety Net and the Enabler 

This moment echoes the rise of DevOps. 

Years ago, the idea of daily deployments seemed reckless. But DevSecOps proved that fast could also be safe. Gene Kim’s Accelerate and DORA research confirmed it—high-performing teams release faster, recover faster, and remediate less because security is part of the development flow. 

Now, with vibe coding and GenAI, the cycle accelerates again. 

Before GenAI, a single feature might take a week to scope, code, review, and merge. Today, as shown in the Vibe Coding book, a developer can start an AI conversation before logging off and return to a completed, functional pull request by morning. 

For organizations with strong modularity, fast feedback, and a culture of learning, GenAI becomes a superpower. For others, it becomes a liability. 

Security doesn’t have to sit on the sidelines. It can lead. 

This article is based on a live session with Gene Kim from Checkmarx’s 2025 Agentic AI Summit.

Watch the full conversation:  

Checkmarx Secures the Future of GenAI Development 

We launched Checkmarx One Assist: agentic AI for AppSec that delivers intelligent, autonomous code security protection, built for enterprises that are actively writing GenAI code.

Checkmarx One Assist helps security teams move at the same pace as development. It brings agentic AI-powered prevention and remediation, secure-by-default feedback, and visibility into the source, intent, and risks behind AI-generated code. 

If your developers are already coding with GenAI, your security practices must keep pace. 

Discover Checkmarx One Assist  

Learn first-hand about the innovative Agentic AI capabilities of Checkmarx One Assist, and how it can help your Dev and AppSec teams keep code fast and safe.  

Book your personal demo
