
Confident Developers Are the New Security Risk 

Teams are shipping more code, faster than ever. It looks polished, it runs smoothly, and it works – so developers trust it. That’s the problem.

Developer Confidence Is a Security Risk

AI coding tools have fundamentally changed how software gets built. 

After attending OnPoint Ski & Snowboard CyberCon 2026 and speaking with security and development leaders there, I came away with one clear theme: teams are shipping more code, in more languages, across more projects than ever before. Features that used to take days now take minutes, and complex logic can be scaffolded from a single prompt.

The output is fast, it looks polished, and it runs smoothly. 

And that’s exactly the problem. 

When Confidence Outpaces Security 

As developers rely more on AI tools, something subtle happens: the speed and quality of the output create confidence. The code looks clean, it compiles, it works as expected – so it gets trusted. 

But AI models only predict what is likely to work; they don’t understand your threat model, and they can’t assess exploitability in your environment. AI-generated code can function perfectly and still introduce serious vulnerabilities.
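To make that concrete, here is a minimal, hypothetical sketch of the pattern (the function names and schema are invented for illustration): a lookup an assistant might plausibly generate that passes every functional test yet is trivially injectable, next to the parameterized version a security review would demand.

```python
# Hypothetical example: an AI-suggested lookup that works perfectly in
# testing but is vulnerable to SQL injection via string interpolation.
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # Functional, readable, and wrong: user input is spliced into the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Same behavior for legitimate input, but parameterized and injection-safe.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

Both functions return the same rows for every normal test case, which is exactly why the first one gets trusted.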

This gap between what works and what’s secure is where risk compounds.

This isn’t a criticism of developers. AI tools are powerful productivity accelerators, and teams absolutely should use them. But validating functionality is not the same as validating security. And right now, that distinction is getting blurred. 

More Code, Same Security Team

This confidence issue isn’t happening in a vacuum; it’s the byproduct of broader organizational shifts. 

The development lifecycle is becoming more agentic, more automated, and faster than ever. That means more code written with fewer reviewers, pull requests that are more frequent but also more complex, and an AppSec team expected to keep pace without any additional resources.

So, the backlog isn’t stabilizing – it’s growing. 

I see this tension in organizations all the time. AppSec teams are expected to keep up with the speed of development while maintaining strong security standards. In practice, they can’t fully do both. Slowing down development usually isn’t an option, so security is expected to adapt. 

Development Is Now Human + AI 

Development is no longer purely human-led – but it isn’t exclusively AI-led either. It is now driven by developers working alongside AI. 

AI is assisting, suggesting, generating, and accelerating, but humans are still making decisions and shipping code. The model has shifted from developers writing everything themselves to developers collaborating with AI systems throughout the process. 

This shift significantly increases output. Teams are producing more features, services, and integrations at a much faster pace. But AI is optimized for speed and plausibility, not security. It can produce functional code, but not inherently secure code. 

The speed AI delivers builds confidence and trust, but it also increases the likelihood of security gaps slipping through unnoticed – especially when developers are shipping code they didn’t write and don’t fully understand. We dug deeper into this trend in our recent Don’t Trust the Code paper.

But these tools don’t just change how developers work – they also add new components to the software supply chain. Every model integration, MCP connection, and AI-assisted workflow becomes another potential entry point, and the environment is expanding faster than many security teams can track. 

I’ve seen cases where thousands of AI coding assistant licenses were active before the Head of Security even knew they existed. And when organizations don’t know which AI tools are in use or how data is flowing, they can’t properly assess risk – and the attack surface grows unnoticed.

Security Has To Evolve 

One of my biggest takeaways is that if AI-driven productivity is the new baseline, security can’t operate the way it did five years ago – it must evolve across these three categories: 

  1. How we identify vulnerabilities in code is changing.
  2. How we identify vulnerabilities in the tools we are using is changing.
  3. How we address vulnerabilities is changing. 

Traditional scanners weren’t built for this environment. They struggle with modern languages and frameworks, generate noise, and can’t keep pace with today’s CI/CD pipelines.

Meanwhile, AI is introducing new threat vectors: 

  • Generated logic that hasn’t been deeply reviewed
  • New dependencies
  • Expanded supply chain components

Organizations still need every line of that code scanned quickly, with findings developers can actually act on. This is why we’re seeing the rise of agentic scanning approaches: hybrid engines that combine deterministic analysis with AI reasoning, LLM-powered workflows, and automated context-aware triage. 
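Loosely sketched, that hybrid pattern might look something like this: deterministic rules find candidates, and a model scores exploitability in context before anything reaches a developer’s queue. The Finding shape, llm_assess helper, and threshold below are illustrative assumptions, not any particular product’s API.

```python
# Illustrative sketch of hybrid, context-aware triage: a deterministic
# scanner produces candidate findings, and an LLM pass weighs the
# surrounding context before anything lands in a developer's queue.
from dataclasses import dataclass

@dataclass
class Finding:
    rule_id: str   # deterministic rule that fired, e.g. "sqli-string-format"
    file: str
    line: int
    snippet: str   # surrounding code, passed along as context

def llm_assess(finding: Finding) -> float:
    """Hypothetical helper: ask a model to score exploitability from 0 to 1
    given the finding plus its context. Stubbed here with a fixed value."""
    prompt = (
        f"Rule {finding.rule_id} fired at {finding.file}:{finding.line}.\n"
        f"Context:\n{finding.snippet}\n"
        "Is this reachable and exploitable? Score from 0 to 1."
    )
    _ = prompt     # a real pipeline would send this prompt to a model
    return 0.9     # placeholder score

def triage(findings: list[Finding], threshold: float = 0.5) -> list[Finding]:
    # Deterministic detection stays authoritative; the model only ranks and
    # filters, so a low score demotes noise rather than hiding real bugs.
    return [f for f in findings if llm_assess(f) >= threshold]
```

The design choice worth noting: the model never decides what counts as a vulnerability, only how urgently a deterministic finding deserves attention.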

But securing the code is only half of the problem; we also need to secure the AI tools writing it. AI Bills of Materials (AI-BOMs) are emerging to provide visibility into where AI is being used, which models are connected, and how data flows through them. Securing the full AI stack is quickly becoming a core AppSec responsibility.
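There’s no settled AI-BOM schema yet, but as a rough sketch, a single inventory entry might capture something like the following (every field name and value here is illustrative):

```python
# Illustrative AI-BOM entry: what one inventory record might capture for
# a single AI component in the development environment. Schema is hypothetical.
ai_bom_entry = {
    "component": "code-assistant-plugin",     # tool, model, or MCP server
    "model": "example-llm-v2",                # which model is actually invoked
    "provider": "example-vendor",
    "integration": "IDE extension",           # how it enters the workflow
    "data_sent": ["source code", "prompts"],  # what flows out of the org
    "data_retention": "unknown",              # unknowns are findings too
    "approved_by_security": False,            # ties inventory to governance
}
```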

From Backlog to Automation 

Detection alone won’t solve the scaling problem. The traditional identify – triage – remediate – verify cycle cannot be managed manually when code is growing exponentially. Without automation, quality declines and backlogs grow. 

Agents become valuable when they’re embedded directly into the development lifecycle, especially in high-volume stages like triage, remediation, and verification. These are areas where automation can absorb the workload security teams can’t handle manually. 
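At its simplest, that embedded loop might be sketched like this, where propose_fix, apply_patch, and rescan stand in for whatever agent and scanner an organization actually runs:

```python
# Illustrative agentic remediation loop: for each triaged finding, an agent
# drafts a patch, the pipeline applies it on a branch, and a rescan verifies
# the fix before a human reviews it. All helpers are hypothetical placeholders.
MAX_ATTEMPTS = 3

def remediate(finding, propose_fix, apply_patch, rescan):
    for _ in range(MAX_ATTEMPTS):
        patch = propose_fix(finding)   # agent drafts a candidate fix
        apply_patch(patch)             # applied on a branch, never straight to main
        if not rescan(finding):        # verify: does the finding still fire?
            return patch               # fixed; open a PR for human review
    return None                        # escalate to a human after repeated misses
```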

When agents operate within a defined AppSec strategy, they form the foundation for applications that can secure themselves, freeing teams to focus on policy and governance rather than reactive risk management. 

Securing at the Speed of Confidence 

The paradox is clear. AI increases output, which increases risk. At the same time, it increases confidence, and confident developers move faster, question less, and merge sooner.

But beneath that momentum, the gap between perceived security and actual security continues to widen. Since slowing down is not a realistic option, the only path forward is to secure software at the speed AI now sets. 

Checkmarx is built for this shift. It combines deterministic scanning with AI-driven detection to give clear visibility into how AI is being used across development environments, while also automating remediation with tools like Checkmarx Developer Assist.  

The result is security embedded directly into the development process – instead of tacked on at the end.

And the goal isn’t to reduce developer confidence – confidence is a good thing! The goal is to ensure that this confidence is earned, backed by real visibility and security controls that scale with the volume of code being produced. 

At the end of the day, confident developers with guardrails in place move fast and stay secure. Confident developers without them just move fast. 

Tags: AI, AI generated code, AppSec, developer assist