The latest IDC Link commentary on Checkmarx's Agentic AI solution and the security risks of vibe coding shines a bright light on the fact that pre-AI application security is no longer sustainable in the era of AI-generated code. In the IDC Link, "Checkmarx Targets DevSecOps Friction Points with AI Agents for Code, Policy, and Risk," IDC analyst Katie Norton captures the urgent need for change: AI is speeding up development but leaving security behind.

The AI Dev Revolution Has a Catch

Developers using GenAI tools like GitHub Copilot are reporting 35% productivity gains, according to IDC's 2024 Generative AI Study; however, this gain is a double-edged sword. Faster code doesn't mean safer code. In fact, 76% of AI-generated code requires security refactoring, according to The New Stack's 2025 State of Web Dev AI Report.

Worse still, as Norton notes, the rise of GenAI tools has given rise to a behavioral pattern in engineering known as vibe coding. You're likely familiar with the concept: fast, fluid interaction between developers and AI assistants, where developers assemble code by instinct, trust the suggestions, and ship without slowing down to scrutinize every line. It's "vibing" with the AI. It feels productive. It is productive. It also introduces new layers of risk.

This behavior of taking GenAI-suggested code at face value is expanding the attack surface. Behind the speed lies a growing set of vulnerabilities: outdated dependencies, insecure patterns, and misconfigurations that often go unnoticed in the flow.

Security Tools Can't Just Watch and Find, They Have to Act and Prevent

For security teams, the issue isn't a lack of awareness. It's actionability: having enough resources to address the growing volume of security issues effectively. The IDC Link makes the case for a shift in AppSec from passive scanning to agentic intervention. AI agents need to do more than alert; they must act.

Checkmarx's Agentic AI model, the Checkmarx One Assist family of agents, is purpose-built for this new challenge. IDC frames the approach through three key friction points:

- Friction between security and speed (Developer Assist Agent)
- Friction between policy and practice (Policy Assist Agent)
- Friction between risk posture and comprehensive visibility (Insights Assist Agent)

These agents fill fundamental operational gaps that Norton identifies as not just common but unignorable in the age of AI-accelerated development. Crucially, she recognizes that addressing new risks requires AI to be deeply embedded, role-specific, and context-aware, rather than bolted on.

"By focusing on secure code creation, policy standardization, and risk visibility—areas where many organizations already face operational friction—Checkmarx is aligning its initial agent rollout with functions that are more likely to drive adoption and measurable outcomes." — Katie Norton, Research Manager, IDC

IDC Implications for DevSecOps Leaders

We believe the IDC Link commentary validates what many in AppSec have already recognized: the AppSec lifecycle can't be one-size-fits-all anymore. It requires autonomous agents embedded where security risks actually emerge: in code, in workflows, and in strategy.

The Developer Assist Agent, for example, doesn't just point out a problem. It interprets findings, formulates fixes, and makes the pull request itself. It works within the tools your engineers already use (like VS Code and GitHub Copilot), closing the gap before code ever hits production (a minimal sketch of this pattern follows below).

The Policy Assist Agent gives AppSec teams the ability to orchestrate and enforce policy as code, rather than relying on documentation (see the second sketch below). The Insights Assist Agent enables security leaders, who are increasingly asked to tie technical risk to business outcomes, to make that conversation quantifiable and reliable.
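To make the fix-and-PR pattern concrete, here is a minimal, hypothetical sketch in Python. The Finding type, the propose_fix() stub, and the repository and branch names are illustrative assumptions, not the Developer Assist Agent's actual interfaces; the only real API used is GitHub's public REST endpoint for opening a pull request.

```python
# Hypothetical sketch of an "interpret finding -> formulate fix -> open PR" loop.
# Finding, propose_fix(), and the repo/branch names are illustrative assumptions,
# NOT the Developer Assist Agent's real interfaces. The pull-request call uses
# GitHub's public REST API.
from dataclasses import dataclass

import requests


@dataclass
class Finding:
    rule_id: str    # e.g. "CWE-89 (SQL injection)"
    file_path: str
    line: int
    snippet: str    # the flagged source line


def propose_fix(finding: Finding) -> str:
    """Stand-in for the agent's LLM-backed remediation step.

    A real agent would send the finding plus surrounding code context to a
    model; here one trivial rewrite is hard-coded so the sketch runs.
    """
    if finding.rule_id.startswith("CWE-89"):
        # Replace a string-formatted query with a parameterized one.
        return 'cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))'
    return finding.snippet  # no automated fix available


def open_pull_request(token: str, repo: str, head_branch: str, finding: Finding) -> str:
    """Open a PR via GitHub's REST API, assuming the fix was already committed
    to `head_branch` by earlier steps (elided in this sketch)."""
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/pulls",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"fix: {finding.rule_id} in {finding.file_path}",
            "head": head_branch,
            "base": "main",
            "body": f"Automated remediation for {finding.rule_id} at "
                    f"{finding.file_path}:{finding.line}.",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]  # link a reviewer can open immediately
```

The point is the shape of the loop: the agent owns the finding end to end and hands the developer a reviewable pull request, rather than handing them a ticket.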
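And here is one way "policy as code" can look in practice: a declarative gate evaluated in CI rather than a rule buried in a wiki. The policy schema and field names below are assumptions for illustration, not the Policy Assist Agent's actual format.

```python
# Hypothetical policy-as-code gate: a declarative policy evaluated in CI.
# The schema, severity labels, and license field are illustrative assumptions,
# not the Policy Assist Agent's real format.
POLICY = {
    "max_critical": 0,              # any critical finding blocks the merge
    "max_high": 3,
    "banned_licenses": {"AGPL-3.0"},
}


def evaluate(findings: list[dict], licenses: set[str]) -> list[str]:
    """Return human-readable violations; an empty list means the gate passes."""
    counts = {"critical": 0, "high": 0}
    for f in findings:
        sev = f.get("severity", "").lower()
        if sev in counts:
            counts[sev] += 1

    violations = []
    if counts["critical"] > POLICY["max_critical"]:
        violations.append(
            f"{counts['critical']} critical finding(s), max {POLICY['max_critical']}"
        )
    if counts["high"] > POLICY["max_high"]:
        violations.append(f"{counts['high']} high finding(s), max {POLICY['max_high']}")
    violations += [
        f"banned license in dependency tree: {lic}"
        for lic in sorted(licenses & POLICY["banned_licenses"])
    ]
    return violations


if __name__ == "__main__":
    sample_findings = [{"severity": "critical"}, {"severity": "high"}]
    for v in evaluate(sample_findings, {"MIT", "AGPL-3.0"}):
        print("POLICY VIOLATION:", v)
```

Because the policy is data, it can be versioned, reviewed, and enforced identically across every pipeline, which is the practical difference between policy as code and policy as documentation.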
What IDC Sees in AppSec That Others Might Miss

While many vendors are adding AI as a bolt-on feature to their security products, IDC's commentary identifies here the foundation of a next-generation security architecture. This approach isn't about chasing buzzwords, but rather about reducing mean time to remediate, improving DORA alignment, and restoring confidence in developer-security collaboration.

The IDC commentary underscores that trust in AI doesn't come from slick UX. It comes from transparency, auditability, and alignment with enterprise policy. AI is poised to transform security, but trust will determine adoption. It's not just about how well agents perform; it's about how transparent they are, and how easily their actions can be audited and aligned with enterprise policies. The organizations that get this balance right could lead the way in AI adoption.

The New Security Blueprint for DevSecOps

For leaders of DevSecOps programs, this IDC commentary reflects the realities of AI-driven application development. At certain speeds, a speed bump becomes a hazard rather than just a nuisance. At the current acceleration of development velocity, security can no longer be a speed bump. It must be a silent partner that's intelligent, embedded, and proactive. Agentic AI aligns with this vision, as the IDC Link demonstrates.

Read the full commentary.

Tags: Agentic AI, AI generated code, AppSec, DevSecOps