
If you asked a room full of CISOs how AI has changed their work, the answers would mix optimism with anxiety. This tension between acceleration and risk is where modern application security teams now live.
As Sandeep Johri, CEO of Checkmarx, shared during our recent Agentic AI Summit, AI coding assistants bring meaningful productivity gains, but they also raise real security concerns.
“AI coding assistants are really a double-edged sword,” he said. “On one hand, they drive 20, 30, even 40 percent productivity gains. On the other, they raise anxiety because now organizations have to make sure they’re not also multiplying their vulnerabilities.”
To explore this reality in depth, Johri invited Katie Norton, Research Manager at IDC, to join the conversation.
Norton leads IDC’s DevSecOps and supply chain security practice, where she tracks how enterprises are adapting their security strategies for this new AI-powered era of software development.
Their discussion revealed what is working today and what is still evolving as organizations learn how to apply both AI coding assistants and autonomous AI application security agents in practical ways.
AI Coding Assistants Are Everywhere, But Not Without Risk
Norton opened with a striking data point: according to IDC, 91% of organizations are now using AI coding assistants in software development.
That number highlights how quickly, in just a few years, these tools have become standard in engineering workflows.
The appeal is clear. Developers report productivity increases of up to 35%, thanks to faster code generation, reduced repetition, and a smoother path to meeting delivery deadlines. But that speed introduces a host of new security concerns.
“Much of the code these assistants are trained on comes from open source repositories,” Norton explained.
“While open source is essential, it also contains vulnerabilities and outdated practices. That means AI-generated code can include insecure patterns, unvetted dependencies, or even code with unclear provenance.”
Higher Expectations, More Pressure, and the Role of Intelligent Support
The volume of code now being created adds pressure to AppSec teams. Productivity improvements come at a cost if they are not managed carefully. “If you say your team is coding 35% faster,” Norton said, “the business will start expecting that level of output.” That expectation naturally invites a decline in code scrutiny.
That pressure does not only apply to developers. AppSec teams are now expected to secure code just as quickly.
The problem here is that traditional security processes can’t keep pace with generative AI. The more that gets built, the more security must validate. As Johri noted, the concern is not just the speed itself, but the widening gap between development and security capacity.
Ensuring the safety of this new flow of code requires intention. Norton summed it up clearly. “Securing AI-generated code requires more than trust. It demands the deliberate integration of controls, oversight, and continuous governance.”
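To make this concrete, one deliberate control of the kind Norton describes is a merge gate that blocks changes whose scan findings exceed a severity threshold. The sketch below is purely illustrative: the finding fields, rule names, and threshold are assumptions for the example, not any vendor’s schema or the approach discussed at the summit.

```python
# Hypothetical sketch of a merge gate for AI-assisted changes: block the
# merge when any finding exceeds an allowed severity. Rule names, fields,
# and the threshold are illustrative assumptions, not a product schema.
from dataclasses import dataclass

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class Finding:
    rule: str
    severity: str
    file: str

def gate(findings, max_allowed="medium"):
    """Return (passed, blocking): blocking lists findings whose severity
    exceeds the allowed ceiling for merge."""
    limit = SEVERITY_RANK[max_allowed]
    blocking = [f for f in findings if SEVERITY_RANK[f.severity] > limit]
    return (len(blocking) == 0, blocking)

# Example run: one critical finding should block the merge.
findings = [
    Finding("hardcoded-secret", "critical", "app/config.py"),
    Finding("weak-hash", "medium", "app/auth.py"),
]
passed, blocking = gate(findings)
```

The design point is that the policy is explicit and versionable: the threshold lives in code, so governance is continuous rather than ad hoc.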
Agentic AI in AppSec Isn’t About Hype, But Help
Where AI coding assistants focus on creation, agentic AI in AppSec focuses on control: it introduces a layer of intelligent automation that supports application security efforts without slowing teams down.
These agents close a gap between development speed and security capacity that existed before AI and has only widened with AI-generated code. They reduce manual burdens, detect risks as code is being written, and embed protection where it is needed most.
“Agentic AI is still in the early days,” Norton shared. “But we’re seeing real promise. These agents can detect issues, enforce policies, and take predefined actions without waiting for humans.”
Organizations are embedding agents directly into developer environments. From within the IDE, these agents interpret code as it is written, analyze scan results, surface contextual security recommendations, and even propose code fixes.
Developers get immediate guidance without leaving their workspace. On the AppSec side, agents are already helping triage issues, remove duplicates, and in some cases, apply fixes automatically.
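The triage behavior described above can be sketched in a few lines: deduplicate raw findings, then split them into issues the agent can fix automatically and those queued for human review. This is a minimal illustration; the rule names and the “auto-fixable” set are assumptions made for the example, not any product’s taxonomy.

```python
# Illustrative agent triage step: deduplicate findings, then route them
# to auto-fix or human review. The AUTO_FIXABLE set is an assumption.
AUTO_FIXABLE = {"outdated-dependency", "missing-input-validation"}

def triage(findings):
    """findings: list of (rule, file, line) tuples, possibly duplicated.
    Returns (auto_fix, needs_review)."""
    unique = sorted(set(findings))  # drop duplicate reports of the same issue
    auto_fix = [f for f in unique if f[0] in AUTO_FIXABLE]
    needs_review = [f for f in unique if f[0] not in AUTO_FIXABLE]
    return auto_fix, needs_review

# Example: two scanners report the same dependency issue; one SQL injection
# finding needs a human decision.
raw = [
    ("outdated-dependency", "requirements.txt", 3),
    ("outdated-dependency", "requirements.txt", 3),  # duplicate report
    ("sql-injection", "app/db.py", 42),
]
auto_fix, needs_review = triage(raw)
```

In practice the routing logic would be policy-driven and auditable, which is what gives AppSec teams the visibility into agent behavior discussed later in this piece.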
However, as Johri pointed out, “The goal is not just to detect faster. It is to reduce the gap between detection and remediation in a way that does not depend on hiring more people.”
Agentic AI steps in to help both sides manage those expectations without compromising quality or safety.
However, autonomous AppSec does not magically work on its own. Johri reminded us that “Technology only succeeds when it is aligned with people, process, and clear business outcomes.”
How the Best Organizations Are Doing AppSec Differently
Not every company is getting this right. The difference lies in strategy and clarity.
IDC research found that 57% of CIOs consider clearly defined business use cases the top success factor for adopting AI agents. Without this clarity, these tools risk becoming fragmented experiments that introduce more complexity than value.
“This is not just a tech decision,” Norton said. “These agents must tie directly to business outcomes and security priorities.”
The most successful organizations share several things in common:
- They define outcomes first. Whether it is filtering false positives, proposing secure code fixes, or automating triage, every use of agentic AI starts with a purpose.
- They embed AI into daily workflows. These tools live where the work happens, inside developer environments and integrated into CI pipelines.
- They establish visibility and control. Application security teams need transparency into how AI agents work in order to build trust.
- They build culture to support the change. New technology cannot thrive without cultural readiness. Change management, ethical frameworks, and clear governance matter as much as the tech itself.
As Norton put it, “Agentic AI offers a practical way to embed security. Not just by providing alerts, but by taking action.”
Smarter Action, Not More Alerts
The most dangerous misconception about agentic AI is that it is just another feature bolted onto coding tools. It is much more than that: a necessary response to a fundamental shift in how software is built. Without it, organizations risk drowning in speed without safety.
Those who treat agentic AI in AppSec as a strategic pillar are already seeing results. Organizations are closing the gap between detection and resolution, giving developers fast feedback, and helping security teams focus on the highest-value work.
At Checkmarx, we are building our agentic AI strategy with this future in mind. As Johri said, “This is not about trends. It is about enabling our customers to build secure software at machine speed, with human oversight and business intent.”
The future of application security will not be defined by how fast we build, but by how confidently we secure what we build.
Missed the Agentic AI Summit? Watch the Full Sessions Now
Watch exclusive conversations from the recent Checkmarx Agentic AI Summit, featuring industry leaders in AI, development, and AppSec. Gain fresh, actionable insights into the real-world opportunities and challenges of AI in Application Security.