Application security, as it exists today, was shaped by the Software Development Lifecycle (SDLC).
The SDLC assumed that code was written primarily by humans, progressed through recognizable phases, and paused naturally at points where review made sense.
Security controls were layered onto those pauses – during pull requests, before releases, or after builds – because that’s where time existed to apply them.
Those assumptions are becoming obsolete.
The SDLC Mental Model Is Breaking
AI has changed how code comes into existence. An increasing number of modern codebases are now generated, modified, and refactored continuously, often without a clear distinction between “writing,” “fixing,” and “improving.”
The lifecycle no longer advances in discrete steps with clear breaks. It loops.
Once that happens, many of the places where AppSec traditionally operated – stage gates, handoffs, centralized review queues – lose their effectiveness. They weren’t designed for continuous change, and they weren’t designed for machine-paced production.
What ADLC Actually Describes
The Agentic Development Lifecycle (ADLC) is the emerging model that describes this reality.
In an ADLC environment, humans and AI systems work together to produce and evolve software continuously. Developers guide intent and direction, while AI systems generate, transform, and extend code at a rate that no longer maps cleanly to phases or milestones.
This changes the unit of work AppSec has to reason about: Instead of releases or pull requests, security has to contend with a constant stream of small, fast-moving changes.
Why Existing AppSec Models Struggle
Most AppSec programs were built around interruption: stop here, scan there, review later. That approach assumes development can afford to wait.
In ADLC, waiting becomes part of the risk.
Centralized security teams cannot manually review the volume of code produced by AI-assisted workflows, and stage-based tooling struggles to stay relevant when code is rewritten multiple times before it ever reaches a traditional checkpoint.
There’s also a growing false sense of safety around AI-assisted development.
Because AI-generated code often looks clean, idiomatic, and well-structured, it’s easy to assume it is safer than hand-written code.
In practice, it frequently reproduces insecure patterns, makes inconsistent trust assumptions, and introduces vulnerabilities that are harder to spot precisely because they appear reasonable.
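A hypothetical illustration (the schema and function names are invented for this sketch): a lookup helper a model might plausibly produce – typed, documented, idiomatic – that is nonetheless injectable, shown next to the parameterized version a reviewer would want instead.

```python
import sqlite3

def get_user_by_email(conn: sqlite3.Connection, email: str) -> dict | None:
    """Fetch a user record by email address."""
    # Clean and idiomatic, but the f-string interpolates untrusted input
    # into SQL: email = "' OR '1'='1" matches an arbitrary row.
    cursor = conn.execute(f"SELECT id, email, role FROM users WHERE email = '{email}'")
    row = cursor.fetchone()
    return dict(zip(("id", "email", "role"), row)) if row else None

def get_user_by_email_safe(conn: sqlite3.Connection, email: str) -> dict | None:
    """The same lookup with a parameterized query, closing the injection."""
    cursor = conn.execute("SELECT id, email, role FROM users WHERE email = ?", (email,))
    row = cursor.fetchone()
    return dict(zip(("id", "email", "role"), row)) if row else None
```

Nothing about the vulnerable version looks wrong at a glance, which is exactly the problem.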
The impact is felt on both sides of the organization: Security teams lose timely visibility and effective control as AI accelerates code creation beyond traditional review models.
At the same time, developers experience security as an after-the-fact interruption that flags issues in code that has already changed.
ADLC exposes a fundamental mismatch: tools designed for sequential development cannot keep pace with AI-driven workflows without compromising either security or speed.
What AppSec Has to Become
If development is continuous, security has to operate continuously as well.
That means security systems need to evaluate code as it is created and modified, not after the fact. They need to understand context – how a piece of code fits into a broader system – and they need to act without relying on human intervention for every decision.
This is where agentic AI becomes necessary rather than aspirational. Security systems need the ability to reason about changes, apply organizational policies automatically, and persist alongside development rather than responding to snapshots.
In practical terms, this pushes AppSec closer to where development decisions are made: inside the IDE and before changes are committed. That is where convenience and necessity intersect for developers, because it’s where intent is expressed and where correction is still cheap.
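To make “before changes are committed” concrete, here is a deliberately minimal sketch of a pre-commit check. The rules and file layout are hypothetical, and a real agentic tool reasons about context rather than matching patterns – but the placement in the workflow is the same.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-commit security gate (illustrative only)."""
import re
import subprocess
import sys

# Hypothetical organizational policy: patterns that should never be committed.
POLICY = [
    (re.compile(r"""execute\(f['"]"""), "SQL built with an f-string (injection risk)"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification disabled"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "possible hard-coded AWS access key"),
]

def main() -> int:
    # Inspect only the lines being added in this commit.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    findings = []
    for line in diff.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            for pattern, message in POLICY:
                if pattern.search(line):
                    findings.append(f"{message}: {line[1:].strip()}")
    for finding in findings:
        print(f"blocked: {finding}", file=sys.stderr)
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit, it rejects a violating change on the commit attempt itself – feedback arrives while the code is still in front of the developer, not after a pipeline run.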
The Developer Workflow Is Changing
As AI takes on more of the mechanical aspects of coding, developers spend more time directing, validating, and integrating output. Security decisions increasingly happen implicitly, through what developers accept, reject, or modify.
Independent research such as the BaxBench benchmark, which measures how well large language models generate backend applications that are both functionally correct and secure, shows a stark reality:
Even flagship models frequently produce code that contains security vulnerabilities, whether or not it is functionally correct. In the BaxBench evaluation, many generated programs that passed functional tests still fell to expert-designed exploits, indicating that correctness and security don’t automatically coincide in AI-generated output.
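The gap is easy to demonstrate with a minimal sketch (invented for illustration, not taken from BaxBench itself): a file-retrieval helper whose functional tests pass while a path-traversal exploit still succeeds, alongside a guarded variant.

```python
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")

def read_upload(filename: str) -> bytes:
    """Return the contents of an uploaded file by name."""
    # Functionally correct for every well-behaved input, so a test like
    # read_upload("report.txt") passes. A security test does not:
    # filename = "../../../etc/passwd" walks out of UPLOAD_DIR.
    return (UPLOAD_DIR / filename).read_bytes()

def read_upload_checked(filename: str) -> bytes:
    """The same behavior, refusing paths that escape the upload root."""
    path = (UPLOAD_DIR / filename).resolve()
    if not path.is_relative_to(UPLOAD_DIR.resolve()):  # Python 3.9+
        raise PermissionError("path escapes upload directory")
    return path.read_bytes()
```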
AppSec has to align with that reality. Guidance that arrives late or requires developers to context-switch will be ignored, regardless of policy. Guidance that arrives in-line, with enough context to be actionable, has a chance to influence outcomes at scale.
This doesn’t eliminate governance. Organizational standards, risk tolerances, and compliance requirements still matter. What changes is how they are enforced: automatically and continuously, rather than episodically and manually.
Organizational Consequences
In many organizations, this shift is already reshaping responsibility boundaries. AppSec capabilities are beginning to intersect more closely with platform engineering and emerging AI engineering teams, reflecting the fact that security, developer experience, and AI systems are now tightly coupled.
Security becomes less about approval and more about enablement, providing guardrails that operate at the same speed as development rather than trying to slow it down.
Closing
ADLC doesn’t leave much room for AppSec to catch up later. Code is produced continuously, changes compound quickly, and delayed feedback becomes indistinguishable from no feedback at all.
That reality forces a simple conclusion: security has to operate inside the development loop itself, aligned to how software is actually produced in an AI-driven lifecycle.
Checkmarx.dev offers a view of what ADLC-oriented security looks like in practice with Checkmarx Developer Assist – an agentic security linter that operates directly inside supported IDEs, evaluating risk as code is written, before commits, pipelines, or handoffs exist.
Developers and AI engineers can try it hands-on through a free trial in IDEs like VS Code, Cursor, Windsurf, and AWS Kiro.
If SDLC framed how AppSec worked for the last decade, ADLC will define what works next.
Learn more and get your free trial at https://checkmarx.dev
This article was originally published on Checkmarx’s LinkedIn Newsletter, “The Monthly Checkup”.