
When an AI tool suggests insecure code or silently skips essential security steps, who’s responsible?
The closing keynote at the recent Agentic AI Summit hosted by Checkmarx brought together two voices at the forefront of software engineering’s AI transformation: Eran Kinsbruner, VP of Portfolio Marketing at Checkmarx, and Andrew Zigler, Lead Engineer at LinearB and host of the popular Dev Interrupted podcast.
Their conversation was far from theoretical. It was grounded, urgent, and framed around a central challenge: how engineering leaders must rethink roles, workflows, and culture to meet the demands of AI-accelerated development.
Zigler opened with a powerful observation:
“Engineers are graduating from action doers to decision managers.”
This shift, he explained, is not just about productivity gains or tooling upgrades. It marks a fundamental change in how software is built.
Agentic AI tools like GitHub Copilot, Amazon Q, Cursor, and Windsurf are no longer simple assistants. They are participating actors in the development workflow, capable of orchestrating code generation, testing, and even deployment.
Yet with that autonomy comes accountability. As Zigler put it:
“The AI doesn’t carry the consequences of a security breach. Humans do.”
If a machine writes a vulnerable function and no one notices, the responsibility still lies with the team that shipped it.
Speed and Risk: The Dev’s Race Track
Speed without a strong foundation creates risk. Zigler compared it to driving a race car on suburban streets. Without a racetrack (a development environment built for high-velocity, high-visibility work), teams risk crashes.
Security, in this metaphor, is the track infrastructure: the barriers, the lanes, the marshals. Agentic AI is the car.
And human engineers? They are still the drivers, but with more dashboards, more decisions, and more responsibility than ever.
Engineering Roles Are Evolving Fast
Engineering organizations must now reimagine their team structures. The historical division between action executors (e.g., developers pushing code) and strategy setters (e.g., architects or product leads) is blurring.
Engineers are being asked to think more like platform owners, managing the impact and reliability of their workflows rather than just contributing tickets.
Zigler highlighted a trend he’s observed on Dev Interrupted:
“Platform engineers are becoming fundamental to how teams operate in this new AI-driven world.”
With AI, the developer role isn’t shrinking; it’s expanding. Developers are now expected to design and govern the workspaces where AI agents and human engineers collaborate.
Even junior engineers, Zigler noted, are evolving. Entry-level specialists (like front-end-only devs) may be replaced by generalists who understand the product and can manage AI-generated contributions responsibly.
The expectation isn’t just to code, but to guide AI workflows and validate their outputs.
Cultural Shifts: Safe Sandboxes and Transparency
Technical change is only half the story. Zigler emphasized the need for both experimentation and trust:
“If you don’t build a safe space for developers to try these tools,” he warned, “they’ll go find one elsewhere.”
Experimentation, however, can’t mean chaos.
Teams must understand what their tools are doing and what those tools should be doing. AI cannot be a black box. Vendors must be deeply fluent in the limits and risks of their own AI features if they want to earn the trust of consumers and customers.
This inflection point is where observability and guardrails become essential. Application security leaders need to embed monitoring, explainability, and audit trails into every workflow.
That means policy-as-code, pipeline instrumentation, and an organizational vocabulary that treats AI as a peer in the process, not just a plugin.
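To make that concrete, here is a minimal, hypothetical sketch of what such a policy-as-code gate might look like in a pipeline. The field names (ai_generated, human_approvals, audit_log_id) and the rules themselves are illustrative assumptions, not any specific vendor's schema or the speakers' implementation:

```python
# A minimal policy-as-code sketch (hypothetical): block merges of AI-generated
# changes that lack a human approval or an audit-trail reference.
# Field names such as "ai_generated" and "audit_log_id" are illustrative
# assumptions, not any specific tool's schema.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    title: str
    ai_generated: bool                        # flagged by the agent or commit metadata
    human_approvals: list[str] = field(default_factory=list)
    audit_log_id: str | None = None           # link into the pipeline's audit trail

def evaluate_policy(change: ChangeRequest) -> list[str]:
    """Return policy violations; an empty list means the change may merge."""
    violations = []
    if change.ai_generated and not change.human_approvals:
        violations.append("AI-generated change needs at least one human approval")
    if change.ai_generated and change.audit_log_id is None:
        violations.append("AI-generated change must reference an audit-trail entry")
    return violations

if __name__ == "__main__":
    pr = ChangeRequest(title="Refactor auth middleware", ai_generated=True)
    for issue in evaluate_policy(pr):
        print(f"BLOCKED: {issue}")            # a CI step would fail the build here
```

In practice, the same rules could live in a policy engine or a CI configuration; the point is that the guardrail is expressed in versioned code and leaves an audit trail of its own.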
Validating the Incremental Path
The conversation ended on a hopeful note, backed by data. Zigler shared findings from a recent workshop at LinearB, where over 400 developers and engineering leaders explored how they were adopting agentic AI tools. The most successful teams all followed a consistent pattern: start small, validate outcomes, expand with intention.
This insight aligns with Checkmarx’s own research into Developer Experience (DevEx) and security. The best-performing organizations aren’t just throwing AI at problems; they are building maturity matrices, measuring trust, and evolving gradually.
Zigler summarized it succinctly:
“The teams winning today aren’t the ones with the most AI. They’re the ones with the clearest strategy for how to use it.”
What DevSecOps Can Do Now
This keynote wasn’t just reflection. It was a call to action. For engineering and AppSec leaders navigating this shift, here are three takeaways:
- Rethink engineering roles and workflows to position AI agents as participants, not just tools. Decision-making frameworks must evolve alongside automation.
- Evaluate your team’s readiness and identify clear next steps on your adoption journey.
- Establish guardrails and observability early to foster safe experimentation while maintaining transparency with stakeholders and users.
Agentic AI is already shaping the next era of DevSecOps. Those who adapt with intention, clarity, and accountability will not just keep up. They’ll lead.
Missed the Agentic AI Summit? Watch the Full Sessions Now
Watch exclusive conversations from the recent Checkmarx Agentic AI Summit, featuring industry leaders in AI, development, and AppSec. Gain fresh, actionable insights into the real-world opportunities and challenges of AI in Application Security.