AI is accelerating software development faster than any previous technological shift, embedding itself into the everyday developer workflow. As a result, development speed and productivity have surged, but security teams are experiencing the opposite: more complexity, less visibility, and growing uncertainty about what code is actually running in production. This gap has exposed how heavily many organizations still rely on AppSec tools that predate AI-generated code. And they’re quickly discovering, sometimes painfully, that these tools struggle to make sense of (let alone protect) code created by AI.

This convergence is driving an unexpected shift in application security: DAST is experiencing a renaissance. For years, DAST (Dynamic Application Security Testing) was dismissed as a “nice to have,” useful primarily for checking compliance boxes, or viewed as little more than a pen testing tool. But as AI accelerates code creation and introduces new behaviors and attack surfaces, organizations are rediscovering that DAST is a critical pillar of AppSec. Only DAST can provide the broad deployment and meaningful security coverage this new reality demands; coverage that static tools simply can’t deliver in an AI-driven world.

This was the central theme of our recent webinar, The Future of DAST: Why AI-Generated Code Demands a New Strategy, hosted by Checkmarx product leaders. Grounded in data from our annual Future of AppSec Report, the discussion explored some pressing questions: in a world where AI is reshaping how applications are built, what’s working, what’s broken, and why is DAST suddenly rising from the dead to become a crucial safeguard?

Meet the Expert Panel

To explore these questions, we brought together three Checkmarx leaders uniquely positioned at the intersection of DAST innovation and AI-driven development:

Simon Bennetts, ZAP Software Engineering Expert, Checkmarx, and ZAP project leader and founder
Frank Emery, Director of Product Management, Checkmarx

Moderated by Avi Hein, Senior Product Marketing Manager at Checkmarx, the conversation offered a candid look at the future of application security, and at why DAST has become essential in the age of AI.

The Hidden Reality of AI-Generated Code

The webinar opened with a simple poll: what percentage of your organization’s application code is AI-generated? The result was revealing: nearly half of respondents answered, “We don’t know.” This sentiment aligns with findings from our Future of AppSec Report, which showed that while organizations recognize the risks of AI-generated code, they deploy it anyway.

At the same time, however, the report revealed something surprising: DAST adoption is rising sharply. “47% said they have DAST in place for 2025, up from 38% last year,” Avi noted. “That’s nearly a 24% increase year over year.” This growth signals a critical shift: organizations increasingly accept that AI will be present in their code, but they’re also admitting that their traditional security approaches aren’t keeping up. The result? They’re returning to runtime testing engines like DAST to close the gap.

The DAST Renaissance

For the past decade, DAST lived in the margins of AppSec programs. It wasn’t ignored entirely, but it wasn’t central either. Many organizations ran it infrequently, before a major release or to satisfy a compliance requirement. Simon described this evolution: “DAST started strong… But then as applications changed, DAST found it harder to explore these applications. Even authentication got really hard.” As modern frameworks and authentication flows grew more complex, DAST struggled to keep up. Meanwhile, SAST surged in popularity because it was so simple to use. As Simon put it, “SAST was much easier to set up. You point it at your repo, and it can just go from there.” Suddenly, organizations were treating it as a choice: DAST or SAST.
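The difference between the two lenses is easy to see in miniature. Below is a hypothetical sketch (the handler, the `LEGACY_RAW_HTML` flag, and the probe are invented for illustration; they are not from the webinar or any Checkmarx product): a handler that looks safe when you read the repository, because it escapes input by default, but whose deployment configuration re-enables an unescaped code path that only a probe against the running application would observe.

```python
import html
import os


def render_comment(comment: str) -> str:
    """Render a user comment into an HTML fragment.

    Read statically, this looks safe: input is escaped by default.
    But a runtime setting (an environment variable here, standing in
    for any deployment config) silently switches to an unescaped path,
    so the flaw exists only in the *running* application.
    """
    if os.environ.get("LEGACY_RAW_HTML") == "1":   # runtime-only code path
        return f"<p>{comment}</p>"                 # unescaped: exploitable
    return f"<p>{html.escape(comment)}</p>"        # what static review sees


def dast_style_probe(render) -> bool:
    """Minimal dynamic check: send a payload, see if it reflects unescaped."""
    payload = "<script>alert(1)</script>"
    return payload in render(payload)  # True means the live app is vulnerable
```

A real DAST engine such as ZAP does this over HTTP against a deployed target, with authentication, crawling, and far richer payloads; the point of the toy version is only that the vulnerable behavior exists solely at runtime, so reading the default code path never reveals it.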
But the truth is that no single testing method provides complete coverage. Simon emphasized: “I’ve never bought into the DAST or SAST thing. It’s much more important to combine these [two engines]. There is no one view of security.” In the AI era, DAST’s unique ability to see what actually happens when an application runs matters more than ever. DAST reveals what is genuinely vulnerable, delivering fewer false positives and a better signal-to-noise ratio than static analysis alone.

The AI Twist: Code That Looks Secure

One of the most compelling insights to emerge from the discussion was about AI-generated code. Many developers assume that if AI writes the code, it must be secure. Frank explained why that assumption is dangerous: “People have this impression that AI-generated code is secure because the AI knows better. But what we’re finding is AI writes code that looks very secure but still has a lot of gaps.” And DAST plays a critical role in catching these hidden flaws. Frank put it bluntly: “DAST is acting as the police officer, confirming that all of the code that’s being written – especially by AI – is actually being written correctly.” With decades of development and maturity behind it, DAST can catch vulnerabilities that other tools miss. This is why organizations relying heavily on GitHub Copilot, ChatGPT, and other generative tools are increasingly turning to DAST for protection.

DAST Adoption Lagged, But It’s Accelerating Now

Although DAST has always been powerful, its adoption has historically been slow. Simon summarized the challenge: “DAST… is not as simple as SAST. You need a running system. You need to be able to authenticate. You need to be able to explore the application… Knowing how to tune [DAST] best for your applications is hard.” Frank agreed and added: “You start to see onboarding and adoption issues when you create a bottleneck around how DAST is used.
Historically, you have experts… in charge of getting DAST up and running and that fundamentally restricted how much it could be adopted.” This complexity meant many organizations limited DAST usage to a handful of specialists, who had to pick and choose what to test and how to test it. But modern DAST tools are focused on solving these usability challenges so that more people within an organization can set up DAST. As Avi joked: “If I can set it up, anybody can.” This new focus on accessibility is driving much of DAST’s resurgence today.

Will AI Replace DAST? Not Even Close.

A major question during the session was whether agentic security systems might eventually replace DAST. Simon’s answer was unequivocal: “I don’t see agentic systems as being a threat to DAST and they won’t replace DAST, but I do see that DAST will feed into agentic systems, and we’ll also get LLMs configuring these systems.” He explained that the marketplace will shift, but DAST remains unmatched and won’t be going anywhere any time soon. Frank echoed this view: “LLMs are not going to get rid of DAST at all. It’s just a more expensive way to solve a problem, but they will get rid of a lot of the manual stuff.” He sees LLMs playing a role in helping to configure and scale DAST, though it will look different from how many people envision it. The consensus was that AI will be leveraged to enhance DAST by automating configuration, improving coverage, and reducing human effort, while DAST continues to anchor runtime security.

Testing AI-Powered Apps

As organizations deploy more AI-powered applications, a critical question emerged during the session: how do we test the security of AI-powered applications? Frank acknowledged that AI introduces entirely new testing requirements that go beyond traditional DAST capabilities: “The end goal [of trying to secure your application and trying to find vulnerabilities] hasn’t changed.
But, as new technologies come out, likely the engines you involve and how you orchestrate them together will look a little bit different. And that’s where some of the value of more modern DAST tools is going to come in.” Eventually, though, we will need AI solutions that can secure themselves. Frank discussed the broader vision of self-securing applications, which he broke into four essential steps: identifying vulnerabilities, triaging them, fixing them, and verifying the fix. “People think this idea of a self-securing application is very Star Trek,” Frank said. “I’m a huge believer. I think we’re actually a lot closer than people realize.” DAST already plays a central role in three of these four steps; it’s the foundation that will ultimately make self-securing applications possible.

Why Siloed Security Tools Are Failing

The session concluded with a discussion of fragmented AppSec stacks: separate tools for SAST, SCA, and DAST, each producing isolated reports with no correlation between findings. When asked if this fragmentation is truly as problematic as it sounds, Simon didn’t hesitate: “No, it is as bad as you’re making it sound. It’s generally horrible.” Issues fall through the cracks, teams lose visibility, and developers drown in noise. Frank connected this directly to the AI acceleration challenge: “If you’re generating code ten times faster and your security team isn’t getting ten times faster, then you’re going to have to make difficult decisions, and that’s where risk emerges.” The solution lies in unified AppSec platforms like Checkmarx One, which consolidate findings across all testing engines, correlate signals to reduce noise, and deliver security feedback directly into developer workflows, at the speed AI demands.

What Will 2026 Look Like?
The panelists agreed that AI-generated applications will become more standardized, making DAST more effective over time. According to Frank, “with LLM-generated apps, there’s more standardization… The bulk will coalesce around standard ways of doing things, and that will make our jobs easier.” He also predicts growing reliance on DAST as the primary method for validating AI-generated code: “People are going to rely on DAST progressively more as the way to secure AI-generated code. It’s too easy a solution to the problems we’re seeing for it not to become standardized.”

The Takeaway: DAST Is No Longer Optional

Across the entire discussion, the message was unmistakable: DAST has shifted from a compliance checkbox to a mission-critical security control for the AI era. As AI accelerates development and introduces new runtime behaviors, only DAST can reveal what is truly exploitable in the live application. Organizations that treat DAST as optional will struggle to keep up with the pace and unpredictability of AI-driven development. Those that embrace it and integrate it into unified AppSec workflows will be best positioned to secure the next generation of software.

Find Out More About DAST

Want to see the future of DAST? Contact us for a demo and a discussion about the future – and present – of DAST, and why DAST is so critical for the AI era.

Tags: AI generated code AppSec dast Webinar