2025 CISO Guide to Securing AI-Generated Code


AI is Writing Your Code—Who’s Keeping It Secure? 


June 12, 2025

Mark Twain famously said, “History doesn’t repeat itself, but it often rhymes.” In tech security, AI is creating a new verse that rhymes with Cloud. 

Just over a decade ago, CISOs tried to ban Dropbox and Google Drive to stop unsanctioned file sharing. That didn’t work. Cloud apps simply went underground—until security leaders realized that blocking wasn’t the answer. Governance was. 

Today, AI coding tools like GitHub Copilot and Amazon Q are the new Shadow IT. Developers are using them—sometimes with approval, but mostly without. And almost always with insufficient oversight or policy guardrails. 

Developers are moving fast. Ignoring AI coding tools won’t stop their adoption, and trusting the tools and existing security protocols to be ‘secure enough’ is a leap of faith CISOs can’t afford. 

This article skips the AI hype and gets practical, giving CISOs and security leaders a brass-tacks guide to securing AI-generated code at the pace it’s being written—with real-time IDE scanning, instant feedback in GitHub repos, enforceable governance, and tools like Checkmarx One.

But first, a quick review of the AI-generated code landscape.  

The Reality of AI Coding Adoption – The Train Has Long Left the Station 

Checkmarx’s upcoming 2025 global survey, conducted with Censuswide, found that AI coding tools have already become a core part of modern development workflows.  

Across CISOs, AppSec managers, and developers, nearly 70% of respondents estimated that more than 40% of their organization’s code was AI-generated in 2024, with 44.4% estimating that 41–60% of their code is AI-generated.  

[Chart: Percentage of developers generating AI code]

This finding is corroborated by the Stack Overflow 2024 Developer Survey, in which 76% of developers said they are using or planning to use AI tools in their development work. 

AI-Generated Code – The New Risky Norm

AI coding assistants like GitHub Copilot, Gemini Code Assist, Cursor, and Amazon Q Developer don’t come with built-in security, and they are not a replacement for application security testing (AST). While they can make development faster, even the vendors recommend using “automated tests and tooling.”  

Relying on AI coding assistants to be secure by default falls short. They can introduce new risks such as hallucinated code and prompt injection, and manual reviews alone don’t scale. Their transparency is also limited: vendors provide only vague details about model training and known AI vulnerabilities.  

A 2024 empirical study on Security Weaknesses of Copilot-Generated Code in GitHub Projects analyzed 733 Copilot-generated snippets from GitHub projects. It found that 29.5% of Python and 24.2% of JavaScript snippets contained security weaknesses, including XSS and improper input validation. 
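
To make the pattern concrete, here is a minimal, hypothetical Python sketch of the kind of weakness the study describes: a Flask route that reflects unsanitized input (XSS), alongside a safer version. The route and parameter names are illustrative and are not taken from the study.

```python
from flask import Flask, request
from markupsafe import escape  # MarkupSafe is a Flask dependency

app = Flask(__name__)

# Insecure: the query parameter is reflected into the HTML response unescaped,
# enabling reflected XSS (CWE-79) -- a common shortcut in generated snippets.
@app.route("/greet")
def greet():
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"

# Safer: escape user input before embedding it in HTML (or render it through
# a templating engine that auto-escapes by default).
@app.route("/greet-safe")
def greet_safe():
    name = escape(request.args.get("name", ""))
    return f"<h1>Hello {name}</h1>"
```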

AI-generated code is not inherently more secure than human-generated code. Just as human-written code carries security risks, so does AI-generated code. What’s different is the scale and speed at which AI produces code, as well as the psychological factors that lead to reduced oversight.   

Why AI-Generated Code Gets Less Review and Creates Security Risks

Because developers may not fully understand or carefully review code created by AI, that code can contain more security problems and errors than code written and checked by people. The way AI generates code can be opaque, and models may have learned from flawed examples. If developers trust AI too much and don’t double-check its work, issues are easily missed.  

Research shows AI-generated code often receives less careful checking than human-written code, creating serious security risks. Developers feel less responsible for AI-generated code and spend less time reviewing it properly.

Research also shows that developers using AI tools wrote more insecure code than those who didn’t, yet were more confident in its security. That false confidence is compounded because many developers place unfounded trust in AI-generated code and are less familiar with the logic behind it. Without proper review processes and specialized tools for checking AI-generated code, these problems will persist as developers trust AI’s output without verifying it. 

That’s why securing AI-generated code requires a new kind of strategy: one tailored to the unique challenges it poses. 

A CISO’s Strategy for Securing AI-Generated Code 

To address the rising complexity and scale of AI-written code, CISOs must implement a layered strategy that combines real-time technical controls with organizational governance.  

Governance Controls 

Governance controls help CISOs enforce responsible AI adoption at scale by defining boundaries, policies, and shared responsibilities that span development, security, and compliance teams. Some of these governance controls are good practices, even when dealing with human-generated code. But they become even more important when AI is added to the mix. 

Here’s what CISOs should be doing:  

AI Code Usage Policies 

Establish granular policies to govern AI tool usage (a minimal policy-as-code sketch follows this list): 

  • Specifying which AI tools are permitted, and in what capacity. 
  • Defining acceptable use cases (e.g., prototyping vs. production code). 
  • Ensuring that AI-generated code is clearly identifiable. 
  • Limiting use of AI-generated code in sensitive or critical components, such as authentication modules or financial systems.  
  • Mandating peer reviews to ensure quality and security. 
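
One way to make such a policy enforceable rather than aspirational is to express it as data that tooling can read. The sketch below is a hypothetical, minimal policy-as-code structure; the tool names, paths, trailer, and thresholds are illustrative placeholders, not a Checkmarx format.

```python
# Hypothetical AI code usage policy expressed as data that CI tooling can read.
# All tool names, paths, and thresholds below are illustrative placeholders.
AI_CODE_POLICY = {
    # Which assistants are sanctioned, and in what capacity.
    "permitted_tools": {
        "github-copilot": ["prototyping", "production"],
        "amazon-q-developer": ["prototyping"],
    },
    # Components where AI-generated code is restricted pending manual review.
    "restricted_paths": ["src/auth/", "src/payments/"],
    # Commit trailer that keeps AI-generated code clearly identifiable.
    "attribution_trailer": "AI-Assisted: true",
    # Every AI-assisted change requires at least this many human reviewers.
    "min_peer_reviews": 2,
}
```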

Security Review Processes 

Formalize the review process (a minimal gating sketch follows this list). This means: 

  • Establishing thresholds for when reviews are required (e.g., all AI-generated code touching sensitive systems or business logic). 
  • Assigning responsibility to trained AppSec reviewers or peer developers, and integrating those reviews into PR and CI/CD workflows.  
  • Defining reviews at multiple points in the SDLC: pre-review, post-commit, and within CI/CD pipelines using a tool like Vorpal. 
  • Aligning reviews to ensure code meets secure coding standards such as the OWASP Top 10.  
  • Training reviewers on how to review AI-generated code and what to look for – going beyond functionality checks into how the inspected code handles inputs, sanitizes data, and manages privilege boundaries. 
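
As a sketch of how such a threshold could be enforced automatically, the hypothetical check below blocks a pull request that touches sensitive paths unless an AppSec approval label is present. The paths, label, and gating logic are illustrative assumptions, not part of any specific product or GitHub API.

```python
# Hypothetical pre-merge gate: require AppSec approval when a change touches
# sensitive components. Paths and labels are illustrative placeholders.
SENSITIVE_PATHS = ("src/auth/", "src/payments/")
REQUIRED_LABEL = "appsec-approved"

def review_required(changed_files: list[str]) -> bool:
    """Return True if any changed file falls under a sensitive path."""
    return any(path.startswith(SENSITIVE_PATHS) for path in changed_files)

def may_merge(changed_files: list[str], labels: set[str]) -> bool:
    """Return True if the pull request satisfies the review policy."""
    if review_required(changed_files) and REQUIRED_LABEL not in labels:
        print("Blocked: change to a sensitive path needs an AppSec review.")
        return False
    return True

if __name__ == "__main__":
    assert may_merge(["README.md"], set())
    assert not may_merge(["src/auth/login.py"], set())
    assert may_merge(["src/auth/login.py"], {"appsec-approved"})
```

In practice, logic like this would typically run as a CI status check on each pull request, with the label applied only by designated AppSec reviewers.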

Developer Education 

Invest in training that goes beyond general secure coding principles and focuses on the unique risks posed by GenAI tools. 

Developers should understand how AI models generate code, the security weaknesses they’re prone to introducing, and how to critically evaluate AI-generated snippets before integrating them.  

This includes recognizing the limits of AI suggestions and validating logic paths. To reinforce this mindset, organizations can incorporate ongoing, role-specific education into developer workflows through platforms like Checkmarx Codebashing. 

Cross-Functional Accountability

 Build formal accountability frameworks that unite AppSec, DevOps, and compliance.  

This includes setting shared KPIs (like reducing AI-originated vulnerabilities or improving time-to-remediation), maintaining audit trails for how AI-generated code is reviewed and approved, and running regular cross-team assessments to track policy adherence. 

Culturally, it means shifting from siloed enforcement to shared ownership, where developers, too, are aware of compliance expectations, and security teams offer collaborative, context-aware guidance. 

Technical Controls 

Technical and governance controls complement one another. With technical controls, the focus is on automated, scalable solutions that integrate into development pipelines. 

 Implementation should leverage existing security tools, prioritize critical systems, and ensure measurable risk reduction without diving into granular configurations. Below are the main technical controls: 

Automated Security Testing

Application security testing (AST), including SAST, DAST, API security, and SCA, provides the foundational tools for detecting known vulnerabilities in source code and applications, insecure dependencies, and misconfigurations across the SDLC. While these tools aren’t enough on their own to secure AI-generated code, they remain essential as a baseline layer of protection in any application security strategy. 

Real-Time IDE Scanning (AI Secure Coding Assistants) 

AI Secure Coding Assistants (ASCA) guide developers working with AI-generated code by identifying insecure patterns and recommending secure alternatives in real time. 

They offer contextual suggestions as code is written, helping developers spot flaws early, before code reaches staging or production environments.  

Real-time scanning inside the IDE helps developers flag potential risks and coding patterns that deviate from best practices. This is useful for human-written code, but it is critical for AI-generated code. 

These tools provide instant feedback on short snippets of code before they’re even committed, surfacing risks like unsafe input handling or insecure defaults.  

For developers using GitHub Copilot, ASCA can even generate remediation suggestions, turning AI from just a coding assistant into a security partner.

Unlike SAST, which analyzes entire applications post-commit, IDE scanning focuses on localized code blocks—not replacing deep analysis, but rather tightening feedback loops so developers learn secure coding practices in real-time. 
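
For example, here is an illustrative Python snippet showing two insecure defaults of the kind a real-time scanner would flag as they are typed: disabled TLS verification and a shell command built from user input. This is a hypothetical example, not ASCA output, and the URL and command are placeholders.

```python
import subprocess
import requests

# Two insecure defaults that a real-time IDE scanner would typically flag
# as they are typed (illustrative only, not ASCA output):

def fetch_report_insecure(report_name: str) -> str:
    # 1. TLS certificate verification disabled: exposes traffic to interception.
    resp = requests.get("https://internal.example.com/api/reports", verify=False)
    # 2. Shell command built from user-controlled input: command injection risk.
    subprocess.run(f"generate-report {report_name}", shell=True)
    return resp.text

def fetch_report_secure(report_name: str) -> str:
    # Keep certificate verification on (the default) and pass arguments as a list.
    resp = requests.get("https://internal.example.com/api/reports")
    subprocess.run(["generate-report", report_name])
    return resp.text
```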

See ASCA in action: [embedded video]

Pre-merge Developer Feedback  

Vorpal, a lightweight GitHub Action, provides a critical security checkpoint at the pull request stage. Acting as the last line of defense before code enters your main branch, Vorpal flags violations of secure coding best practices with results visible directly in GitHub’s interface.  

Available as a free GitHub Action for developers worldwide, Vorpal is particularly effective with AI-generated code, which may appear syntactically correct but carry hidden risks due to insecure patterns.

Unlike traditional security gates that slow development, Vorpal integrates seamlessly into existing workflows, allowing teams to maintain velocity while enhancing security.

[Image: Vorpal integration into Checkmarx]

Additional AI Tools 

AI tools will sometimes suggest insecure open-source packages. SCA detects these and identifies a secure alternative; if no secure alternative exists, Checkmarx can suggest a different package with similar functionality. 

Security integration into AI tools is helpful. For example, Checkmarx One integrates into ChatGPT and GitHub Copilot to automatically scan source code and identify malicious packages, within the AI interface itself. 
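
As a simplified illustration of what such a dependency check involves, the sketch below screens a requirements file against a deny-list of suspicious package names. The names and deny-list are hypothetical; real SCA and malicious-package protection rely on curated, continuously updated feeds rather than a hard-coded set.

```python
# Hypothetical dependency screen: flag packages on a deny-list of known
# typosquats or malicious names. Real SCA tools use curated, updated feeds.
KNOWN_BAD_PACKAGES = {"reqeusts", "python3-dateutils", "colourama"}  # illustrative

def screen_requirements(requirements_text: str) -> list[str]:
    """Return the flagged package names found in a requirements file."""
    flagged = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in KNOWN_BAD_PACKAGES:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    sample = "requests==2.32.0\nreqeusts==1.0.0\nflask>=3.0\n"
    print(screen_requirements(sample))  # ['reqeusts']
```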

Isolated Execution Environments 

Use sandboxing or containerization to test AI-generated code in controlled environments, limiting the blast radius of potential flaws or malicious logic.  
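
A minimal sketch of the idea, assuming a Linux host: run the generated script in a separate process with CPU, memory, and file-descriptor limits. The limits and file name below are illustrative assumptions.

```python
import resource
import subprocess
import sys

def run_untrusted(script_path: str, timeout_s: int = 5) -> subprocess.CompletedProcess:
    """Execute a generated script in a constrained child process (Linux only)."""
    def limit_resources() -> None:
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))   # CPU seconds
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2,) * 2)      # 256 MB memory
        resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))              # few open files

    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: isolated mode, ignores user site/env
        preexec_fn=limit_resources,           # apply rlimits in the child before exec
        capture_output=True,
        timeout=timeout_s,
        text=True,
    )

if __name__ == "__main__":
    result = run_untrusted("generated_snippet.py")  # hypothetical file name
    print(result.returncode, result.stderr)
```

Containers or microVMs add filesystem and network isolation on top of these process-level limits.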

API Security 

APIs are particularly sensitive to risks introduced by AI-generated code because they’re high-exposure entry points into critical systems. AI tools might accidentally generate code that references non-existent API endpoints, misuses authentication flows, or implements APIs insecurely. 

API security tools mitigate these risks by offering automated discovery, traffic inspection, anomaly detection, and AI-driven exploit prevention.  

They help enforce strong authentication (e.g., OAuth, JWT), validate inputs, and block business logic abuse, making them a vital control point for detecting and preventing AI-induced API vulnerabilities.  
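
As a small illustration of those controls in application code, the hypothetical endpoint below verifies a JWT bearer token and validates its input before acting on it, using Flask and PyJWT. The secret, route, and validation rules are placeholders; production systems typically delegate much of this to an API gateway or auth middleware.

```python
import jwt  # PyJWT
from flask import Flask, jsonify, request

app = Flask(__name__)
JWT_SECRET = "replace-with-a-managed-secret"  # placeholder: load from a secrets manager

@app.route("/transfers", methods=["POST"])
def create_transfer():
    # 1. Enforce authentication: verify the bearer token's signature and expiry.
    auth = request.headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return jsonify(error="missing bearer token"), 401
    try:
        claims = jwt.decode(auth.removeprefix("Bearer "), JWT_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return jsonify(error="invalid or expired token"), 401

    # 2. Validate input: reject missing or out-of-range values before acting on them.
    body = request.get_json(silent=True) or {}
    amount = body.get("amount")
    if not isinstance(amount, (int, float)) or not 0 < amount <= 10_000:
        return jsonify(error="invalid amount"), 400

    return jsonify(status="accepted", user=claims.get("sub"), amount=amount), 201
```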

How Checkmarx One Secures AI-Generated Code 

Checkmarx One protects AI-generated code through a layered defense strategy that spans the entire development lifecycle.

The platform combines foundational AppSec tools—SAST, DAST, SCA, Secrets Detection, Malicious Package Protection, Container Security, IaC Security, and API security—with AI-specific controls designed for the unique challenges of AI-generated code. 

What sets Checkmarx One apart is its comprehensive approach to security: ASCA catches issues as developers write code within the IDE, comprehensive SAST/DAST scans offer deeper analysis before deployment, and the platform integrates seamlessly with open-source tools.

For GitHub users, Checkmarx’s free, open source Vorpal tool provides an additional security checkpoint during pull requests, complementing the Checkmarx One platform. This multi-layered approach ensures AI-generated code receives appropriate scrutiny at each stage of development. 

Most importantly, Checkmarx One brings it all together within a single platform, providing CISOs, AppSec leaders and developers with complete visibility and consistent enforcement from code creation to deployment. 

Security integration directly into AI coding tools represents the newest frontier in application security. Checkmarx One now offers integrations with popular AI coding assistants, automatically scanning generated code and identifying security issues without requiring developers to switch contexts.

These integrations can also help identify potentially malicious packages that AI assistants might suggest, offering secure alternatives with similar functionality when available. This approach meets developers where they are—inside their preferred AI tools—rather than requiring them to adopt yet another security solution. 

Conclusion 

AI-generated code is no longer an emerging challenge – it’s the new normal. As with any technological breakthrough, it introduces significant benefits and new risks—both demand attention. With the right combination of tools and governance, CISOs can ensure their teams embrace the productivity gains of AI coding assistants without compromising security.  

The organizations that will thrive in this new landscape won’t be those that resist AI-generated code, but those that secure it effectively. Checkmarx One offers a unified approach to this challenge, helping security teams keep pace with AI-accelerated development while maintaining robust protection.

Request a demo to see how Checkmarx One helps secure AI-generated code while maintaining your development velocity.
