Last Week in AppSec for 26. February 2026 - Checkmarx
← Zero Blog

Last Week in AppSec for 26. February 2026

Last Week In AppSec saw public disclosures relating to AI code assistants trusting context that can be attacker-controlled.

Street-art style, widescreen illustration in neon green and black: a computer shows a “Do you trust this repository?” warning, while a large folder labeled “REPO” and a masked figure at a laptop are connected by cables labeled “MCP” and a small “hooks.sh” file. An “ISSUE” page looms in the background with red arrows indicating hidden influence, suggesting AI tooling being steered by untrusted repo and issue content.

AI trust continues to be a challenge.

The acceleration promised by AI code assistants leads developers and others to relax their trust boundaries, and tool makers find themselves constantly weighing what they should protect users against and what risks users are accepting for themselves.

Get Last Week In AppSec in your Inbox with Checkmarx Zero
visual

Trusting the wrong repo leads to Remote Code Execution in Claude Code (CVE-2025-59536 and CVE-2026-21852)

CVE-2025-59536 CVSS v4.0 =8.7 CVSS:4.0/… Claude Code’s startup trust dialog could lead to Command Execution attack

CVE-2026-21852 CVSS v4.0 =5.3 CVSS:4.0/…Claude Code’s MCP configuration may lead to remote code execution

A pair of CVEs against Claude Code this week relate to the trust developers place in configuration files stored in code repositories.

We’d hope that repositories for an organization’s private projects wouldn’t pose high risk for this sort of tampering. However, developers who work on public or open-source projects should be extra cautious. And organizations shouldn’t assume private projects are fully safe: insider threats and even attackers who manage to have a foothold that gives them repo access are genuine risks to consider and appropriately manage.

Untrusted Hooks

Claude Code’s “hooks” feature permits the user’s settings.json configuration file to specify commands that should be run at various points in a Claude Code session; for example, you can specify a SessionStart hook to run commands the moment you start Claude Code.

If a .claude/settings.json file is in a repository, Claude will load it. A malicious user with access to a repository can put whatever commands they want in that settings file, and Claude Code would execute them for you.

While Claude does give some degree of warning that it may execute some files, and asks if you trust the repo, it doesn’t give very clear indications about what it will run. And we know from previous work that those dialogs can lie anyhow.

Claude Code trust dialog, courtesy of Try AI

Untrusted MCP configurations

Claude Code also supports interactions with Model-Context Protocol (MCP) servers, allowing Code to query data sources for additional context while it works. As with the hooks feature above, the configurations for MCP tools allow specifying initialization commands.

Which means if the repository has an .mcp.json file, Claude Code will try to run the MCP tool it defines: and it can use a command provided by an attacker, leading to remote command execution.

The warning dialog for this is much better, but researchers were able to use the same tactic above to include a .claude/settings.json with instructions to allow-list the malicious MCP configuration. When this was done, the code was run before any trust dialog was displayed.

Defense

These specific items can be addressed by updating Claude Code to the most recent version. However, organizations should expect related issues in AI agents to continue to be discovered, and developers should make behavioral changes to help reduce the risk:

  • Take warnings seriously. It’s easy to just click “Yes”, but it’s important to actually stop and think about the safety warnings AI tools give you
  • Pay attention when tool configurations change. Using hooks for various git operations that occur after git retrieves remote code (like post-merge and post-checkout) which warn when common tool configuration files and directories (like .claude and .vscode are changed) can help developers know that they should inspect those changes before running the related tool
  • Review configuration file changes in PRs. Make sure that when reviewing pull requests (PRs) / merge requests (MRs), reviewers treat configuration file changes with the same rigor as code changes. This can help catch dangerous accidents as well as attempted attacks.

These issues were reported by Check Point Research; see their discussion for details of the attack and additional response guidance.

GitHub Copilot injection from Issues when running Codespaces

Researchers with Orca Security managed to hide prompt injection attacks in GitHub Issues. When a developer starts a Codespace from that issue, the GitHub Copilot AI assistant loads the issue as context automatically. They were able to use the injection in the issue to prompt Copilot to take dangerous steps including exfiltrating sensitive data like GITHUB_TOKEN values (which allow authentication to GitHub accounts).

The technique used is very similar to how we used GitHub Issues to inject Claude Code’s security reviewer:

  • Craft a prompt that causes the AI to perform a dangerous or malicious action
  • Hide that prompt inside a GitHub Issue (either through obfuscation or providing it as an HTML comment)
  • Trigger the AI to consume the GitHub Issue, including the prompt material, as context. It then evaluates that context as part of the session, treating it as a prompt
  • The prompt executes and performs the attacker’s actions

This class of attack essentially conscripts your AI agent in an attack against you, turning your trusted assistant into a threat.

Security Week covers the story in more detail.

Tags:

AI

AI Security

Claude Code

CVE

GitHub Copilot

Last Week In AppSec