Checkmarx CheckAI GPT Plugin
CheckAI is an innovative ChatGPT plugin and the industry's first to guard against potential attacks in ChatGPT-generated code. It enables developers and security teams to detect and prevent attacks caused by malicious open-source packages and dependencies, all without leaving the ChatGPT interface.
Note
ChatGPT plugins are currently accessible to ChatGPT+ users only.
Getting started
To get started, activate the plugins feature within your ChatGPT+ account as described below:
Access your ChatGPT+ account settings.
Within the Beta Features section, enable the Plugins option.
Return to the ChatGPT UI and select the GPT-4 model.
Click the dropdown indicating No plugins enabled.
Click on the Plugin store link.
Use the search bar to find Checkmarx CheckAI.
Click Install.
It's as straightforward as that.
Scanning GPT-generated code with CheckAI
Once you've successfully integrated the CheckAI plugin, continue your interaction with GPT as usual.
When it detects GPT-generated code, the CheckAI plugin automatically initiates a scan of that code.
Note
If GPT prompts you to validate the code, answer yes.
There are three possible outcomes (an illustrative snippet follows this list):
Valid: The generated code does not have any open-source issues.
Suspicious: The generated code includes a suspicious package, a potential indicator of a hallucination attack.
Malicious: The generated code includes a package recognized as malicious.
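For illustration, below is a hedged sketch of the kind of GPT-generated Python snippet the plugin scans. The package name (requests) and the function are purely illustrative; the verdict returned (Valid, Suspicious, or Malicious) depends on Checkmarx's package intelligence, not on anything visible in the snippet itself.

# Hypothetical GPT-generated snippet: CheckAI scans the open-source
# packages it pulls in (here, the "requests" package), not the logic itself.
import requests

def fetch_status(url: str) -> int:
    """Return the HTTP status code for a URL."""
    response = requests.get(url, timeout=10)
    return response.status_code

if __name__ == "__main__":
    print(fetch_status("https://example.com"))

In a sketch like this, a well-known package such as requests would typically come back Valid, a package name that GPT invented would be flagged Suspicious, and a name matching known malicious-package intelligence would be flagged Malicious.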
If the Checkmarx SCA (Software Composition Analysis) APIs identify a package with a known security vulnerability or issue, the CheckAI plugin may propose pinning the package to the latest known secure version to address the concern.
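As a hedged illustration of what such a proposal can look like in a Python project, the requirements.txt entry below pins a dependency to an explicit version; the package name and version number are placeholders, and the actual version suggested would come from the SCA results.

# Hypothetical requirements.txt entry after accepting a version-pinning suggestion.
# The name and version are placeholders; CheckAI would propose the latest
# version it knows to be free of the reported issue.
requests==2.31.0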

If the generated code includes a package with a restricted license, such as GPL, the plugin will alert you about the licensing issue. However, if the package has a permissive license, like MIT, the plugin will not present any specific license information.
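As a hedged sketch of that distinction, the placeholder dependency entries below show which kind of entry would trigger a license alert; the package names and versions are invented solely to illustrate the two license categories.

# Hypothetical dependency entries; names and versions are placeholders.
copyleft-example-package==1.0.0      # restricted license (e.g., GPL): plugin raises a license alert
permissive-example-package==2.0.0    # permissive license (e.g., MIT): no license information shown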