MCP Server For Garak LLM Vulnerability Scanner
A lightweight MCP (Model Context Protocol) server for Garak.
Example:
https://github.com/user-attachments/assets/f6095d26-2b79-4ef7-a889-fd6be27bbbda
Tools Provided
Overview
| Name | Description |
|---|---|
| list_model_types | List all available model types (ollama, openai, huggingface, ggml) |
| list_models | List all available models for a given model type |
| list_garak_probes | List all available Garak attacks/probes |
| get_report | Get the report of the last run |
| run_attack | Run an attack with a given model and probe |
Detailed Description
- `list_model_types`
  - List all available model types that can be used for attacks
  - Returns a list of supported model types (ollama, openai, huggingface, ggml)
- `list_models`
  - List all available models for a given model type
  - Input parameters:
    - `model_type` (string, required): The type of model to list (ollama, openai, huggingface, ggml)
  - Returns a list of available models for the specified type
- `list_garak_probes`
  - List all available Garak attacks/probes
  - Returns a list of available probes/attacks that can be run
- `get_report`
  - Get the report of the last run
  - Returns the path to the report file
- `run_attack`
  - Run an attack with the given model and probe (see the example call after this list)
  - Input parameters:
    - `model_type` (string, required): The type of model to use
    - `model_name` (string, required): The name of the model to use
    - `probe_name` (string, required): The name of the attack/probe to use
  - Returns a list of vulnerabilities found
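As a concrete illustration, here is a sketch of the JSON-RPC `tools/call` request an MCP client sends to invoke `run_attack`. The framing follows the standard MCP tools protocol; the argument values (an Ollama `llama2` model and the `encoding` probe) are illustrative, not defaults:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_attack",
    "arguments": {
      "model_type": "ollama",
      "model_name": "llama2",
      "probe_name": "encoding"
    }
  }
}
```

In practice your MCP host constructs this request for you; you simply ask the agent to run the attack.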
Prerequisites
- **Python 3.11 or higher**: This project requires Python 3.11 or newer.

  ```bash
  # Check your Python version
  python --version
  ```

- **Install uv**: A fast Python package installer and resolver.

  ```bash
  pip install uv
  ```

  Or use Homebrew:

  ```bash
  brew install uv
  ```

- **Optional: Ollama**: If you want to run attacks on Ollama models, make sure the Ollama server is running:

  ```bash
  ollama serve
  ```
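To confirm the Ollama server is reachable, you can query its local HTTP API, which listens on port 11434 by default; this returns the models you have pulled:

```bash
# Lists locally available models if the Ollama server is up
curl http://localhost:11434/api/tags
```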
Installation
- Clone this repository:

  ```bash
  git clone https://github.com/BIGdeadLock/Garak-MCP.git
  ```

- Configure your MCP host (Claude Desktop, Cursor, etc.):

  ```json
  {
    "mcpServers": {
      "garak-mcp": {
        "command": "uv",
        "args": ["--directory", "path-to/Garak-MCP", "run", "garak-server"],
        "env": {}
      }
    }
  }
  ```
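To sanity-check the installation before wiring it into a host, you can run the same command the config invokes (substitute your actual clone path for `path-to/Garak-MCP`); if all is well, the server starts and waits for an MCP client, typically over stdio:

```bash
uv --directory path-to/Garak-MCP run garak-server
```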
Tested on:
- [X] Cursor
- [X] Claude Desktop
Running Vulnerability Scans
You can run Garak vulnerability scans directly using the included CLI tool.
Prerequisites for Scanning
- **Ollama must be running:**

  ```bash
  ollama serve
  ```

- **Pull a model to scan:**

  ```bash
  ollama pull llama2
  ```
Using the CLI Scanner
After installation, you can use the `garak-scan` command:

```bash
# List available Ollama models
uv run garak-scan --list-models

# Scan a specific model with all probes
uv run garak-scan --model llama2

# Scan with specific probes
uv run garak-scan --model llama2 --probes encoding

# Scan with custom output directory
uv run garak-scan --model llama2 --output-dir ./my_scans

# Run multiple parallel attempts
uv run garak-scan --model llama2 --parallel-attempts 4
```
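The examples above cover the common flags; assuming the scanner follows the usual argparse convention, the full option list is available from its help output:

```bash
uv run garak-scan --help
```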
Scan Results
Scan results are saved in the `output/` directory (or your specified directory) as JSONL files. Each scan creates a timestamped report file:

```
output/scan_llama2_20250125_143022.report.jsonl
```
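Because each line of the report is a standalone JSON object, `jq` is a convenient way to skim results. The field names below (`entry_type`, `probe`, `passed`, `total`) are assumptions based on Garak's standard report schema; check your own report if they differ:

```bash
# Summarize eval entries as "probe: passed/total" (field names assumed)
jq -r 'select(.entry_type == "eval") | "\(.probe): \(.passed)/\(.total)"' \
  output/scan_llama2_20250125_143022.report.jsonl
```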
GitHub Actions Integration
This repository includes a GitHub Actions workflow that automatically runs vulnerability scans:
- Triggers: Push to main/master, pull requests, weekly schedule (Mondays at 2am UTC)
- Manual runs: Go to Actions → Garak LLM Vulnerability Scan → Run workflow
- Custom options: Specify model and probes when running manually
- Results: Scan results are uploaded as workflow artifacts
To enable automated scanning:
- Ensure the workflow file exists at `.github/workflows/garak-scan.yml`
- Push to your repository
- Check the Actions tab to view scan results
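The workflow file in the repository is the authoritative reference. As a rough sketch of what a workflow matching the triggers above could look like (the step details here are illustrative assumptions, not a copy of the actual file):

```yaml
name: Garak LLM Vulnerability Scan

on:
  push:
    branches: [main, master]
  pull_request:
  schedule:
    - cron: "0 2 * * 1"  # Mondays at 2am UTC
  workflow_dispatch:
    inputs:
      model:
        description: "Ollama model to scan"
        default: "llama2"
      probes:
        description: "Probes to run (blank = all)"
        default: ""

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install uv
        run: pip install uv
      - name: Start Ollama and pull the model
        run: |
          curl -fsSL https://ollama.com/install.sh | sh
          ollama serve &
          sleep 5
          ollama pull "${{ github.event.inputs.model || 'llama2' }}"
      - name: Run scan
        run: |
          MODEL="${{ github.event.inputs.model || 'llama2' }}"
          PROBES="${{ github.event.inputs.probes }}"
          if [ -n "$PROBES" ]; then
            uv run garak-scan --model "$MODEL" --probes "$PROBES"
          else
            uv run garak-scan --model "$MODEL"
          fi
      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: garak-scan-results
          path: output/
```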
Future Steps
- [ ] Add support for Smithery AI: Docker and config
- [ ] Improve Reporting
- [ ] Test and validate OpenAI models (GPT-3.5, GPT-4)
- [ ] Test and validate HuggingFace models
- [ ] Test and validate local GGML models