Model Context Protocol - with PagerDuty
This is an MCP server that integrates with PagerDuty. The integration supports basic queries like
"Who is on call for the NASA team right now?"
The server can be integrated with Claude in a few simple steps.
Integration with PagerDuty
Set your PagerDuty API token before starting the server:
export PAGERDUTY_API_KEY=your_api_key_here
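As a rough sketch of how the server can use this token, the snippet below builds an authenticated request for the PagerDuty REST API v2 `oncalls` endpoint. The function name and structure are illustrative assumptions, not the actual contents of `pagerduty.py`.

```python
# Hypothetical sketch: building an authenticated "who is on call" request
# against the PagerDuty REST API v2, using the exported token.
import os
import urllib.request

PAGERDUTY_API_URL = "https://api.pagerduty.com"

def build_oncall_request(team_id: str) -> urllib.request.Request:
    """Build an authenticated GET request for a team's current on-calls."""
    token = os.environ["PAGERDUTY_API_KEY"]  # set via the export above
    url = f"{PAGERDUTY_API_URL}/oncalls?team_ids[]={team_id}"
    return urllib.request.Request(
        url,
        headers={
            # PagerDuty uses a token-based Authorization header.
            "Authorization": f"Token token={token}",
            "Content-Type": "application/json",
        },
    )
```

If the environment variable is missing, the lookup raises a `KeyError`, which surfaces the misconfiguration immediately instead of failing later with an authentication error.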
Integration with Claude
First, make sure you have Claude for Desktop installed. You can install the latest version here.
We’ll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at
~/Library/Application Support/Claude/claude_desktop_config.json
in a text editor. Make sure to create the file if it doesn't exist. Refer here for more details. Update the config with the entry below:
{
  "mcpServers": {
    "pagerduty": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/server/pagerduty",
        "run",
        "pagerduty.py"
      ]
    }
  }
}
You may need to put the full path to the uv executable in the command field. You can get this by running which uv on macOS/Linux.
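A malformed config file is a common reason the server never shows up in Claude. The helper below is a minimal sketch (the function name and checks are assumptions, not part of this project) for sanity-checking the config before restarting Claude for Desktop:

```python
# Hypothetical helper: sanity-check the claude_desktop_config.json contents.
import json

def check_mcp_config(text: str, server_name: str = "pagerduty") -> bool:
    """Return True if the config defines the named MCP server with a
    command and an absolute path somewhere in its args."""
    config = json.loads(text)  # raises ValueError on malformed JSON
    server = config.get("mcpServers", {}).get(server_name)
    if server is None or "command" not in server:
        return False
    # The --directory argument must be an absolute path, not a relative one.
    return any(arg.startswith("/") for arg in server.get("args", []))
```

Running this over the file (for example, `check_mcp_config(open(path).read())`) catches both JSON syntax errors and a missing or incomplete server entry.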
Configuration Testing
Once you've configured Claude, select the tools button to verify that the 3 MCP tools are available.
- The prompt should show something like the screenshot below:

Demo
Recommended MCP Servers 💡
- @sentry/mcp-server: An MCP server enabling LLM interactions with Sentry, supporting stdio and remote transports.
- @strowk/mcp-k8s: An MCP server connecting to Kubernetes.
- s3-mcp-server: An MCP server implementation that exposes AWS S3 data, including PDF documents, as resources and provides tools for listing buckets, listing objects, and retrieving objects, primarily for use with LLMs.
- @tiberriver256/mcp-server-azure-devops: An MCP server implementation for Azure DevOps, allowing AI assistants to interact with Azure DevOps APIs through a standardized protocol.
- stefanoamorelli/codemagic-mcp: A lightweight MCP server for seamless access to Codemagic CI/CD APIs, enabling natural language interaction for CI/CD tasks.
- hyperbolic-gpu: Interact with Hyperbolic's GPU cloud, enabling agents and LLMs to view and rent available GPUs, SSH into them, and run GPU-powered workloads.