# Pinecone Assistant

@Pinecone

Tags: assistant, pinecone, vector-database, ai-agent, langchain, claude

The Pinecone Assistant MCP server lets you integrate Pinecone's vector database and assistant capabilities with AI agents and applications. It exposes a standardized Model Context Protocol (MCP) interface for retrieving relevant context snippets from an assistant and wiring them into advanced AI workflows.

What it does:

  • Provides an MCP endpoint for Pinecone Assistant.
  • Facilitates retrieval of relevant context snippets for AI models.
  • Enables integration with popular AI frameworks and tools like LangChain, Claude Desktop, and Cursor.

How to use: There are two primary ways to use the Pinecone Assistant MCP server:

  1. Remote MCP Server (SSE/Streamable HTTP): Connect directly to your hosted Pinecone Assistant instance over HTTP. The endpoint URL follows the format https://<YOUR_PINECONE_ASSISTANT_HOST>/mcp/assistants/<YOUR_ASSISTANT_NAME> and supports both Streamable HTTP and SSE transports. Example LangChain integration (a fuller agent sketch follows this list):

    import asyncio
    import os

    from langchain_mcp_adapters.client import MultiServerMCPClient

    # Read the Pinecone API key from the environment.
    pinecone_api_key = os.environ["PINECONE_API_KEY"]

    async def main():
        async with MultiServerMCPClient(
            {
                "assistant_ai_news": {
                    "url": "https://prod-1-data.ke.pinecone.io/mcp/assistants/ai-news",
                    "transport": "streamable_http",
                    "headers": {"Authorization": f"Bearer {pinecone_api_key}"}
                }
            }
        ) as client:
            tools = client.get_tools()
            # ... (pass tools to your agent)

    asyncio.run(main())
  2. Local MCP Server (Stdio via Docker): Run the Pinecone Assistant MCP server locally using Docker. This exposes a stdio interface (a direct stdio client sketch appears after the implementation details below). To start the server:

    docker run -i --rm \
      -e PINECONE_API_KEY=<PINECONE_API_KEY> \
      -e PINECONE_ASSISTANT_HOST=<PINECONE_ASSISTANT_HOST> \
      pinecone/assistant-mcp
    

    You can register this server in your MCP client configuration, e.g. mcp.json in Cursor or claude_desktop_config.json in Claude Desktop:

    {
      "mcpServers": {
        "pinecone-assistant": {
          "command": "docker",
          "args": [
            "run", "-i", "--rm", "-e", "PINECONE_API_KEY", "-e", "PINECONE_ASSISTANT_HOST", "pinecone/assistant-mcp"
          ],
          "env": {
            "PINECONE_API_KEY": "<YOUR_PINECONE_API_KEY>",
            "PINECONE_ASSISTANT_HOST": "<YOUR_PINECONE_ASSISTANT_HOST>"
          }
        }
      }
    }
    

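Once connected, client.get_tools() returns LangChain tool objects that can be handed to any compatible agent. Below is a minimal sketch using LangGraph's prebuilt ReAct agent; the create_react_agent helper and the Anthropic model string are assumptions, not part of Pinecone's documentation, so substitute your own framework and model. (Note that newer releases of langchain-mcp-adapters drop the context-manager API in favor of client = MultiServerMCPClient(...) followed by await client.get_tools(); the sketch keeps the page's original style.)

    import asyncio
    import os

    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langgraph.prebuilt import create_react_agent

    pinecone_api_key = os.environ["PINECONE_API_KEY"]

    async def main():
        async with MultiServerMCPClient(
            {
                "assistant_ai_news": {
                    "url": "https://prod-1-data.ke.pinecone.io/mcp/assistants/ai-news",
                    "transport": "streamable_http",
                    "headers": {"Authorization": f"Bearer {pinecone_api_key}"}
                }
            }
        ) as client:
            # Build a ReAct-style agent over the assistant's MCP tools
            # (the model string is a placeholder -- use any chat model you have access to).
            agent = create_react_agent("anthropic:claude-3-5-sonnet-latest", client.get_tools())
            result = await agent.ainvoke(
                {"messages": [{"role": "user", "content": "What happened in AI news this week?"}]}
            )
            print(result["messages"][-1].content)

    asyncio.run(main())
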
Implementation Details:

  • The remote server runs on Pinecone's managed cloud infrastructure; no local process is required.
  • The local server ships as a Docker image (pinecone/assistant-mcp) for easy deployment and integration.
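
To talk to the Docker container over stdio outside of Claude Desktop or Cursor, the official MCP Python SDK can spawn it as a subprocess. A minimal sketch, assuming the mcp package is installed and both environment variables are set in your shell:

    import asyncio
    import os

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Same docker invocation as above, wrapped in stdio server parameters.
    server = StdioServerParameters(
        command="docker",
        args=["run", "-i", "--rm",
              "-e", "PINECONE_API_KEY", "-e", "PINECONE_ASSISTANT_HOST",
              "pinecone/assistant-mcp"],
        env={
            "PINECONE_API_KEY": os.environ["PINECONE_API_KEY"],
            "PINECONE_ASSISTANT_HOST": os.environ["PINECONE_ASSISTANT_HOST"],
        },
    )

    async def main():
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])  # e.g. the context-snippet tool

    asyncio.run(main())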

# mcpServer Config

{
  "mcpServers": {
    "pinecone-assistant": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "-e",
        "PINECONE_API_KEY",
        "-e",
        "PINECONE_ASSISTANT_HOST",
        "pinecone/assistant-mcp"
      ],
      "env": {
        "PINECONE_API_KEY": "<YOUR_PINECONE_API_KEY>",
        "PINECONE_ASSISTANT_HOST": "<YOUR_PINECONE_ASSISTANT_HOST>"
      }
    }
  }
}

# stdio

docker run -i --rm -e PINECONE_API_KEY -e PINECONE_ASSISTANT_HOST pinecone/assistant-mcp

# sseURL

https://<YOUR_PINECONE_ASSISTANT_HOST>/mcp/assistants/<YOUR_ASSISTANT_NAME>/sse

# streamableURL

https://<YOUR_PINECONE_ASSISTANT_HOST>/mcp/assistants/<YOUR_ASSISTANT_NAME>
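
If you want to exercise the streamable HTTP endpoint without an agent framework, the MCP Python SDK ships a streamable HTTP client. A sketch under the same placeholder host/assistant values as above (replace them before running); the mcp package is an assumption here:

    import asyncio
    import os

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    # Placeholders as in the URL above -- fill in your host and assistant name.
    URL = "https://<YOUR_PINECONE_ASSISTANT_HOST>/mcp/assistants/<YOUR_ASSISTANT_NAME>"

    async def main():
        headers = {"Authorization": f"Bearer {os.environ['PINECONE_API_KEY']}"}
        async with streamablehttp_client(URL, headers=headers) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])

    asyncio.run(main())
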
Transport: stdio, streamable, sse
Updated: 7/31/2025