MCP Server

Run prodlint inside your AI code editor. Get real-time production readiness feedback on security, reliability, performance, and AI quality — without leaving your workflow.

What it does

The prodlint MCP server connects to any editor that supports the Model Context Protocol. It exposes a scan tool that runs the same 52-check production readiness engine as the CLI — directly from your AI assistant.

Local execution: Runs on your machine via stdio. No data sent externally.
Same engine: Identical rules, scoring, and smart detection as the CLI.
Structured output: Results returned as data your AI can reason about and act on.
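To make "structured output" concrete, the sketch below aggregates a hypothetical findings payload by severity. The field names (rule, severity, file) are illustrative assumptions, not prodlint's documented result schema.

```python
from collections import Counter

# Hypothetical scan output; prodlint-mcp's real result schema may differ.
findings = [
    {"rule": "hardcoded-secret", "severity": "critical", "file": "src/db.ts"},
    {"rule": "missing-timeout", "severity": "warning", "file": "src/api.ts"},
    {"rule": "missing-timeout", "severity": "warning", "file": "src/jobs.ts"},
]

# Because results arrive as data rather than free text, an assistant can
# reason over them: count by severity, filter by file, or drive fixes.
by_severity = Counter(f["severity"] for f in findings)
print(dict(by_severity))  # → {'critical': 1, 'warning': 2}
```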

Setup

Claude Code

One command — prodlint is available in your next conversation.
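The exact registration command depends on your Claude Code version; a typical form, assuming the same npx invocation used in the editor configs below, is:

```shell
# Hedged example; check `claude mcp --help` for the syntax your version supports
claude mcp add prodlint -- npx -y prodlint-mcp
```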

Cursor

Add to .cursor/mcp.json in your project root:

{
  "mcpServers": {
    "prodlint": {
      "command": "npx",
      "args": ["-y", "prodlint-mcp"]
    }
  }
}

Windsurf

Add to your Windsurf MCP configuration:

{
  "mcpServers": {
    "prodlint": {
      "command": "npx",
      "args": ["-y", "prodlint-mcp"]
    }
  }
}

Example prompts

“Run prodlint on this project”

“Check this file for security issues”

“Scan my API routes and fix any critical findings”

“What's my prodlint score?”

How it works

  1. Your AI editor starts the MCP server locally via npx prodlint-mcp
  2. Communication happens over stdio — no network requests, no external servers
  3. When you ask about code quality, the AI calls the scan tool
  4. prodlint runs the same 52-check engine locally and returns structured results
  5. Your AI reads the findings and can suggest or apply fixes directly
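Under the hood, the stdio transport the steps above describe is ordinary JSON-RPC 2.0: the editor writes newline-delimited JSON messages to the server's stdin and reads responses from its stdout. A minimal sketch of the request an editor would send to invoke the scan tool follows; tools/call is the standard MCP method name, while the path argument is an assumption standing in for whatever schema prodlint-mcp actually declares.

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 request as one newline-delimited message,
    the framing MCP's stdio transport uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }) + "\n"

# "tools/call" is the MCP method for invoking a server-side tool.
# The tool name "scan" comes from the docs above; the "path" argument
# is hypothetical — prodlint-mcp's real input schema may differ.
msg = jsonrpc_request(2, "tools/call", {
    "name": "scan",
    "arguments": {"path": "."},
})
print(msg, end="")
```

Because everything stays on stdio, no ports are opened and no code or findings leave your machine.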