I gave myself one weekend to set up 6 MCP servers. Filesystem, Postgres, web scraping, GitHub, Slack, browser automation — across both Claude Desktop and Cursor. By Sunday night, two of them were uninstalled. Two more were on probation. The other two had earned their keep.
This isn’t another explainer on what the Model Context Protocol is — the official docs cover that in 90 seconds. This is what actually happened when I plugged the servers in: the exact JSON that worked, the failures that ate forty minutes apiece, and which servers earned a permanent slot in my context budget.
Before any of that, there’s exactly one thing about MCP setup you need to know.
Where the Configs Actually Live (the Only Setup Knowledge You Need)
The plumbing is simpler than the docs make it sound. Two clients, two file paths, one JSON shape.
Claude Desktop reads from ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows. Cursor reads from .cursor/mcp.json inside your project for project-scoped servers, or ~/.cursor/mcp.json for global ones. Both clients use the same JSON shape: a top-level mcpServers object where each entry has command, args, and env.
The only operational rule worth memorizing: restart the client after every config change. There is no hot-reload. Edit, save, quit completely, reopen. Skip the quit step and you’ll spend an hour debugging a config that’s already correct.
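Since there’s no hot-reload, a JSON typo costs you a full quit-and-reopen cycle just to discover it. Here’s a small sketch that catches malformed JSON and missing command fields before you restart anything — validate_mcp_config is my own helper, not part of either client:

```python
import json

def validate_mcp_config(text):
    """Sanity-check an MCP config's shape before burning a restart cycle on it."""
    config = json.loads(text)  # raises with line/column info if the JSON is malformed
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError("missing or empty top-level mcpServers object")
    for name, entry in servers.items():
        if "command" not in entry:
            raise ValueError(f"{name}: every server entry needs a command")
    return sorted(servers)
```

Point it at the real file with validate_mcp_config(open(path).read()) before you quit the client.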
With that out of the way, here’s what happened when I plugged in six of them.
The 6 Servers I Tested (and Exactly What Broke)
I went into this expecting one or two failures. I got at least one snag on every single server.
Filesystem (@modelcontextprotocol/server-filesystem). Worked first try — but only after I learned that relative paths in args silently fail. The server starts, the tools register, and every read returns nothing useful. Use absolute paths, always. Five minutes lost figuring that out.
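That silent-failure mode is easy to lint for. A sketch of the check I wish I’d run first — check_filesystem_args is my own helper, and it assumes the filesystem server’s convention of plain path arguments after the package name:

```python
import os.path

def check_filesystem_args(args):
    """Flag relative paths in a filesystem server's args — they register fine but read nothing."""
    return [
        a for a in args
        if not a.startswith("-")    # skip flags like -y
        and not a.startswith("@")   # skip the scoped package name
        and not os.path.isabs(a)    # the actual check
    ]

check_filesystem_args(["-y", "@modelcontextprotocol/server-filesystem", "./projects"])
# → ["./projects"]
```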
Postgres (@modelcontextprotocol/server-postgres). Connected on the first restart. The read-only schema is real — write attempts return cryptic errors that look like network issues. The bigger trap: I’d put DATABASE_URL in args, which dumped my full connection string into Claude’s logs. Move every credential to env. Always.
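That leak is mechanical to detect: anything shaped like scheme://user:password@host sitting in args will be echoed verbatim into client logs. A sketch — args_leaking_credentials is my own helper:

```python
import re

# Anything shaped like scheme://user:password@host is a credential that
# will end up verbatim in client logs if it sits in args instead of env.
CREDENTIAL_RE = re.compile(r"\w+://[^/\s]+:[^/\s]+@")

def args_leaking_credentials(servers):
    """Return (server, arg) pairs where a password-bearing URL sits in args instead of env."""
    return [
        (name, arg)
        for name, entry in servers.items()
        for arg in entry.get("args", [])
        if CREDENTIAL_RE.search(arg)
    ]
```

Run it over the mcpServers object of your config; plain hosts and ports pass, only user:password@ forms get flagged.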
Web scraping (Firecrawl). The biggest win of the weekend. Worth its tokens for any research workflow. Free tier API key works fine. My first call 401’d until I noticed the env var is FIRECRAWL_API_KEY, not FIRE_CRAWL_API_KEY — Firecrawl is one word in the variable name. Read the README, not your muscle memory.
GitHub (@modelcontextprotocol/server-github). Needs a personal access token with the right scopes. Without repo scope, every list_issues call returns an empty array — no error, no warning, just nothing. I spent fifteen minutes convinced the server was broken. It was working. My token wasn’t.
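For a classic personal access token, GitHub reports the granted scopes in the X-OAuth-Scopes header of every authenticated API response (fine-grained tokens don’t send it), so you can check the token before blaming the server. The parser below is my own sketch:

```python
def has_scope(oauth_scopes_header, wanted):
    """Check a scope against GitHub's X-OAuth-Scopes response header.

    Classic personal access tokens report granted scopes on every
    authenticated API response. An empty header is a valid token with
    zero scopes — exactly the token that makes list_issues return []
    with no error at all.
    """
    granted = {s.strip() for s in oauth_scopes_header.split(",") if s.strip()}
    return wanted in granted

has_scope("repo, read:org", "repo")  # → True
has_scope("", "repo")                # → False: valid token, no scopes
```

Grab the header with something like curl -sI -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/user and feed the value in.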
Slack (@modelcontextprotocol/server-slack). The OAuth dance is the worst part of this whole exercise. The bot needs channels:history and channels:read, plus you have to invite it to every channel you want it to see. I burned forty minutes on a channel_not_found error that turned out to be the bot not being in the channel. The error message tells you nothing.
Browser automation (Playwright). Heavy. Roughly 30,000 tokens of tool definitions register before you ask anything — twenty-five tools, most of which I’d never call from inside a chat session. Useful for one-off automation tasks, painful as an always-on server.
Two of these earned a permanent slot. Two were close calls. And two got uninstalled before Sunday night.
The 2 I Uninstalled (and Why It Wasn’t Close)
Slack got cut first. Useful in theory, but the auth pain plus the constant “what channel are we in” clarifications meant I reached for the actual Slack app every time anyway. The model never saved me time on Slack work — it added a layer.
Playwright went next. Twenty-five tools is a context tax I’d rather pay only when I need it. I’ll spin it up per-project for scraping or end-to-end testing, not as an always-on server burning tokens before my first message.
The pattern: a server gets cut when its tool count is high and the workflow trigger is rare. Always-on cost is real. Five servers at roughly 15 tools each — call it a thousand tokens per tool definition — add up to about 75,000 tokens of tool definitions loaded before you type a single question. That math is the whole reason fewer beats more — and it’s the same math behind every LLM token-accounting surprise on your monthly bill.
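The arithmetic, spelled out — the per-tool figure is my estimate, and real tool schemas vary widely:

```python
# Back-of-envelope math for the always-on context tax.
SERVERS = 5
TOOLS_PER_SERVER = 15      # rough average across this weekend's lineup
TOKENS_PER_TOOL = 1_000    # my estimate; real tool schemas vary widely

always_on_tax = SERVERS * TOOLS_PER_SERVER * TOKENS_PER_TOOL
print(always_on_tax)  # 75000 tokens of definitions before the first question
```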
So what does the keeper config actually look like? Here it is, in a shape that works for both clients.
My Final 4-Server Config (Copy-Paste for Both Clients)
The four that survived: filesystem, Postgres, Firecrawl, GitHub. The config below works in claude_desktop_config.json. The same block — byte-for-byte — works in .cursor/mcp.json. Only the file path changes.
```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": { "DATABASE_URL": "${DATABASE_URL}" }
    },
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "${FIRECRAWL_API_KEY}" }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```
One subtle difference Cursor users get for free: project-scoped .cursor/mcp.json lets you have per-repo configs Claude Desktop can’t match. Different filesystem roots, different database URLs, different tokens per project — without polluting your global setup.
Store every secret in your shell env, not in the committed JSON. Where your client expands ${VAR} references, that interpolation is your friend; if yours passes the string through literally, the server sees "${VAR}" verbatim — test with a throwaway value before trusting it with a real token.
One last thing before you paste any of this — a 30-second safety check that would have saved me on server #3.
The 30-Second Safety Check Before You Paste
Three quick checks before any MCP server gets your tokens.
One: provenance. Is it in the official modelcontextprotocol GitHub org (the @modelcontextprotocol scope on npm) or maintained by the vendor whose service it wraps? If it’s neither, treat it like any other random npm package — sandbox it, read the source, or skip it.
Two: tool count. Fewer, focused tools beat sprawling ones, both for context cost and for attack surface. A server registering thirty tools is doing too much; you’ll pay for all of them on every turn.
Three: scope. Read the env vars and file paths it asks for. A filesystem server requesting your home directory root is a red flag, not a feature. Scope it to the project folder you actually want it touching.
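The third check is the only one you can automate from the config alone. A sketch — scope_red_flags is my own helper — that flags filesystem servers rooted at / or your entire home directory:

```python
import os.path

# Roots broad enough to be a red flag rather than a feature.
BROAD_ROOTS = {"/", os.path.expanduser("~")}

def scope_red_flags(servers):
    """Flag filesystem servers rooted at / or the whole home directory."""
    flags = []
    for name, entry in servers.items():
        args = entry.get("args", [])
        if not any("server-filesystem" in a for a in args):
            continue  # only the filesystem server takes path roots this way
        flags.extend((name, a) for a in args if a in BROAD_ROOTS)
    return flags
```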
I tested 6, kept 4, and the thirty seconds I didn’t spend on this checklist cost me a Saturday afternoon. Don’t repeat my mistake. Start with filesystem and one API-backed server — Firecrawl if you research, GitHub if you ship — and only add a third when an actual workflow demands it. The best agent setup is the smallest one that gets the job done.