By MCPCore Team · Tags: mcp, tutorial, getting-started, tools

How to Create an MCP Server: Step-by-Step Tutorial (2026)

A complete guide to building your first Model Context Protocol server — from understanding the spec to writing your first tool and exposing it to AI assistants like Claude and Cursor.

The Model Context Protocol (MCP) has quickly become the standard way to connect AI assistants to external tools and data. If you've been wondering how to build an MCP server — one that Claude, Cursor, Windsurf, or any other compatible AI can call — this guide walks you through everything from scratch.

What Is an MCP Server, Exactly?

An MCP server is a service that exposes tools to AI models over a standardized protocol. When an AI assistant wants to fetch data, call an API, or query a database, it sends a request to your MCP server, which executes the tool and returns the result.

The protocol uses JSON-RPC 2.0 over HTTP (specifically the Streamable HTTP transport introduced in the 2025 spec). Your server:

  1. Responds to tools/list — advertising which tools are available and their input schemas
  2. Handles tools/call — executing a tool with the AI-provided arguments and returning the result

That's the core of it. Everything else — authentication, observability, rate limiting — is layered on top.
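Concretely, both messages are plain JSON-RPC 2.0 bodies POSTed to your server. A sketch of what a client sends (the `id` values and the `get_weather` arguments here are illustrative):

```javascript
// Illustrative JSON-RPC 2.0 request bodies a client POSTs to an MCP server.
const listRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
};

const callRequest = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "get_weather",
    arguments: { city: "Berlin" },
  },
};

console.log(JSON.stringify(callRequest.params));
```

The server replies with a matching JSON-RPC response: a `result.tools` array for `tools/list`, and a `result.content` array for `tools/call`.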

The Tool Structure

Each tool in MCP has a defined shape. At minimum, a tool definition includes:

```json
{
  "name": "get_weather",
  "description": "Get current weather information for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "The city name to get weather for"
      }
    },
    "required": ["city"]
  }
}
```

The inputSchema is a standard JSON Schema object. The AI model reads this schema, understands what parameters the tool needs, and passes them in when calling the tool. If the AI can't populate a required parameter from context, it asks the user.

The 2025 spec also added outputSchema — an optional JSON Schema that describes the shape of the tool's return value. This helps clients validate the response and lets AI models parse structured data more reliably.
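Extending the weather example, a tool definition with both schemas might look like this (the `outputSchema` fields here are hypothetical — shape yours to match what your tool actually returns):

```javascript
// Hypothetical tool definition carrying both input and output schemas.
const getWeatherTool = {
  name: "get_weather",
  description: "Get current weather information for a city",
  inputSchema: {
    type: "object",
    properties: {
      city: { type: "string", description: "The city name to get weather for" },
    },
    required: ["city"],
  },
  // Optional since the 2025 spec: describes the structured result.
  outputSchema: {
    type: "object",
    properties: {
      description: { type: "string" },
      temperatureC: { type: "number" },
    },
    required: ["description", "temperatureC"],
  },
};
```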

Building a Server from Scratch (Node.js)

Here's a minimal MCP server in Node.js that exposes a single tool.

1. Install the SDK

```bash
npm install @modelcontextprotocol/sdk express zod
```

The MCP project publishes official SDKs for TypeScript/JavaScript, Python, and several other languages. The SDK handles the JSON-RPC plumbing, session management, and transport so you don't have to. The example below also uses express for HTTP routing and zod for input schemas.

2. Create the server

```javascript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";
import express from "express";

const app = express();
app.use(express.json());

const server = new McpServer({
  name: "weather-server",
  version: "1.0.0",
});

// Define a tool
server.tool(
  "get_weather",
  "Get current weather information for a city",
  {
    city: z.string().describe("The city name"),
  },
  async ({ city }) => {
    // Your actual logic here — call a weather API, query a DB, etc.
    const response = await fetch(
      `https://api.openweathermap.org/data/2.5/weather?q=${city}&appid=${process.env.OPENWEATHER_KEY}`
    );
    const data = await response.json();
    return {
      content: [
        {
          type: "text",
          // The API returns Kelvin; convert to Celsius for readability.
          text: `Weather in ${city}: ${data.weather[0].description}, ${Math.round(data.main.temp - 273.15)}°C`,
        },
      ],
    };
  }
);

// Wire up the Streamable HTTP transport (stateless mode: a fresh
// transport per request, no session IDs)
app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
  await server.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

app.get("/mcp", (req, res) => {
  res.status(405).send("Use POST for MCP requests");
});

app.listen(3000, () => console.log("MCP server running on port 3000"));
```

3. Test it locally

Once running, you can test your server by pointing Claude Desktop or Cursor at http://localhost:3000/mcp. The AI will call tools/list to discover your tools, then invoke them when relevant.
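You can also sanity-check the endpoint without an AI client by issuing the `tools/list` call yourself. A sketch — `listTools` is a hypothetical helper, and the injectable `fetchImpl` parameter exists only so it can be unit-tested without a running server:

```javascript
// POST a tools/list request to an MCP endpoint and return the JSON-RPC response.
async function listTools(baseUrl, fetchImpl = fetch) {
  const response = await fetchImpl(`${baseUrl}/mcp`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may answer with plain JSON or an SSE stream.
      Accept: "application/json, text/event-stream",
    },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "tools/list" }),
  });
  return response.json();
}
```

Against the server above, the response's `result.tools` array should contain the `get_weather` definition.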

Common Mistakes to Avoid

Descriptions matter more than you think. The AI model reads your description field to decide when to call a tool. Vague descriptions like "does stuff" will confuse the model. Write descriptions like you're explaining the tool to a junior developer — be specific about what it does and when it's useful.

Validate inputs defensively. The AI passes parameters based on user input and context inference. Always validate that the values are what you expect before passing them to an external API or database query. The spec requires servers to validate all tool inputs.
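Even with a JSON Schema in place, it's worth checking values in the handler before forwarding them. A minimal sketch for the weather example — the length limit and allowed-character set here are arbitrary choices, not spec requirements:

```javascript
// Reject obviously bad city values before they reach an external API.
function validateCity(city) {
  if (typeof city !== "string") {
    throw new Error("city must be a string");
  }
  const trimmed = city.trim();
  if (trimmed.length === 0 || trimmed.length > 100) {
    throw new Error("city must be between 1 and 100 characters");
  }
  // Allow letters, spaces, hyphens, periods, and apostrophes only.
  if (!/^[\p{L}\s.'-]+$/u.test(trimmed)) {
    throw new Error("city contains unexpected characters");
  }
  return trimmed;
}
```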

Return errors with isError: true. When a tool fails (API error, invalid data, etc.), return the error inside the result with isError: true — don't throw an uncaught exception. This tells the AI that the tool was called successfully but the operation failed, which it can handle gracefully.

```javascript
return {
  content: [{ type: "text", text: `Failed to fetch weather: ${error.message}` }],
  isError: true,
};
```
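One way to enforce this consistently is to wrap every handler so that thrown exceptions become `isError` results. A sketch — `safeToolHandler` is a hypothetical helper, not part of the SDK:

```javascript
// Wrap a tool handler so exceptions become MCP error results, not crashes.
function safeToolHandler(handler) {
  return async (args) => {
    try {
      return await handler(args);
    } catch (error) {
      return {
        content: [{ type: "text", text: `Tool failed: ${error.message}` }],
        isError: true,
      };
    }
  };
}
```

You'd then pass the wrapped function as the handler argument to `server.tool(...)`.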

Transport: Why Streamable HTTP?

The current MCP spec (2025-06-18) defines two standard transports:

  • stdio — the server runs as a subprocess; the AI client spawns it and communicates over stdin/stdout. Simple for local tools, but can't be shared or deployed to a remote host.
  • Streamable HTTP — the server runs as an independent HTTP process. Clients send POST requests, and the server can optionally stream responses via Server-Sent Events (SSE).

For any tool you want to share, deploy to production, or call from multiple AI clients simultaneously, Streamable HTTP is the right choice. The server runs independently, scales normally, and can be hosted anywhere that serves HTTP traffic.

Skipping the Boilerplate

Writing the server code is just the beginning. A production MCP server also needs:

  • TLS / HTTPS (AI clients won't connect to plain HTTP endpoints in production)
  • Authentication (who can call your tools?)
  • Rate limiting (so one noisy AI session can't consume all your resources)
  • Logging and error tracking
  • A deployment pipeline
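To give a flavor of the rate-limiting piece, a minimal per-IP fixed-window limiter as Express middleware might look like this. The window and limit values are arbitrary, and a production setup would typically back this with a shared store like Redis rather than in-process memory:

```javascript
// Per-IP fixed-window rate limiter: at most `limit` requests per `windowMs`.
function rateLimit({ windowMs = 60_000, limit = 30 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }
  return (req, res, next) => {
    const now = Date.now();
    const ip = req.ip || "unknown";
    const entry = hits.get(ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request from this IP, or the previous window has expired.
      hits.set(ip, { count: 1, windowStart: now });
      return next();
    }
    if (entry.count >= limit) {
      return res.status(429).json({ error: "Too many requests" });
    }
    entry.count += 1;
    next();
  };
}

// Usage: app.post("/mcp", rateLimit({ limit: 30 }), mcpHandler);
```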

Setting this up from scratch takes time. If you'd rather focus on writing the tool logic itself, MCPCore gives you a hosted MCP server out of the box — you write JavaScript in a browser-based editor, configure security and rate limits from a dashboard, and your endpoint is live immediately. No server setup, no Dockerfile, no deployment pipeline.

What's Next

Once your server is running, the next step is connecting it to an AI client. The configuration differs slightly between Claude Desktop, Cursor, Windsurf, and VS Code — check out our guide on connecting AI clients to a custom MCP server for copy-paste config examples for each.


The MCP specification is maintained at modelcontextprotocol.io. The current protocol version is 2025-06-18.