Tutorial: Building a Minimal MCP Server

A Step-by-Step Guide Using Python and FastAPI

This walkthrough shows you how to set up a minimalist Model Context Protocol (MCP) server from the ground up. We'll make a server with a single "tool," then build a simple LLM agent client to use it. For our demo, the tool will provide weather information.

Prerequisites

You'll need Python 3.8 or newer installed (current FastAPI releases no longer support 3.7). We'll be using the FastAPI framework for our server and `requests` for our client.

# Install the necessary libraries ("fastapi[all]" already bundles uvicorn and python-multipart)
pip install "fastapi[all]" requests

Step-by-Step Guide

Step 1: Define the Tool Logic

First, we'll define the main function our server will expose. It's a simple Python function. Make a file called `tools.py`.

# tools.py
import random

def get_weather(location: str):
    """Retrieve (simulated) weather data for the given location."""
    if "tokyo" in location.lower():
        return {"location": "Tokyo", "temperature": "15°C", "condition": "Cloudy"}
    
    temps = ["25°C", "-5°C", "30°C"]
    conditions = ["Sunny", "Snowing", "Rainy"]
    
    return {
        "location": location, 
        "temperature": random.choice(temps),
        "condition": random.choice(conditions)
    }
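Before wiring the function into a server, it's worth a quick sanity check. The snippet below repeats `get_weather` so it runs standalone; in your project you would simply `from tools import get_weather`:

```python
import random

# Same function as in tools.py, repeated here so the snippet is self-contained
def get_weather(location: str):
    """Retrieve (simulated) weather data for the given location."""
    if "tokyo" in location.lower():
        return {"location": "Tokyo", "temperature": "15°C", "condition": "Cloudy"}
    temps = ["25°C", "-5°C", "30°C"]
    conditions = ["Sunny", "Snowing", "Rainy"]
    return {
        "location": location,
        "temperature": random.choice(temps),
        "condition": random.choice(conditions),
    }

# The Tokyo branch is deterministic, so its output is predictable
print(get_weather("tokyo, japan"))
# Other locations get random values, but always the same three keys
print(sorted(get_weather("Paris").keys()))
```

Note that the match is case-insensitive, so "tokyo, japan" hits the fixed Tokyo branch.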

Step 2: Build the MCP Server

Set up a server to expose this tool through an API endpoint—this forms our MCP logic. Name the file `server.py`.

# server.py
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Dict, Any
from tools import get_weather

app = FastAPI()

# This defines the expected input for our endpoint
class ToolCallRequest(BaseModel):
    tool_name: str
    parameters: Dict[str, Any]

# Our "tool registry" is just a simple dictionary
AVAILABLE_TOOLS = {
    "get_weather": get_weather
}

@app.post("/tools/call")
def call_tool(request: ToolCallRequest):
    tool_function = AVAILABLE_TOOLS.get(request.tool_name)

    if not tool_function:
        # Return a proper 404 rather than a 200 response with an error body
        raise HTTPException(status_code=404, detail=f"Tool '{request.tool_name}' not found")

    # Call the actual tool function with its parameters
    result = tool_function(**request.parameters)
    return {"status": "success", "result": result}
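The heart of `call_tool` is plain dictionary dispatch, which you can exercise without the HTTP layer at all. A minimal sketch of that pattern (the `echo_tool` stub here is purely illustrative; in `server.py` the registry holds `get_weather`):

```python
from typing import Any, Callable, Dict

# A stand-in tool for illustration only
def echo_tool(text: str) -> Dict[str, str]:
    return {"echo": text}

AVAILABLE_TOOLS: Dict[str, Callable[..., Any]] = {"echo": echo_tool}

def dispatch(tool_name: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
    tool_function = AVAILABLE_TOOLS.get(tool_name)
    if tool_function is None:
        return {"error": f"Tool '{tool_name}' not found"}
    # Unpack the parameters dict into keyword arguments, as call_tool does
    return {"status": "success", "result": tool_function(**parameters)}

print(dispatch("echo", {"text": "hello"}))
print(dispatch("missing", {}))
```

Because the parameters arrive as a dict and are unpacked with `**`, their keys must match the tool function's argument names exactly.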

Step 3: Create the LLM Agent Client

This agent mimics an LLM choosing a tool. It takes a prompt, picks the relevant tool, and sends a request to our MCP server. Create `agent.py`.

# agent.py
import requests
import json

MCP_SERVER_URL = "http://127.0.0.1:8000/tools/call"

def run_agent(prompt: str):
    print(f"Agent received prompt: '{prompt}'")
    
    # --- This part simulates the LLM's reasoning ---
    tool_name = "get_weather"
    parameters = {"location": "Tokyo"}
    print(f"LLM decided to call tool '{tool_name}' with params {parameters}")
    # --- End of simulation ---

    # Construct the request to the MCP server
    payload = {
        "tool_name": tool_name,
        "parameters": parameters
    }
    
    try:
        response = requests.post(MCP_SERVER_URL, json=payload)
        response.raise_for_status() # Raise an exception for bad status codes
        
        result = response.json()
        print("\n--- Server Response ---")
        print(json.dumps(result, indent=2))
        
    except requests.exceptions.RequestException as e:
        print(f"\nError calling MCP server: {e}")

if __name__ == "__main__":
    user_prompt = "What's the weather like in Tokyo today?"
    run_agent(user_prompt)
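In a real agent, the hard-coded "simulation" block above would be replaced by an LLM call that returns the tool name and arguments. As a rough stand-in, a keyword-based router illustrates the shape of that decision; the parsing here is deliberately naive and all names in it are illustrative:

```python
from typing import Any, Dict, Optional, Tuple

# Map keywords to tool names; a real agent would let the LLM decide
KEYWORD_TO_TOOL = {"weather": "get_weather"}

def select_tool(prompt: str) -> Optional[Tuple[str, Dict[str, Any]]]:
    """Pick a tool and its parameters from a prompt, or None for no tool."""
    lowered = prompt.lower()
    for keyword, tool_name in KEYWORD_TO_TOOL.items():
        if keyword in lowered:
            # Naive argument extraction: take the word after "in", if any
            words = lowered.rstrip("?.!").split()
            location = words[words.index("in") + 1] if "in" in words else "unknown"
            return tool_name, {"location": location}
    return None  # No tool matched; the agent would answer directly

print(select_tool("What's the weather like in Tokyo today?"))
print(select_tool("Tell me a joke"))
```

Swapping this router for a genuine LLM call is the main change needed to turn the demo into a working agent: the model's structured output takes the place of the returned `(tool_name, parameters)` tuple.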

Step 4: Run the Demo

Open two terminal windows. In the first, start the server.

# Terminal 1: Run the server
uvicorn server:app --reload

In the second terminal, run the agent client.

# Terminal 2: Run the agent
python agent.py

The agent will show its chosen action, followed by the JSON response received from the server!

You've Built an MCP Server!

Great job! You’ve built the basic pieces of an agentic system. This pattern—an API server for tools and a client to use them—is central to MCP. Next, you can expand with extra tools, add resource controls, and develop advanced agent logic.