Shared Python Tools for AI Agents Using MCP

Part 3 of “My Journey in Building AI Agents from Scratch”

Introduction

In Part 2, I built my first agent from scratch — LLM + tool calling + context management. It worked. But as I started thinking about real-world deployment, a new problem emerged:

How do I define shared Python tools for AI agents using MCP?

Writing JSON tool schemas manually was tedious. And when I imagined multiple agents needing the same tools, I knew I needed a better approach. This post is about that journey — from manual tool definitions, to a Python-first utility, to deployed MCP servers — and how shared Python tools for AI agents with MCP make this process scalable and efficient.

The Pain of Manual Tool Definitions for AI Agents

Remember the tool definitions from Post 2?

tools = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"}
                },
                "required": ["a", "b"]
            }
        }
    },
    # ... more tools
]

For one or two tools, this is fine. But imagine 10 tools. Or 20. Each needs:

  • A name
  • A description
  • Parameter definitions with types
  • Required fields

And if you change the function signature? You have to update the schema manually. Miss a parameter? The agent breaks.

Most developers I work with are comfortable writing Python. They shouldn’t have to think in JSON schemas.

My First Solution: Building Shared Python Tools for AI Agents

I thought: What if developers just write normal Python functions, and I generate the tool definitions automatically?

The idea was simple:

  • Give me a Python file path
  • I’ll read all the functions
  • I’ll convert them to tool definitions

Here’s the concept:

# tools.py — What developers write
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

def reverse_string(text: str) -> str:
    """Reverse the given string."""
    return text[::-1]

And the utility that converts it:

import importlib.util
import inspect
from typing import get_type_hints

def python_file_to_tools(file_path: str) -> list:
    """Load a Python file and convert all functions to tool definitions."""
    
    # Load the module dynamically
    spec = importlib.util.spec_from_file_location("tools_module", file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    
    tools = []
    
    # Get all functions from the module
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if name.startswith('_'):  # Skip private functions
            continue
            
        tool_def = function_to_tool(func)
        tools.append(tool_def)
    
    return tools

def function_to_tool(func) -> dict:
    """Convert a Python function to a tool definition."""
    
    # Get function signature and type hints
    sig = inspect.signature(func)
    hints = get_type_hints(func)
    
    # Build parameters schema
    properties = {}
    required = []
    
    for param_name, param in sig.parameters.items():
        param_type = hints.get(param_name, str)
        properties[param_name] = {
            "type": python_type_to_json_type(param_type)
        }
        if param.default is inspect.Parameter.empty:
            required.append(param_name)
    
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": func.__doc__ or "",
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required
            }
        }
    }

def python_type_to_json_type(python_type) -> str:
    """Map Python types to JSON schema types."""
    type_map = {
        int: "integer",
        float: "number",
        str: "string",
        bool: "boolean",
        list: "array",
        dict: "object"
    }
    return type_map.get(python_type, "string")

Now in my agent:

# Load tools from Python file
tools = python_file_to_tools("tools.py")

# Use in agent
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages,
    tools=tools
)

Write Python, get tools.

Developers write normal functions with type hints and docstrings. The utility handles the rest.
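One piece from Part 2 still applies: actually executing the calls the model makes. Here’s a minimal dispatch sketch, loading the same file so we can call the real functions by name (`response` is the completion from above):

import importlib.util
import json

# Load the same module so tool-call names map back to real functions
spec = importlib.util.spec_from_file_location("tools_module", "tools.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

for call in response.choices[0].message.tool_calls or []:
    func = getattr(module, call.function.name)
    result = func(**json.loads(call.function.arguments))
    print(f"{call.function.name} -> {result}")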

The Multi-Agent Problem: Sharing Python Tools Across AI Agents

This worked great for a single agent. But then I hit the next wall.

I was building multiple agents — each specialized for different tasks. But many of them needed the same tools. A math agent, an analysis agent, a general assistant — all needed basic utility functions.

Now, if all agents were in the same script, this would be easy:

# Same script — shared tools object works fine
tools = python_file_to_tools("common_tools.py")

agent_1 = Agent(tools=tools)
agent_2 = Agent(tools=tools)
agent_3 = Agent(tools=tools)

But that’s not how real systems work.

Agents live in different files. Different services. Different deployments.

project/
├── agents/
│   ├── math_agent.py      # Needs tools
│   ├── analysis_agent.py  # Needs same tools
│   └── assistant.py       # Also needs same tools
├── tools/
│   └── common_tools.py

Each agent script would have to:

  • Know where the tools file is
  • Load it independently
  • Keep the path in sync across deployments

And if I wanted to update a tool? I had to make sure every agent deployment had access to the updated file. In a distributed system — with agents running on different servers, containers, or services — this becomes a deployment nightmare.

I thought: Why can’t tools be a service? Something agents connect to from anywhere, rather than load locally?

Discovering MCP: Centralizing Python Tools for AI Agents

I started reading tech articles about agent architectures. That’s when I came across MCP — the Model Context Protocol.

MCP is an open standard that lets you expose tools (and other resources) as a server. Any client — including AI agents — can connect and use those tools.

The concept clicked immediately:

  • MCP Server: Hosts the tools
  • MCP Client: Connects to the server and gets tool definitions
  • Any agent can connect to the same server

Instead of each agent loading tools from files, they all connect to one MCP server. Update the server, all agents get the update.

I took an MCP crash course on Udemy to understand the protocol deeply. Then I started building.

Building My MCP Server with FastMCP 2.0 for AI Agents

I used FastMCP 2.0 — a Python library that makes creating MCP servers straightforward.

Here’s the beautiful part: I could use the same Python file I already had.

# mcp_server.py
from fastmcp import FastMCP

# Create MCP server
mcp = FastMCP("My Tools Server")

# Same functions as before — now exposed via MCP
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers."""
    return a * b

@mcp.tool()
def reverse_string(text: str) -> str:
    """Reverse the given string."""
    return text[::-1]

if __name__ == "__main__":
    mcp.run()

That’s it. The `@mcp.tool()` decorator registers the function as an MCP tool. FastMCP 2.0 handles the schema generation automatically — just like my utility did.
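FastMCP also ships a CLI. Assuming the `fastmcp` command from the same package is on your PATH, you can start this server without even needing the `__main__` block:

fastmcp run mcp_server.py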

The Testing Problem: How Do I Know It Works?

This is where I got stuck.

I had built the MCP server. It was running. But… how do I test it? How do I know the tools are actually exposed correctly?

I couldn’t just call an HTTP endpoint and see the tools — MCP isn’t a simple REST API; it speaks its own JSON-RPC-based protocol.

Then I remembered: Claude Desktop supports MCP.

I configured Claude Desktop to connect to my local MCP server:

// Claude Desktop config
{
  "mcpServers": {
    "my-tools": {
      "command": "python",
      "args": ["mcp_server.py"]
    }
  }
}
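(On macOS this file lives at ~/Library/Application Support/Claude/claude_desktop_config.json; you can also open it from Claude Desktop’s developer settings.)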

Restarted Claude Desktop, and there they were — my tools, available in Claude’s interface. I could ask Claude to add numbers, and it would call my `add` function through MCP.

Claude Desktop became my MCP testing tool.

This was huge. I could iterate quickly: change a function, restart the server, test in Claude Desktop.

The Missing Piece: MCP Client for AI Agents

But here’s the thing — Claude Desktop loaded the tools automatically. I didn’t have to do anything. It just… worked.

Great for testing. But what about my own agent?

My agent wasn’t Claude Desktop. It was my Python code. How do I programmatically fetch tools from an MCP server?

I had built the server. I could test it with Claude Desktop. But I had no idea how to consume it from code.

That’s when I discovered the MCP Client.

It seems obvious in hindsight, but at the time I was so focused on the server side that I completely missed this. MCP has two parts:

  • MCP Server — exposes tools (what I built)
  • MCP Client — connects and fetches tools (what I needed)

Here’s what the client side looks like (FastMCP 2.0 ships a Client class):

from fastmcp import Client

async def get_tools_from_mcp(endpoint: str):
    """Connect to MCP server and get available tools."""
    
    async with Client(endpoint) as client:
        # This is what I was missing!
        tools = await client.list_tools()
        
        for tool in tools:
            print(f"Tool: {tool.name}")
            print(f"Description: {tool.description}")
            print(f"Parameters: {tool.input_schema}")
    
    return tools
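Running it takes one asyncio call. The endpoint here is illustrative — with the SSE transport, FastMCP serves at /sse by default:

import asyncio

tools = asyncio.run(get_tools_from_mcp("http://localhost:8000/sse"))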

Once I understood this, everything connected. The server exposes tools. The client fetches them. My agent uses the client to get tools and convert them to the format OpenAI expects.

The lesson: **Don’t just build. Understand the full picture — both sides of the protocol.**

Dynamic MCP Server from Any Python File for AI Agents

But I wanted to go further. What if I could create an MCP server from **any** Python file containing functions — without modifying it?

I built a wrapper:

# create_mcp_server.py
from fastmcp import FastMCP
import importlib.util
import inspect

def create_mcp_from_file(file_path: str, server_name: str = "Tools Server"):
    """Create an MCP server from any Python file containing functions."""
    
    mcp = FastMCP(server_name)
    
    # Load the module
    spec = importlib.util.spec_from_file_location("tools_module", file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    
    # Register all functions as tools
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if not name.startswith('_'):
            mcp.tool()(func)
    
    return mcp

# Usage
if __name__ == "__main__":
    import sys
    file_path = sys.argv[1]  # Pass Python file as argument
    
    mcp = create_mcp_from_file(file_path)
    mcp.run()

Now I can run:

python create_mcp_server.py tools.py

And the MCP server starts, exposing all functions from `tools.py` as tools. Same Python file — now accessible as a service.
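It also slots straight into the Claude Desktop setup from earlier: point the command at the wrapper and pass the tools file as an argument (in practice Claude Desktop needs absolute paths; these are shortened for readability):

{
  "mcpServers": {
    "my-tools": {
      "command": "python",
      "args": ["create_mcp_server.py", "tools.py"]
    }
  }
}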

Understanding MCP Transports: stdio vs HTTP for AI Agents

This is where I got confused initially. MCP supports different **transports**:

stdio Transport

  • Server runs locally as a subprocess
  • Communication via standard input/output
  • Must be started by the client
  • Good for: Local development, desktop apps

python mcp_server.py  # Runs locally, client connects via stdio

HTTP Transport (SSE)

  • Server runs as a web service
  • Communication via HTTP/Server-Sent Events
  • Always available — just connect to the endpoint
  • Good for: Production, shared tools, multiple agents

# Run MCP server with HTTP transport
mcp.run(transport="sse", host="0.0.0.0", port=8000)

The key insight:

stdio = local, starts with client

HTTP = service, always available

For my use case — multiple agents sharing tools — HTTP was the right choice.
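Conveniently, FastMCP’s Client infers the transport from what you hand it. A sketch (the paths and URLs are illustrative):

from fastmcp import Client

local = Client("mcp_server.py")                  # a .py path: spawns a stdio subprocess
remote = Client("http://tools-server:8000/sse")  # a URL: connects over HTTP/SSE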

Connecting AI Agents to Shared Python Tools via MCP

On the agent side, I built an MCP client into my tool utility:

import importlib.util
import inspect

from fastmcp import Client

class ToolLoader:
    """Load tools from Python file or MCP server."""

    def __init__(self):
        self.tools = []
        self.tool_functions = {}
        self.mcp_client = None

    def load_from_file(self, file_path: str):
        """Load tools from a local Python file."""
        tools = python_file_to_tools(file_path)
        self.tools.extend(tools)
        # Also load functions for execution
        self._load_functions(file_path)

    def _load_functions(self, file_path: str):
        """Import the file and keep its public functions for local execution."""
        spec = importlib.util.spec_from_file_location("tools_module", file_path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, func in inspect.getmembers(module, inspect.isfunction):
            if not name.startswith('_'):
                self.tool_functions[name] = func

    async def load_from_mcp(self, endpoint: str):
        """Load tools from an MCP server (async, since the MCP client is)."""
        self.mcp_client = Client(endpoint)

        # Get available tools from MCP server
        async with self.mcp_client:
            mcp_tools = await self.mcp_client.list_tools()

        # Convert to OpenAI tool format
        for tool in mcp_tools:
            self.tools.append({
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema
                }
            })

    async def execute_tool(self, name: str, arguments: dict):
        """Execute a tool by name."""
        if self.mcp_client is not None:
            # Execute via MCP
            async with self.mcp_client:
                return await self.mcp_client.call_tool(name, arguments)
        # Execute locally
        return self.tool_functions[name](**arguments)
Now my agent can load tools from either source:

# Option 1: Local Python file (development)
loader = ToolLoader()
loader.load_from_file("tools.py")

# Option 2: MCP server (production); async, so call it from an async function
loader = ToolLoader()
await loader.load_from_mcp("http://tools-server:8000/sse")

# Agent uses tools the same way
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=messages,
    tools=loader.tools
)
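Tool calls then route back through the loader regardless of where the tools live. A sketch, assuming this runs inside an async function (execute_tool awaits the MCP call when a server is attached):

import json

for call in response.choices[0].message.tool_calls or []:
    result = await loader.execute_tool(call.function.name, json.loads(call.function.arguments))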

Why Docstrings and Descriptions Matter for Real-World Tools

Once everything was working—MCP server deployed, agents connecting, tools loading—I noticed a subtle but important issue. For simple tools like `add` and `multiply`, the agent rarely gets confused, especially if the names are clear. But as soon as you introduce tools with similar names or overlapping functionality, things get trickier.

For example, imagine you have:

@mcp.tool()
def summarize_text(text: str) -> str:
    """Summarize a block of text."""
    ...

@mcp.tool()
def summarize_document(document: str) -> str:
    """Summarize a document, extracting key points."""
    ...

If the docstrings are vague or nearly identical, the LLM may not reliably pick the right tool for the right context. For example, if a user asks, “Summarize this PDF,” the agent might call `summarize_text` instead of `summarize_document`, or vice versa, depending on how the descriptions are written.

The lesson: The LLM relies heavily on the tool’s description and docstring to decide which tool to call.

After seeing this in practice, I started writing more detailed docstrings, especially for tools with similar purposes. I now include:

  • What the tool is for
  • When to use it
  • What kind of input it expects
  • Any special cases or limitations

This isn’t just about documentation—it’s about making your tools agent-friendly and reducing ambiguity for the LLM.

Now, every function I write—whether for an MCP server, direct agent use, or just regular code—gets a clear, specific docstring. It’s a small habit that makes a big difference in agent reliability.
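For instance, here’s what a more agent-friendly summarize_document might look like (illustrative):

@mcp.tool()
def summarize_document(document: str) -> str:
    """
    Summarize a complete document (reports, converted PDFs, long articles).

    Use this when the input is a full document rather than a short passage;
    for a sentence or a single paragraph, use summarize_text instead.

    Args:
        document: The full document text, ideally with headings intact.

    Returns:
        A concise summary of the document's key points.

    Limitations:
        Very long documents may exceed the model's context window.
    """
    ...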

Real-World Tools: Beyond Math Functions

Once I had the MCP infrastructure working, I started building real tools.

The first serious one was a **Git MCP server**:

import subprocess  # the git tools shell out to the git CLI

@mcp.tool()
def git_status(repo_path: str) -> str:
    """
    Get the current git status of a repository.
    
    Shows which files are:
    - Modified but not staged
    - Staged for commit
    - Untracked
    
    Args:
        repo_path: Path to the git repository
    
    Returns:
        Git status output showing the state of the working directory
    """
    result = subprocess.run(
        ["git", "status"],
        cwd=repo_path,
        capture_output=True,
        text=True
    )
    return result.stdout

@mcp.tool()
def git_diff(repo_path: str, file_path: str | None = None) -> str:
    """
    Show changes between commits, commit and working tree, etc.
    
    Use this to see what has been modified in the repository.
    
    Args:
        repo_path: Path to the git repository
        file_path: Optional specific file to diff. If not provided, shows all changes.
    
    Returns:
        Diff output showing line-by-line changes
    """
    cmd = ["git", "diff"]
    if file_path:
        cmd.append(file_path)
    
    result = subprocess.run(
        cmd,
        cwd=repo_path,
        capture_output=True,
        text=True
    )
    return result.stdout

@mcp.tool()
def git_commit(repo_path: str, message: str) -> str:
    """
    Create a new commit with the staged changes.
    
    Before using this, ensure files are staged using git add.
    
    Args:
        repo_path: Path to the git repository
        message: Commit message describing the changes
    
    Returns:
        Commit confirmation with the new commit hash
    """
    result = subprocess.run(
        ["git", "commit", "-m", message],
        cwd=repo_path,
        capture_output=True,
        text=True
    )
    return result.stdout or result.stderr
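Since git_commit assumes staged files, a companion git_add rounds out the set — a sketch following the same pattern:

@mcp.tool()
def git_add(repo_path: str, file_path: str = ".") -> str:
    """
    Stage a file for the next commit. Defaults to staging all changes.

    Use this before git_commit.

    Args:
        repo_path: Path to the git repository
        file_path: File to stage; "." (the default) stages everything

    Returns:
        Confirmation output (git add is silent on success)
    """
    result = subprocess.run(
        ["git", "add", file_path],
        cwd=repo_path,
        capture_output=True,
        text=True
    )
    return result.stdout or result.stderr or f"Staged {file_path}"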

Now any agent could interact with Git repositories. Code review agents, deployment agents, documentation agents — all connecting to the same Git MCP server.

This is when MCP’s value became real. It wasn’t about toy math functions anymore. It was about giving agents capabilities in a standardized, shareable way.

The Final Architecture: Shared Python Tools for AI Agents with MCP

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Agent 1   │     │   Agent 2   │     │   Agent 3   │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       │                   │                   │
       └───────────────────┼───────────────────┘
                           │
                           ▼
                  ┌─────────────────┐
                  │   MCP Server    │
                  │  (HTTP/SSE)     │
                  │                 │
                  │  - add()        │
                  │  - multiply()   │
                  │  - reverse()    │
                  │  - ... more     │
                  └─────────────────┘

  • One Python file defines the tools
  • MCP server exposes them as a service
  • Multiple agents connect and use them
  • Update once, all agents get the change
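Standing up the server in this diagram is just the wrapper from earlier plus the HTTP transport (host and port are illustrative):

from create_mcp_server import create_mcp_from_file

mcp = create_mcp_from_file("common_tools.py", "Shared Tools Server")
mcp.run(transport="sse", host="0.0.0.0", port=8000)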

Key Takeaways

  • Write Python, get tools — Use type hints and docstrings; auto-generate tool schemas
  • Same file, multiple uses — One Python file can be loaded directly OR deployed as MCP server
  • MCP centralizes tools — No more duplicating tool definitions across agents in different files/services
  • stdio vs HTTP — Local development vs always-available service
  • Test with Claude Desktop — It’s the easiest way to verify your MCP server works
  • Don’t forget the client — Building a server is half the work; you need the MCP client to consume it
  • Docstrings are LLM instructions — The description is how the agent decides which tool to call. Be verbose.
  • FastMCP 2.0 makes it easy — `@mcp.tool()` decorator handles everything

Try It Yourself

  • Create a `tools.py` file with 2-3 functions (with **detailed** type hints and docstrings)
  • Build a utility to convert Python functions to tool definitions
  • Install FastMCP: `pip install fastmcp`
  • Create an MCP server from your functions
  • Test it with Claude Desktop before writing any client code
  • Run with HTTP transport and connect from a simple client
  • Experiment: Write a vague docstring, then a detailed one — see how it affects tool selection

Previous: Building My First Agent
