
Exposing RT Tools as MCP Tools

Warning

This area of RT is under construction. We would love contributions to support this effort on our GitHub.

Overview

You can expose any RT tool as a Model Context Protocol (MCP) tool, making it accessible to any MCP client or LLM agent. This lets you share your custom RT logic with other frameworks, agents, or applications that speak MCP.

RT provides utilities to convert your nodes into MCP tools and run a FastMCP server, so your tools are discoverable and callable over the standard MCP transports (streamable HTTP, SSE, stdio).

Prerequisites

  • RT installed (pip install railtracks[core])

Basic Usage

1. Convert RT Nodes to MCP Tools

Use the create_mcp_server utility to expose your RT nodes as MCP tools:

import railtracks as rt

# Start by creating your tools
@rt.function_node
def add_nums_plus_ten(num1: int, num2: int) -> int:
    """Add two numbers, then add ten to the result."""
    return num1 + num2 + 10

# Create your MCP server with the function node
mcp = rt.create_mcp_server([add_nums_plus_ten], server_name="My MCP Server")

# Now run the MCP server
mcp.run(transport="streamable-http", host="127.0.0.1", port=8000)

This exposes your RT tool at http://127.0.0.1:8000/mcp for any MCP client.

2. Accessing Your MCP Tools

Any MCP-compatible client or LLM agent can now discover and invoke your tool. For example, you can connect with Railtracks itself:

import railtracks as rt

server = rt.connect_mcp(rt.MCPHttpParams(url="http://127.0.0.1:8000/mcp"))
tools = server.tools
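
If you want to verify the server from outside Railtracks, the sketch below uses the standalone MCP Python SDK (the mcp package, installed separately) to list the server's tools and call one. The client calls shown come from that SDK, not from Railtracks, and the tool name add_nums_plus_ten is assumed to match the function name registered above.

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async def main():
    # Connect over streamable HTTP to the server started above.
    async with streamablehttp_client("http://127.0.0.1:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools exposed by the RT server.
            listing = await session.list_tools()
            print([tool.name for tool in listing.tools])

            # Call a tool; the name is assumed to match the function name.
            result = await session.call_tool("add_nums_plus_ten", {"num1": 1, "num2": 2})
            print(result.content)

asyncio.run(main())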

Advanced Topics

  • Multiple Tools: Pass a list of nodes to create_mcp_server to expose several tools (see the sketch below).
  • Transport Options: Use streamable-http, sse, or stdio as needed.
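
As a concrete illustration of both points, here is a minimal sketch of a server that registers two nodes and runs over stdio. The second node, multiply_nums, is hypothetical and only exists to show passing several nodes; the stdio transport is assumed to need no host or port, as with a standard FastMCP server.

import railtracks as rt

@rt.function_node
def add_nums_plus_ten(num1: int, num2: int) -> int:
    """Add two numbers, then add ten to the result."""
    return num1 + num2 + 10

# Hypothetical second tool, included only to show registering several nodes.
@rt.function_node
def multiply_nums(num1: int, num2: int) -> int:
    """Multiply two numbers."""
    return num1 * num2

# Register both nodes on one MCP server.
mcp = rt.create_mcp_server([add_nums_plus_ten, multiply_nums], server_name="My MCP Server")

# stdio is handy when an MCP client launches the server as a subprocess.
mcp.run(transport="stdio")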