# Quickstart
Large Language Models (LLMs) are powerful, but they’re not enough on their own.
Railtracks gives them structure, tools, and visibility so you can build agents that actually get things done.
In this quickstart, you’ll install Railtracks, run your first agent, and visualize its execution — all in a few minutes.
## 1. Installation
> **Note:** `railtracks-cli` is optional, but required for the visualization step.
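The install command itself didn't survive here; a minimal sketch, assuming both packages are published on PyPI under these names:

```shell
# Core library
pip install railtracks

# Optional CLI, required for the visualization step below
pip install railtracks-cli
```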
## 2. Running your Agent
Define an agent with a model and system message, then call it with a prompt:
```python
import asyncio

import railtracks as rt

# To create your agent, you just need a model and a system message.
Agent = rt.agent_node(
    llm=rt.llm.OpenAILLM("gpt-4o"),
    system_message="You are a helpful AI assistant.",
)

# Now to call the Agent, we just need to use the `rt.call` function.
async def main():
    result = await rt.call(
        Agent,
        "Hello, what can you do?",
    )
    return result

result = asyncio.run(main())
print(result)
```
> **Example Output:** Your exact output will vary depending on the model.
> **No API key set?** Make sure the model you are calling has an API key set in your `.env` file. Railtracks supports many of the most popular model providers; see the full list in the documentation.
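For the OpenAI model used above, the key typically lives in a `.env` file at the project root. A sketch, assuming the provider reads the standard `OPENAI_API_KEY` environment variable:

```
OPENAI_API_KEY=sk-...
```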
> **Jupyter Notebooks:** If you're running this in a Jupyter notebook, remember that notebooks already run inside an event loop. In that case, skip `asyncio.run(main())` and call `result = await rt.call(Agent, "Hello, what can you do?")` directly.
## 3. Visualize the Run
Railtracks has a built-in visualizer for inspecting and reviewing your agent runs. Launching it via `railtracks-cli` opens a web interface showing the execution flow, node interactions, and performance metrics of your agentic system.
## Next Steps
You’ve got your first agent running! Here’s where to go next:
- **Learn the Basics**
- **Build Something**