Quickstart
Large Language Models (LLMs) are powerful, but they’re not enough on their own.
Railtracks gives them structure, tools, and visibility so you can build agents that actually get things done.
In this quickstart, you’ll install Railtracks, run your first agent, and visualize its execution — all in a few minutes.
1. Installation
Note
`railtracks[cli]` is optional, but required for the visualization step.
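A minimal install sketch, assuming the package is published on PyPI under the name `railtracks` (the `[cli]` extra is what the note above refers to):

```shell
# Core library only:
pip install railtracks

# With the CLI extra, needed for the visualization step below:
pip install "railtracks[cli]"
```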
2. Running your Agent
Define an agent with a model and a system message, then call it with a prompt:
```python
import railtracks as rt

# To create your agent, you just need a model and a system message.
Agent = rt.agent_node(
    llm=rt.llm.OpenAILLM("gpt-4o"),
    system_message="You are a helpful AI assistant.",
)

# To call the agent, use the `rt.call` function.
@rt.function_node
async def main(message: str):
    result = await rt.call(Agent, message)
    return result

flow = rt.Flow("Quickstart Example", entry_point=main)
result = flow.invoke("Hello, what can you do?")
```
Example Output
Your exact output will vary depending on the model.
No API key set?
Make sure the model you are calling has an API key set in your .env file.
Railtracks supports many of the most popular model providers; see the full list of supported providers.
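For example, when using an OpenAI model as in the code above, your .env file would contain something like the following (the variable name follows the usual OpenAI convention, and the key value is a placeholder):

```
OPENAI_API_KEY=sk-your-key-here
```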
Jupyter Notebooks
If you’re running this in a Jupyter notebook, remember that notebooks already run inside an event loop. In that case, call `await flow.ainvoke(...)` directly:
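The reason, in plain asyncio terms: a blocking entry point has to start an event loop of its own, and Python refuses to nest one inside the loop a notebook is already running. A minimal sketch of the failure mode, with no Railtracks required (`flow.invoke` is assumed to work roughly like `asyncio.run` internally):

```python
import asyncio

async def main():
    return "done"

# Outside a notebook, a blocking call can drive the event loop itself:
print(asyncio.run(main()))  # done

# Inside an already-running loop (as in Jupyter), starting a second one fails:
async def outer():
    try:
        asyncio.run(main())  # what a blocking invoke would attempt
    except RuntimeError as e:
        return str(e)

print(asyncio.run(outer()))
# asyncio.run() cannot be called from a running event loop
```

This is why the async variant, awaited directly in the notebook's existing loop, is the right call there.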
3. Visualize the Run
With the Railtracks CLI you can dive deep into your runs. Observability runs locally, from the command line.
Setup
This will open a web interface listing all of your agent runs. You can inspect each step, see token usage, and more.
Next Steps
You’ve got your first agent running! Here’s where to go next:
Learn the Basics
Build Something