How to Run Your First Agent
Calling the Agent directly
Once you have defined your agent class (see Build Your First Agent), you can run your workflow and see results!
To begin, you just have to use the `call` method from RailTracks. This is an asynchronous method, so you will need to run it in an async context.
Agent input options
There are multiple ways to provide input to your agent.
Single user message
If you'd like to simply provide a single user message, you can pass it as a string directly to the `call` function.
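For example, a minimal sketch (assuming `WeatherAgent` is the agent node defined in Build Your First Agent):

```python
import railtracks as rt


async def main():
    # The user message is passed as a plain string
    response = await rt.call(WeatherAgent, "What is the weather like in Vancouver?")
    return response
```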
Few-shot prompting
If you want to provide a few-shot prompt, you can pass a list of messages to the `call` function, wrapping each message in the class for its role (e.g. `rt.llm.UserMessage` for the user, `rt.llm.AssistantMessage` for the assistant).
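For instance, a sketch reusing the same `WeatherAgent` node, with the messages wrapped in a `rt.llm.MessageHistory` (the example exchange is made up for illustration):

```python
import railtracks as rt

# Few-shot examples followed by the actual question
few_shot_history = rt.llm.MessageHistory([
    rt.llm.UserMessage("What is the weather like in Toronto?"),
    rt.llm.AssistantMessage("It is currently 12°C and cloudy in Toronto."),
    rt.llm.UserMessage("What is the weather like in Vancouver?"),
])


async def main():
    response = await rt.call(WeatherAgent, user_input=few_shot_history)
    return response
```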
Asynchronous Execution
Since the `call` function is asynchronous and needs to be awaited, you should ensure that you are running this code within an asynchronous context, like the `main` function in the code snippets above.
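From a regular script, that usually means handing `main()` to `asyncio.run` (standard Python, nothing RailTracks-specific):

```python
import asyncio

if __name__ == "__main__":
    # Blocks until the agent run completes and returns the response object
    response = asyncio.run(main())
```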
Jupyter Notebooks: If you are working in a notebook, you can `await` the call directly in a cell.
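A notebook cell might therefore look like this (a sketch reusing `WeatherAgent`):

```python
import railtracks as rt

# Top-level await works directly inside a Jupyter cell
response = await rt.call(WeatherAgent, "What is the weather like in Vancouver?")
```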
For more info on using `async`/`await` in RT, see Async/Await in Python.
Dynamic Runtime Configuration
If you pass an `llm` to `agent_node` and then a different LLM to the `call` function, RailTracks will use the latter. If you pass a `system_message` to `agent_node` and then another `system_message` to `call`, the system messages will be stacked.
Example
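The example below calls `StructuredToolCallWeatherAgent` with a different LLM and an extra system message, so you can see the dynamic runtime configuration above in action. For reference, the agent might have been created roughly along these lines (a hedged sketch; the actual definition comes from Build Your First Agent, and only the `llm` and `system_message` options are shown):

```python
import railtracks as rt

# Sketch of the agent definition; the real one lives in Build Your First Agent
StructuredToolCallWeatherAgent = rt.agent_node(
    system_message=(
        "You are a helpful assistant that answers weather-related questions. "
        "If not specified, the user is talking about Vancouver."
    ),
    llm=rt.llm.AnthropicLLM("claude-3-5-haiku-20241022"),  # assumed default model, for illustration only
)
```

With that in place, the example run looks like this: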
```python
import railtracks as rt
from pydantic import BaseModel


class WeatherResponse(BaseModel):
    temperature: float
    condition: str


# Extra system message and the user request to send at call time
system_message = rt.llm.SystemMessage(
    "You can also geolocate the user"
)
user_message = rt.llm.UserMessage(
    "Would you please be able to tell me the forecast for the next week?"
)


async def main():
    response = await rt.call(
        StructuredToolCallWeatherAgent,
        user_input=rt.llm.MessageHistory([system_message, user_message]),
        llm=rt.llm.AnthropicLLM("claude-3-5-sonnet-20241022"),  # replaces the agent's default LLM
    )
    return response
```
Because system messages are stacked, the `system_message` passed to `call` here ("You can also geolocate the user") does not replace the agent's own system message ("You are a helpful assistant that answers weather-related questions. If not specified, the user is talking about Vancouver."); the agent receives both.
Just like that, you have run your first agent!
Calling the Agent within a Session
Alternatively, you can run your agent within a session using the `rt.Session` context manager. This allows you to manage session state and run multiple agents or workflows within the same session, and it provides various options such as setting a timeout, a shared context (see Context), and more.
```python
async def session_based():
    with rt.Session(
        context=weather_context,  # shared context for this session (see Context)
        timeout=60,  # seconds
    ):
        response = await rt.call(
            WeatherAgent,
            "What is the weather like in Vancouver?",
        )
    return response
```
For more details on how to use sessions, please refer to the Sessions documentation.
Retrieving the Results of a Run
All agents return a response object, which you can use to get the last message or, if you prefer, the entire message history.
Response of a Run
In the unstructured response example, the last message from the agent and the entire message history can be accessed using the `text` and `message_history` attributes of the response object, respectively.
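For example (reusing `WeatherAgent` from earlier; only the attribute access is new here):

```python
async def main():
    response = await rt.call(WeatherAgent, "What is the weather like in Vancouver?")

    print(response.text)             # last message from the agent
    print(response.message_history)  # entire message history
```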
In the structured response example, the `output_schema` parameter is used to define the expected output structure. The response can then be accessed using the `structured` attribute.
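For example, with the `WeatherResponse` schema from the example above (a sketch):

```python
async def main():
    response = await rt.call(
        StructuredToolCallWeatherAgent,
        "What is the weather like in Vancouver?",
    )

    weather = response.structured  # parsed into the WeatherResponse model
    print(weather.temperature, weather.condition)
```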