
# RAG Tutorial

This tutorial will take you from zero to a RAG-powered agent in just a few minutes. We'll cover how Railtracks can help you build the perfect RAG agent for your needs.

## New to RAG?

Not sure what RAG is or why you might want to use it? Check out our brief explainer here.

## Vector Stores

To use RAG in Railtracks, you’ll need to understand how our vector stores work. You can read about them here.
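Before connecting anything to an agent, it can help to see a store on its own. Below is a minimal sketch that assumes you supply your own `embedding_function` and have already loaded documents into the store; see the vector store documentation linked above for ingestion details.

```python
import railtracks as rt

# Placeholder: your embedding function, mapping text chunks to embedding vectors.
def embedding_function(chunks: list[str]) -> list[list[float]]: ...

# Create a Chroma-backed vector store.
vs = rt.vector_stores.ChromaVectorStore("My Vector Store", embedding_function=embedding_function)

# Query the store directly; this returns the chunks most similar to the query
# (assuming documents have already been added to the store).
results = vs.search("What is the work from home policy?")
```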

You have two options for connecting your agent to RAG. Let's start with the recommended, most “agentic” method.

## 1. Vector Query as a Tool

The recommended approach is to give your agent a retrieval tool it can call whenever it needs more information. All you need to do is set up your vector store, wrap a search in a function node, and pass that node to the agent as a tool.

```python
import asyncio

import railtracks as rt
from railtracks.llm import OpenAILLM

# Placeholder: supply your own embedding function here.
def embedding_function(chunks: list[str]) -> list[list[float]]: ...

# 1) Build the vector store and wrap a search in a function node.
vs = rt.vector_stores.ChromaVectorStore("My Vector Store", embedding_function=embedding_function)

@rt.function_node
def vector_query(query: str):
    """Search the vector store for chunks relevant to the query."""
    return vs.search(query)


# 2) Create the agent and connect your vector query tool.
agent = rt.agent_node(
    llm=OpenAILLM("gpt-4o"),
    tool_nodes=[vector_query],
)

# 3) Run the agent inside a session.
@rt.session()
async def main():
    question = "What is the work from home policy?"

    response = await rt.call(
        agent,
        user_input=(
            "Question:\n"
            f"{question}\n"
            "Answer based only on the context provided. "
            "If the answer is not in the context, say \"I don't know\"."
        ),
    )

    return response


asyncio.run(main())
```

## 2. Using a Pre-Configured RAG Node

You can also use our pre-configured RAG node, which automatically retrieves context for the incoming question and places it in the system message. We are working diligently to expose more configuration options for this functionality.

```python
import asyncio

import railtracks as rt

# Placeholder: supply your own embedding function here
# (Railtracks will be providing embedding functions soon [see issue #_]).
def embedding_function(chunks: list[str]) -> list[list[float]]: ...

vs = rt.vector_stores.ChromaVectorStore("My Vector Store", embedding_function)

# RagConfig retrieves context from the vector store for each incoming question
# and places it in the system message; top_k controls how many chunks are used.
agent = rt.agent_node(
    "Simple RAG Agent",
    rag=rt.RagConfig(vector_store=vs, top_k=3),
    system_message="You are a helpful assistant.",
    llm=rt.llm.OpenAILLM("gpt-4o"),
)


question = "What does Steve like?"
response = asyncio.run(rt.call(agent, question))
```
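The key difference between the two approaches: the pre-configured RAG node retrieves context and injects it into the system message for every question, whereas the tool-based approach lets the agent decide when (and how often) to search the vector store.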

## Next Steps

- Check out the [RAG Reference Documentation](../tools_mcp/RAG.md) to learn how to build RAG applications in Railtracks.
- Explore the [Tools Documentation](../tools_mcp/tools/tools.md) for integrating any type of tool.