Supported Providers
We currently support connecting to a variety of LLMs through the following providers:
- OpenAI - GPT models
- Anthropic - Claude models
- Gemini - Google's Gemini models
- Azure AI Foundry - Azure-hosted models
- Ollama - Local and self-hosted models
- HuggingFace - HuggingFace Serverless Inference models
This allows you to use the same codebase to interact with different LLMs, making it easy to switch providers or use multiple providers in parallel; the underlying API differences are abstracted away for you.
Take a look at the examples below to see how different providers can be used to achieve the same task.
Quick Start Examples
Environment Variables Configuration
Make sure you set the appropriate environment variable key for your specific provider. By default, RailTracks uses the dotenv framework to load environment variables from a .env file.
- OpenAI: OPENAI_API_KEY
- Anthropic: ANTHROPIC_API_KEY
- Gemini: GEMINI_API_KEY
- HuggingFace: HF_TOKEN
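For example, a .env file in your project root might look like the following (the values are placeholders; include only the keys for the providers you actually use):

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=your-gemini-key
HF_TOKEN=hf_...
```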
Tool Calling Support
For HuggingFace serverless inference models, you need to make sure that the model you are using supports tool calling. We DO NOT check for tool calling support in HuggingFace models. If you are using a model that does not support tool calling, it will default to regular chat, even if the tool_nodes parameter is provided.
In the case of HuggingFace, model_name must be of the format:
huggingface/<provider>/<hf_org_or_user>/<hf_model>
Here are a few example models that you can use:
```python
import railtracks as rt

# Insert the model you want to use in your agent, e.g. a serverless model id such
# as "huggingface/together/deepseek-ai/DeepSeek-R1" (illustrative example).
GeneralAgent = rt.agent_node(
    llm=model,
    system_message="You are a helpful AI assistant.",
)
```
Tool Calling Capabilities
If you want to use tool calling capabilities by passing the tool_nodes parameter to agent_node, you can do so with any of the above providers. However, you need to ensure that the provider and the specific LLM model you are using support tool calling.
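As a minimal sketch (assuming model is an LLM from a tool-calling-capable provider and WeatherTool is a tool node you have defined elsewhere; both names are placeholders):

```python
import railtracks as rt

# `model` is assumed to be an LLM from a provider/model that supports tool calling,
# and `WeatherTool` a tool node defined elsewhere (both names are placeholders).
WeatherAgent = rt.agent_node(
    llm=model,
    system_message="You are a helpful assistant that can look up the weather.",
    tool_nodes=[WeatherTool],
)
```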
Writing Custom LLM Providers
We hope to cover most of the common and widely used LLM providers, but if you need to use a provider that is not currently supported, you can implement your own LLM provider by subclassing LLMProvider and implementing the required methods.
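As a rough sketch only (the import path and the method shown below are assumptions; the actual abstract methods you must implement are defined in the llm module):

```python
from railtracks.llm import LLMProvider  # import path assumed; see the llm module


class MyCustomProvider(LLMProvider):
    """Skeleton adapter for an unsupported LLM API (illustrative only)."""

    def chat(self, messages, **kwargs):  # hypothetical method name, not the real interface
        # Call your provider's API here and return the response in the format
        # RailTracks expects; consult the llm module for the required methods.
        raise NotImplementedError
```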
For our implementation, we have benefited from the amazing LiteLLM framework, which provides excellent multi-provider support.
Custom Provider Documentation
Please check out the llm module for more details on how to build an integration.