🤖 Multi-provider AI
A guide to the new RubyLLM library, which lets you connect to several LLM providers instead of just OpenAI
Overview
The `ruby_llm` gem is a powerful, multi-provider LLM client for Ruby that makes it simple to interact with models from OpenAI, Anthropic, Google (Gemini), and DeepSeek. It offers rich features like:
Real-time streaming responses
Multi-modal inputs (images, audio, PDFs)
Easy-to-define tools
Native Rails integration with `acts_as_chat`
This guide walks you through adding the `ruby_llm` gem to a Lightning Rails project, including configuration, usage, and upgrading any existing service classes.
Step 1: Install the `ruby_llm` Gem
Add the new gem to your `Gemfile`:
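For example (pin a version if you want reproducible builds):

```ruby
# Gemfile
gem "ruby_llm"
```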
Then install it:
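```shell
bundle install
```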
Alternatively, install it directly:
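`bundle add` edits the `Gemfile` and installs in one step:

```shell
bundle add ruby_llm
```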
Step 2: Set Up Your API Keys
In your `.env` file (Lightning Rails uses `dotenv-rails` by default), add:
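A sketch of the relevant entries (placeholder values; keep only the providers you actually use):

```shell
# .env
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
GEMINI_API_KEY=your-gemini-key
DEEPSEEK_API_KEY=your-deepseek-key
```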
Then create a RubyLLM config initializer:
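A minimal sketch based on the gem's `RubyLLM.configure` block; set only the keys for providers you plan to call:

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key    = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
  config.gemini_api_key    = ENV["GEMINI_API_KEY"]
  config.deepseek_api_key  = ENV["DEEPSEEK_API_KEY"]
end
```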
Step 3: Create a new ruby_llm service
Create `app/services/multi_provider_models.rb` (Rails autoloading expects snake_case file names):
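One possible shape for the service (the class name and default model are illustrative):

```ruby
# app/services/multi_provider_models.rb
class MultiProviderModels
  # Swap for any model your configured providers support.
  DEFAULT_MODEL = "gpt-4o-mini"

  def initialize(model: DEFAULT_MODEL)
    @chat = RubyLLM.chat(model: model)
  end

  # Sends a prompt and returns the assistant's reply as a String.
  def ask(prompt)
    @chat.ask(prompt).content
  end
end
```

Then `MultiProviderModels.new.ask("Summarize this product")` works the same way regardless of which provider backs the chosen model.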
Step 4: Enable Streaming (Optional)
If you are building a chatbot and don't want users to reload the page for every message, you can stream responses in real time by modifying the `ask` method:
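A sketch of a streamed variant, assuming the block form of `chat.ask` yields chunks with a `content` reader (as in the gem's streaming docs):

```ruby
# Streams the reply chunk by chunk; also returns the full text.
def ask(prompt)
  full_response = +""
  @chat.ask(prompt) do |chunk|
    full_response << chunk.content.to_s
    # e.g. print to console or push each piece to the UI
    yield chunk.content if block_given?
  end
  full_response
end
```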
Or, in a controller with Turbo Streams:
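A hypothetical controller action (all names here are illustrative) that appends each chunk to the page via `Turbo::StreamsChannel`:

```ruby
# app/controllers/ai_messages_controller.rb (illustrative)
class AiMessagesController < ApplicationController
  def create
    chat = RubyLLM.chat(model: "gpt-4o-mini")

    chat.ask(params[:prompt]) do |chunk|
      # Broadcast each chunk to subscribers of this chat's stream.
      Turbo::StreamsChannel.broadcast_append_to(
        "chat_#{params[:chat_id]}",
        target: "messages",
        html: chunk.content
      )
    end

    head :ok
  end
end
```

In production you would usually move the LLM call into a background job so the request isn't held open for the whole response.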
Bonus: Enable Native Rails Models (Optional)
If you'd like to persist (save/update) chats and messages in the database, use the `acts_as_chat` setup:
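A sketch of the models, assuming the `acts_as_chat` / `acts_as_message` / `acts_as_tool_call` mixins described in the gem's Rails docs (you'll also need the matching database tables):

```ruby
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end
```

With those in place, something like `Chat.create!(model_id: "gpt-4o-mini").ask("Hello!")` persists both sides of the conversation (verify the attribute names against the current gem docs).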
Best Practices
Prefer `RubyLLM.chat(model: "gpt-4o-mini")` for model-specific tasks.
Use streaming when showing responses live in the UI.
Leverage multi-modal inputs (images, PDFs, audio) for advanced functionality.
Use `RubyLLM::Tool` to define reusable actions the LLM can call dynamically.
Always sanitize and validate user-submitted input before using it in prompts.
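As a sketch of the tool API (method names follow the gem's documented `RubyLLM::Tool` DSL; the weather logic is a stub):

```ruby
class Weather < RubyLLM::Tool
  description "Looks up the current temperature for a city"
  param :city, desc: "City name, e.g. 'Paris'"

  def execute(city:)
    # Stubbed response; in a real tool, call a weather API here.
    { city: city, temperature_c: 21 }
  end
end

chat = RubyLLM.chat
chat.with_tool(Weather).ask("What's the weather in Paris?")
```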
Troubleshooting
❌ Getting “Missing API key” errors?
Make sure your `.env` file includes `OPENAI_API_KEY` (or the key for whichever provider you chose) and that it is loaded via `dotenv`.
❌ Getting `undefined method acts_as_chat`?
Ensure you've added `acts_as_chat` to the proper ActiveRecord models, and that the `ruby_llm` gem is loaded correctly. Also, don't forget to restart your server after installing the gem.
❌ No response from the `ask` method?
Print the result with `puts` or log it, to confirm the call actually returns something. Also, try a basic prompt like:
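For example, from a Rails console:

```ruby
chat = RubyLLM.chat
response = chat.ask("Reply with the single word: pong")
puts response.content
```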
Real-World Use Cases
Product analysis and market research
Image captioning or PDF summarization
Building internal tools with custom RubyLLM::Tool classes
Streaming AI-powered responses in chat UIs or dashboards
Read more in the official documentation 🔥