Multi-provider AI
A guide to the new RubyLLM library, which lets you connect to several LLM providers instead of just OpenAI
Overview
The ruby_llm gem is a powerful, multi-provider LLM client for Ruby that makes it simple to interact with models from OpenAI, Anthropic, Google (Gemini), and DeepSeek. It offers rich features like:
Real-time streaming responses
Multi-modal inputs (images, audio, PDFs)
Easy-to-define tools
Native Rails integration with acts_as_chat
This guide walks you through adding the ruby_llm gem to a Lightning Rails project, including configuration, usage, and upgrading any existing service classes.
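The multi-provider value shows up when you pick a model per task. As a minimal sketch (the task names, helper, and model strings below are illustrative assumptions, not part of the gem; check each provider's current model list):

```ruby
# Hypothetical task-to-model map; model names are examples only.
MODELS = {
  analysis:  "gpt-4o-mini",
  reasoning: "claude-3-5-sonnet",
  default:   "gpt-4o-mini"
}.freeze

def model_for(task)
  MODELS.fetch(task, MODELS[:default])
end

# Later you could pass this to the client: RubyLLM.chat(model: model_for(:analysis))
puts model_for(:analysis)  # => gpt-4o-mini
puts model_for(:unknown)   # => gpt-4o-mini
```

Swapping providers then becomes a one-line change in the map rather than a rewrite of your service code.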
Step 1: Install the ruby_llm Gem
Add the new gem to your Gemfile:
gem "ruby_llm"
Then install it:
bundle install
Alternatively, install it directly:
gem install ruby_llm
Step 2: Set Up Your API Keys
In your .env file (Lightning Rails uses dotenv-rails by default), add:
OPENAI_API_KEY=sk-...
# Add others as needed
# ANTHROPIC_API_KEY=...
# GEMINI_API_KEY=...
# DEEPSEEK_API_KEY=...
Then create a RubyLLM config initializer:
touch config/initializers/ruby_llm.rb
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
end
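As an optional safeguard (not part of the gem), you can fail fast at boot when a required key is blank. A sketch using a hypothetical missing_keys helper:

```ruby
# Hypothetical helper: report required keys that are unset or blank.
def missing_keys(env, required)
  required.select { |k| env[k].to_s.strip.empty? }
end

# In the initializer you might raise if
# missing_keys(ENV, %w[OPENAI_API_KEY]).any?
puts missing_keys({ "OPENAI_API_KEY" => "sk-test" }, %w[OPENAI_API_KEY]).inspect  # => []
puts missing_keys({}, %w[OPENAI_API_KEY]).inspect  # => ["OPENAI_API_KEY"]
```

A check like this turns a confusing mid-request failure into a clear error at startup.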
Step 3: Create a new ruby_llm service
Create app/services/multi_provider_models.rb (Rails autoloading expects snake_case file names):
# app/services/multi_provider_models.rb
class MultiProviderModels
  def initialize
    @chat = RubyLLM.chat(model: "gpt-4o-mini") # pass the model you want here :)
  end

  def analyze_product(product)
    # Add a system prompt to give the model context for better results.
    @chat.add_message role: :system, content: "You are a Product Hunt expert. Always include examples in your responses and explain them line by line."

    prompt = <<~PROMPT
      You are a product analyst expert specialized in evaluating digital products and services.
      You have deep knowledge of market trends, user experience, and business models.
      Your analysis should be structured, data-driven, and actionable.
      Here is a product to analyze: #{product}
      Please follow this process:
      1. Identify the key features and unique selling points
      2. Evaluate the market potential and target audience
      3. Analyze pricing strategy and business model
      4. Assess technical implementation and scalability
      5. Provide specific recommendations for improvement
      Format your response in clear sections with bullet points where appropriate.
      Be concise but thorough in your analysis.
    PROMPT

    @chat.ask(prompt)
  end
end
Step 4: Enable Streaming (Optional)
If you're building a chatbot and don't want users to reload the page for every message, you can stream responses in real time by passing a block to the ask method:
@chat.ask(prompt) do |chunk|
  print chunk.content
end
Or, in a controller with Turbo Streams:
@chat.ask(prompt) do |chunk|
  Turbo::StreamsChannel.broadcast_append_to(
    "ai_response",
    target: "response",
    partial: "messages/chunk",
    locals: { chunk: chunk }
  )
end
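In both streaming examples the block receives chunk objects whose content holds the next slice of text. A pure-Ruby sketch of accumulating chunks into the full response (Chunk here is a stand-in Struct for illustration, not the gem's class):

```ruby
# Stand-in for the chunk objects yielded during streaming.
Chunk = Struct.new(:content)

# Join the streamed slices back into the complete response text.
def accumulate(chunks)
  chunks.map(&:content).join
end

pieces = [Chunk.new("Hello"), Chunk.new(", "), Chunk.new("world!")]
puts accumulate(pieces)  # => Hello, world!
```

This is handy when you both stream to the UI and want the full text afterwards, e.g. to save the finished message.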
Bonus: Enable Native Rails Models (Optional)
If you'd like to persist (save/update) chats and messages in the database, use the acts_as_chat setup:
# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat
  broadcasts_to ->(chat) { "chat_#{chat.id}" }
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end
Best Practices
Prefer RubyLLM.chat(model: "gpt-4o-mini") for model-specific tasks.
Use streaming when showing responses live in the UI.
Leverage multi-modal inputs (images, PDFs, audio) for advanced functionality.
Use RubyLLM::Tool to define reusable actions the LLM can call dynamically.
Always sanitize and validate user-submitted input before using it in prompts.
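On the last point, here is a minimal sanitization sketch; the rules and length cap are illustrative assumptions you should tune for your app:

```ruby
# Replace control characters, collapse repeated spaces, and cap length
# before interpolating user input into a prompt.
def sanitize_for_prompt(text, max_length: 2_000)
  text.to_s
      .gsub(/[[:cntrl:]]/, " ")  # newlines, NULs, etc. become spaces
      .squeeze(" ")              # collapse runs of spaces
      .strip[0, max_length]      # trim and cap
end

puts sanitize_for_prompt("Great app!\x00\n\nCheck it out")  # => Great app! Check it out
```

This keeps hostile or malformed input from injecting structure into your heredoc prompts, though it is no substitute for validating the input at the model/controller layer.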
Troubleshooting
Getting "Missing API key" errors?
Make sure your .env file includes OPENAI_API_KEY (or the key for your chosen provider) and that it is loaded via dotenv.
Getting undefined method acts_as_chat?
Ensure you've added the proper ActiveRecord models with acts_as_chat, and that the ruby_llm gem is loaded correctly. Also, don't forget to restart your server after installing the gem.
No response from the ask method?
Wrap your call in a puts or log the output. Also, try a basic prompt like:
RubyLLM.chat.ask("Hello, how are you?")
Real-World Use Cases
Product analysis and market research
Image captioning or PDF summarization
Building internal tools with custom RubyLLM::Tool classes
Streaming AI-powered responses in chat UIs or dashboards
Read more in the official documentation.