

Multi-provider AI

A guide to the Ruby LLM library, which lets you connect to several LLM providers instead of just OpenAI.


Last updated 2 months ago


Overview

The ruby_llm gem is a powerful, multi-provider LLM client for Ruby that makes it simple to interact with models from OpenAI, Anthropic, Gemini (Google), and DeepSeek. It offers rich features like:

  • Real-time streaming responses

  • Multi-modal inputs (images, audio, PDFs)

  • Easy-to-define tools

  • Native Rails integration with acts_as_chat

This guide walks you through adding the ruby_llm gem to a Lightning Rails project, including configuration, usage, and upgrading any existing service classes.


Step 1: Install the ruby_llm Gem

Add the gem to your Gemfile:

gem "ruby_llm"

Then install it:

bundle install

Alternatively, install it directly:

gem install ruby_llm

Step 2: Set Up Your API Keys

In your .env file (Lightning Rails uses dotenv-rails by default), add:

OPENAI_API_KEY=sk-...
# Add others as needed
# ANTHROPIC_API_KEY=...
# GEMINI_API_KEY=...
# DEEPSEEK_API_KEY=...

Then create a RubyLLM config initializer:

touch config/initializers/ruby_llm.rb

# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
end
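The initializer sets every key unconditionally, which is fine because providers you never call are simply unused. If you'd rather know at boot which providers are actually configured, here is a minimal sketch (the helper name `configured_providers` is hypothetical, using the same ENV names as above):

```ruby
# Hypothetical helper: list which provider keys are actually set, so you
# can log or raise at boot instead of failing on the first API call.
PROVIDER_ENV_KEYS = %w[
  OPENAI_API_KEY ANTHROPIC_API_KEY GEMINI_API_KEY DEEPSEEK_API_KEY
].freeze

def configured_providers(env = ENV)
  PROVIDER_ENV_KEYS.select { |key| env[key].to_s.strip != "" }
end

# e.g. at the bottom of the initializer:
# Rails.logger.info "LLM providers configured: #{configured_providers.join(', ')}"
```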

Step 3: Create a new ruby_llm service

Create app/services/multi_provider_models.rb (Rails autoloading expects snake_case filenames):

# app/services/multi_provider_models.rb
class MultiProviderModels
  def initialize
    @chat = RubyLLM.chat(model: "gpt-4o-mini") # pass the parameters here :)
  end

  def analyze_product(product)
    # Add a system prompt to give it context for better results.
    @chat.add_message role: :system, content: "You are a Product Hunt expert. Always include examples in your responses and explain them line by line."

    prompt = <<~PROMPT
      You are a product analyst expert specialized in evaluating digital products and services. 
      You have deep knowledge of market trends, user experience, and business models. 
      Your analysis should be structured, data-driven, and actionable.

      Here is a product to analyze: #{product}

      Please follow this process:
      1. Identify the key features and unique selling points
      2. Evaluate the market potential and target audience
      3. Analyze pricing strategy and business model
      4. Assess technical implementation and scalability
      5. Provide specific recommendations for improvement

      Format your response in clear sections with bullet points where appropriate. 
      Be concise but thorough in your analysis.
    PROMPT

    @chat.ask(prompt)
  end
end
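Because the prompt is plain string interpolation, you can extract it into a method with no API dependency and unit-test it in isolation. A minimal sketch (the `build_analysis_prompt` method name is hypothetical):

```ruby
# Hypothetical extraction: build the analysis prompt without calling any API,
# so it can be verified in a plain unit test.
def build_analysis_prompt(product)
  <<~PROMPT
    You are a product analyst expert specialized in evaluating digital
    products and services. Your analysis should be structured, data-driven,
    and actionable.

    Here is a product to analyze: #{product}

    Format your response in clear sections with bullet points where
    appropriate. Be concise but thorough in your analysis.
  PROMPT
end
```

In the service above, analyze_product would then call @chat.ask(build_analysis_prompt(product)).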

Step 4: Enable Streaming (Optional)

If you're building a chatbot and don't want users to reload the page for every message, you can stream responses in real time by passing a block to the ask method:

@chat.ask(prompt) do |chunk|
  print chunk.content
end

Or, in a controller with Turbo Streams:

@chat.ask(prompt) do |chunk|
  Turbo::StreamsChannel.broadcast_append_to(
    "ai_response",
    target: "response",
    partial: "messages/chunk",
    locals: { chunk: chunk }
  )
end

Bonus: Enable Native Rails Models (Optional)

If you'd like to persist chats and messages in the database (saving and updating them automatically), use the acts_as_chat setup:

# app/models/chat.rb
class Chat < ApplicationRecord
  acts_as_chat
  broadcasts_to ->(chat) { "chat_#{chat.id}" }
end

# app/models/message.rb
class Message < ApplicationRecord
  acts_as_message
end

# app/models/tool_call.rb
class ToolCall < ApplicationRecord
  acts_as_tool_call
end

Best Practices

  • Prefer RubyLLM.chat(model: "gpt-4o-mini") for model-specific tasks.

  • Use streaming when showing responses live in the UI.

  • Leverage multi-modal inputs (images, PDFs, audio) for advanced functionality.

  • Use RubyLLM::Tool to define reusable actions the LLM can call dynamically.

  • Always sanitize and validate input before using it in prompts if user-submitted.
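For that last point, here is a minimal sanitizer sketch (the name and limits are illustrative, not part of the gem): strip control characters, collapse runs of spaces, and cap length before interpolating user input into a prompt:

```ruby
# Hypothetical input guard: remove control characters, collapse extra spaces,
# and truncate, so user-submitted text can't inject odd bytes or inflate
# token counts before it reaches the prompt.
def sanitize_prompt_input(text, max_length: 2_000)
  text.to_s.gsub(/[[:cntrl:]]/, " ").squeeze(" ").strip[0, max_length]
end
```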


Troubleshooting

โŒ Getting โ€œMissing API keyโ€ errors?

Make sure your .env file includes OPENAI_API_KEY (or the LLM of your choice) and it is loaded via dotenv.

โŒ Getting undefined method acts_as_chat?

Ensure youโ€™ve added the proper ActiveRecord models with acts_as_chat, and that the ruby_llm gem is loaded correctly. Also, don't forget to restart your server after installing the gem.

โŒ No response from ask method?

Wrap your call in a puts or log output. Also, try a basic prompt like:

RubyLLM.chat.ask("Hello, how are you?")

Real-World Use Cases

  • Product analysis and market research

  • Image captioning or PDF summarization

  • Building internal tools with custom RubyLLM::Tool classes

  • Streaming AI-powered responses in chat UIs or dashboards



Read more in the official documentation. 🔥