Build AI-powered Ruby applications with RubyLLM. Covers the full lifecycle: chat, tools, streaming, Rails integration, embeddings, and production deployment, across all providers (OpenAI, Anthropic, Gemini, etc.) with one unified API.
```sh
npx skill4agent add faqndo97/ai-skills ruby-llm
```

```ruby
# Chat with any provider - same interface
chat = RubyLLM.chat(model: 'gpt-4.1')
chat = RubyLLM.chat(model: 'claude-sonnet-4-5')
chat = RubyLLM.chat(model: 'gemini-2.0-flash')

# All return the same RubyLLM::Message object
response = chat.ask("Hello!")
puts response.content
```

Configure provider credentials in `config/initializers/ruby_llm.rb`:
```ruby
RubyLLM.configure do |config|
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.gemini_api_key = ENV['GEMINI_API_KEY']
  config.request_timeout = 120
  config.max_retries = 3
end
```

A tool subclasses `RubyLLM::Tool`, declares a `description` and `param`s, and implements `execute`:

```ruby
class Weather < RubyLLM::Tool
  description "Get current weather for a location"
  param :latitude, type: 'number', desc: "Latitude"
  param :longitude, type: 'number', desc: "Longitude"

  def execute(latitude:, longitude:)
    # Return structured data, not exceptions
    { temperature: 22, conditions: "Sunny" }
  rescue => e
    { error: e.message } # Let the LLM handle errors gracefully
  end
end
```
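Because `execute` returns a hash for both success and failure, the error contract can be exercised as plain Ruby, independent of any provider. A minimal sketch of the same pattern without the gem (the `safe_divide` method is ours, purely illustrative):

```ruby
# Illustrates the tools' error contract in plain Ruby: return a
# data hash on success and an { error: ... } hash on failure,
# instead of letting exceptions escape to the caller.
def safe_divide(a:, b:)
  { result: a / b }
rescue ZeroDivisionError => e
  { error: e.message }
end

safe_divide(a: 10, b: 2) # => { result: 5 }
safe_divide(a: 1, b: 0)  # => { error: "divided by 0" }
```

Returning the error as data lets the model read what went wrong and retry or explain, rather than aborting the whole chat turn.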
Attach the tool to a chat and the model calls it when needed:

```ruby
chat.with_tool(Weather).ask("What's the weather in Berlin?")
```

In Rails, `acts_as_chat` makes conversations persistent:

```ruby
class Chat < ApplicationRecord
  acts_as_chat
end

chat = Chat.create!(model: 'gpt-4.1')
chat.ask("Hello!") # Automatically persists messages
```

Pass a block to stream the response in real time:

```ruby
chat.ask("Tell me a story") do |chunk|
  print chunk.content # Print as it arrives
end
```
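Streaming yields chunks in order, so assembling the full reply is simple accumulation. A sketch with stand-in chunk objects in place of the live API (the `Chunk` struct and `collect_stream` helper are ours):

```ruby
# Stand-in for the streamed chunk objects, which expose #content.
Chunk = Struct.new(:content)

# Append each chunk's content to a buffer as it arrives.
def collect_stream(chunks)
  buffer = +""
  chunks.each { |chunk| buffer << chunk.content }
  buffer
end

chunks = [Chunk.new("Once "), Chunk.new("upon "), Chunk.new("a time")]
collect_stream(chunks) # => "Once upon a time"
```

In the real block form you would append `chunk.content` to your own buffer the same way, while also printing or broadcasting it.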
Verify the integration: does it load, do the tests pass, and is the output clean?

```sh
# 1. Does it load?
bin/rails console -e test
> RubyLLM.chat.ask("Test")

# 2. Do tests pass?
bin/rails test test/models/chat_test.rb

# 3. Check for errors
bin/rails test 2>&1 | grep -E "(Error|Fail|exception)"
```

Workflow guides live under `references/workflows/`:

| File | Purpose |
|---|---|
| build-new-feature.md | Create new AI feature from scratch |
| add-rails-chat.md | Add persistent chat to Rails app |
| implement-tools.md | Create custom tools/function calling |
| add-streaming.md | Add real-time streaming responses |
| debug-llm.md | Find and fix LLM issues |
| optimize-performance.md | Production optimization |
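The overview mentions embeddings but no example appears above. A common use is similarity scoring; the sketch below assumes `RubyLLM.embed` returns an object whose `vectors` method yields a numeric array, and the cosine helper itself is ours:

```ruby
# Cosine similarity between two equal-length embedding vectors.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  norm = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (norm.call(a) * norm.call(b))
end

# Hypothetical usage (needs a network call, so shown commented out):
# query = RubyLLM.embed("red fruit")
# doc   = RubyLLM.embed("apples are red")
# score = cosine_similarity(query.vectors, doc.vectors)

cosine_similarity([1.0, 0.0], [1.0, 0.0]) # => 1.0
cosine_similarity([1.0, 0.0], [0.0, 1.0]) # => 0.0
```

Scores near 1.0 mean the two texts embed in similar directions; near 0.0 means they are unrelated.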