Model Failover Skill

Automatically switch between LLM providers when one fails. Supports configurable fallback chains, rate limiting, and health monitoring. Inspired by OpenClaw's model failover system.

Setup

Configure your provider chain in environment variables:

```bash
# Comma-separated list of providers (in fallback order)
export LLM_PROVIDER_CHAIN="anthropic:claude-3-5-sonnet-20241022,openai:gpt-4o-mini,google:gemini-1.5-flash"

# API keys for each provider
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."
```
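Assuming a Node.js context, a chain string like the one above could be split into provider/model pairs roughly like this (`parseChain` is an illustrative helper, not part of the skill's API):

```javascript
// Illustrative sketch: split a provider chain string into
// { provider, model } pairs, in fallback order.
function parseChain(chain) {
  return chain.split(",").map((entry) => {
    const [provider, ...rest] = entry.trim().split(":");
    return { provider, model: rest.join(":") };
  });
}

const chain = parseChain(
  "anthropic:claude-3-5-sonnet-20241022,openai:gpt-4o-mini,google:gemini-1.5-flash"
);
console.log(chain[0].provider); // the first provider tried on each request
```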

Usage

Chat with automatic failover:

```bash
{baseDir}/model-failover.js chat "Your message here"
```

Add a new provider to the chain:

```bash
{baseDir}/model-failover.js add-provider anthropic claude-3-5-sonnet-20241022
```

Remove a provider from the chain:

```bash
{baseDir}/model-failover.js remove-provider openai
```

List providers in the chain:

```bash
{baseDir}/model-failover.js list
```

Check provider health:

```bash
{baseDir}/model-failover.js health
```

Reset failure counts:

```bash
{baseDir}/model-failover.js reset
```

Configuration

| Environment Variable | Description | Default |
| --- | --- | --- |
| `LLM_PROVIDER_CHAIN` | Comma-separated `provider:model` pairs | `anthropic:claude-3-5-sonnet-20241022` |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `GOOGLE_API_KEY` | Google API key | - |
| `CUSTOM_API_KEY` | Custom provider API key | - |
| `MAX_RETRIES` | Max retries per provider | 2 |
| `RETRY_DELAY_MS` | Delay between retries (ms) | 1000 |
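A sketch of how the retry settings might be read with the defaults from the table (assuming Node.js; `envInt` is an illustrative helper, not part of the skill):

```javascript
// Illustrative helper: read an integer setting from the environment,
// falling back to the table's default when the variable is unset.
function envInt(name, fallback) {
  const raw = process.env[name];
  return raw === undefined ? fallback : parseInt(raw, 10);
}

const maxRetries = envInt("MAX_RETRIES", 2);         // default: 2
const retryDelayMs = envInt("RETRY_DELAY_MS", 1000); // default: 1000 ms
```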

Provider Format

Each entry in the chain uses the `provider:model` format.

Supported providers:
  • anthropic - Anthropic Claude models
  • openai - OpenAI GPT models
  • google - Google Gemini models
  • custom - Custom OpenAI-compatible endpoint (set OPENAI_BASE_URL)
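A chain entry can be checked against the list above before use; a minimal sketch (`isValidEntry` is illustrative, not part of the skill):

```javascript
const SUPPORTED_PROVIDERS = new Set(["anthropic", "openai", "google", "custom"]);

// Illustrative check: an entry must be "provider:model" with a
// non-empty model name and a supported provider name.
function isValidEntry(entry) {
  const i = entry.indexOf(":");
  if (i <= 0 || i === entry.length - 1) return false;
  return SUPPORTED_PROVIDERS.has(entry.slice(0, i));
}
```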

How It Works

  1. Try the first provider in the chain
  2. If it fails (rate limit, error, timeout), wait and retry
  3. If retries exhausted, move to next provider
  4. Continue until success or all providers exhausted
  5. Track failures per provider for health monitoring
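The steps above can be sketched as a loop (illustrative; the `call` argument stands in for a real provider API request and is not part of the skill's API):

```javascript
// Try each provider in order, retrying with a delay before failing over.
// `call(provider, model, message)` is supplied by the caller; here it
// stands in for a real API request.
async function chatWithFailover(chain, message, call, opts = {}) {
  const { maxRetries = 2, retryDelayMs = 1000 } = opts;
  const failures = {}; // per-provider failure counts (step 5)
  for (const { provider, model } of chain) {
    for (let attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        return await call(provider, model, message); // success ends the loop
      } catch (err) {
        failures[provider] = (failures[provider] ?? 0) + 1;
        if (attempt < maxRetries) {
          await new Promise((r) => setTimeout(r, retryDelayMs)); // step 2
        }
      }
    }
    // Retries exhausted for this provider; fall over to the next one (step 3).
  }
  throw new Error(`All providers failed: ${JSON.stringify(failures)}`);
}
```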