llm-cli


LLM CLI Skill

Purpose

This skill enables seamless interaction with multiple LLM providers (OpenAI, Anthropic, Google Gemini, Ollama) through the
llm
CLI tool. It processes textual and multimedia information with support for both one-off executions and interactive conversation modes.

When to Use This Skill

Trigger this skill when:
  • User wants to process text/files with an LLM
  • User needs to choose between multiple available LLMs
  • User wants interactive conversation with an LLM
  • User needs to pipe content through an LLM for processing
  • User wants to use specific model aliases (e.g., "claude-opus", "gpt-4o")
Example user requests:
  • "Process this file with Claude"
  • "Analyze this text with the fastest available model"
  • "Start an interactive chat with OpenAI"
  • "Use Gemini to summarize this document"
  • "Chat mode with my local Ollama instance"

Supported Providers & Models

OpenAI

  • Latest Models (2025):
    • gpt-5 - Most advanced model
    • gpt-4-1 / gpt-4.1 - Latest high-performance
    • gpt-4-1-mini / gpt-4.1-mini - Smaller, faster version
    • gpt-4o - Multimodal omni model
    • gpt-4o-mini - Lightweight multimodal
    • o3 - Advanced reasoning
    • o3-mini / o3-mini-high - Reasoning variants
Aliases: openai, gpt

Anthropic

  • Latest Models (2025):
    • claude-sonnet-4.5 - Latest flagship model
    • claude-opus-4.1 - Complex task specialist
    • claude-opus-4 - Coding specialist
    • claude-sonnet-4 - Balanced performance
    • claude-3.5-sonnet - Previous generation
    • claude-3.5-haiku - Fast & efficient
Aliases: anthropic, claude

Google Gemini

  • Latest Models (2025):
    • gemini-2.5-pro - Most advanced
    • gemini-2.5-flash - Default fast model
    • gemini-2.5-flash-lite - Speed optimized
    • gemini-2.0-flash - Previous generation
    • gemini-2.5-computer-use - UI interaction
Aliases: google, gemini

Ollama (Local)

  • Popular Models:
    • llama3.1 - Meta's latest (8b, 70b, 405b)
    • llama3.2 - Compact versions (1b, 3b)
    • mistral-large-2 - Mistral flagship
    • deepseek-coder - Code specialist
    • starcoder2 - Code models
Aliases: ollama, local

Workflow Overview

1. User Input (with optional model)
2. Check Available Providers (env vars)
3. Determine Model to Use:
   - If specified: Use provided model
   - If ambiguous: Show selection menu
   - Otherwise: Use last remembered choice
4. Load/Create Config (~/.claude/llm-skill-config.json)
5. Detect Input Type:
   - stdin/piped
   - file path
   - inline text
6. Execute llm CLI:
   - Non-interactive: Process & return
   - Interactive: Keep conversation loop
7. Save Model Choice to Config

Features

1. Provider Detection

  • Checks environment variables for API keys
  • Suggests available LLM providers on first run
  • Detects: OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, OLLAMA_BASE_URL
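
A minimal sketch of this detection, using the environment variables listed above (the function shape is an assumption, not the actual code in providers.py):

```python
import os

# Environment variable that signals each provider is configured.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "ollama": "OLLAMA_BASE_URL",
}

def detect_providers(env=None):
    """Return {provider: credential} for every variable that is set and non-empty."""
    env = os.environ if env is None else env
    return {name: env[var] for name, var in PROVIDER_ENV_VARS.items() if env.get(var)}
```

Passing `env` explicitly keeps the function testable without mutating the real environment.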

2. Model Selection

  • Accepts model aliases (gpt-4o, claude-opus, gemini-2.5-pro)
  • Accepts provider aliases (openai, anthropic, google, ollama)
  • Interactive menu when selection is ambiguous
  • Remembers the last used model in ~/.claude/llm-skill-config.json
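
Alias resolution could look like the table below. The aliases repeat the provider sections above; which model each provider alias defaults to is an illustrative assumption:

```python
# Provider aliases map to a default model (defaults here are illustrative);
# anything not in the table is passed through as a literal model name.
PROVIDER_DEFAULTS = {
    "openai": "gpt-4o", "gpt": "gpt-4o",
    "anthropic": "claude-sonnet-4.5", "claude": "claude-sonnet-4.5",
    "google": "gemini-2.5-flash", "gemini": "gemini-2.5-flash",
    "ollama": "llama3.1", "local": "llama3.1",
}

def resolve_model(alias):
    """Resolve a provider alias to its default model, or pass model names through."""
    return PROVIDER_DEFAULTS.get(alias.lower(), alias)
```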

3. Input Processing

  • Accepts stdin/piped input
  • Processes file paths (detects: .txt, .md, .json, .pdf, images)
  • Handles inline text prompts
  • Supports multimedia files with appropriate encoding

4. Execution Modes

Non-Interactive (Default)

```bash
llm "Your prompt here"
llm --model gpt-4o "Process this text"
llm < file.txt
cat document.md | llm "Summarize"
```

Interactive Mode

```bash
llm chat
llm chat --model claude-opus
```

5. Configuration

Persistent config location: ~/.claude/llm-skill-config.json

```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "available_providers": ["openai", "anthropic", "google", "ollama"]
}
```

Implementation Details

Core Files

  • llm_skill.py - Main skill orchestration
  • providers.py - Provider detection & config
  • models.py - Model definitions & aliases
  • executor.py - Execution logic (interactive/non-interactive)
  • input_handler.py - Input type detection

Key Functions

detect_providers()

  • Scans environment for provider API keys
  • Returns dict of available providers

get_model_selector(input_text, provider=None)

  • Returns the selected model, showing a menu if needed
  • Respects the last_model config preference

load_input(input_source)

  • Handles stdin, file paths, or inline text
  • Returns content string

execute_llm(content, model, interactive=False)

  • Calls the llm CLI with appropriate parameters
  • Manages stdin/stdout for interactive mode
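
As a sketch, the executor could assemble the CLI invocation separately from running it, which keeps the argument logic testable. The helper split is an assumption; the flag spellings follow the Execution Modes examples:

```python
import subprocess

def build_llm_command(prompt=None, model=None, interactive=False):
    """Assemble the llm CLI argument list for one invocation."""
    cmd = ["llm", "chat"] if interactive else ["llm"]
    if model:
        cmd += ["--model", model]
    if prompt and not interactive:
        cmd.append(prompt)
    return cmd

def execute_llm(content, model, interactive=False):
    cmd = build_llm_command(content, model, interactive)
    if interactive:
        # Inherit stdin/stdout so the user talks to the model directly.
        return subprocess.run(cmd).returncode
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```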

Usage in Claude Code

When the user invokes this skill, Claude should:
  1. Parse the input for a model specification (e.g., --model gpt-4o)
  2. Call the skill with content and the optional model parameter
  3. Wait for provider/model selection if needed
  4. Execute and return results
  5. For interactive mode, maintain the conversation loop
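
Step 1 can be done with a small tokenizer pass. This sketch assumes shlex-style splitting of the request text; the function name is hypothetical:

```python
import shlex

def split_model_flag(request):
    """Pull a `--model NAME` pair out of a user request.

    Returns (model_or_None, remaining_text)."""
    tokens = shlex.split(request)
    if "--model" in tokens:
        i = tokens.index("--model")
        if i + 1 < len(tokens):
            rest = tokens[:i] + tokens[i + 2:]
            return tokens[i + 1], " ".join(rest)
    return None, request
```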

Error Handling

  • If no providers are available: suggest configuring API keys
  • If the model is not found: show available models for the chosen provider
  • If the llm CLI is not installed: suggest installation via pip install llm
  • If a file is not readable: fall back to treating it as inline text
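
The "llm CLI not installed" case reduces to a PATH lookup; a minimal sketch (the function name and message wording are assumptions):

```python
import shutil

def missing_cli_hint(name="llm", install_cmd="pip install llm"):
    """Return an install suggestion if `name` is not on PATH, else None."""
    if shutil.which(name) is None:
        return f"`{name}` not found; install it with `{install_cmd}`."
    return None
```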

Configuration

Users can pre-configure preferences:

```json
{
  "last_model": "claude-sonnet-4.5",
  "default_provider": "anthropic",
  "interactive_mode": false,
  "available_providers": ["openai", "anthropic"]
}
```

Slash Command Integration

Support the /llm command:

```
/llm process this text
/llm --interactive
/llm --model gpt-4o analyze this
```