Process textual and multimedia files with various LLM providers using the llm CLI. Supports both non-interactive and interactive modes with model selection, config persistence, and file input handling.
Installation:

npx skill4agent add glebis/claude-skills llm-cli

Supported providers and models: OpenAI (GPT-5, GPT-4.1, GPT-4o, o3), Anthropic (Claude Sonnet/Opus 4.x, Claude 3.5 Sonnet/Haiku), Google (Gemini 2.x), and local models via Ollama (Llama 3.1/3.2, Mistral Large 2, DeepSeek Coder, StarCoder2).

Workflow:

User Input (with optional model)
↓
Check Available Providers (env vars)
↓
Determine Model to Use:
- If specified: Use provided model
- If ambiguous: Show selection menu
- Otherwise: Use last remembered choice
↓
Load/Create Config (~/.claude/llm-skill-config.json)
↓
Detect Input Type:
- stdin/piped
- file path
- inline text
↓
Execute llm CLI:
- Non-interactive: Process & return
- Interactive: Keep conversation loop
↓
Save Model Choice to Config

Provider Detection:

Providers are detected from environment variables:
- OPENAI_API_KEY → openai (default model: gpt-4o)
- ANTHROPIC_API_KEY → anthropic (default model: claude-opus)
- GOOGLE_API_KEY → google (default model: gemini-2.5-pro)
- OLLAMA_BASE_URL → ollama (local models)

Model choices are persisted in ~/.claude/llm-skill-config.json.

Basic Usage:

llm "Your prompt here"
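The provider-detection and config steps in the workflow above might look like the following sketch. detect_providers() is named in this skill's source, but its body here, the config defaults, and the save_model_choice() helper are illustrative assumptions:

```python
import json
import os

# Map environment variables to provider names (per the list above).
ENV_TO_PROVIDER = {
    "OPENAI_API_KEY": "openai",
    "ANTHROPIC_API_KEY": "anthropic",
    "GOOGLE_API_KEY": "google",
    "OLLAMA_BASE_URL": "ollama",
}

CONFIG_PATH = os.path.expanduser("~/.claude/llm-skill-config.json")

def detect_providers(env=None):
    """Return the providers whose environment variables are set."""
    env = os.environ if env is None else env
    return [name for var, name in ENV_TO_PROVIDER.items() if env.get(var)]

def load_config(path=CONFIG_PATH):
    """Load the JSON config, creating it with defaults if missing."""
    if not os.path.exists(path):
        config = {
            "last_model": None,
            "default_provider": None,
            "interactive_mode": False,
            "available_providers": detect_providers(),
        }
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "w", encoding="utf-8") as f:
            json.dump(config, f, indent=2)
        return config
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def save_model_choice(model, path=CONFIG_PATH):
    """Persist the last model used so it becomes the next default."""
    config = load_config(path)
    config["last_model"] = model
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```

Creating the config on first read keeps the "Load/Create Config" step idempotent: every later run simply reads back the remembered last_model.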
llm --model gpt-4o "Process this text"
llm < file.txt
cat document.md | llm "Summarize"

Interactive Mode:

llm --interactive
llm -i
llm --model claude-opus --interactive

Configuration (~/.claude/llm-skill-config.json):

{
"last_model": "claude-sonnet-4.5",
"default_provider": "anthropic",
"available_providers": ["openai", "anthropic", "google", "ollama"]
}

Implementation Files:
- llm_skill.py
- providers.py
- models.py
- executor.py
- input_handler.py

Key Functions:
- detect_providers()
- get_model_selector(input_text, provider=None)
- load_input(input_source)
- execute_llm(content, model, interactive=False)

The skill wraps the llm CLI (install with: pip install llm). Pass --model gpt-4o to override the remembered last_model.

Example Config:

{
"last_model": "claude-sonnet-4.5",
"default_provider": "anthropic",
"interactive_mode": false,
"available_providers": ["openai", "anthropic"]
}

Slash Command Usage (/llm):

/llm process this text
/llm --interactive
/llm --model gpt-4o analyze this
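For reference, the input-detection and execution steps of the workflow might be sketched as below. load_input() and execute_llm() appear in the Key Functions list, but their bodies here are assumptions, build_llm_command() is a hypothetical helper, and the flags mirror the usage examples above:

```python
import os
import subprocess
import sys

def load_input(input_source=None):
    """Detect the input type: piped stdin, a file path, or inline text."""
    if input_source is None:
        return sys.stdin.read()          # stdin/piped
    if os.path.isfile(input_source):
        with open(input_source, encoding="utf-8") as f:
            return f.read()              # file path
    return input_source                  # inline text

def build_llm_command(content, model=None, interactive=False):
    """Assemble the argv for the llm CLI (hypothetical helper)."""
    cmd = ["llm"]
    if model:
        cmd += ["--model", model]
    if interactive:
        cmd.append("--interactive")
    if content:
        cmd.append(content)
    return cmd

def execute_llm(content, model, interactive=False):
    """Run the llm CLI and return its stdout (non-interactive mode)."""
    result = subprocess.run(
        build_llm_command(content, model, interactive),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```

In interactive mode the real skill keeps a conversation loop rather than a single subprocess.run call; the sketch above only covers the process-and-return path.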