# cli-anything-ollama
Local LLM inference and model management via the Ollama REST API. Designed for AI agents and power users who need to manage models, generate text, chat, and create embeddings without a GUI.
## Installation
This CLI is installed as part of the cli-anything-ollama package:

```bash
pip install cli-anything-ollama
```

Prerequisites:

- Python 3.10+
- Ollama must be installed and running (`ollama serve`)
## Usage
### Basic Commands
```bash
# Show help
cli-anything-ollama --help

# Start interactive REPL mode
cli-anything-ollama

# List available models
cli-anything-ollama model list

# Run with JSON output (for agent consumption)
cli-anything-ollama --json model list
```

### REPL Mode
When invoked without a subcommand, the CLI enters an interactive REPL session:

```bash
cli-anything-ollama
```

Enter commands interactively with tab-completion and history.

## Command Groups
### Model

Model management commands.

| Command | Description |
|---|---|
| `model list` | List locally available models |
| `model show` | Show model details (parameters, template, license) |
| `model pull` | Download a model from the Ollama library |
| | Delete a model from local storage |
| | Copy a model to a new name |
| | List models currently loaded in memory |
### Generate

Text generation and chat commands.

| Command | Description |
|---|---|
| `generate text` | Generate text from a prompt |
| `generate chat` | Send a chat completion request |
### Embed

Embedding generation commands.

| Command | Description |
|---|---|
| `embed text` | Generate embeddings for text |
### Server

Server status and info commands.

| Command | Description |
|---|---|
| `server status` | Check if Ollama server is running |
| | Show Ollama server version |
### Session

Session state commands.

| Command | Description |
|---|---|
| | Show current session state |
| | Show chat history for current session |
## Examples
### List and Pull Models

```bash
# List available models
cli-anything-ollama model list

# Pull a model
cli-anything-ollama model pull llama3.2

# Show model details
cli-anything-ollama model show llama3.2
```

### Generate Text
```bash
# Stream text (default)
cli-anything-ollama generate text --model llama3.2 --prompt "Explain quantum computing in one sentence"

# Non-streaming with JSON output (for agents)
cli-anything-ollama --json generate text --model llama3.2 --prompt "Hello" --no-stream
```

### Chat
```bash
# Single-turn chat
cli-anything-ollama generate chat --model llama3.2 --message "user:What is Python?"

# Multi-turn chat
cli-anything-ollama generate chat --model llama3.2 \
  --message "user:What is Python?" \
  --message "user:How does it compare to JavaScript?"

# Chat from JSON file
cli-anything-ollama generate chat --model llama3.2 --file messages.json
```
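The on-disk format of `messages.json` is not documented in this section; a plausible shape, mirroring the message objects of the Ollama chat API (`role` plus `content`), would be:

```json
[
  {"role": "user", "content": "What is Python?"},
  {"role": "assistant", "content": "Python is a general-purpose programming language."},
  {"role": "user", "content": "How does it compare to JavaScript?"}
]
```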
### Embeddings
```bash
# Single input
cli-anything-ollama embed text --model nomic-embed-text --input "Hello world"

# Multiple inputs
cli-anything-ollama embed text --model nomic-embed-text --input "Hello" --input "World"
```
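Embedding vectors are typically compared with cosine similarity. A minimal dependency-free helper (not part of the package; real vectors would come from the `--json` output of `embed text`):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0 (orthogonal)
```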
### Interactive REPL Session
Start an interactive session for exploratory use:

```bash
cli-anything-ollama
# Enter commands interactively
# Use 'help' to see available commands
```

### Connect to Remote Host
```bash
cli-anything-ollama --host http://192.168.1.100:11434 model list
```

## State Management
The CLI maintains lightweight session state:

- Current host URL: configurable via `--host`
- Chat history: tracked for multi-turn conversations in REPL
- Last used model: shown in the REPL prompt
## Output Formats
All commands support dual output modes:

- Human-readable (default): tables, colors, formatted text
- Machine-readable (`--json` flag): structured JSON for agent consumption

```bash
# Human output
cli-anything-ollama model list

# JSON output for agents
cli-anything-ollama --json model list
```

## For AI Agents
When using this CLI programmatically:

- Always use the `--json` flag for parseable output
- Check return codes: 0 for success, non-zero for errors
- Parse stderr for error messages on failure
- Use `--no-stream` for generate/chat to get complete responses
- Verify Ollama is running with `server status` before other commands
## More Information

- Full documentation: see README.md in the package
- Test coverage: see TEST.md in the package
- Methodology: see HARNESS.md in the cli-anything-plugin
## Version

1.0.1