llm-config
LLM Configuration
Configure RuVLLM for local inference and fine-tuning.
When to use
When you need to configure local LLM inference, create MicroLoRA adapters for task-specific fine-tuning, or set up SONA for real-time adaptation.
Steps
- Check status — call `mcp__claude-flow__ruvllm_status` to see the current model and adapter state
- Generate config — call `mcp__claude-flow__ruvllm_generate_config` with model parameters
- Create MicroLoRA — call `mcp__claude-flow__ruvllm_microlora_create` for task-specific adapters
- Adapt MicroLoRA — call `mcp__claude-flow__ruvllm_microlora_adapt` with training data
- Create SONA — call `mcp__claude-flow__ruvllm_sona_create` for real-time neural adaptation
- Adapt SONA — call `mcp__claude-flow__ruvllm_sona_adapt` with feedback signals
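Since these are MCP tools, each step above is ultimately a JSON-RPC `tools/call` request to the claude-flow MCP server. The sketch below builds such requests for the six-step workflow; the request envelope follows the MCP specification, but the argument payloads (`model`, `task`, `adapter_id`, and so on) are illustrative placeholders, not the tools' documented schemas.

```python
import json

def mcp_tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The six steps above as tool calls. Argument names are hypothetical.
workflow = [
    mcp_tool_call("mcp__claude-flow__ruvllm_status", {}, 1),
    mcp_tool_call("mcp__claude-flow__ruvllm_generate_config",
                  {"model": "llama-3.1-8b"}, 2),
    mcp_tool_call("mcp__claude-flow__ruvllm_microlora_create",
                  {"task": "code-review"}, 3),
    mcp_tool_call("mcp__claude-flow__ruvllm_microlora_adapt",
                  {"adapter_id": "code-review", "training_data": []}, 4),
    mcp_tool_call("mcp__claude-flow__ruvllm_sona_create", {}, 5),
    mcp_tool_call("mcp__claude-flow__ruvllm_sona_adapt",
                  {"feedback": []}, 6),
]

print(json.dumps(workflow[0], indent=2))
```

In practice Claude issues these calls for you when the tools are invoked by name; the sketch only makes the underlying request shape concrete.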
MicroLoRA vs SONA
| Feature | MicroLoRA | SONA |
|---|---|---|
| Speed | Minutes to train | <0.05ms adaptation |
| Scope | Task-specific fine-tuning | Real-time micro-adjustments |
| Persistence | Saved as adapter weights | Session-scoped |
| Use case | Specialized domain tasks | Continuous feedback loops |
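The table's trade-offs reduce to a small selection rule: reach for MicroLoRA when the adaptation must persist beyond the session, and for SONA when you need sub-millisecond adjustments inside a feedback loop. A minimal sketch, with the latency threshold chosen for illustration:

```python
def choose_adapter(needs_persistence: bool, latency_budget_ms: float) -> str:
    """Pick between MicroLoRA and SONA using the trade-offs in the table:
    MicroLoRA persists as saved adapter weights but takes minutes to train;
    SONA adapts in under 0.05 ms but is scoped to the session."""
    if needs_persistence:
        return "MicroLoRA"   # specialized domain tasks, reusable weights
    if latency_budget_ms < 1.0:
        return "SONA"        # continuous feedback loops need <0.05 ms adaptation
    return "MicroLoRA"       # no latency pressure: prefer the durable option

# A reusable code-review adapter vs. per-session feedback tuning:
print(choose_adapter(needs_persistence=True, latency_budget_ms=1000))  # MicroLoRA
print(choose_adapter(needs_persistence=False, latency_budget_ms=0.5))  # SONA
```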