llm-config


LLM Configuration

Configure RuVLLM for local inference and fine-tuning.

When to use

When you need to configure local LLM inference, create MicroLoRA adapters for task-specific fine-tuning, or set up SONA for real-time adaptation.

Steps

  1. Check status — call mcp__claude-flow__ruvllm_status to see the current model and adapter state
  2. Generate config — call mcp__claude-flow__ruvllm_generate_config with model parameters
  3. Create MicroLoRA — call mcp__claude-flow__ruvllm_microlora_create for task-specific adapters
  4. Adapt MicroLoRA — call mcp__claude-flow__ruvllm_microlora_adapt with training data
  5. Create SONA — call mcp__claude-flow__ruvllm_sona_create for real-time neural adaptation
  6. Adapt SONA — call mcp__claude-flow__ruvllm_sona_adapt with feedback signals
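Since these are MCP tools, each step is ultimately a JSON-RPC `tools/call` request. The sketch below builds that request sequence; the tool names come from the steps above, but every argument name and value (`model`, `rank`, `examples`, `signal`, and so on) is an illustrative assumption, not the documented RuVLLM schema.

```python
import json


def tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request for an MCP server."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


# Hypothetical arguments throughout -- the real parameter schema may differ.
workflow = [
    tool_call("mcp__claude-flow__ruvllm_status", {}, 1),
    tool_call("mcp__claude-flow__ruvllm_generate_config",
              {"model": "llama-3-8b", "quantization": "q4"}, 2),
    tool_call("mcp__claude-flow__ruvllm_microlora_create",
              {"task": "code-review", "rank": 8}, 3),
    tool_call("mcp__claude-flow__ruvllm_microlora_adapt",
              {"adapter": "code-review",
               "examples": [{"input": "...", "output": "..."}]}, 4),
    tool_call("mcp__claude-flow__ruvllm_sona_create",
              {"scope": "session"}, 5),
    tool_call("mcp__claude-flow__ruvllm_sona_adapt",
              {"signal": {"reward": 0.9}}, 6),
]

print(json.dumps(workflow[0], indent=2))
```

The ordering matters: the status check and generated config come first because the adapter tools act on whatever model is currently loaded.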

MicroLoRA vs SONA

| Feature     | MicroLoRA                 | SONA                        |
| ----------- | ------------------------- | --------------------------- |
| Speed       | Minutes to train          | <0.05ms adaptation          |
| Scope       | Task-specific fine-tuning | Real-time micro-adjustments |
| Persistence | Saved as adapter weights  | Session-scoped              |
| Use case    | Specialized domain tasks  | Continuous feedback loops   |
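In practice the tradeoffs above reduce to two questions: must the adaptation persist beyond the session, and how tight is the latency budget? A minimal decision sketch (the function name and threshold are illustrative, not part of any RuVLLM API):

```python
def choose_adapter(needs_persistence: bool, latency_budget_ms: float) -> str:
    """Pick between MicroLoRA and SONA per the tradeoffs above.

    MicroLoRA: minutes to train, saved as adapter weights -- durable,
    task-specific fine-tuning for specialized domains.
    SONA: <0.05 ms adaptation, session-scoped -- continuous feedback loops.
    """
    if needs_persistence:
        # Only MicroLoRA weights survive the session.
        return "microlora"
    if latency_budget_ms < 1.0:
        # A tight real-time loop cannot wait minutes for training.
        return "sona"
    # Either works for ephemeral, latency-tolerant tweaks; SONA is cheaper.
    return "sona"


# A persisted domain specialization calls for MicroLoRA:
print(choose_adapter(needs_persistence=True, latency_budget_ms=100.0))
# A real-time feedback loop calls for SONA:
print(choose_adapter(needs_persistence=False, latency_budget_ms=0.5))
```

The two are also complementary: a MicroLoRA adapter can carry the durable domain specialization while SONA handles in-session corrections on top of it.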