
memory-lancedb-pro OpenClaw Plugin

Skill by ara.so, part of the Daily 2026 Skills collection. `memory-lancedb-pro` is a production-grade long-term memory plugin for OpenClaw agents. It stores preferences, decisions, and project context in a local LanceDB vector database and automatically recalls relevant memories before each agent reply. Key features: hybrid retrieval (vector + BM25 full-text), cross-encoder reranking, LLM-powered smart extraction (6 categories), Weibull decay-based forgetting, multi-scope isolation (agent/user/project), and a full management CLI.


Installation

Option A: One-Click Setup Script (Recommended)

```bash
curl -fsSL https://raw.githubusercontent.com/CortexReach/toolbox/main/memory-lancedb-pro-setup/setup-memory.sh -o setup-memory.sh
bash setup-memory.sh
```
Flags:
```bash
bash setup-memory.sh --dry-run         # Preview changes only
bash setup-memory.sh --beta            # Include pre-release versions
bash setup-memory.sh --uninstall       # Revert config and remove plugin
bash setup-memory.sh --selfcheck-only  # Health checks, no changes
```
The script handles fresh installs, upgrades from git-cloned versions, invalid config fields, broken CLI fallback, and provider presets (Jina, DashScope, SiliconFlow, OpenAI, Ollama).

Option B: OpenClaw CLI

```bash
openclaw plugins install memory-lancedb-pro@beta
```

Option C: npm

```bash
npm i memory-lancedb-pro@beta
```
Critical: when installing via npm, you must add the plugin's absolute install path to `plugins.load.paths` in `openclaw.json`. This is the most common setup issue.


Minimal Configuration (`openclaw.json`)

```json
{
  "plugins": {
    "load": {
      "paths": ["/absolute/path/to/node_modules/memory-lancedb-pro"]
    },
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": {
          "embedding": {
            "provider": "openai-compatible",
            "apiKey": "${OPENAI_API_KEY}",
            "model": "text-embedding-3-small"
          },
          "autoCapture": true,
          "autoRecall": true,
          "smartExtraction": true,
          "extractMinMessages": 2,
          "extractMaxChars": 8000,
          "sessionMemory": { "enabled": false }
        }
      }
    }
  }
}
```
Why these defaults:
- `autoCapture` + `smartExtraction`: the agent learns from conversations automatically; no manual calls needed
- `autoRecall`: memories are injected before each reply
- `extractMinMessages: 2`: extraction triggers in normal two-turn chats
- `sessionMemory.enabled: false`: avoids polluting retrieval with session summaries early on


Full Production Configuration

```json
{
  "plugins": {
    "slots": { "memory": "memory-lancedb-pro" },
    "entries": {
      "memory-lancedb-pro": {
        "enabled": true,
        "config": {
          "embedding": {
            "provider": "openai-compatible",
            "apiKey": "${OPENAI_API_KEY}",
            "model": "text-embedding-3-small",
            "baseURL": "https://api.openai.com/v1"
          },
          "reranker": {
            "provider": "jina",
            "apiKey": "${JINA_API_KEY}",
            "model": "jina-reranker-v2-base-multilingual"
          },
          "extraction": {
            "provider": "openai-compatible",
            "apiKey": "${OPENAI_API_KEY}",
            "model": "gpt-4o-mini"
          },
          "autoCapture": true,
          "captureAssistant": false,
          "autoRecall": true,
          "smartExtraction": true,
          "extractMinMessages": 2,
          "extractMaxChars": 8000,
          "enableManagementTools": true,
          "retrieval": {
            "mode": "hybrid",
            "vectorWeight": 0.7,
            "bm25Weight": 0.3,
            "topK": 10
          },
          "rerank": {
            "enabled": true,
            "type": "cross-encoder",
            "candidatePoolSize": 12,
            "minScore": 0.6,
            "hardMinScore": 0.62
          },
          "decay": {
            "enabled": true,
            "model": "weibull",
            "halfLifeDays": 30
          },
          "sessionMemory": { "enabled": false },
          "scopes": {
            "agent": true,
            "user": true,
            "project": true
          }
        }
      }
    }
  }
}
```

Provider Options for Embedding

| Provider | `provider` value | Notes |
| --- | --- | --- |
| OpenAI / compatible | `"openai-compatible"` | Requires `apiKey`; optional `baseURL` |
| Jina | `"jina"` | Requires `apiKey` |
| Gemini | `"gemini"` | Requires `apiKey` |
| Ollama | `"ollama"` | Local, zero API cost; set `baseURL` |
| DashScope | `"dashscope"` | Requires `apiKey` |
| SiliconFlow | `"siliconflow"` | Requires `apiKey`; free reranker tier |

Deployment Plans

Full Power (Jina + OpenAI):
```json
{
  "embedding": { "provider": "jina", "apiKey": "${JINA_API_KEY}", "model": "jina-embeddings-v3" },
  "reranker": { "provider": "jina", "apiKey": "${JINA_API_KEY}", "model": "jina-reranker-v2-base-multilingual" },
  "extraction": { "provider": "openai-compatible", "apiKey": "${OPENAI_API_KEY}", "model": "gpt-4o-mini" }
}
```
Budget (SiliconFlow free reranker):
```json
{
  "embedding": { "provider": "openai-compatible", "apiKey": "${OPENAI_API_KEY}", "model": "text-embedding-3-small" },
  "reranker": { "provider": "siliconflow", "apiKey": "${SILICONFLOW_API_KEY}", "model": "BAAI/bge-reranker-v2-m3" },
  "extraction": { "provider": "openai-compatible", "apiKey": "${OPENAI_API_KEY}", "model": "gpt-4o-mini" }
}
```
Fully Local (Ollama, zero API cost):
```json
{
  "embedding": { "provider": "ollama", "baseURL": "http://localhost:11434", "model": "nomic-embed-text" },
  "extraction": { "provider": "ollama", "baseURL": "http://localhost:11434", "model": "llama3" }
}
```


CLI Reference

Validate config and restart after any changes:
```bash
openclaw config validate
openclaw gateway restart
openclaw logs --follow --plain | grep "memory-lancedb-pro"
```
Expected startup log output:
```
memory-lancedb-pro: smart extraction enabled
memory-lancedb-pro@1.x.x: plugin registered
```

Memory Management CLI


Stats overview

```bash
openclaw memory-pro stats
```

List memories (with optional scope/filter)

```bash
openclaw memory-pro list
openclaw memory-pro list --scope user --limit 20
openclaw memory-pro list --filter "typescript"
```

Search memories

```bash
openclaw memory-pro search "coding preferences"
openclaw memory-pro search "database decisions" --scope project
```

Delete a memory by ID

```bash
openclaw memory-pro forget <memory-id>
```

Export memories (for backup or migration)

```bash
openclaw memory-pro export --scope global --output memories-backup.json
openclaw memory-pro export --scope user --output user-memories.json
```

Import memories

```bash
openclaw memory-pro import --input memories-backup.json
```

Upgrade schema (when upgrading plugin versions)

```bash
openclaw memory-pro upgrade --dry-run   # Preview first
openclaw memory-pro upgrade             # Run upgrade
```

Plugin info

```bash
openclaw plugins info memory-lancedb-pro
```

---

MCP Tool API

The plugin exposes MCP tools to the agent. Core tools are always available; management tools require `enableManagementTools: true` in config.
Core Tools (always available)

memory_recall

Retrieve relevant memories for a query.
```typescript
// Agent usage pattern
const results = await memory_recall({
  query: "user's preferred code style",
  scope: "user",        // "agent" | "user" | "project" | "global"
  topK: 5
});
```

memory_store

Manually store a memory.
```typescript
await memory_store({
  content: "User prefers tabs over spaces, always wants error handling",
  category: "preference",   // "profile" | "preference" | "entity" | "event" | "case" | "pattern"
  scope: "user",
  tags: ["coding-style", "typescript"]
});
```

memory_forget

Delete a specific memory by ID.
```typescript
await memory_forget({ id: "mem_abc123" });
```

memory_update

Update an existing memory.
```typescript
await memory_update({
  id: "mem_abc123",
  content: "User now prefers 2-space indentation (changed from tabs on 2026-03-01)",
  category: "preference"
});
```

Management Tools (requires `enableManagementTools: true`)
memory_stats

```typescript
const stats = await memory_stats({ scope: "global" });
// Returns: total count, category breakdown, decay stats, db size
```

memory_list

```typescript
const list = await memory_list({ scope: "user", limit: 20, offset: 0 });
```

self_improvement_log

Log an agent learning event for meta-improvement tracking.
```typescript
await self_improvement_log({
  event: "user corrected indentation preference",
  context: "User asked me to switch from tabs to spaces",
  improvement: "Updated coding-style preference memory"
});
```

self_improvement_extract_skill

Extract a reusable pattern from a conversation.
```typescript
await self_improvement_extract_skill({
  conversation: "...",
  domain: "code-review",
  skillName: "typescript-strict-mode-setup"
});
```

self_improvement_review

Review and consolidate recent self-improvement logs.
```typescript
await self_improvement_review({ days: 7 });
```


Smart Extraction: 6 Memory Categories

When `smartExtraction: true`, the LLM automatically classifies memories into:

| Category | What gets stored | Example |
| --- | --- | --- |
| `profile` | User identity, background | "User is a senior TypeScript developer" |
| `preference` | Style, tool, workflow choices | "Prefers functional programming patterns" |
| `entity` | Projects, people, systems | "Project 'Falcon' uses PostgreSQL + Redis" |
| `event` | Decisions made, things that happened | "Chose Vite over webpack on 2026-02-15" |
| `case` | Solutions to specific problems | "Fixed CORS by adding proxy in vite.config.ts" |
| `pattern` | Recurring behaviors, habits | "Always asks for tests before implementation" |
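For reference, the six categories can be modeled as a TypeScript union with a small runtime guard. This typing is illustrative only; the plugin does not export these names.

```typescript
// Illustrative typing for the six extraction categories; documentation-only,
// not an export of memory-lancedb-pro.
type MemoryCategory =
  | "profile"
  | "preference"
  | "entity"
  | "event"
  | "case"
  | "pattern";

const MEMORY_CATEGORIES: readonly MemoryCategory[] = [
  "profile", "preference", "entity", "event", "case", "pattern",
];

// Narrowing guard, handy when validating a category field before memory_store
function isMemoryCategory(value: string): value is MemoryCategory {
  return (MEMORY_CATEGORIES as readonly string[]).includes(value);
}
```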


Hybrid Retrieval Internals

With `retrieval.mode: "hybrid"`, every recall runs:
1. **Vector search**: semantic similarity via embeddings (weight: `vectorWeight`, default 0.7)
2. **BM25 full-text search**: keyword matching (weight: `bm25Weight`, default 0.3)
3. **Score fusion**: results merged with weighted RRF (Reciprocal Rank Fusion)
4. **Cross-encoder rerank**: top `candidatePoolSize` candidates reranked by a cross-encoder model
5. **Score filtering**: results below `hardMinScore` are dropped
```json
"retrieval": {
  "mode": "hybrid",
  "vectorWeight": 0.7,
  "bm25Weight": 0.3,
  "topK": 10
},
"rerank": {
  "enabled": true,
  "type": "cross-encoder",
  "candidatePoolSize": 12,
  "minScore": 0.6,
  "hardMinScore": 0.62
}
```
Retrieval mode options:
- `"vector"`: pure semantic search only
- `"bm25"`: pure keyword search only
- `"hybrid"`: both fused (recommended)
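The score-fusion step can be sketched as weighted Reciprocal Rank Fusion. The sketch below is a minimal illustration assuming 1-based ranks and the conventional `k = 60` smoothing constant; the plugin's exact fusion formula may differ.

```typescript
// Minimal weighted RRF sketch: each result list contributes
// weight / (k + rank) per memory id, and totals are summed.
// Not the plugin's actual internals, just the technique it names.
type Ranked = { id: string; rank: number }; // rank is 1-based

function weightedRrf(
  vectorHits: Ranked[],
  bm25Hits: Ranked[],
  vectorWeight = 0.7,
  bm25Weight = 0.3,
  k = 60,
): { id: string; score: number }[] {
  const scores = new Map<string, number>();
  const add = (hits: Ranked[], weight: number) => {
    for (const { id, rank } of hits) {
      scores.set(id, (scores.get(id) ?? 0) + weight / (k + rank));
    }
  };
  add(vectorHits, vectorWeight);
  add(bm25Hits, bm25Weight);
  return [...scores.entries()]
    .map(([id, score]) => ({ id, score }))
    .sort((a, b) => b.score - a.score);
}
```

Note how a memory ranked well by both searches beats one that only the vector search found, which is the point of fusing before reranking.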


Multi-Scope Isolation

Scopes let you isolate memories by context. Enabling all three gives maximum flexibility:
```json
"scopes": {
  "agent": true,    // Memories specific to this agent instance
  "user": true,     // Memories tied to a user identity
  "project": true   // Memories tied to a project/workspace
}
```
When recalling, specify scope to narrow results:
```typescript
// Get only project-level memories
await memory_recall({ query: "database choices", scope: "project" });

// Get user preferences across all agents
await memory_recall({ query: "coding style", scope: "user" });

// Global recall across all scopes
await memory_recall({ query: "error handling patterns", scope: "global" });
```


Weibull Decay Model

Memories naturally fade over time. The decay model prevents stale memories from polluting retrieval.
```json
"decay": {
  "enabled": true,
  "model": "weibull",
  "halfLifeDays": 30
}
```
- Memories accessed frequently get their decay clock reset
- Important, repeatedly recalled memories effectively become permanent
- Noise and one-off mentions fade naturally after ~30 days
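As a rough sketch of the curve implied by `halfLifeDays`, a Weibull retention function can be written as below. The shape parameter (1.5 here) and the exact scoring are assumptions for illustration, not the plugin's published formula.

```typescript
// Weibull retention: S(t) = exp(-(t / scale)^shape), with scale chosen so
// that S(halfLifeDays) = 0.5. shape = 1 reduces to exponential decay;
// shape > 1 makes fresh memories fade slowly, then drop off faster.
// The shape value is an assumption, not the plugin's documented setting.
function weibullRetention(
  ageDays: number,
  halfLifeDays = 30,
  shape = 1.5,
): number {
  const scale = halfLifeDays / Math.pow(Math.LN2, 1 / shape);
  return Math.exp(-Math.pow(ageDays / scale, shape));
}
```

Resetting the decay clock on access corresponds to setting `ageDays` back to zero, which is why frequently recalled memories stay above the score filters indefinitely.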


Upgrading

From pre-v1.1.0


1. Backup first — always

```bash
openclaw memory-pro export --scope global --output memories-backup-$(date +%Y%m%d).json
```

2. Preview schema changes

```bash
openclaw memory-pro upgrade --dry-run
```

3. Run the upgrade

```bash
openclaw memory-pro upgrade
```

4. Verify

```bash
openclaw memory-pro stats
```

See `CHANGELOG-v1.1.0.md` in the repo for behavior changes and upgrade rationale.

---

Troubleshooting

Plugin not loading


Check plugin is recognized

```bash
openclaw plugins info memory-lancedb-pro
```

Validate config (catches JSON errors, unknown fields)

```bash
openclaw config validate
```

Check logs for registration

```bash
openclaw logs --follow --plain | grep "memory-lancedb-pro"
```

**Common causes:**
- Missing or relative `plugins.load.paths` (must be absolute when using npm install)
- `plugins.slots.memory` not set to `"memory-lancedb-pro"`
- Plugin not listed under `plugins.entries`

`autoRecall` not injecting memories

By default `autoRecall` is `false` in some versions; explicitly set it to `true`:
```json
"autoRecall": true
```
Also confirm the plugin is bound to the `memory` slot, not just loaded.

Jiti cache issues after upgrade

升级后Jiti缓存问题


Clear jiti transpile cache

清除jiti转译缓存

```bash
rm -rf ~/.openclaw/.cache/jiti
openclaw gateway restart
```

Memories not being extracted from conversations

- Check `extractMinMessages`: extraction only triggers once the conversation has at least that many messages, so keep it at `2` for normal chats
- Check `extractMaxChars`: very long contexts may be truncated; increase to `12000` if needed
- Verify the extraction LLM config has a valid `apiKey` and a reachable endpoint
- Check logs: `openclaw logs --follow --plain | grep "extraction"`

Retrieval returns nothing or poor results

1. Confirm `retrieval.mode` is `"hybrid"`, not `"bm25"` alone (BM25 requires indexed content)
2. Lower `rerank.hardMinScore` temporarily (try `0.4`) to check whether results exist but are being filtered out
3. Check that the embedding model is the same for store and recall operations; changing models requires re-embedding existing memories

Environment variable not resolving

Ensure env vars are exported in the shell that runs OpenClaw, or use a `.env` file loaded by your process manager. The `${VAR}` syntax in `openclaw.json` is resolved at startup.
```bash
export OPENAI_API_KEY="sk-..."
export JINA_API_KEY="jina_..."
openclaw gateway restart
```
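The startup-time substitution can be pictured as a simple placeholder pass like the sketch below. The function name and exact matching rules are assumptions for illustration, not OpenClaw's actual implementation.

```typescript
// Hypothetical sketch of ${VAR} resolution at startup; unresolved
// placeholders are left intact here so missing keys are easy to spot.
function resolveEnvRefs(
  value: string,
  env: Record<string, string | undefined>,
): string {
  return value.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (match, name) =>
    env[name] ?? match,
  );
}
```

Because resolution happens once at startup, changing an env var always requires a gateway restart to take effect.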


Telegram Bot Quick Config Import

If using OpenClaw's Telegram integration, send this to the bot to auto-configure:

```
Help me connect this memory plugin with the most user-friendly configuration:
https://github.com/CortexReach/memory-lancedb-pro

Requirements:
1. Set it as the only active memory plugin
2. Use Jina for embedding
3. Use Jina for reranker
4. Use gpt-4o-mini for the smart-extraction LLM
5. Enable autoCapture, autoRecall, smartExtraction
6. extractMinMessages=2
7. sessionMemory.enabled=false
8. captureAssistant=false
9. retrieval mode=hybrid, vectorWeight=0.7, bm25Weight=0.3
10. rerank=cross-encoder, candidatePoolSize=12, minScore=0.6, hardMinScore=0.62
11. Generate the final openclaw.json config directly, not just an explanation
```


Resources
