transcendence-memory


What This Skill Does

Provides self-hosted long-term memory for AI agents by connecting to the transcendence-memory-server backend.
Core capabilities:
  • Connect: complete authentication in one step with a connection token or manual configuration
  • Text memory: manage structured memories through lightweight CRUD endpoints
  • Multimodal RAG: upload documents (PDF, image, or Markdown) or raw text into the RAG-Anything pipeline, then ask natural-language questions and get LLM-generated answers
  • Container management: list and delete containers
  • Troubleshooting: diagnose connection and retrieval issues

Install

```bash
npx skills add https://github.com/leekkk2/transcendence-memory --skill transcendence-memory
```

Or inside a Claude Code session:

```text
/plugin marketplace add leekkk2/transcendence-memory
/plugin install transcendence-memory
```

Principles

  • Keep built-in memory: server-side memory augments the agent's built-in memory instead of replacing it
  • Zero dependencies: no extra package installation is required; the agent can do everything with native tools such as curl, file I/O, and the Python standard library
  • Progressive loading: read `references/setup.md` during first-time setup; afterwards this file alone is enough for day-to-day use

Built-in Commands

These commands can be invoked through `/transcendence-memory <command>` or the short form `/tm <command>`:

| Command | Purpose | Example |
| --- | --- | --- |
| `connect <token>` | Import a connection token and write local config | `/tm connect eyJlbmRw...` |
| `connect --manual` | Enter endpoint, api_key, and container manually | `/tm connect --manual` |
| `status` | Check connection status and server health | `/tm status` |
| `search <query>` | Run semantic search over memories | `/tm search architecture decision from the last deployment` |
| `remember <text>` | Store one memory quickly | `/tm remember Port conflicts caused the deployment failure` |
| `embed` | Rebuild the index for the current container | `/tm embed` |
| `query <question>` | Run a multimodal RAG query and get an LLM-generated answer | `/tm query What is the overall project architecture?` |
| `upload <file>` | Upload a file into the knowledge graph | `/tm upload ./design.pdf` |
| `containers` | List all containers | `/tm containers` |
| `batch <file.jsonl>` | Bulk import memories | `/tm batch memories.jsonl` |
| `auto on` | Enable automatic memory on git commits | `/tm auto on` |
| `auto off` | Disable automatic memory | `/tm auto off` |
| `auto status` | Show auto-memory configuration | `/tm auto status` |

Command: connect

Import a connection token or configure the connection manually.

Token mode (recommended), run automatically by the agent after it receives a token:

```bash
TOKEN="$1"  # base64 token provided by the user
DECODED=$(echo "$TOKEN" | base64 -d)
ENDPOINT=$(echo "$DECODED" | python3 -c "import sys,json; print(json.load(sys.stdin)['endpoint'])")
API_KEY=$(echo "$DECODED" | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")
CONTAINER=$(echo "$DECODED" | python3 -c "import sys,json; print(json.load(sys.stdin)['container'])")

mkdir -p ~/.transcendence-memory && chmod 700 ~/.transcendence-memory
cat > ~/.transcendence-memory/config.toml << EOF
[connection]
endpoint = "$ENDPOINT"
container = "$CONTAINER"

[auth]
mode = "api_key"
api_key = "$API_KEY"
EOF
chmod 600 ~/.transcendence-memory/config.toml
```

Verify the connection:

```bash
curl -sS "$ENDPOINT/health"
```

**Manual mode**: ask the user for `endpoint`, `api_key`, and `container`, then write `config.toml`.
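For reference, the token's structure can also be inspected in Python before importing it. This is an illustrative sketch, not part of the skill; the field names simply mirror the shell snippet above:

```python
import base64
import json

def decode_connection_token(token: str) -> dict:
    """Decode a base64 connection token and validate its required fields."""
    payload = json.loads(base64.b64decode(token))
    for key in ("endpoint", "api_key", "container"):
        if key not in payload:
            raise ValueError(f"token missing required field: {key}")
    return payload

# Round-trip a locally built token (values here are placeholders).
raw = {"endpoint": "https://memory.example.com", "api_key": "sk-demo", "container": "default"}
token = base64.b64encode(json.dumps(raw).encode()).decode()
print(decode_connection_token(token)["endpoint"])
```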

Command: status

Check connection and server status:

Read local config:

```bash
CONFIG="$HOME/.transcendence-memory/config.toml"
ENDPOINT=$(grep '^endpoint' "$CONFIG" | sed 's/.*= *"//' | sed 's/".*//')
API_KEY=$(grep '^api_key' "$CONFIG" | sed 's/.*= *"//' | sed 's/".*//')
CONTAINER=$(grep '^container' "$CONFIG" | sed 's/.*= *"//' | sed 's/".*//')
```

Health check:

```bash
curl -sS "$ENDPOINT/health" | python3 -m json.tool
```

Authentication test:

```bash
curl -sS -X POST "$ENDPOINT/search" \
  -H "X-API-KEY: $API_KEY" -H "Content-Type: application/json" \
  -d "{\"container\":\"$CONTAINER\",\"query\":\"test\",\"topk\":1}"
```

Command: search

```bash
curl -sS -X POST "${ENDPOINT}/search" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"query\":\"$ARGUMENTS\",\"topk\":5}"
```

Command: remember

Quickly store one memory with an auto-generated ID and automatic embedding:

```bash
MEM_ID="mem-$(date +%s)"
curl -sS -X POST "${ENDPOINT}/ingest-memory/objects" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"objects\":[{\"id\":\"${MEM_ID}\",\"text\":\"$ARGUMENTS\",\"tags\":[]}],\"auto_embed\":true}"
```

Command: query

Run a multimodal RAG query with knowledge graph retrieval plus LLM answer generation:

```bash
curl -sS -X POST "${ENDPOINT}/query" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"query\":\"$ARGUMENTS\",\"container\":\"${CONTAINER}\",\"mode\":\"hybrid\",\"top_k\":60}"
```

Command: upload

Upload a file into the knowledge graph:

```bash
curl -sS -X POST "${ENDPOINT}/documents/upload" \
  -H "X-API-KEY: ${API_KEY}" \
  -F "file=@$1" \
  -F "container=${CONTAINER}"
```

Command: batch

Bulk ingest memories with the bundled script:

```bash
python3 <skill-path>/scripts/batch-ingest.py \
  "${ENDPOINT}" "${API_KEY}" "${CONTAINER}" "$1"
```

Quick Reference (for configured users)

Text Memories (lightweight path)

Search memories:

```bash
curl -sS -X POST "${ENDPOINT}/search" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"query\":\"what you want to search for\",\"topk\":5}"
```

Store a memory:

```bash
curl -sS -X POST "${ENDPOINT}/ingest-memory/objects" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"objects\":[{\"id\":\"mem-001\",\"text\":\"content to store\",\"tags\":[\"tag1\"]}]}"
```

Rebuild the index after storing a new memory:

```bash
curl -sS -X POST "${ENDPOINT}/embed" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"background\":false,\"wait\":true}"
```

Update a memory:

```bash
curl -sS -X PUT "${ENDPOINT}/containers/${CONTAINER}/memories/mem-001" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d '{"text":"updated content","tags":["new-tag"]}'
```

Delete a memory:

```bash
curl -sS -X DELETE "${ENDPOINT}/containers/${CONTAINER}/memories/mem-001" \
  -H "X-API-KEY: ${API_KEY}"
```

> After updating or deleting a memory, run `/embed` to refresh the index.

Multimodal RAG (RAG-Anything pipeline)

Ingest raw text into the knowledge graph:

```bash
curl -sS -X POST "${ENDPOINT}/documents/text" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"text\":\"long text to ingest...\",\"description\":\"optional description\"}"
```

Upload a file (PDF, image, or Markdown):

```bash
curl -sS -X POST "${ENDPOINT}/documents/upload" \
  -H "X-API-KEY: ${API_KEY}" \
  -F "file=@/path/to/document.pdf" \
  -F "container=${CONTAINER}"
```

Multimodal RAG query that returns an LLM-generated answer:

```bash
curl -sS -X POST "${ENDPOINT}/query" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"query\":\"your question\",\"container\":\"${CONTAINER}\",\"mode\":\"hybrid\",\"top_k\":60}"
```

Container Management

List all containers:

```bash
curl -sS "${ENDPOINT}/containers" -H "X-API-KEY: ${API_KEY}"
```

Delete a container:

```bash
curl -sS -X DELETE "${ENDPOINT}/containers/${CONTAINER}" \
  -H "X-API-KEY: ${API_KEY}"
```

Health check:

```bash
curl -sS "${ENDPOINT}/health"
```

Variables are read from the local config file `~/.transcendence-memory/config.toml`.

First-Time Setup

On first use, read `references/setup.md` to complete configuration.
The core flow has only two steps:
  1. Get a connection token from the server (through the `/export-connection-token` endpoint or from an administrator)
  2. Run `/tm connect <token>` to finish setup automatically
Or run `/tm connect --manual` and enter the values step by step.
After configuration is complete, `references/setup.md` no longer needs to be loaded into context.

API Reference

See `references/api-reference.md` for full request and response formats.

Lightweight Path (text memory CRUD)

| Endpoint | Method | Purpose | Auth |
| --- | --- | --- | --- |
| `/health` | GET | Health check | Not required |
| `/search` | POST | Search memories | Required |
| `/embed` | POST | Rebuild index | Required |
| `/ingest-memory/objects` | POST | Write typed objects | Required |
| `/ingest-memory/contract` | GET | Inspect ingest semantic boundaries | Not required |
| `/ingest-structured` | POST | Ingest structured JSON | Required |
| `/containers/{container}/memories/{id}` | PUT | Update a memory | Required |
| `/containers/{container}/memories/{id}` | DELETE | Delete a memory | Required |

Multimodal Path (RAG-Anything pipeline)

| Endpoint | Method | Purpose | Auth |
| --- | --- | --- | --- |
| `/documents/text` | POST | Ingest text into the knowledge graph | Required |
| `/documents/upload` | POST | Upload PDF, image, or Markdown documents | Required |
| `/query` | POST | Run a multimodal RAG query | Required |

Administrative Endpoints

| Endpoint | Method | Purpose | Auth |
| --- | --- | --- | --- |
| `/containers` | GET | List containers | Required |
| `/containers/{name}` | DELETE | Delete a container | Required |
| `/export-connection-token` | GET | Export a connection token | Required |
| `/jobs/{pid}` | GET | Async job status | Required |

Authentication methods: `X-API-KEY: <api-key>` or `Authorization: Bearer <api-key>`

Architecture Overview

See `references/ARCHITECTURE.md`.

```text
Agent --HTTPS + API Key--> transcendence-memory-server
                            |-- FastAPI HTTP layer
                            |-- Container isolation
                            |-- Lightweight path: /search + /ingest + /embed
                            |   `-- Embedding -> LanceDB vector store
                            `-- Multimodal path: /documents + /query
                                `-- RAG-Anything -> knowledge graph -> LLM answer
```

Troubleshooting

See `references/troubleshooting.md`.
Common quick checks:
  • Cannot connect: run `/tm status` or `curl -sS "${ENDPOINT}/health"`
  • 401/403: verify that the API key is correct
  • Search returns empty: run `/tm embed` first to rebuild the index
  • Search returns 200 but the body contains an error: treat it as a failure and inspect the server logs
  • Document upload fails: verify the file type and size (supported types include PDF, image, and Markdown)
  • Query returns empty: make sure content has been ingested through `/documents/text` or `/documents/upload`
  • Updates or deletes do not appear in search results: run `/tm embed` to refresh the index

Batch and Async Operations

Bulk Ingest (large memory sets)

When you need to ingest dozens to hundreds of memories:

Prepare a JSONL file, one JSON object per line:

```text
{"id":"mem-001","text":"memory content","tags":["tag1"]}
{"id":"mem-002","text":"another memory","source":"telegram"}
```

Then run:

```text
/tm batch memories.jsonl
```

Or call the script directly:

```bash
python3 <skill-path>/scripts/batch-ingest.py \
  "${ENDPOINT}" "${API_KEY}" "${CONTAINER}" memories.jsonl
```

The script automatically batches input in groups of 50, retries failed requests, and prints progress output. It has zero external dependencies and uses only the Python standard library.
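The batching-and-retry behavior described above can be sketched roughly as follows. This is a simplified illustration of the approach, not the bundled script itself; only the chunk size of 50 comes from the description above:

```python
import itertools
import json

def load_jsonl(path):
    """Read one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def chunked(objects, size=50):
    """Yield successive batches of at most `size` objects."""
    it = iter(objects)
    while batch := list(itertools.islice(it, size)):
        yield batch

def send_with_retries(send, batch, attempts=3):
    """Call send(batch), retrying a failed request up to `attempts` times."""
    for attempt in range(attempts):
        try:
            return send(batch)
        except Exception:
            if attempt == attempts - 1:
                raise
```

Each batch would then be POSTed to `/ingest-memory/objects` via the `send` callable.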

Async Tasks

`/embed` and `/documents/upload` support async mode:

Submit an index rebuild asynchronously:

```bash
curl -sS -X POST "${ENDPOINT}/embed" \
  -H "X-API-KEY: ${API_KEY}" -H "Content-Type: application/json" \
  -d "{\"container\":\"${CONTAINER}\",\"background\":true}"
```

Check async task status:

```bash
curl -sS "${ENDPOINT}/jobs/${PID}" -H "X-API-KEY: ${API_KEY}"
```
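A polling loop around the job-status endpoint might look like the sketch below. It assumes the `/jobs/{pid}` response is JSON with a `status` field; confirm the actual schema in `references/api-reference.md`:

```python
import time

def wait_for_job(fetch_status, timeout=300, interval=5):
    """Poll a status callable until the job leaves a running state.

    `fetch_status` should return the parsed JSON body of GET /jobs/{pid};
    the "pending"/"running" values here are assumptions, not a documented
    contract.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job.get("status") not in ("pending", "running"):
            return job
        time.sleep(interval)
    raise TimeoutError("job did not finish before the deadline")
```

`fetch_status` could wrap the curl call above, or `urllib.request` with the `X-API-KEY` header.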

Choosing an Operation Mode

| Scenario | Recommended approach |
| --- | --- |
| Health checks, single searches, or a few memory writes | Built-in `/tm` commands |
| Bulk ingest of dozens to hundreds of memories | `/tm batch file.jsonl` |
| Rebuilding a large container index | `/tm embed` or async mode |
| Adding documents to the knowledge graph | `/tm upload file.pdf` or `/documents/text` |
| Asking for an LLM-synthesized answer | `/tm query your question` |

Command: auto

Enable, disable, or check automatic memory management.

To enable, create the marker file that tells hooks to auto-store commit summaries:

```bash
mkdir -p ~/.transcendence-memory
touch ~/.transcendence-memory/auto-memory.enabled
echo "Automatic memory enabled. Git commit summaries will be stored automatically."
```

To disable, remove the marker file:

```bash
rm -f ~/.transcendence-memory/auto-memory.enabled
echo "Automatic memory disabled."
```

To check the current state:

```bash
if [ -f ~/.transcendence-memory/auto-memory.enabled ]; then
  echo "Automatic memory: ENABLED"
else
  echo "Automatic memory: DISABLED"
fi
if [ -f ~/.transcendence-memory/config.toml ]; then
  ENDPOINT=$(grep '^endpoint' ~/.transcendence-memory/config.toml | sed 's/.*= *"//' | sed 's/".*//')
  CONTAINER=$(grep '^container' ~/.transcendence-memory/config.toml | sed 's/.*= *"//' | sed 's/".*//')
  echo "Endpoint: ${ENDPOINT}"
  echo "Container: ${CONTAINER}"
else
  echo "Not connected. Run /tm connect first."
fi
```

Automatic Memory

When enabled, transcendence-memory automatically stores a memory after every git commit, merge, cherry-pick, or rebase. This is powered by lifecycle hooks that integrate with the host AI coding CLI.

How it works

  1. A SessionStart hook fires when a new session begins. It checks the connection status and tells the agent whether auto-memory is enabled.
  2. A PostToolUse hook fires after every shell command. If the command was a git commit and auto-memory is enabled, the agent is instructed to store a one-line commit summary as a memory tagged `auto-commit`.
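The PostToolUse decision in step 2 can be pictured as a small predicate. This is a hypothetical sketch; the real matching logic lives in `hooks/adapter.py` and the hook configs:

```python
import re

# Git operations listed above as memory-worthy.
_GIT_OP = re.compile(r"\bgit\b.*\b(commit|merge|cherry-pick|rebase)\b")

def should_store_memory(command: str, auto_enabled: bool) -> bool:
    """Return True when auto-memory is on and the shell command is a
    git operation that should produce an auto-commit memory."""
    return auto_enabled and bool(_GIT_OP.search(command))
```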

Enable / disable

```text
/tm auto on       # enable auto-memory
/tm auto off      # disable auto-memory
/tm auto status   # check current configuration
```

What gets stored

Each auto-commit memory follows this format:

```text
[commit abc1234] fix: resolve port conflict in docker-compose | files: M docker-compose.yml, M .env.example
```

All auto-commit memories are tagged `auto-commit` for easy filtering:

```text
/tm search auto-commit
```
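The summary line above can be produced by a tiny formatter like the following. This is illustrative only; the function name and argument shapes are assumptions, not the hook's actual API:

```python
def format_commit_memory(sha: str, subject: str, files: list[str]) -> str:
    """Build the one-line auto-commit memory shown above:
    short SHA, commit subject, then the changed-file list."""
    return f"[commit {sha[:7]}] {subject} | files: {', '.join(files)}"

line = format_commit_memory(
    "abc1234def",
    "fix: resolve port conflict in docker-compose",
    ["M docker-compose.yml", "M .env.example"],
)
```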

Platform Support

The hooks system is designed to work across multiple AI coding CLIs. The plugin ships pre-built hook configs for supported platforms.

Claude Code (primary)

Hooks are registered in `hooks/hooks.json` and activated automatically when the plugin is installed via `/plugin install`.

Cursor

Uses `hooks/hooks-cursor.json` with camelCase event names (`sessionStart`, `postToolUse`).

Other platforms

The multi-platform adapter (`hooks/adapter.py`) normalizes hook input from:

| Platform | Event format | Detection |
| --- | --- | --- |
| Claude Code | `hook_event_name` + `tool_name` | `CLAUDE_PLUGIN_ROOT` env |
| Cursor | Same JSON schema | `CURSOR_PLUGIN_ROOT` env |
| Gemini CLI | `AfterTool` + `matcher` | `matcher` field in JSON |
| Windsurf | `post-tool-use` + `tool` + `arguments` | `arguments` field in JSON |
| Vibe CLI | `post-tool-call` + `tool` + `input` | `input` field in JSON |
| Cline / Roo Code | `tool_name` or `tool` + JSON stdin/stdout | JSON structure detection |
| Copilot CLI | Claude Code compatible | `COPILOT_CLI` env |
| Augment Code | Claude Code compatible | Fallback to Claude format |

For platforms without native hook support, add transcendence-memory instructions to the platform's rules file (e.g., `.cursorrules`, `AGENTS.md`, `.clinerules/`).

Files in This Skill

| File | Purpose | When to load |
| --- | --- | --- |
| `references/setup.md` | First-time setup guide | First use only |
| `references/api-reference.md` | Complete API reference | When API details are needed |
| `references/ARCHITECTURE.md` | Architecture and data flow | When understanding the system |
| `references/OPERATIONS.md` | Operational verification and acceptance | During deployment verification |
| `references/troubleshooting.md` | Troubleshooting guide | When something goes wrong |
| `references/templates/config.toml.template` | Config file template | During first-time setup |
| `scripts/batch-ingest.py` | Bulk ingest script | For large memory imports |

When NOT to Use

  • Deploying the backend service -> use the transcendence-memory-server repository
  • Managing Docker, systemd, or Nginx -> use the transcendence-memory-server repository
  • Troubleshooting server-side problems such as 5xx errors, storage issues, or logs -> use the transcendence-memory-server repository
  • Configuring embedding, LLM, or VLM models -> this is a server-side concern and does not need to be handled by this skill