Kelet Integration


Kelet is an AI agent that does Root Cause Analysis for AI app failures. It ingests traces + user signals → clusters failure patterns → generates hypotheses → suggests fixes. This skill integrates Kelet into a developer's AI application end-to-end.
Kelet never crashes your app. All SDK errors — misconfigured keys, network failures, wrong session IDs, missing extras — are swallowed silently to ensure QoS. A misconfigured integration looks identical to a working one. The Common Mistakes section documents every known silent failure mode.
What Kelet is not: Not a prompt management tool (no versioning or playground — use a dedicated prompt management platform or manage prompts as code). Not a log aggregator (Kelet doesn't store raw logs — use a logging solution for that).

Key Concepts


What the SDK does automatically: Once `kelet.configure()` is called, popular AI frameworks are auto-instrumented via OTEL — tracing requires no further code.
What requires explicit integration: session grouping (`agentic_session()`), user signals (VoteFeedback, `useFeedbackState`), and custom coded signals.
Session grouping: Developers almost always already have conversation/request/thread IDs. Find what exists and reuse it — don't invent new session management. Verify the session identifier is propagated consistently end-to-end (client → server → `agentic_session()` → response header → VoteFeedback). If IDs conflict or are ambiguous, explicitly ask the developer before proceeding.
Explicit signals: If the app already has feedback UI (thumbs up/down, ratings) — wire to it, don't replace it. If nothing exists, suggest adding VoteFeedback. Edit tracking (user modifying AI-generated content) is always worth capturing — it reveals "close but wrong."
Coded signals: Find real hooks in the existing codebase — dismiss, accept, retry, undo, escalate. Don't propose signals abstractly. Verify with the developer that each event is specific to AI content (not a general UI action).
Synthetic signals: Platform-run synthetic signal evaluators — either LLM-as-judge (semantic/quality) or heuristic (structural/metric). No app code required. Delivered via deeplink.

If Kelet is already in the project's dependencies: skip setup, focus on what the developer asked. Phase 0a and Phase V still apply.
Always follow phases in order: 0a → 0b → 0c → 0d → 1 → implement. Each phase ends with a STOP: present your findings to the developer and wait for confirmation before continuing. DO NOT chain phases silently. DO NOT write a full plan without these checkpoints.
Plan mode: This skill runs inside `/plan` mode. Present the full implementation plan and call `ExitPlanMode` for approval BEFORE writing any code or editing any files. Never start implementation without explicit developer approval.


Before You Implement


Always fetch current Kelet documentation before writing any integration code. Kelet updates frequently — trust the docs over your training data.
  1. Ask the docs AI (preferred): `GET https://docs-ai.kelet.ai/chat?q=<your+question>` — returns a focused plain-text answer from live docs. Ask before writing code, e.g.:
    • ?q=how+to+configure+kelet+in+python
    • ?q=agenticSession+typescript+usage
    • ?q=VoteFeedback+session+id+propagation
  2. Browse the index (fallback): If the AI answer is insufficient, fetch `https://kelet.ai/docs/llms.txt` for a structured index, then append `.md` to any docs URL for clean markdown — e.g., `https://kelet.ai/docs/getting-started/quickstart.md`
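Query-URL construction for the docs AI can be sketched with standard URL encoding — a minimal helper, assuming nothing beyond the endpoint shown above (the function name is illustrative):

```python
from urllib.parse import quote_plus

def docs_ai_url(question: str) -> str:
    """Build a docs-AI query URL, encoding spaces as '+' as in the examples above."""
    return f"https://docs-ai.kelet.ai/chat?q={quote_plus(question)}"

url = docs_ai_url("how to configure kelet in python")
# → https://docs-ai.kelet.ai/chat?q=how+to+configure+kelet+in+python
```

`quote_plus` also percent-encodes characters that are unsafe in a query string, so arbitrary questions stay valid URLs.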


Phase 0a: Project Mapping (ALWAYS first)


Enter `/plan` mode and map the codebase before asking or proposing anything:
  1. Map every LLM call — to understand the use case, flows, and failure modes (feeds into 0b/0c)
  2. Find existing session tracking — look for conversation IDs, request IDs, thread IDs, or any grouping mechanism. Wire it to `agentic_session()` rather than inventing new session management. Check that session identifiers are propagated consistently end-to-end. If there's a contradiction or ambiguity, explicitly ask the developer before proceeding.
Stay focused. When exploring, only read what's relevant to Kelet: LLM calls, session IDs, startup/entrypoint code, existing feedback UI, UI integration with the AI, and dependencies. Skip styling, animations, auth flows, unrelated business logic — if it doesn't affect tracing or signals, ignore it. Our focus is to understand how the UI interacts with the AI or the back-end that serves it.
Start with dependency files to identify AI frameworks and libraries. If you spot other repos/services that are part of the agentic flow (e.g., a frontend, another agent service) — not unrelated infra — tell the developer to run this skill there too.
Produce an Integration Map, present it to the developer, and wait for confirmation before proceeding to Phase 0b.
Infer from existing files (README, CLAUDE.md, entrypoints, dependency files, `.env`) before asking. Only ask what you can't determine.
Questions to resolve (ask only if unclear after reading files):
  1. What is the agentic use case?
  2. How many distinct agentic flows? → maps to Kelet project count
    A flow is isolated and standalone with clear ownership boundaries. If flow A triggers flow B with a clear interface boundary = TWO projects. Same flow in prod vs staging = TWO projects.
  3. Is this user-facing? (determines whether React/VoteFeedback applies)
  4. Stack: server (Python/Node.js/Next.js) + LLM framework + React?
  5. Config pattern: `.env` / `.envrc` / YAML / K8s secrets? Writing keys to the wrong file is a silent failure — Kelet appears uninstrumented with no error.
Produce a Project Map before proceeding:
Use case: [what the agents do]
Flows → Kelet projects:
  - flow "X" → project "X"
  - flow "Y" → project "Y"
User-facing: yes/no
Stack: [server framework] + [LLM framework]
Config: .env / .envrc / k8s


Phase 0b: Agentic Workflow + UX Mapping


The purpose of this phase is to map what "failure" looks like for Kelet's RCA engine — Kelet clusters spans by failure pattern, so you need to understand failure modes before proposing signals.
Workflow (what the agent does):
  • Steps and decision points
  • Where it could go wrong: wrong retrieval, hallucination, off-topic, loops, timeouts
  • What success vs. failure looks like from the agent's perspective
UX (if user-facing):
  • What AI-generated content is shown? (answers, suggestions, code, summaries)
  • Where do users react? (edit it, retry, copy, ignore, complain)
  • What implicit behaviors signal dissatisfaction? (abandon, rephrase, undo)
Outputs from this phase feed directly into signal selection in 0c — each identified failure mode becomes a signal candidate. Present the workflow + UX map to the developer and wait for confirmation before proceeding to Phase 0c.


Phase 0c: Signal Brainstorming


Reason about failure modes, then propose signals across three layers — propose all that apply:
1. Explicit signals (highest value — direct user expression) Look at the UX from 0b. Find every place the user interacts with AI-generated content.
  • Feedback already exists (thumbs up/down, rating, feedback text)? Wire `kelet.signal()` to it — don't replace it.
  • No feedback mechanism? Suggest adding VoteFeedback and explain what it unlocks for RCA.
  • Edit tracking: if the user can modify AI-generated content, tracking those edits is highly valuable (accepted but corrected = "close but wrong"). Implement appropriately for the stack.
2. Coded signals (implicit behavioral events in the app) Find events that imply the AI got it right or wrong — dismiss, accept, retry, undo, escalate, rephrase, skip. Wire `kelet.signal()` to the exact locations. When proposing a signal, verify with the developer that the event is specific to AI content (not a general UI action).
3. Synthetic signals (platform-run, no app code) Based on failure modes from 0b, propose LLM-as-judge synthetic signal evaluators (semantic/quality) and heuristic synthetic signal evaluators (structural/metric). Delivered LATER (after user approval) via deeplink — developer clicks once to activate.
Ground every synthetic signal evaluator in observed behavior. Only propose synthetic signal evaluators for things the agent actually does — don't invent features. If you're unsure whether the agent produces a certain output (e.g. citations, confidence scores, structured data), ask the developer before proposing a synthetic signal evaluator that depends on it. For `code` type: the check must be fully deterministic from the raw output (e.g. response length, JSON validity, presence of a known token). If you're reaching for any natural language understanding, it's `llm`, not `code`.
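To make the `code` vs `llm` boundary concrete, here is a hedged sketch of a fully deterministic `code`-type check (JSON validity plus a length bound). The function name and threshold are illustrative, not a Kelet API — anything requiring semantic judgment would be an `llm` evaluator instead:

```python
import json

def check_json_output(raw: str, max_len: int = 4096) -> bool:
    """Deterministic 'code'-type check: output parses as JSON and stays under a length bound.
    No natural-language understanding involved — purely mechanical."""
    if len(raw) > max_len:
        return False
    try:
        json.loads(raw)
        return True
    except ValueError:
        return False
```

A check like "is the answer polite?" cannot be written this way and belongs to the `llm` type.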
STOP — this is a REQUIRED interactive checkpoint. Use `AskUserQuestion` with `multiSelect: true` — two questions:
  1. One for explicit + coded signals (options = each proposed signal)
  2. One for synthetic evaluators (options = each proposed evaluator)
Ask if any coded signals need steering (e.g., "does this event apply only to AI content?") and wait for their response.
You don't need to implement synthetics on your own — let Kelet do that for you. After the developer has selected which synthetic evaluators they want, generate the deeplink scoped to exactly those evaluators and present it as a bold standalone action item:
Action required → click this link to activate your synthetic evaluators:
https://console.kelet.ai/synthetics/setup?deeplink=<encoded>
This will generate evaluators for: [list selected names]. Click "Activate All" once you've reviewed them.
Generate the deeplink like this — include only the evaluators the developer selected:
```sh
python3 -c "
import base64, json

payload = {
    'use_case': '<agent use case>',
    'ideas': [
        {'name': '<name>', 'evaluator_type': 'llm', 'description': '<description>'},
        {'name': '<name>', 'evaluator_type': 'code', 'description': '<description>'},
    ]
}
encoded = base64.urlsafe_b64encode(json.dumps(payload, separators=(',', ':')).encode()).rstrip(b'=').decode()
print(f'https://console.kelet.ai/synthetics/setup?deeplink={encoded}')
"
```
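Because the generator strips the trailing `=` padding, a quick round-trip check that the link decodes back to the intended payload can look like this (a sketch; the padding must be restored before decoding):

```python
import base64
import json

def decode_deeplink(encoded: str) -> dict:
    """Reverse the urlsafe-base64 encoding above, re-adding the '=' padding it stripped."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# round-trip a sample payload (contents illustrative)
payload = {
    "use_case": "support bot",
    "ideas": [{"name": "valid-json", "evaluator_type": "code", "description": "output parses as JSON"}],
}
encoded = base64.urlsafe_b64encode(
    json.dumps(payload, separators=(",", ":")).encode()
).rstrip(b"=").decode()
roundtrip = decode_deeplink(encoded)
```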
ONLY create and send the link AFTER the developer has selected which evaluators they want. Do NOT generate or present the link before they make their selection — that would be confusing and overwhelming. The link should reflect their choices, not all possible ideas!
For each idea, decide the type: is this check deterministic/measurable? `"code"`. Is it semantic/qualitative? `"llm"`. Add `"context"` only when you need to steer the evaluator toward something specific.
After presenting the link, use `AskUserQuestion` to confirm the developer has clicked it and activated the evaluators before proceeding to Phase 0d. Do NOT proceed until confirmed.
Only write `source=SYNTHETIC` signal code if the developer explicitly asks AND the platform cannot implement it (explain why + ask to confirm).
See references/signals.md for signal kinds, sources, and when to use each.


Phase 0d: What You'll See in Kelet


| After implementing | Visible in Kelet console |
| --- | --- |
| `kelet.configure()` | LLM spans in Traces: model, tokens, latency, errors |
| `agentic_session()` | Sessions view: full conversation grouped for RCA |
| VoteFeedback | Signals: 👍/👎 correlated to the exact trace that generated the response |
| Edit signals (`useFeedbackState`) | Signals: what users corrected — reveals model errors |
| Platform synthetics | Signals: automated quality scores Kelet runs on your behalf |


Sessions


A session is the logical boundary of one unit of work — all LLM calls, tool uses, agent hops, and retrievals that belong to the same context. Not tied to conversations: a batch processing job, a scheduled pipeline, or a chat thread are all valid sessions. New context = new session.
The framework orchestrates the flow (pydantic-ai runs your agent loop, LangGraph manages your graph execution, a LangChain chain runs end-to-end): Kelet infers sessions automatically — no `agentic_session()` needed. Supported frameworks: pydantic-ai, LangChain/LangGraph, LlamaIndex, CrewAI, Haystack, DSPy, LiteLLM, Langfuse, and any framework using OpenInference or OpenLLMetry instrumentation. If the framework isn't listed, research whether it uses one of these instrumentation libraries before omitting `agentic_session()`.
Exception — externally managed session lifecycle: If the app owns the session ID (e.g. stored in Redis, a database, or generated server-side and returned to the client), the framework has no knowledge of it. You MUST use `agentic_session(session_id=...)` even with a supported framework — otherwise Kelet generates its own session ID that doesn't match the one the client receives, breaking VoteFeedback linkage.
Note: Vercel AI SDK does not set session IDs automatically — use `agenticSession()` at the route level (see Next.js section).
You own the loop (you write the code that calls agent A, passes results to agent B, chains steps in Temporal, a custom loop, or any orchestrator you built — even if individual steps use a supported framework internally): the framework doesn't set a session ID for the overall flow. You MUST use `agentic_session(session_id=...)` / `agenticSession({ sessionId }, callback)`. (Silent if omitted — spans appear as unlinked individual traces.)


Phase 1: API Key Setup


Two key types — never mix them:
  • Secret key (`KELET_API_KEY`): server-only. Traces LLM calls. Never expose to frontend.
  • Publishable key (`VITE_KELET_PUBLISHABLE_KEY` / `NEXT_PUBLIC_KELET_PUBLISHABLE_KEY`): frontend-safe. Used in `KeletProvider` for the VoteFeedback widget.
Ask for API keys during planning (before presenting the final plan / calling ExitPlanMode). Use `AskUserQuestion` (with an "I'll paste it in Other" option) to collect each key interactively. If the developer says they don't have a key or don't know what it is, direct them to create one:
Go to https://console.kelet.ai/api-keys to create your key, then paste it here.
Do not proceed until both required keys are in hand (or explicitly deferred with a placeholder).
Once received, write to the correct file based on the detected config pattern:
  • `.env` → `KEY=value`
  • `.envrc` (direnv) → `export KEY=value`
  • K8s → tell developer to add to secrets manifest
Add the file(s) containing both keys to `.gitignore` if not already present.
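The config-pattern branch above can be sketched as a small helper (the helper name is hypothetical; the key syntax per file type is the real rule from this section). The demo writes into a temp directory so nothing real is touched:

```python
import tempfile
from pathlib import Path

def write_key(path: Path, name: str, value: str) -> None:
    """Append a key in the right syntax for the detected config pattern."""
    if path.name == ".envrc":  # direnv requires an explicit `export`
        line = f'export {name}="{value}"\n'
    else:  # plain .env: KEY=value
        line = f"{name}={value}\n"
    with path.open("a") as f:
        f.write(line)

demo = Path(tempfile.mkdtemp())
write_key(demo / ".env", "KELET_API_KEY", "sk-example")
write_key(demo / ".envrc", "KELET_API_KEY", "sk-example")
env_text = (demo / ".env").read_text()
envrc_text = (demo / ".envrc").read_text()
```

Writing the `.env` syntax into `.envrc` (or vice versa) is exactly the silent failure the section warns about: the app starts, the variable is never set, and no traces appear.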


Implementation: Key Concepts by Stack


See references/api.md for exact function names, package names, and the one TS gotcha.
Python: `kelet.configure()` at startup auto-instruments pydantic-ai/Anthropic/OpenAI/LangChain. Each LLM framework extra must be installed (`kelet[anthropic]`, `kelet[openai]`, etc.) — if missing, `configure()` silently skips that library. `agentic_session()` is required whenever you own the orchestration loop. If a supported framework orchestrates for you, sessions are inferred automatically — no wrapper needed. See Sessions section above.
`kelet.agent(name=...)` — use when: (a) multiple agents run in one session and need separate attribution, or (b) your framework doesn't expose agent names natively (pydantic-ai does; OpenAI/Anthropic/raw SDKs don't — Kelet can't infer it). Logfire users: `kelet.configure()` detects the existing `TracerProvider` — no conflict.
Streaming: wrap the entire generator body (not the caller), including the final sentinel — trailing spans are silently lost otherwise:

```python
async def stream_response():
    async with kelet.agentic_session(session_id=...):
        async for chunk in llm.stream(...):  # sentinel included in scope
            yield chunk
```
TypeScript/Node.js: `agenticSession` is callback-based (not a context manager). AsyncLocalStorage context propagates through the callback's call tree — there's no `with`-equivalent in Node.js, so the callback IS the scope boundary. Node.js only (not browser-compatible). Also requires OTEL peer deps alongside `kelet` — see Implementation Steps.
Next.js: `KeletExporter` in `instrumentation.ts` via `@vercel/otel`. Two required steps often missed: (1) `experimental: { instrumentationHook: true }` in `next.config.js` — without it, `instrumentation.ts` never runs (Silent); (2) each Vercel AI SDK call needs `experimental_telemetry: { isEnabled: true }` — telemetry is off by default (Silent).
Multi-project apps: Call `configure()` once with no project. Override per call with `agentic_session(project=...)`. W3C Baggage propagates the project to downstream microservices automatically.
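For context, W3C Baggage is just a request header of comma-separated `key=value` pairs; a minimal sketch of the serialization (the `kelet.project` entry name is an assumption for illustration, not a documented Kelet key):

```python
def baggage_header(entries: dict) -> str:
    """Serialize entries in W3C Baggage list format: comma-separated key=value pairs."""
    return ",".join(f"{key}={value}" for key, value in entries.items())

# what a propagated project entry might look like on the wire
hdr = baggage_header({"kelet.project": "support-bot"})
```

The SDK handles this for you; the sketch only shows why no extra plumbing is needed between microservices that already forward OTEL context headers.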
React: `KeletProvider` at app root sets `apiKey` + default project. For multiple AI features belonging to different Kelet projects: nest a second `KeletProvider` with only `project=` — it inherits `apiKey` from the outer provider. No need to repeat the key.
No React on the frontend (e.g. Astro, plain HTML, server-rendered): VoteFeedback requires React. Before concluding "no React = no VoteFeedback", think creatively: many non-React frameworks support React as an island/component (Astro via `@astrojs/react`, SvelteKit via `svelte-preprocess`, etc.). Check if the framework supports React interop before ruling it out. Either way, this is a major architectural decision — present the trade-offs and let the developer choose before proceeding:

| Option | Trade-offs |
| --- | --- |
| Add React (recommended) — e.g. `@astrojs/react` | Official SDK, best integration, richer UX — adds React as a dependency but most frameworks support React islands/interop |
| Implement feedback UI ad hoc in the existing stack | No new dependencies — VoteFeedback is conceptually just 👍/👎 buttons that POST a signal to the Kelet REST API. Valid if adding React is genuinely not feasible |
| Skip frontend feedback for now | Fastest — server-side tracing still works; add feedback later |

The React SDK (`@kelet-ai/feedback-ui`) is the recommended path. Only fall back to ad hoc or skip if the developer explicitly doesn't want React. Do not assume — always present the options and let them choose.
VoteFeedback: `session_id` passed to `VoteFeedback.Root` must exactly match what the server used in `agentic_session()`. If they differ, feedback is captured but silently unlinked from the trace.
Session ID propagation (how feedback links to traces): Client generates UUID → sends in request body → server uses in `agentic_session(session_id=...)` → server returns it as `X-Session-ID` response header → client passes it to `VoteFeedback.Root`. (Silent if mismatched — no error, feedback captured but unlinked from the trace.)
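The propagation chain can be simulated end-to-end with stand-ins — everything below is a sketch (the handler and request shapes are invented); only the `X-Session-ID` header name and the ordering of steps come from the text:

```python
import uuid

def client_request() -> dict:
    # 1. client generates the session UUID and sends it in the request body
    return {"session_id": str(uuid.uuid4()), "prompt": "hello"}

def server_handle(body: dict) -> dict:
    # 2. the real server would wrap its LLM calls in agentic_session(session_id=...);
    #    this stand-in just echoes the same ID back as the X-Session-ID header
    return {"headers": {"X-Session-ID": body["session_id"]}, "text": "hi!"}

req = client_request()
resp = server_handle(req)
# 3. client passes the header value to VoteFeedback.Root — it must match exactly,
#    or the vote is captured but silently unlinked from the trace
linked = resp["headers"]["X-Session-ID"] == req["session_id"]
```

The invariant worth asserting in your own integration tests is exactly `linked`: the ID the feedback widget receives equals the ID the server traced under.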
Implicit feedback — patterns for different use cases:
  • `useFeedbackState`: drop-in for `useState`. Each `setState` call accepts a trigger name as second arg — tag AI-generated updates `"ai_generation"` and user edits `"manual_refinement"`. Without trigger names, all state changes look identical and Kelet can't distinguish "user accepted AI output" from "user corrected it."
  • `useFeedbackReducer`: drop-in for `useReducer`. Action `type` fields automatically become trigger names — zero extra instrumentation for reducer-based state.
Which to use: Explicit rating of AI response → `VoteFeedback`. Editable AI output → `useFeedbackState` with trigger names. Complex state with action types → `useFeedbackReducer`.


Decision Tree


N agentic flows?
├─► 1  ──► configure(project="name") at startup
└─► N  ──► configure() once, agentic_session(project=...) per flow

Stack?
├─► Python   ──► kelet.configure() + agentic_session() context manager
├─► Node.js  ──► configure() + agenticSession({sessionId}, callback)
└─► Next.js  ──► instrumentation.ts + KeletExporter

User-facing with React?
├─► Yes ──► KeletProvider at root
│           ├─► Multiple flows? → nested KeletProvider per flow (project only)
│           └─► VoteFeedback at AI response sites + session propagation
└─► No  ──► Server-side only

Feedback signals?
├─► Explicit (votes)            ──► VoteFeedback / kelet.signal(kind=FEEDBACK, source=HUMAN)
├─► Implicit (edits)            ──► useFeedbackState (tag AI vs human updates with trigger names)
├─► Reducer-based state         ──► useFeedbackReducer (action.type = trigger name automatically)
└─► Synthetic signal evaluators ──► Generate deeplink → console.kelet.ai/synthetics/setup


Implementation Steps


  1. Project Map — infer from files, confirm flow → project mapping
  2. API keys — ask for keys, detect config pattern, write to correct file
  3. Install — Python: `kelet[all]` or per-library extras. Node.js/Next.js: `kelet` + OTEL peer deps (`@opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-http`) — Python needs no OTEL deps. React: `@kelet-ai/feedback-ui`
  4. Instrument server — `configure()` at startup + `agentic_session()` per flow
  5. Instrument frontend — `KeletProvider` at root, nested per flow if multi-project
  6. Connect feedback — VoteFeedback + session ID propagation if user-facing
  7. Verify — type check, confirm env vars set, open Kelet console and confirm traces appear


Phase V: Post-Implementation Verification


Key things to verify for a Kelet integration:
  • Every agentic entry point is covered by `agentic_session()` or a supported framework — missing one = silent fragmented traces
  • Session ID is consistent end-to-end: client → server → `agentic_session()` → response header → VoteFeedback
  • `kelet.configure()` is called once at startup, not per-request
  • Secret key is server-only — never in the frontend bundle
  • Check Common Mistakes for any stack-specific gotchas that apply
  • Smoke test: trigger an LLM call, then tell the developer to open the Kelet console and verify sessions are appearing. Note it may take a few minutes for sessions to be fully ingested.


Common Mistakes


| Mistake | Symptom | Notes |
| --- | --- | --- |
| Secret key in `KeletProvider` / frontend env | Key leaked in JS bundle | Use publishable key in frontend. Silent until key is revoked. |
| Keys written to wrong config file (`.env` vs `.envrc`) | App starts but no traces appear | Check config pattern before writing. Silent failure. |
| `agentic_session` exits before streaming generator finishes | Traces appear incomplete | Wrap entire generator body including `[DONE]` sentinel. Silent. |
| VoteFeedback `session_id` doesn't match server session | Feedback unlinked from traces | Capture `X-Session-ID` header; use exact same value. Silent. |
| `configure(project=...)` on a multi-project app | All sessions attributed to one project | Use `configure()` with no project; override in `agentic_session()`. |
| No `kelet.agent(name=...)` with OpenAI/Anthropic/AI SDK | Kelet shows unattributed spans — RCA can't identify which agent failed | pydantic-ai exposes names natively (auto-inferred); raw SDKs don't. Silent. |
| Python extra not installed (e.g. missing `kelet[anthropic]`) | `configure()` succeeds, zero traces from that library | Install the matching extra — Kelet silently skips uninstrumented libraries. Silent. |
| Node.js: `npm install kelet` only, missing OTEL peer deps | Import errors or no traces | Add `@opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-http`. Python needs no peer deps. |
| Next.js: missing `instrumentationHook: true` in `next.config.js` | `instrumentation.ts` exists but never runs, zero traces | Add `experimental: { instrumentationHook: true }` to `next.config.js`. Silent. |
| Vercel AI SDK: missing `experimental_telemetry: { isEnabled: true }` per call | `configure()` succeeds, zero traces from AI SDK calls | Vercel AI SDK telemetry is off by default. Must opt in per call. Silent. |
| DIY orchestration without `agentic_session()` | Sessions appear fragmented — each LLM call is a separate unlinked trace in Kelet | Required whenever you own the loop: Temporal, manual agent chaining, custom orchestrators, raw SDK calls. Silent. |