aiconfig-migrate

Migrate to AI Configs
You're using a skill that will guide you through migrating an application from hardcoded LLM prompts to a full LaunchDarkly AI Configs implementation. Your job is to audit the existing code, extract the hardcoded model and prompt, wrap the call site in the AI SDK with a safe fallback, move tools into the config, instrument the tracker, and attach evaluations — in that order, stopping at each stage for the user to confirm.
Prerequisites
This skill requires the remotely hosted LaunchDarkly MCP server to be configured in your environment, and an application that already calls an LLM provider with hardcoded model, prompt, and parameter values.
Required environment:
- `LD_SDK_KEY` — server-side SDK key (starts with `sdk-`) from the target LaunchDarkly project
MCP tools used directly by this skill: none — every LaunchDarkly write happens in a focused sibling skill.
Hand-off model. This skill does not auto-invoke other skills. At each stage that needs a LaunchDarkly write, this skill prepares the inputs (config key, mode, model, prompt, tool schemas, judge keys) and then tells the user to run the next slash-command themselves. After the user finishes that sibling skill, return to the next step here. Treat the "Delegate" lines below as next-step instructions, not auto-handoffs.
Sibling skills the user runs at each stage:
- `aiconfig-projects` — pre-Stage 2, only if no project exists yet
- `aiconfig-create` — Stage 2 (creates the AI Config and first variation)
- `aiconfig-tools` — Stage 3 (creates tool definitions and attaches them)
- `aiconfig-targeting` — between Stage 2 and Stage 4 (promotes the new variation to fallthrough so the SDK actually serves it)
- `aiconfig-online-evals` — Stage 5 (attaches judges, creates custom judges)
Core Principles
- Inspect before you mutate. Every stage begins with a read-only audit. Do not touch code until Step 1 is confirmed by the user.
- Replace config, not business logic. The SDK call is a drop-in for the place where the model, parameters, and prompt are defined — not for the provider call itself. OpenAI/Anthropic/Bedrock calls stay where they are.
- Fallback mirrors current behavior. The fallback passed to `completion_config` / `agent_config` must preserve the hardcoded values you removed, so the app is unchanged if LaunchDarkly is unreachable.
- Stages are ordered. Wrap before you add tools. Add tools before you track. Track before you add evals. Skipping ahead produces configs without traffic, metrics without context, and judges with nothing to score.
- Hand off to focused skills, manually. Each stage that needs a LaunchDarkly write tells the user to run a sibling slash-command (`/aiconfig-create`, `/aiconfig-tools`, `/aiconfig-targeting`, `/aiconfig-online-evals`) and waits for them to come back. This skill does not auto-invoke other skills.
Workflow
Step 1: Audit the codebase (read-only)
Run the phase-1 checklist and produce a structured summary. This step writes no code and creates no LaunchDarkly resources.
Use phase-1-analysis-checklist.md to scan:
- Language and package manager — Python (pip/poetry/uv), TypeScript/JavaScript (npm/pnpm/yarn), Go, Ruby, .NET
- LLM provider — OpenAI, Anthropic, Bedrock, Gemini, LangChain, LangGraph, CrewAI
- Existing LaunchDarkly usage — any pre-existing `LDClient` or `ldclient` initialization to reuse
- Hardcoded model configs — model name string literals, temperature / maxTokens / topP, system prompts, instruction strings
- Mode decision — completion mode (chat messages array) or agent mode (single instructions string). Completion mode is the default and the only mode that supports judges attached in the UI.
Phase 1 output (return to user as a structured summary):

```
Language: Python 3.12
Package manager: uv
LLM provider: OpenAI
Existing LD SDK: none
Target mode: completion
Hardcoded targets:
- src/chat.py:42 model="gpt-4o"
- src/chat.py:43 temperature=0.7, max_tokens=2000
- src/chat.py:45 system="You are a helpful assistant..."
Proposed plan: single AI Config key `chat-assistant`, mirror fallback, Stage 3 (tools) skipped (no function calling), Stage 4 (tracking) inline, Stage 5 (evals) attach built-in accuracy judge.
```

STOP. Present this summary and wait for the user to confirm before proceeding to Stage 1 extract. This is the same stop point as the `AGENT-SETUP-PROMPT.md` Phase 1 pattern.
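The grep portion of this audit can be sketched mechanically. A minimal, illustrative scanner — the regexes below are assumptions tuned to Python-style OpenAI call sites, not an exhaustive detector; extend them per provider and language:

```python
import re

# Illustrative patterns for the audit pass — extend per provider/language.
PATTERNS = {
    "model": re.compile(r'model\s*=\s*["\']([^"\']+)["\']'),
    "temperature": re.compile(r'temperature\s*=\s*([0-9.]+)'),
    "max_tokens": re.compile(r'max_tokens\s*=\s*(\d+)'),
}

def scan_source(text: str, path: str = "<memory>") -> list[tuple[str, int, str, str]]:
    """Return (path, line_no, kind, value) hits for hardcoded LLM config."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for kind, pattern in PATTERNS.items():
            m = pattern.search(line)
            if m:
                hits.append((path, line_no, kind, m.group(1)))
    return hits

sample = '''
response = openai_client.chat.completions.create(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=2000,
)
'''
for hit in scan_source(sample, "src/chat.py"):
    print(hit)
```

A scanner like this only feeds the "Hardcoded targets" list; mode choice and the proposed plan still need human judgment.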
Step 2: Extract prompts (Stage 1)
Turn the audit into a concrete migration manifest — still read-only. For each hardcoded target from Step 1, record:
- File path and line range
- Current value (model name, full prompt text, parameter dict)
- Target AI Config field (`model.name`, `model.parameters.temperature`, `messages[].content`, `instructions`)
- Whether the surrounding call uses function calling / tools (drives Stage 3)
- Whether the surrounding call has retry logic (affects where Stage 4 tracker calls go)
This manifest is the contract for the next four stages. Review it with the user. Do not mutate any files in this step.
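The manifest has no mandated shape; one workable sketch is a plain dataclass per target. The field names here are illustrative — the skill only requires that each record carries the location, current value, target config field, and the two Stage 3/4 flags:

```python
from dataclasses import dataclass

@dataclass
class MigrationTarget:
    """One hardcoded value from the audit, as recorded in the Step 2 manifest."""
    file: str
    lines: tuple[int, int]
    current_value: object
    target_field: str            # e.g. "model.name" or "messages[0].content"
    uses_tools: bool = False     # drives whether Stage 3 runs
    has_retry: bool = False      # affects where Stage 4 tracker calls go

manifest = [
    MigrationTarget("src/chat.py", (42, 42), "gpt-4o", "model.name"),
    MigrationTarget("src/chat.py", (43, 43),
                    {"temperature": 0.7, "maxTokens": 2000}, "model.parameters"),
    MigrationTarget("src/chat.py", (45, 45),
                    "You are a helpful assistant...", "messages[0].content"),
]

# Stage 3 runs only if any target uses function calling / tools.
stage3_needed = any(t.uses_tools for t in manifest)
```

Reviewing a structure like this with the user makes the "contract" concrete before any file is touched.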
Step 3: Wrap the call in the AI SDK (Stage 2)
This is the first stage that writes code. It has seven sub-steps.
- Install the AI SDK. Detect the package manager from Step 1, then install:
  - Python: `launchdarkly-server-sdk` + `launchdarkly-server-sdk-ai`
  - Node.js/TypeScript: `@launchdarkly/node-server-sdk` + `@launchdarkly/server-sdk-ai`
  - Go: `github.com/launchdarkly/go-server-sdk/v7` + `github.com/launchdarkly/go-server-sdk/ldai`
- Initialize `LDAIClient` once at startup. Reuse any existing `LDClient` — do not create a second base client. Place the initialization in the same module that owns existing app config.

  Python:

  ```python
  import os

  import ldclient
  from ldclient.config import Config
  from ldai.client import LDAIClient

  ldclient.set_config(Config(os.environ["LD_SDK_KEY"]))
  ai_client = LDAIClient(ldclient.get())
  ```

  Node.js/TypeScript:

  ```typescript
  import { init } from '@launchdarkly/node-server-sdk';
  import { initAi } from '@launchdarkly/server-sdk-ai';

  const ldClient = init(process.env.LD_SDK_KEY!);
  await ldClient.waitForInitialization({ timeout: 10 });
  const aiClient = initAi(ldClient);
  ```
- Hand off to `aiconfig-create`. Print the extracted model, prompt/instructions, parameters, and mode from Step 2's manifest, then tell the user: "Run `/aiconfig-create` with these inputs, then come back here." Supply the config key you want the code to call (e.g. `chat-assistant`). Do not attempt to auto-invoke the sibling skill — wait for the user to finish it before continuing.

  After `aiconfig-create` finishes, the user must also run `/aiconfig-targeting` to promote the new variation to fallthrough. A freshly created variation returns `enabled=False` to every consumer until targeting is updated. Skip this and Stage 2 verification (sub-step 7 below) will silently take the fallback path on every request.
- Build the fallback. Mirror the hardcoded values you extracted. Use `AICompletionConfigDefault` / `AIAgentConfigDefault` in Python, plain object literals in Node. See fallback-defaults-pattern.md for inline, file-backed, and bootstrap-generated patterns.

  Python fallback (completion mode):

  ```python
  from ldai.client import AICompletionConfigDefault, ModelConfig, ProviderConfig, LDMessage

  fallback = AICompletionConfigDefault(
      enabled=True,
      model=ModelConfig(name="gpt-4o", parameters={"temperature": 0.7, "maxTokens": 2000}),
      provider=ProviderConfig(name="openai"),
      messages=[LDMessage(role="system", content="You are a helpful assistant...")],
  )
  ```
- Replace the hardcoded call site. Swap the hardcoded model/prompt/params for a `completion_config` / `completionConfig` (or `agent_config` / `agentConfig`) call, then read the returned fields into the existing provider call. Keep the provider call intact.

  Python — before:

  ```python
  response = openai_client.chat.completions.create(
      model="gpt-4o",
      temperature=0.7,
      max_tokens=2000,
      messages=[
          {"role": "system", "content": "You are a helpful assistant..."},
          {"role": "user", "content": user_input},
      ],
  )
  ```

  Python — after:

  ```python
  context = Context.builder(user_id).set("email", user.email).build()
  config = ai_client.completion_config("chat-assistant", context, fallback)
  if not config.enabled:
      return disabled_response()

  params = config.model.parameters or {}
  response = openai_client.chat.completions.create(
      model=config.model.name,
      temperature=params.get("temperature"),
      max_tokens=params.get("maxTokens"),
      messages=[m.to_dict() for m in (config.messages or [])] + [
          {"role": "user", "content": user_input},
      ],
  )
  ```

  Python — after (agent mode) — for LangGraph, CrewAI, or any framework that takes a goal/instructions string:

  ```python
  context = Context.builder(user_id).kind("user").build()
  config = ai_client.agent_config("support-agent", context, FALLBACK)
  if not config.enabled:
      return disabled_response()

  # config is a single AIAgentConfig object — NOT a (config, tracker) tuple.
  # The tracker lives at config.tracker.
  model_name = f"{config.provider.name}/{config.model.name}"
  instructions = config.instructions
  params = config.model.parameters or {}

  # Pass model_name + instructions into your framework's agent constructor.
  # Example: LangGraph create_react_agent
  # agent = create_react_agent(
  #     model=load_chat_model(model_name),
  #     tools=TOOLS,  # Stage 3 will replace this with a config.tools loader
  #     prompt=instructions,
  # )
  ```

  See before-after-examples.md for full Python OpenAI, Node Anthropic, and LangGraph agent-mode paired snippets.
- Check `config.enabled`. If it returns `False`, handle the disabled path without crashing and without calling the provider. The check is required — not optional.
- Verify. Run the app with a valid `LD_SDK_KEY`; confirm the call succeeds and the response matches pre-migration output. Then temporarily set `LD_SDK_KEY=sdk-invalid` (or unset it) and confirm the fallback path runs without error. Both paths must work before moving to Stage 3.
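The `disabled_response()` helper referenced in the snippets above is app-specific; the skill doesn't prescribe one. A minimal sketch of the required gate, using stand-in objects rather than the real SDK types (both the helper body and the `SimpleNamespace` stubs are illustrative assumptions):

```python
from types import SimpleNamespace

def disabled_response():
    # Hypothetical disabled path: a static reply, no provider call.
    return {"role": "assistant", "content": "This feature is currently unavailable."}

def handle(config, call_provider):
    # The required Stage 2 gate: never reach the provider when the served
    # config (or its fallback) has enabled=False.
    if not config.enabled:
        return disabled_response()
    return call_provider()

calls = []

def provider():
    calls.append("llm")
    return {"role": "assistant", "content": "hi"}

print(handle(SimpleNamespace(enabled=False), provider))  # disabled path, provider untouched
print(handle(SimpleNamespace(enabled=True), provider))   # normal path
```

Whatever shape the disabled path takes, the invariant to verify is that the provider client is never invoked on it.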
Delegate: `aiconfig-create` (sub-step 3).
Step 4: Move tools into the config (Stage 3)
Skip this step if the audited app has no function calling / tools. Otherwise:
- Enumerate the tools currently registered. Common shapes to look for:
  - `openai.chat.completions.create(tools=[...])` — OpenAI direct
  - `anthropic.messages.create(tools=[...])` — Anthropic direct
  - `create_react_agent(tools=[...])` — LangGraph prebuilt ReAct
  - `Agent(tools=[...])` — CrewAI
  - Custom `StateGraph` — a module-level `TOOLS = [...]` list referenced in both `model.bind_tools(TOOLS)` and `ToolNode(TOOLS)`. This is the `langchain-ai/react-agent` template shape; the list is usually in a `tools.py` module. Grep for `bind_tools(` and `ToolNode(` together — they will point at the same list.

  Record each tool's name, description, and JSON schema. For LangChain/LangGraph tools defined with `@tool`, extract the schema via `tool.args_schema.model_json_schema()` (or the equivalent Pydantic call). For plain async callables used as tools (common in custom StateGraph shapes), LangChain infers the schema from the function signature at bind time — extract it via `StructuredTool.from_function(fn).args_schema.model_json_schema()`. Do not hand-write the schema.
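To see why hand-writing schemas is error-prone, here is a toy sketch of the kind of signature-to-JSON-schema inference that bind-time extraction gives you for free. This is NOT LangChain's implementation — in real code use the `StructuredTool.from_function(fn).args_schema.model_json_schema()` call named above; the mapping table and helper below are illustrative only:

```python
import inspect

# Toy type mapping — real inference (Pydantic) handles far more cases.
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def infer_schema(fn) -> dict:
    """Illustrative signature -> JSON-schema sketch, not LangChain's code."""
    sig = inspect.signature(fn)
    props, required = {}, []
    for name, param in sig.parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => required argument
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": props, "required": required},
    }

async def get_weather(city: str, units: str = "metric"):
    """Look up current weather for a city."""
    ...
```

Even this toy version stays consistent with the function it describes as the signature evolves — which is exactly what a hand-written schema silently fails to do.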
- Hand off to `aiconfig-tools`. Print the extracted tool names, descriptions, and schemas, then tell the user: "Run `/aiconfig-tools` with these tools and the variation key, then come back here." The sibling skill creates tool definitions (`create-ai-tool`) and attaches them to the variation (`update-ai-config-variation`). Wait for the user to finish before proceeding to sub-step 3. Do not auto-invoke.
- Replace the hardcoded tools array at the call site with a read from `config.tools` (or the SDK equivalent for your language). Load the actual implementation functions dynamically from the tool names — see agent-mode-frameworks.md for the dynamic-tool-factory pattern from the devrel agents tutorial.

  For custom `StateGraph` shapes, you must update both call sites: `.bind_tools(TOOLS)` and `ToolNode(TOOLS)` must both read from the same `config.tools`-derived list. Forgetting one leaves the LLM seeing the new tools but the executor still running the old ones, or vice versa.
- Verify. Run the app; confirm the tool flows still execute correctly. `get-ai-config` (via the delegate) confirms the tools are attached server-side.
Delegate: `aiconfig-tools` (sub-step 2).
Step 5: Instrument the tracker (Stage 4)
Delegate: `aiconfig-ai-metrics` wires the per-request `tracker.track_*` calls (duration, tokens, success/error, feedback) around the provider call. Use `aiconfig-custom-metrics` alongside it if the app needs business metrics beyond the built-in AI ones. Note: do not confuse this with `launchdarkly-metric-instrument`, which is for feature metrics — a different API (`ldClient.track()`). See sdk-ai-tracker-patterns.md for the full per-method Python + Node matrix that the delegate skill draws on.

Hand off: print the AI Config key, variation key, provider, and whether the call is streaming, then tell the user: "Run `/aiconfig-ai-metrics` with these inputs, then come back here." Do not auto-invoke. Return here for sub-step 5 (verify) once they're done.
- Locate the tracker. It's attached to the config object returned in Stage 2: `config.tracker` (Python) or `aiConfig.tracker` (Node). Tier 1 (managed runner) tracks automatically and does not need an explicit tracker call at all — if the app is a chat loop, use `ai_client.create_model(...)` / `aiClient.initChat(...)` and skip to sub-step 4.
- Pick a tier from the four-tier ladder. The delegate skill's SKILL.md has the full walk-through; the condensed version for migration-context decisions:

  | Tier | When to use | Pattern |
  |---|---|---|
  | 1 — Managed runner | The call site is a chat loop (turn-based, maintains history). | `ManagedModel` / `TrackedChat` — automatic tracking, no tracker calls. |
  | 2 — Provider package + `trackMetricsOf` | Non-chat shape where a provider package exists: OpenAI, LangChain/LangGraph, Vercel AI SDK. | `tracker.trackMetricsOf(Provider.getAIMetricsFromResponse, fn)` |
  | 3 — Custom extractor + `trackMetricsOf` | Anthropic direct, Gemini, Bedrock, Cohere, custom HTTP. | `tracker.trackMetricsOf(myExtractor, fn)` — one small function mapping response → `LDAIMetrics`. |
  | 4 — Raw manual | Streaming with TTFT, partial tracking, unusual shapes. | Explicit `trackDuration` + `trackTokens` + `trackSuccess`/`trackError`. |

  Do not introduce `track_openai_metrics` / `track_bedrock_converse_metrics` / `trackVercelAISDKGenerateTextMetrics` in new code. They still exist in the SDK but have been replaced by `trackMetricsOf` composed with a provider-package extractor. Current Python and Node SDK READMEs document the new pattern exclusively.
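For Tier 4, the raw manual shape reduces to "time the call, record tokens, mark success or error". A runnable sketch with a stub tracker — the method names mirror the table above, but the real names and signatures live in the LaunchDarkly AI SDK, so treat the stub as a placeholder, not the SDK surface:

```python
import time

class StubTracker:
    """Stand-in with the Tier-4 call shape; real methods are in the LD AI SDK."""
    def __init__(self):
        self.events = []

    def track_duration(self, ms):
        self.events.append(("duration", ms))

    def track_tokens(self, usage):
        self.events.append(("tokens", usage))

    def track_success(self):
        self.events.append(("success",))

    def track_error(self):
        self.events.append(("error",))

def tracked_call(tracker, fn):
    """Manual Tier-4 shape: time the call, record tokens, mark success/error."""
    start = time.monotonic()
    try:
        response = fn()
    except Exception:
        tracker.track_error()  # error recorded, then re-raised to the caller
        raise
    tracker.track_duration(int((time.monotonic() - start) * 1000))
    tracker.track_tokens(response.get("usage", {}))
    tracker.track_success()
    return response
```

The point of the sketch is the ordering: error is recorded before the exception propagates, and success is only recorded after duration and tokens.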
- Wire the chosen tier. The delegate skill has full Python + Node examples for each tier plus per-provider files. A condensed Tier 2/3 example for reference — OpenAI via the provider package:

  Python:

  ```python
  from ldai_openai import OpenAIProvider
  import openai

  client = openai.OpenAI()

  def call_openai():
      return client.chat.completions.create(
          model=config.model.name,
          messages=[{"role": "system", "content": config.messages[0].content},
                    {"role": "user", "content": user_prompt}],
      )

  try:
      response = config.tracker.track_metrics_of(
          call_openai,
          OpenAIProvider.get_ai_metrics_from_response,
      )
  except Exception:
      config.tracker.track_error()
      raise
  ```

  Node:

  ```typescript
  import { OpenAIProvider } from '@launchdarkly/server-sdk-ai-openai';

  try {
    const response = await aiConfig.tracker.trackMetricsOf(
      OpenAIProvider.getAIMetricsFromResponse,
      () => openaiClient.chat.completions.create({
        model: aiConfig.model!.name,
        messages: [...aiConfig.messages, { role: 'user', content: userPrompt }],
      }),
    );
  } catch (err) {
    aiConfig.tracker.trackError();
    throw err;
  }
  ```

  For Anthropic direct, Bedrock (no provider package), Gemini, and custom HTTP, write a small extractor returning `LDAIMetrics` — see the delegate skill's anthropic-tracking.md and bedrock-tracking.md. LangChain single-node and LangGraph go through the `launchdarkly-server-sdk-ai-langchain` / `@launchdarkly/server-sdk-ai-langchain` provider package with `LangChainProvider.get_ai_metrics_from_response`.
- Wire feedback tracking if the app has thumbs-up/down UI. Both SDKs expose `trackFeedback` with a `{kind}` argument.

  Python:

  ```python
  from ldai.tracker import FeedbackKind
  config.tracker.track_feedback({"kind": FeedbackKind.Positive})
  ```

  Node:

  ```typescript
  import { LDFeedbackKind } from '@launchdarkly/server-sdk-ai';
  aiConfig.tracker.trackFeedback({ kind: LDFeedbackKind.Positive });
  ```
- Verify. Hit the wrapped endpoint in staging, then open the AI Config in LaunchDarkly → Monitoring tab. Duration, token, and generation counts should appear within 1–2 minutes. If nothing shows up, walk the checklist in sdk-ai-tracker-patterns.md under "Troubleshooting."
Step 6: Attach evaluations (Stage 5)
- Decide between three evaluation paths. This is the most commonly misunderstood stage — there are three paths, not two, and the right default for a migration context is often the one people skip.

  | Path | When to use | Supports agent mode? |
  |---|---|---|
  | Offline eval (recommended default for migration) | Pre-ship regression: run a fixed dataset through the new variation in the LD Playground and score against baseline. Best fit for migration because you want to prove the new AI Config behaves at least as well as the hardcoded version before shipping. | Yes — all modes |
  | UI-attached auto judges | Attach one or more judges to a variation in the LD UI; judges run on sampled live requests automatically. Zero code changes. | Completion mode only (the UI widget is completion-only today) |
  | Programmatic direct-judge | Call `ai_client.create_judge(...)` inside the request handler and `judge.evaluate(input, output)` on each call. Adds per-request cost and code complexity. Best for continuous live scoring of workflows where sampled auto-judges aren't enough. | Yes — all modes (the SDK handles both identically) |

  Most migration users should start with offline eval, then add programmatic direct-judge only if they need continuous live scoring after the rollout is stable.
- For agent-mode migrations, default to offline eval. UI-attached auto judges are completion-mode only today. The documented path for agent mode is either (a) offline regression via the LD Playground + Datasets (works for all modes), or (b) programmatic direct-judge wired into the call site. Generate a starter dataset CSV from the audit manifest (one representative input per row) and point the user at `/tutorials/offline-evals` for the Playground walkthrough. Only wire programmatic direct-judge into production code if the user explicitly asks for continuous live scoring.
- Hand off to `aiconfig-online-evals` — only for UI-attached judges (completion mode) or to create custom judge AI Configs that will be referenced by the programmatic path. Tell the user: "Run `/aiconfig-online-evals` with these inputs, then come back here." Do not auto-invoke. Pass:
  - The parent AI Config key and variation key
  - A list of built-in judges (Accuracy, Relevance, Toxicity) or custom judge keys to create/attach
  - Target environment

  The delegate handles creating custom judge AI Configs, attaching them via the variation PATCH endpoint, and setting fallthrough on each judge config. Offline eval does not go through this delegate — it's a Playground workflow, not an API write.
- For programmatic direct-judge: wire `create_judge` + `evaluate` + `track_eval_scores`. This is the only path at Stage 5 that writes code. The correct shape:

  ```python
  from ldai.client import AIJudgeConfigDefault

  judge = await ai_client.create_judge(
      judge_key,                            # judge AI Config key in LD
      ld_context,
      AIJudgeConfigDefault(enabled=False),  # fallback: skip eval on SDK miss
  )
  if judge and judge.enabled:
      result = await judge.evaluate(
          input_text,
          output_text,
          sampling_rate=0.25,  # optional; default 1.0 (always eval)
      )
      if result:
          config.tracker.track_eval_scores(result.evals)
  ```

  Three rules:
  - `create_judge` returns `Optional[Judge]`. Always guard with `if judge and judge.enabled:` — it returns `None` if the judge AI Config is disabled for the context or the provider is missing. A direct `.evaluate()` on a `None` return will raise `AttributeError`.
  - Pass `AIJudgeConfigDefault`, not `AICompletionConfigDefault`. The `create_judge` `default` parameter is typed `Optional[AIJudgeConfigDefault]`; passing the completion type will not type-check and is a doc-level bug in some older examples.
  - `sampling_rate` is a parameter on `evaluate()`, not on `create_judge()`. It defaults to `1.0` (evaluate every call). For live paths, pass something lower (0.1–0.25) to control cost.
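The first and third rules combine into one small guard shape. A stub-based sketch — `StubJudge` is a synchronous stand-in for the SDK's judge object (the real `evaluate()` is async); only the guard logic is the point:

```python
class StubJudge:
    """Stand-in for the SDK judge object — the real evaluate() is async."""
    enabled = True

    def evaluate(self, input_text, output_text, sampling_rate=1.0):
        return {"evals": {"accuracy": 0.9}}

def score(judge, input_text, output_text):
    # Rule 1: create_judge may return None (disabled config, missing provider),
    # so guard before touching .enabled or .evaluate().
    if judge and judge.enabled:
        # Rule 3: sampling_rate belongs to evaluate(); lowered here to cut cost.
        return judge.evaluate(input_text, output_text, sampling_rate=0.25)
    return None
```

Structuring the guard as its own helper keeps the `None`/disabled branches out of the request handler's happy path.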
- Ask the user which judge AI Config key to use. LaunchDarkly ships three built-in judges — Accuracy, Relevance, Toxicity — but the actual AI Config keys for the built-ins are not canonical SDK constants and aren't documented. Have the user open AI Configs > Library in the LD UI and copy the key of the judge they want to reference, or create a custom judge AI Config via `aiconfig-create` first.
- Verify.
  - UI-attached auto judges: trigger a request in staging, open the Monitoring tab → "Evaluator metrics" dropdown. Scores appear within 1–2 minutes at the configured sampling rate.
  - Programmatic direct-judge: hit the wrapped endpoint and confirm `track_eval_scores` lands on the parent config's Monitoring tab.
  - Offline eval: run the dataset through the LD Playground, compare baseline vs new-variation scores side by side. No runtime wiring required.
Delegate: `aiconfig-online-evals` (sub-step 3, optional — only for UI-attached judges or custom-judge creation; offline eval doesn't delegate).
aiconfig-online-evalsEdge Cases
边缘场景
| Situation | Action |
|---|---|
| Hardcoded prompt uses f-string / template literal interpolation | Move interpolation into AI Config prompt variables |
| App already initializes the LaunchDarkly client for feature flags | Reuse it — pass the existing client into the AI client wrapper instead of initializing a second one |
| App uses LangChain | Read sdk-ai-tracker-patterns.md — the `ldai_langchain` provider package handles metric extraction |
| Retry wrapper around the provider call | Move the tracker inside the retry — failures in the same request should share one tracker |
| App has no tools — Stage 3 skipped | Move directly from Stage 2 verification to Stage 4 (tracking) |
| Mode mismatch: user said agent, audit shows one-shot chat | Choose completion mode unless the app uses LangGraph |
| TypeScript app using Anthropic SDK | No auto-helper for this combination — consult the auto-helper matrix in sdk-ai-tracker-patterns.md |
| Fallback would silently crash because `LD_SDK_KEY` is missing | Log a startup warning; proceed with the fallback. Never raise at import time |
| Multi-agent graph (supervisor + workers) | Stop after migrating a single agent. Agent graphs are currently Python-only (see agent-graph-reference.md) |
| Single-agent (ReAct, tool loop) + agent mode | Default to offline eval via the LD Playground + Datasets for Stage 5. UI-attached judges are completion-only today, and programmatic direct-judge adds per-call cost that is usually not worth it until after the migration is live and stable. Point at /tutorials/offline-evals |
| Tool with a Pydantic model | Extract the schema via `model_json_schema()` |
Custom | Find the |
| App has already externalized config into a config file with env-var fallbacks | Good news — migration is a single-layer change. Replace the file-read layer with the AI Config lookup and keep the file as the fallback |
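The retry-wrapper row deserves a concrete shape. A minimal sketch with stand-ins (`StubTracker` mimics the request-scoped tracker reached via `config.tracker`; `flaky` stands in for the provider call):

```python
class StubTracker:
    """Stand-in for the request-scoped tracker from config.tracker."""
    def __init__(self):
        self.errors = 0
        self.succeeded = False
    def track_error(self):
        self.errors += 1
    def track_success(self):
        self.succeeded = True

def flaky(attempt):
    """Stand-in provider call: fails twice, then succeeds."""
    if attempt < 2:
        raise RuntimeError("provider hiccup")
    return "ok"

def call_with_retry(tracker, provider_call, attempts=3):
    # The tracker sits INSIDE the retry loop: all attempts for this request
    # share one tracker, so failed tries and the final success are one story.
    for attempt in range(attempts):
        try:
            result = provider_call(attempt)
        except RuntimeError:
            tracker.track_error()
            continue
        tracker.track_success()
        return result
    return None

tracker = StubTracker()
print(call_with_retry(tracker, flaky), tracker.errors)  # → ok 2
```

Creating a fresh tracker per attempt would instead report each retry as a separate request, which skews success-rate metrics.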
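The import-time-crash row is worth making concrete. A sketch of the warn-and-proceed pattern, assuming a module-level inline fallback and the `LD_SDK_KEY` env var from the prerequisites (`init_ai_client` and the fallback values are illustrative placeholders):

```python
import logging
import os

logger = logging.getLogger("aiconfig-migrate")

# Inline fallback mirroring the audited hardcoded behavior. The model name
# is a placeholder; use whatever the Step 1 audit extracted.
FALLBACK = {"model": "placeholder-model", "temperature": 0.2, "enabled": False}

def init_ai_client():
    """Return a live client, or None after a warning. Never raise at import time."""
    sdk_key = os.environ.get("LD_SDK_KEY")
    if not sdk_key:
        logger.warning("LD_SDK_KEY missing; serving the hardcoded fallback config")
        return None
    return object()  # stand-in for real SDK client construction

client = init_ai_client()
config = FALLBACK if client is None else {"model": "from-launchdarkly", "enabled": True}
```

The point is that the module always imports cleanly: a missing key degrades to the fallback instead of taking the whole app down.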
What NOT to Do
- Don't skip Step 1 even when the user says "just wrap it." Without the audit, the fallback will drift from the hardcoded behavior.
- Don't delegate to `aiconfig-create` before extracting the prompt and model — the delegate needs them as inputs.
- Don't try to attach tools during initial `setup-ai-config`. Tool attachment is a separate step owned by `aiconfig-tools`.
- Don't use `launchdarkly-metric-instrument` for Step 5. That skill is for `ldClient.track()` feature metrics, not AI `tracker.track_*` calls — they are different APIs.
- Don't wire evals before the tracker is in place. Judges score traffic; without Stage 4 traffic, there is nothing to judge.
- Don't frame Stage 5 as "either UI or programmatic." There are three paths: offline eval (recommended default for migration), UI-attached auto judges (completion-mode only), and programmatic direct-judge. Offline eval is the one most people skip and usually the right starting point.
- Don't pass `sampling_rate` to `create_judge()` — it's a parameter on `Judge.evaluate()`, not `create_judge()`.
- Don't hardcode judge AI Config keys (`"accuracy-judge"`, `"relevance-judge"`, etc). The built-in keys are not canonical SDK constants; ask the user to look them up in AI Configs > Library in the LD UI.
- Don't forget the `if judge and judge.enabled:` guard after `create_judge`. It returns `Optional[Judge]` and returns `None` when the judge config is disabled for the context.
- Don't cache the config object across requests. Call `completion_config` / `completionConfig` on each request so LaunchDarkly can re-evaluate targeting.
- Don't delete the fallback once LaunchDarkly is wired up. It is required for the `enabled=False` and SDK-unreachable paths.
- Don't claim you "delegated to `aiconfig-create`" or any other sibling skill. This skill does not auto-invoke. At each handoff, print the inputs and tell the user to run the sibling slash-command, then wait. Anything else misleads the user about what just happened.
- Don't skip the `/aiconfig-targeting` step between Stage 2 and Stage 4. A freshly created variation returns `enabled=False` until targeting promotes it to fallthrough — Stage 2 verification will silently take the fallback path on every request.
- Don't attempt a multi-agent graph migration in one pass. Migrate a single agent first; use agent-graph-reference.md as the next-step read.
- Don't use `track_request()` in Python — it does not exist in `launchdarkly-server-sdk-ai`. Use `track_metrics_of` with a provider-package or custom extractor, or drop to explicit `track_duration` + `track_tokens` + `track_success`/`track_error` if you're on the streaming path.
- Don't tuple-unpack the return of `completion_config` / `agent_config` / `completionConfig` / `agentConfig`. They return a single config object (e.g. `AICompletionConfig`, `AIAgentConfig`), not `(config, tracker)`. The tracker is at `config.tracker`. LLMs hallucinate the tuple shape because pre-v0.x SDKs used to return one — the current API does not.
- Don't import `LaunchDarklyCallbackHandler` from `ldai.langchain` — neither the class nor the dotted module path exists. The real Python LangChain helper package is `ldai_langchain` (top-level module, underscore). For single-node LangChain calls, use `track_metrics_of(fn, LangChainProvider.get_ai_metrics_from_response)` — the provider package normalizes `AIMessage.usage_metadata` for you across OpenAI / Anthropic / Bedrock / Gemini. See sdk-ai-tracker-patterns.md for the full matrix.
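The caching and tuple-shape rules above can be sketched with stand-ins (`completion_config_stub`, `StubConfig`, and the key `"chat-assistant"` are illustrative placeholders, not the SDK API):

```python
class StubConfig:
    """Stand-in for the single object completion_config returns (not a tuple)."""
    def __init__(self, enabled, tracker):
        self.enabled = enabled
        self.tracker = tracker   # the tracker hangs off the config object

def completion_config_stub(key, context, default):
    # Illustrative only: the real SDK call re-evaluates targeting each time,
    # which is why the config must be fetched per request, never cached.
    return StubConfig(enabled=True, tracker=object())

def handle_request(context):
    # Fetch inside the handler, once per request.
    config = completion_config_stub("chat-assistant", context, default=None)
    if not config.enabled:
        return "fallback"        # keep the hardcoded fallback path alive
    tracker = config.tracker     # NOT: config, tracker = completion_config(...)
    assert tracker is not None
    return "wrapped"

print(handle_request({"kind": "user", "key": "u1"}))  # → wrapped
```

Tuple-unpacking the stub (or the real call) raises a `TypeError`, which is exactly the hallucinated shape the rule warns about.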
Related Skills
- `aiconfig-create` — called by Stage 2 to create the config
- `aiconfig-tools` — called by Stage 3 to create and attach tool definitions
- `aiconfig-online-evals` — called by Stage 5 to attach judges
- `aiconfig-variations` — add variations for A/B testing after migration is complete
- `aiconfig-targeting` — roll out new variations to users after migration is complete
- `aiconfig-update` — modify config properties as your app evolves
- `launchdarkly-metric-instrument` — for `ldClient.track()` feature metrics (NOT for AI tracker calls)
References
- phase-1-analysis-checklist.md — Step 1 audit checklist, grep patterns, SDK routing table, mode decision tree
- before-after-examples.md — Paired hardcoded-to-wrapped snippets for Python OpenAI, Node Anthropic, Python LangGraph
- sdk-ai-tracker-patterns.md — Every `tracker.track_*` method in Python and Node side by side, auto-helper matrix, and common gotchas
- agent-mode-frameworks.md — How to wire `agent_config` into LangGraph, CrewAI, and custom ReAct loops; dynamic tool loading pattern
- fallback-defaults-pattern.md — Three fallback patterns (inline, file-backed, bootstrap-generated) and when to use each
- agent-graph-reference.md — Out-of-scope pointer doc for multi-agent migrations