aiconfig-custom-metrics
Custom Metrics for AI Configs
Full lifecycle management of custom business metrics: create metric definitions via API, track events via SDK, retrieve metric data, and manage metrics programmatically.
Prerequisites
- LaunchDarkly SDK initialized (see `aiconfig-sdk`)
- LaunchDarkly API token with a `writer` role for metric management
- Understanding of built-in AI metrics (see `aiconfig-ai-metrics`)
API Key Detection
Before prompting the user for an API key, try to detect it automatically:
- Check Claude MCP config - Read `~/.claude/config.json` and look for `mcpServers.launchdarkly.env.LAUNCHDARKLY_API_KEY`
- Check environment variables - Look for `LAUNCHDARKLY_API_KEY`, `LAUNCHDARKLY_API_TOKEN`, or `LD_API_KEY`
- Prompt user - Only if detection fails, ask the user for their API key
```python
import os
import json
from pathlib import Path

def get_launchdarkly_api_key():
    """Auto-detect LaunchDarkly API key from Claude config or environment."""
    # 1. Check Claude MCP config
    claude_config = Path.home() / ".claude" / "config.json"
    if claude_config.exists():
        try:
            with open(claude_config) as f:
                config = json.load(f)
            api_key = (
                config.get("mcpServers", {})
                .get("launchdarkly", {})
                .get("env", {})
                .get("LAUNCHDARKLY_API_KEY")
            )
            if api_key:
                return api_key
        except (json.JSONDecodeError, IOError):
            pass
    # 2. Check environment variables
    for var in ["LAUNCHDARKLY_API_KEY", "LAUNCHDARKLY_API_TOKEN", "LD_API_KEY"]:
        if os.environ.get(var):
            return os.environ[var]
    return None
```

Metrics Lifecycle Overview
| Step | Method | Purpose |
|---|---|---|
| 1. Create | API | Define metric in LaunchDarkly |
| 2. Track | SDK | Send events to the metric |
| 3. Get | API | Retrieve metric definition/data |
| 4. Update | API | Modify metric properties |
| 5. Delete | API | Remove metric |
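As a quick cross-reference, each lifecycle step maps to one HTTP call or SDK method. The table below is derived from the code examples later in this guide; the path placeholders are illustrative:

```python
# Quick cross-reference of lifecycle steps, derived from the examples below.
# "{project}" and "{metric}" are placeholders for your own keys.
LIFECYCLE = {
    "create": ("POST",   "/api/v2/metrics/{project}"),
    "track":  ("SDK",    "ld_client.track(metric_key, context, metric_value=...)"),
    "get":    ("GET",    "/api/v2/metrics/{project}/{metric}"),
    "update": ("PATCH",  "/api/v2/metrics/{project}/{metric}"),
    "delete": ("DELETE", "/api/v2/metrics/{project}/{metric}"),
}

for step, (method, target) in LIFECYCLE.items():
    print(f"{step:7s} {method:7s} {target}")
```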
1. Create Metric (API)
Required fields for numeric custom metrics:
- `successCriteria` - Must be one of: `"HigherThanBaseline"`, `"LowerThanBaseline"`
- `unit` - e.g., `"count"`, `"percent"`, `"milliseconds"`

The API will return `400 Bad Request` if these are missing for numeric metrics.

```python
import os
import requests

def create_metric(
    project_key: str,
    metric_key: str,
    name: str,
    kind: str = "custom",
    is_numeric: bool = True,
    unit: str = "count",
    success_criteria: str = "HigherThanBaseline",
    event_key: str = None,
    description: str = None
):
    """Create a new metric definition in LaunchDarkly."""
    API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
    url = f"https://app.launchdarkly.com/api/v2/metrics/{project_key}"
    payload = {
        "key": metric_key,
        "name": name,
        "kind": kind,
        "isNumeric": is_numeric,
        "eventKey": event_key or metric_key
    }
    # unit and successCriteria are required for numeric custom metrics
    if is_numeric and kind == "custom":
        payload["unit"] = unit
        payload["successCriteria"] = success_criteria
    if description:
        payload["description"] = description
    headers = {
        "Authorization": API_TOKEN,
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    if response.status_code == 201:
        print(f"[OK] Created metric: {metric_key}")
        return response.json()
    elif response.status_code == 409:
        print(f"[INFO] Metric already exists: {metric_key}")
        return None
    else:
        print(f"[ERROR] Failed to create metric: {response.status_code}")
        print(f"        {response.text}")
        return None
```

Metric Kinds:
- `custom` - Track any event (most common for AI metrics)
- `pageview` - Track page views
- `click` - Track click events

Success Criteria (for numeric metrics):
- `HigherThanBaseline` - Higher values are better (e.g., revenue, satisfaction)
- `LowerThanBaseline` - Lower values are better (e.g., errors, latency)

Common Units:
- `count` - Generic count
- `milliseconds` - Time duration
- `percent` - Percentage values
- `dollars` - Currency
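As a sketch of how these fields combine, here is the request payload a latency metric might use. The metric key and name are made-up examples, not from the source; the field names match the create API above:

```python
# Illustrative payload for a numeric latency metric where lower is better.
# The key and name are hypothetical; field names match the create API above.
latency_metric_payload = {
    "key": "ai.response.latency",
    "name": "AI Response Latency",
    "kind": "custom",
    "isNumeric": True,
    "eventKey": "ai.response.latency",
    "unit": "milliseconds",                  # required for numeric custom metrics
    "successCriteria": "LowerThanBaseline",  # lower latency is better
}
```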
2. Track Events (SDK)
Once the metric is created, track events using the SDK:
```python
import ldclient
from ldclient import Context
from ldclient.config import Config

# Initialize (see aiconfig-sdk for details)
ldclient.set_config(Config("your-sdk-key"))
ld_client = ldclient.get()

def track_metric(ld_client, user_id: str, metric_key: str, value: float, data: dict = None):
    """Track an event to a metric."""
    context = Context.builder(user_id).build()
    ld_client.track(
        metric_key,
        context,
        data=data,
        metric_value=value
    )
```

Common Tracking Patterns
```python
def track_conversion(ld_client, user_id: str, amount: float, config_key: str):
    """Track a conversion event with revenue."""
    context = Context.builder(user_id).build()
    ld_client.track(
        "business.conversion",
        context,
        data={"configKey": config_key, "category": "electronics"},
        metric_value=amount
    )

def track_task_success(ld_client, user_id: str, task_type: str, success: bool):
    """Track task completion success/failure."""
    context = Context.builder(user_id).build()
    ld_client.track(
        "task.success_rate",
        context,
        data={"taskType": task_type},
        metric_value=1.0 if success else 0.0
    )

def track_satisfaction(ld_client, user_id: str, score: float, feedback_type: str):
    """Track user satisfaction (0-100 scale)."""
    context = Context.builder(user_id).build()
    ld_client.track(
        "user.satisfaction",
        context,
        data={"feedbackType": feedback_type},
        metric_value=score
    )
    # Track negative feedback separately for alerts
    if score < 50:
        ld_client.track(
            "user.negative_feedback",
            context,
            metric_value=1.0
        )

def track_revenue(ld_client, user_id: str, revenue: float, source: str):
    """Track revenue generated after AI interaction."""
    context = Context.builder(user_id).set("tier", "premium").build()
    if revenue > 0:
        ld_client.track(
            "revenue.impact",
            context,
            data={"source": source},
            metric_value=revenue
        )
```
3. Get Metrics (API)
Get Single Metric
```python
def get_metric(project_key: str, metric_key: str):
    """Get a single metric definition."""
    API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
    url = f"https://app.launchdarkly.com/api/v2/metrics/{project_key}/{metric_key}"
    headers = {"Authorization": API_TOKEN}
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        metric = response.json()
        print(f"[OK] Metric: {metric['key']}")
        print(f"     Name: {metric.get('name', 'N/A')}")
        print(f"     Kind: {metric.get('kind', 'N/A')}")
        print(f"     Numeric: {metric.get('isNumeric', False)}")
        print(f"     Event Key: {metric.get('eventKey', 'N/A')}")
        return metric
    elif response.status_code == 404:
        print(f"[INFO] Metric not found: {metric_key}")
        return None
    else:
        print(f"[ERROR] Failed to get metric: {response.status_code}")
        return None
```
List All Metrics
```python
def list_metrics(project_key: str, limit: int = 20):
    """List all metrics in a project."""
    API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
    url = f"https://app.launchdarkly.com/api/v2/metrics/{project_key}"
    headers = {"Authorization": API_TOKEN}
    params = {"limit": limit}
    response = requests.get(url, headers=headers, params=params)
    if response.status_code == 200:
        data = response.json()
        metrics = data.get("items", [])
        print(f"[OK] Found {len(metrics)} metrics:")
        for metric in metrics:
            numeric = "numeric" if metric.get("isNumeric") else "non-numeric"
            print(f"  - {metric['key']} ({metric.get('kind', 'custom')}, {numeric})")
        return metrics
    else:
        print(f"[ERROR] Failed to list metrics: {response.status_code}")
        return None
```
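The list endpoint wraps results in an envelope with an `items` array. As an illustrative sketch (the sample metrics below are made up), filtering that envelope for numeric metrics looks like this:

```python
# Illustrative response envelope, shaped like the list endpoint's payload.
# The sample metric entries are hypothetical.
sample_response = {
    "items": [
        {"key": "ai.task.completion", "kind": "custom", "isNumeric": True},
        {"key": "signup.click", "kind": "click", "isNumeric": False},
        {"key": "user.satisfaction", "kind": "custom", "isNumeric": True},
    ]
}

# Keep only the metrics that support numeric aggregation.
numeric_keys = [m["key"] for m in sample_response["items"] if m.get("isNumeric")]
print(numeric_keys)  # -> ['ai.task.completion', 'user.satisfaction']
```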
4. Update Metric (API)
```python
def update_metric(project_key: str, metric_key: str, updates: list):
    """
    Update a metric using JSON Patch operations.

    Args:
        updates: List of patch operations, e.g.:
            [{"op": "replace", "path": "/name", "value": "New Name"}]
    """
    API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
    url = f"https://app.launchdarkly.com/api/v2/metrics/{project_key}/{metric_key}"
    headers = {
        "Authorization": API_TOKEN,
        "Content-Type": "application/json"
    }
    response = requests.patch(url, json=updates, headers=headers)
    if response.status_code == 200:
        print(f"[OK] Updated metric: {metric_key}")
        return response.json()
    elif response.status_code == 404:
        print(f"[ERROR] Metric not found: {metric_key}")
        return None
    else:
        print(f"[ERROR] Failed to update metric: {response.status_code}")
        print(f"        {response.text}")
        return None
```
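JSON Patch bodies are plain lists of operations. As an illustrative sketch, here are operations that would flip a metric's success criteria and description; the paths mirror fields used elsewhere in this guide, though whether a given field is patchable is up to the LaunchDarkly API:

```python
# Illustrative JSON Patch operations for a metric update.
# Paths mirror the metric fields used elsewhere in this guide; whether a
# given field accepts a patch depends on the LaunchDarkly API.
patch_ops = [
    {"op": "replace", "path": "/successCriteria", "value": "LowerThanBaseline"},
    {"op": "replace", "path": "/description", "value": "Lower is better after rework"},
]

# Each operation names what to do, where, and with which value.
for op in patch_ops:
    print(op["op"], op["path"])
```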
Example: Update metric name and description
```python
def rename_metric(project_key: str, metric_key: str, new_name: str, new_description: str = None):
    """Rename a metric and optionally update its description."""
    updates = [
        {"op": "replace", "path": "/name", "value": new_name}
    ]
    if new_description:
        updates.append({"op": "replace", "path": "/description", "value": new_description})
    return update_metric(project_key, metric_key, updates)
```

5. Delete Metric (API)
```python
def delete_metric(project_key: str, metric_key: str):
    """Delete a metric from the project."""
    API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
    url = f"https://app.launchdarkly.com/api/v2/metrics/{project_key}/{metric_key}"
    headers = {"Authorization": API_TOKEN}
    response = requests.delete(url, headers=headers)
    if response.status_code == 204:
        print(f"[OK] Deleted metric: {metric_key}")
        return True
    elif response.status_code == 404:
        print(f"[INFO] Metric not found: {metric_key}")
        return False
    else:
        print(f"[ERROR] Failed to delete metric: {response.status_code}")
        return False
```
Complete Workflow Example
```python
import os
import requests
import ldclient
from ldclient import Context
from ldclient.config import Config

# Setup
API_TOKEN = os.environ.get("LAUNCHDARKLY_API_TOKEN")
SDK_KEY = os.environ.get("LAUNCHDARKLY_SDK_KEY")
PROJECT_KEY = "support-ai"

ldclient.set_config(Config(SDK_KEY))
ld_client = ldclient.get()

# 1. Create metric
create_metric(
    PROJECT_KEY,
    "ai.task.completion",
    name="AI Task Completion Rate",
    kind="custom",
    is_numeric=True,
    description="Tracks successful AI task completions"
)

# 2. Track events
context = Context.builder("user-123").build()
ld_client.track("ai.task.completion", context, metric_value=1.0)
ld_client.track("ai.task.completion", context, metric_value=1.0)
ld_client.track("ai.task.completion", context, metric_value=0.0)  # failure
ld_client.flush()

# 3. Get metric definition
metric = get_metric(PROJECT_KEY, "ai.task.completion")

# 4. Update metric name
rename_metric(PROJECT_KEY, "ai.task.completion", "AI Task Success Rate")

# 5. List all metrics
list_metrics(PROJECT_KEY)

# 6. Delete metric (when no longer needed)
delete_metric(PROJECT_KEY, "ai.task.completion")
```

Session Metrics Tracker
```python
import time
from ldclient import Context

class SessionMetricsTracker:
    """Track metrics across an entire user session."""

    def __init__(self, ld_client):
        self.ld_client = ld_client
        self.session_data = {}

    def start_session(self, user_id: str, session_id: str):
        """Initialize session tracking."""
        self.session_data[session_id] = {
            "user_id": user_id,
            "start_time": time.time(),
            "interactions": 0,
            "successful_tasks": 0
        }

    def track_interaction(self, session_id: str, success: bool):
        """Track an individual interaction within the session."""
        if session_id not in self.session_data:
            return
        session = self.session_data[session_id]
        session["interactions"] += 1
        if success:
            session["successful_tasks"] += 1

    def end_session(self, session_id: str):
        """Finalize and track session metrics."""
        if session_id not in self.session_data:
            return None
        session = self.session_data[session_id]
        duration = time.time() - session["start_time"]
        context = Context.builder(session["user_id"]).build()
        # Track session duration
        self.ld_client.track(
            "session.duration",
            context,
            data={"interactions": session["interactions"]},
            metric_value=duration
        )
        # Track session success rate
        if session["interactions"] > 0:
            success_rate = session["successful_tasks"] / session["interactions"]
            self.ld_client.track(
                "session.success_rate",
                context,
                metric_value=success_rate * 100
            )
        result = dict(session)
        result["duration"] = duration
        del self.session_data[session_id]
        return result
```
Naming Conventions
```python
# Use dot notation for hierarchy
"quality.accuracy"
"quality.relevance"
"user.satisfaction"
"user.engagement"
"revenue.conversion"
"task.success_rate"
"session.duration"
"ai.task.completion"
"ai.recommendation.conversion"
```
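A small helper can enforce the dot-notation convention in code review or CI. The function below is hypothetical (not part of any LaunchDarkly SDK) and encodes one reasonable reading of the convention: lowercase segments separated by dots:

```python
import re

# Hypothetical helper, not part of the LaunchDarkly SDK: checks that a
# metric key is lowercase dot-notation, e.g. "quality.accuracy".
KEY_PATTERN = re.compile(r"^[a-z][a-z0-9_]*(\.[a-z][a-z0-9_]*)+$")

def is_valid_metric_key(key: str) -> bool:
    return bool(KEY_PATTERN.match(key))

print(is_valid_metric_key("ai.task.completion"))  # True
print(is_valid_metric_key("AITaskCompletion"))    # False: no hierarchy, not lowercase
```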
Best Practices
- Create Before Track - Metric must exist before tracking events
- Use Numeric Metrics - Set `isNumeric=True` for aggregation
- Consistent Keys - Use the same key in `create_metric()` and `ld_client.track()`
- Flush in Serverless - Call `ld_client.flush()` before the Lambda terminates
- Rate Limit - Don't track on every keystroke
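The rate-limit advice can be sketched as a simple throttle around `track()`. The wrapper and stub client below are illustrative, not part of the SDK; the stub only counts calls so the behavior is visible:

```python
import time

class ThrottledTracker:
    """Illustrative wrapper: drops repeat events inside a cooldown window."""
    def __init__(self, ld_client, min_interval_s: float = 1.0):
        self.ld_client = ld_client
        self.min_interval_s = min_interval_s
        self._last_sent = {}  # metric_key -> timestamp of last sent event

    def track(self, metric_key, context, metric_value=None):
        now = time.monotonic()
        last = self._last_sent.get(metric_key, 0.0)
        if now - last < self.min_interval_s:
            return False  # suppressed: too soon after the previous event
        self._last_sent[metric_key] = now
        self.ld_client.track(metric_key, context, metric_value=metric_value)
        return True

# Stub client for demonstration; counts calls instead of sending events.
class _StubClient:
    def __init__(self):
        self.sent = 0
    def track(self, key, context, metric_value=None):
        self.sent += 1

stub = _StubClient()
tracker = ThrottledTracker(stub, min_interval_s=0.05)
for _ in range(5):
    tracker.track("user.keystrokes", context=None, metric_value=1.0)
print(stub.sent)  # only the first event inside the window is sent -> 1
```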
Viewing Metrics
Custom metrics appear in:
- Metrics page in LaunchDarkly UI
- Monitoring tab of your AI Config
- Via API, using `get_metric()` or `list_metrics()`
Related Skills
- `aiconfig-sdk` - SDK setup
- `aiconfig-ai-metrics` - Built-in AI metrics (tokens, duration, cost)
- `aiconfig-online-evals` - Quality metrics via judges