integrate-flowlines-sdk-python

Flowlines SDK for Python — Agent Skill

What is Flowlines

Flowlines is an observability SDK for LLM-powered Python applications. It instruments LLM provider APIs using OpenTelemetry, automatically capturing requests, responses, timing, and errors. It filters telemetry to only LLM-related spans and exports them via OTLP/HTTP to the Flowlines backend.
Supported LLM providers: OpenAI, Anthropic, AWS Bedrock, Cohere, Google Generative AI, Vertex AI, Together AI. Supported frameworks/tools: LangChain, LlamaIndex, MCP, Pinecone, ChromaDB, Qdrant.

Installation

Requires Python 3.11+.

```bash
pip install flowlines
```

Then install instrumentation extras for the providers used in the project:

Single provider

```bash
pip install flowlines[openai]
```

Multiple providers

```bash
pip install flowlines[openai,anthropic]
```

All supported providers

```bash
pip install flowlines[all]
```

Available extras: `openai`, `anthropic`, `bedrock`, `cohere`, `google-generativeai`, `vertexai`, `together`, `pinecone`, `chromadb`, `qdrant`, `langchain`, `llamaindex`, `mcp`.

Integration

There are three integration modes. Pick the one that matches the project's OpenTelemetry situation.
Mode A — No existing OpenTelemetry setup (default)

Use this when the project does NOT already have its own OpenTelemetry `TracerProvider`. This is the most common case.

```python
from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>")
```

This single call:
  1. Creates an OpenTelemetry `TracerProvider`
  2. Auto-detects which LLM libraries are installed and instruments them
  3. Filters spans to only export LLM-related telemetry
  4. Sends data to the Flowlines backend via OTLP/HTTP

Mode B1 — Existing OpenTelemetry setup (`has_external_otel=True`)

Use this when the project already manages its own `TracerProvider`.

```python
from flowlines import Flowlines
from opentelemetry.sdk.trace import TracerProvider

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_external_otel=True)

provider = TracerProvider()

# Add the Flowlines span processor to the existing provider
processor = flowlines.create_span_processor()
provider.add_span_processor(processor)

# Instrument providers using the Flowlines instrumentor registry
for instrumentor in flowlines.get_instrumentors():
    instrumentor.instrument(tracer_provider=provider)
```

- `create_span_processor()` must be called exactly once.
- `get_instrumentors()` returns instrumentor instances only for libraries that are currently installed.

Mode B2 — Traceloop already initialized (`has_traceloop=True`)

Use this when the Traceloop SDK is already initialized. Traceloop must be initialized BEFORE Flowlines.

```python
from flowlines import Flowlines

flowlines = Flowlines(api_key="<FLOWLINES_API_KEY>", has_traceloop=True)
```

Flowlines adds its span processor to the existing Traceloop `TracerProvider`. No instrumentor registration is needed.

Critical rules

  1. Initialize Flowlines BEFORE creating LLM clients. The `Flowlines()` constructor must run before any LLM provider client is instantiated (e.g., `OpenAI()`, `Anthropic()`). If the client is created first, its calls will not be captured.
  2. Flowlines is a singleton. Only one `Flowlines()` instance may exist. A second call raises `RuntimeError`. Store the instance and reuse it. Do NOT instantiate it multiple times.
  3. `has_external_otel` and `has_traceloop` are mutually exclusive. Setting both to `True` raises `ValueError`.
  4. `user_id` is mandatory in `context()`. The context manager requires `user_id` as a keyword argument. `session_id` and `agent_id` are optional.
  5. Context does not auto-propagate to child threads/tasks. If using threads or async tasks, set context in each thread/task explicitly.
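Rule 5 mirrors how per-thread context works in Python generally: a value set in the main thread is not visible in a worker thread unless the worker sets it itself. The sketch below demonstrates that behavior with the stdlib `contextvars` module standing in for the SDK's internal context storage (the variable names are illustrative, not Flowlines API):

```python
import contextvars
import threading

# Stand-in for the SDK's internal per-context storage (illustrative only).
_user_id = contextvars.ContextVar("user_id", default=None)

results = {}

def worker(name, user_id=None):
    # A new thread does NOT see values set in the parent thread.
    if user_id is not None:
        _user_id.set(user_id)  # explicit per-thread setup, as rule 5 requires
    results[name] = _user_id.get()

_user_id.set("user-42")  # set in the main thread only

t1 = threading.Thread(target=worker, args=("implicit",))
t2 = threading.Thread(target=worker, args=("explicit",), kwargs={"user_id": "user-42"})
t1.start(); t2.start(); t1.join(); t2.join()

print(results["implicit"])  # None: context did not propagate to the thread
print(results["explicit"])  # user-42: context was set explicitly in the thread
```

The same reasoning applies to `flowlines.context()`: enter the context manager (or call `set_context()`) inside each thread or task body, not only in the code that spawns it.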

User, session, and agent tracking

Tag LLM calls with user/session/agent IDs using the context manager:

```python
with flowlines.context(user_id="user-42", session_id="sess-abc", agent_id="agent-1"):
    client.chat.completions.create(...)  # this span gets user_id, session_id, and agent_id
```

`session_id` and `agent_id` are optional:

```python
with flowlines.context(user_id="user-42"):
    client.chat.completions.create(...)
```

For cases where a context manager doesn't fit (e.g., across request boundaries in web frameworks), use the imperative API:

```python
token = Flowlines.set_context(user_id="user-42", session_id="sess-abc", agent_id="agent-1")
try:
    client.chat.completions.create(...)
finally:
    Flowlines.clear_context(token)
```

`set_context()` / `clear_context()` are static methods on the `Flowlines` class.
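The token returned by `set_context()` follows the same save/restore discipline as Python's own `contextvars` tokens: `clear_context(token)` restores whatever context was active before, rather than simply wiping it. A stdlib sketch of that pattern (not Flowlines code; the `ContextVar` here is only a stand-in):

```python
import contextvars

# Stand-in variable; Flowlines manages its own storage internally.
user_id = contextvars.ContextVar("user_id", default="anonymous")

token = user_id.set("user-42")  # analogous to Flowlines.set_context(...)
try:
    active = user_id.get()       # "user-42" while the token is held
finally:
    user_id.reset(token)         # analogous to Flowlines.clear_context(token)

print(active)          # user-42
print(user_id.get())   # anonymous: the previous value is restored, not cleared
```

This is why the `try`/`finally` shape matters: even if the LLM call raises, the token restores the prior context for whatever code runs next on the same thread.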

Context integration guidance

When integrating `flowlines.context()`, you MUST wrap LLM calls with context. Follow these steps:
  1. Identify existing data in the codebase that maps to `user_id`, `session_id`, and `agent_id`:
     • `user_id`: the end-user making the request (e.g., authenticated user ID, email, API key owner)
     • `session_id`: the conversation or session grouping multiple interactions (e.g., chat thread ID, session token, conversation UUID)
     • `agent_id`: the AI agent or assistant handling the request (e.g., agent name, bot identifier, assistant ID)
  2. If obvious mappings exist, use them directly. For example, if the app has `request.user.id` and a `thread_id`, wire them in:
     ```python
     with flowlines.context(user_id=request.user.id, session_id=thread_id):
         ...
     ```
  3. If mappings are unclear, ask the user which variables or fields should be used for `user_id`, `session_id`, and `agent_id`.
  4. If no data is available yet, propose using placeholder values with TODO comments so the integration is functional and easy to complete later:
     ```python
     with flowlines.context(
         user_id="anonymous",  # TODO: replace with actual user identifier
         session_id=f"sess-{uuid.uuid4().hex[:8]}",  # TODO: replace with actual session/conversation ID
         agent_id="my-agent",  # TODO: replace with actual agent identifier
     ):
         ...
     ```
     Only include fields that are relevant. `session_id` and `agent_id` can be omitted entirely if not applicable.
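Since the optional fields can be omitted entirely, one convenient pattern is a small helper that drops unset fields before expanding into `flowlines.context(**kwargs)`. The helper below is a hypothetical convenience sketched for this guide, not part of the SDK:

```python
import uuid

def build_context_kwargs(user_id, session_id=None, agent_id=None):
    """Build keyword arguments for flowlines.context(), dropping unset optionals."""
    kwargs = {"user_id": user_id}
    if session_id is not None:
        kwargs["session_id"] = session_id
    if agent_id is not None:
        kwargs["agent_id"] = agent_id
    return kwargs

# A placeholder session ID in the style suggested in step 4 above.
placeholder_session = f"sess-{uuid.uuid4().hex[:8]}"

print(build_context_kwargs("user-42"))  # only user_id
print(build_context_kwargs("user-42", session_id=placeholder_session))
```

In application code this would be used as `with flowlines.context(**build_context_kwargs(uid, session_id=sid)): ...`, keeping the call site clean when some IDs are conditionally available.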

Constructor parameters

```python
Flowlines(
    api_key: str,                    # Required. The Flowlines API key.
    endpoint: str = "https://ingest.flowlines.ai",  # Backend URL.
    has_external_otel: bool = False,  # True if project has its own TracerProvider.
    has_traceloop: bool = False,      # True if Traceloop is already initialized.
    verbose: bool = False,            # True to enable debug logging to stderr.
)
```

Public API summary

| Method / attribute | Description |
| --- | --- |
| `Flowlines(api_key, ...)` | Constructor. Initializes the SDK (singleton). |
| `flowlines.context(user_id=..., session_id=..., agent_id=...)` | Context manager to tag spans with user/session/agent. |
| `Flowlines.set_context(user_id=..., session_id=..., agent_id=...)` | Static. Imperative context setting; returns a token. |
| `Flowlines.clear_context(token)` | Static. Restores the previous context using the token. |
| `flowlines.create_span_processor()` | Returns a `SpanProcessor`. Mode B1 only. Call once. |
| `flowlines.get_instrumentors()` | Returns a list of available instrumentor instances. |
| `flowlines.shutdown()` | Flush and shut down. Called automatically via `atexit`. |

Imports

The public API is exported from the top-level package:

```python
from flowlines import Flowlines
from flowlines import FlowlinesExporter  # only needed for advanced use
```

Verbose / debug mode

Pass `verbose=True` to print debug information to stderr:

```python
flowlines = Flowlines(api_key="...", verbose=True)
```

This logs instrumentor discovery, span filtering, and export results.

Shutdown

`flowlines.shutdown()` is registered as an `atexit` handler automatically. It is idempotent — safe to call multiple times. You can call it explicitly if you need to ensure spans are flushed before the process ends (e.g., in serverless environments).
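The idempotent, `atexit`-registered shutdown contract can be sketched with a simple guard flag. This is a generic pattern illustrating the documented behavior, not the SDK's actual implementation:

```python
import atexit

class Telemetry:
    """Minimal sketch of an idempotent, atexit-registered shutdown."""

    def __init__(self):
        self._shut_down = False
        self.flush_count = 0
        atexit.register(self.shutdown)  # runs automatically at interpreter exit

    def shutdown(self):
        if self._shut_down:       # second and later calls are no-ops
            return
        self._shut_down = True
        self.flush_count += 1     # stands in for flushing pending spans

t = Telemetry()
t.shutdown()  # explicit call, e.g., at the end of a serverless handler
t.shutdown()  # safe: idempotent, does nothing
print(t.flush_count)  # 1
```

Because the handler is both registered with `atexit` and safe to invoke manually, calling `flowlines.shutdown()` at the end of a serverless invocation costs nothing in environments where normal interpreter exit would have flushed anyway.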

Common mistakes to avoid

  • Do NOT create the LLM client before initializing Flowlines — spans will be missed.
  • Do NOT instantiate `Flowlines()` more than once — it raises `RuntimeError`.
  • Do NOT set both `has_external_otel=True` and `has_traceloop=True`.
  • Do NOT forget to install the instrumentation extras for the providers you use (e.g., `flowlines[openai]`).
  • Do NOT assume context propagates to child threads — set it explicitly in each thread/task.