addon-langchain-llm


Add-on: LangChain LLM


Use this skill when an existing project needs LangChain primitives for chat, retrieval, or summarization.

Compatibility


  • Works with `architect-python-uv-fastapi-sqlalchemy`, `architect-python-uv-batch`, and `architect-nextjs-bun-app`.
  • Can be combined with `addon-rag-ingestion-pipeline`.
  • Can be combined with `addon-langgraph-agent` when graph orchestration is required.
  • Can be combined with `addon-llm-judge-evals`; when used together, declare `langchain` in `config/skill_manifest.json` so the judge runner can resolve the backend without guessing.

Inputs


Collect:
  • `LLM_PROVIDER`: `openai` | `anthropic` | `ollama`.
  • `DEFAULT_MODEL`: provider model id.
  • `ENABLE_STREAMING`: `yes` | `no` (default `yes`).
  • `USE_RAG`: `yes` | `no`.
  • `MAX_INPUT_TOKENS`: default `8000`.
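One way to capture these inputs is a small typed settings object — a minimal sketch using a stdlib dataclass; the class and field names are illustrative, not part of the skill contract:

```python
from dataclasses import dataclass

ALLOWED_PROVIDERS = {"openai", "anthropic", "ollama"}


@dataclass
class LLMSettings:
    """Collected add-on inputs with the defaults described above."""

    llm_provider: str
    default_model: str
    use_rag: bool
    enable_streaming: bool = True  # ENABLE_STREAMING defaults to yes
    max_input_tokens: int = 8000   # MAX_INPUT_TOKENS default

    def __post_init__(self) -> None:
        # Reject providers outside the documented set early.
        if self.llm_provider not in ALLOWED_PROVIDERS:
            raise ValueError(f"unsupported LLM_PROVIDER: {self.llm_provider!r}")
        if self.max_input_tokens <= 0:
            raise ValueError("MAX_INPUT_TOKENS must be positive")
```

In a real project these values would typically be loaded from the environment (e.g. via `pydantic-settings`, which the dependency list already includes); the plain dataclass keeps the sketch dependency-free.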
Integration Workflow


  1. Add dependencies:
     • Python:

       ```bash
       uv add langchain langchain-core langchain-community pydantic-settings tiktoken
       ```

     • Next.js:

       ```bash
       bun add langchain zod
       ```

     • Provider packages (as needed):

       ```bash
       uv add langchain-openai langchain-anthropic langchain-ollama
       bun add @langchain/openai @langchain/anthropic @langchain/ollama
       ```

  2. Add files by architecture:
     • Python API:

       ```text
       src/{{MODULE_NAME}}/llm/provider.py
       src/{{MODULE_NAME}}/llm/chains.py
       src/{{MODULE_NAME}}/api/routes/llm.py
       ```

     • Next.js:

       ```text
       src/lib/llm/langchain.ts
       src/lib/llm/chains.ts
       src/app/api/llm/chat/route.ts
       ```

  3. Enforce typed request/response contracts:
     • Validate input lengths before chain invocation.
     • Return a stable schema for streaming and non-streaming modes.
  4. If `USE_RAG=yes`, compose the retriever + prompt + model chain:
     • Keep retrieval source metadata in outputs.
     • Bound document count and token budget.
  5. If `addon-llm-judge-evals` is also selected:
     • emit `config/skill_manifest.json` with `addon-langchain-llm` in `addons`
     • declare `"judge_backends": ["langchain"]` in `capabilities`
     • allow the judge runner to reuse `DEFAULT_MODEL` when `JUDGE_MODEL` is unset
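The manifest emission in the last step can be sketched as follows; the exact manifest schema beyond the `addons` and `capabilities` keys named above is an assumption for illustration:

```python
import json
from pathlib import Path


def emit_skill_manifest(path: Path = Path("config/skill_manifest.json")) -> dict:
    """Write the manifest the judge runner reads to resolve its backend."""
    manifest = {
        "addons": ["addon-langchain-llm", "addon-llm-judge-evals"],
        "capabilities": {"judge_backends": ["langchain"]},
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(manifest, indent=2))
    return manifest
```

With this in place the judge runner can read `capabilities.judge_backends` instead of probing installed packages.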

Required Template


Chat response shape


```json
{
  "outputText": "string",
  "model": "string",
  "provider": "string"
}
```
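Enforcing this shape on the Python side might look like the following sketch; the function name and the use of plain dict checks (rather than a Pydantic model) are illustrative choices:

```python
def validate_chat_response(payload: dict) -> dict:
    """Check that a chat response matches the required shape above."""
    required = {"outputText", "model", "provider"}
    for key in required:
        if not isinstance(payload.get(key), str):
            raise ValueError(f"chat response field {key!r} missing or not a string")
    return payload
```

Running the same check in both streaming and non-streaming paths keeps the contract stable across modes, as the workflow requires.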

Guardrails


  • Documentation contract for generated code:
    • Python: write module docstrings and docstrings for public classes, methods, and functions.
    • Next.js/TypeScript: write JSDoc for exported components, hooks, utilities, and route handlers.
    • Add concise rationale comments only for non-obvious logic, invariants, or safety constraints.
    • Apply this contract even when using template snippets below; expand templates as needed.
  • Enforce provider/model allow-lists.
  • Add timeout and retry limits around provider calls.
  • Never log secrets or raw auth headers.
  • On streaming disconnect, stop upstream generation promptly.
  • If judge evals are enabled, keep the judge path on the same provider abstraction instead of bypassing it with ad hoc SDK calls.
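The allow-list guardrail can be sketched like this; the allow-list contents and function name are assumptions for illustration and would come from deployment config in practice:

```python
MODEL_ALLOW_LIST: dict[str, set[str]] = {
    # Illustrative entries; populate from your deployment config.
    "openai": {"gpt-4o-mini"},
    "anthropic": {"claude-3-5-haiku-latest"},
    "ollama": {"llama3.1"},
}


def check_model_allowed(provider: str, model: str) -> None:
    """Reject provider/model pairs outside the allow-list before any call."""
    allowed = MODEL_ALLOW_LIST.get(provider)
    if allowed is None:
        raise PermissionError(f"provider not allowed: {provider!r}")
    if model not in allowed:
        raise PermissionError(f"model not allowed for {provider}: {model!r}")
```

Calling this check at the route boundary, before the chain is constructed, keeps disallowed models from ever reaching a provider SDK.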

Validation Checklist


  • Confirm generated code includes required docstrings/JSDoc and rationale comments for non-obvious logic.
```bash
uv run ruff check . || true
uv run mypy src || true
bun run lint || true
rg -n "langchain|outputText|provider" src
```
  • Manual checks:
  • Typed chat route returns valid response.
  • Invalid payloads fail with controlled validation errors.

Decision Justification Rule


  • Every non-trivial decision must include a concrete justification.
  • Capture the alternatives considered and why they were rejected.
  • State tradeoffs and residual risks for the chosen option.
  • If justification is missing, treat the task as incomplete and surface it as a blocker.