n8n-subworkflows
<!-- TEMPORARY: change workflow prefix searching to tags when tag tools are added to mcp -->
n8n Sub-workflows
Sub-workflows are reusable functions. The `Execute Workflow Trigger` declares input parameters, the body does work, the last node returns output. Callers invoke it like any other node.

That framing opens up the function-shaped wins: encapsulation, reuse, testability, replaceability. It's the primary reuse mechanism in n8n, and unfortunately underused.

Without sub-workflows, the same logic gets duplicated across workflows. Bug fixes happen in multiple places, one gets missed, and "identical" copies drift.
Non-negotiables
- Search before you build. Before writing logic that handles a generic problem, check if a sub-workflow already exists. Use `search_workflows({ query: 'Subworkflow' })`, `query: '<keyword>'`, etc. The MCP can't filter by tags, so naming is the discovery mechanism.
- `Execute Workflow Trigger` uses "Define Below" with typed fields, not passthrough. Define Below is the only mode that lets agent tools (`fromAi`) and structured callers pass values in. The single exception: passthrough is required when the sub-workflow specifically needs to receive binary, and that sub-workflow then can't be wired as an agent tool directly. See "Sub-workflow inputs and outputs" below.
Strong defaults
- Anything reusable becomes a sub-workflow. If a logical chunk could plausibly be needed elsewhere, extract it. Exception: trivial wrappers (one HTTP call, no logic) and tightly-coupled-to-this-caller chunks.
- Default to stateless for pure logic (input → output, no external state). For state-touching logic, build deliberately stateful sub-workflows that abstract the operation behind a clean contract (the ORM / repository pattern). What to avoid is accidental state: a "validate" sub-workflow that quietly writes to a log table.
- Verb-first prefix naming: `Subworkflow: Parse RFC2822 date`, `Customer: hydrate from Stripe`, `Tool: list available credentials`. The prefix is what `search_workflows` matches on. See `references/NAMING_AND_DISCOVERY.md`.
- Description carries keywords. Input/output shape + representative terms, so varied queries surface it.
- Split when input contracts genuinely differ (binary vs JSON, sync vs async, divergent auth schemes). Don't fit divergent contracts under one trigger via passthrough + internal branching. See "Splitting by input shape" in `references/SUBWORKFLOW_PATTERNS.md`.
Decision tree: should this be a sub-workflow?
About to write a chunk of logic?
├── Could this plausibly be needed in another workflow?
│ ├── Yes → extract to sub-workflow
│ └── No → keep inline
│
├── Is this chunk >5 nodes and conceptually one thing?
│ └── Probably yes-extract, even if reuse isn't certain. It's still better isolated.
│
├── Is this chunk dealing with a generic concern (auth, retry, parsing, formatting)?
│ └── Almost certainly extract. These are the canonical reusable sub-workflows.
│
└── Is this chunk doing one HTTP call with no logic around it?
    └── Don't extract. Extra workflow boundary for nothing.

Stateless vs. stateful sub-workflows
Both are first-class. The choice is about intent and encapsulation.
Stateless
Takes input, returns output. No I/O outside the inputs/outputs. Default for pure logic.
Examples:
- `Subworkflow: Parse RFC2822 date`. Input: date string. Output: ISO date or error.
- `Subworkflow: Compute MRR from subscription`. Input: subscription object. Output: MRR number.
- `Subworkflow: Format invoice as HTML`. Input: invoice data. Output: HTML string.
When you need the logic again, call it without worrying about side effects firing.
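Modeled as a plain function, a stateless sub-workflow is just input → structured output. This is an illustrative TypeScript sketch, not n8n API: `parseRfc2822Date` and its result shape are stand-ins for the `Subworkflow: Parse RFC2822 date` contract described above (JavaScript's `Date.parse` accepts RFC 2822-style strings, which keeps the sketch honest without a real parser).

```ts
// Illustrative model of `Subworkflow: Parse RFC2822 date` as a pure function.
// Input: date string. Output: ISO date or a structured error. No side effects.
type ParseResult =
  | { success: true; iso: string }
  | { success: false; error: string };

function parseRfc2822Date(input: string): ParseResult {
  const ms = Date.parse(input); // JS engines accept RFC 2822-style dates
  if (Number.isNaN(ms)) {
    return { success: false, error: `unparseable date: ${input}` };
  }
  return { success: true, iso: new Date(ms).toISOString() };
}
```

Because nothing outside the return value changes, a caller can retry or compose this freely — the property the surrounding text is pointing at.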
Stateful (deliberate)
Reads or writes external state behind a clean input/output contract. Comparable to a repository pattern: the sub-workflow abstracts the state operation so callers think in domain terms, not implementation.
Examples:
- `Customer: get by id`. Input: id. Output: customer object or `{ ok: false, error: 'not_found' }`. Reads the DB.
- `Customer: write billing record`. Input: record. Output: `{ ok: true, id }`. Writes the DB.
- `Audit: append event`. Input: event. Output: `{ ok: true, eventId }`. Writes to a logging store.
- `Notify: send to on-call`. Input: channel, message. Output: `{ ok: true, messageId }`. Calls Slack/SMTP.
The point of building these as sub-workflows:
- Callers think in domain terms (`get customer by id`), not in storage (`SELECT * FROM customers ...`).
- Swap the underlying store/API behind it (Postgres → Supabase, native node → HTTP) without touching callers.
- Idempotency, retry, and validation become the sub-workflow's responsibility, centralized in one place.
What to avoid is accidental state: a sub-workflow named/described as pure that quietly writes to a log table. That ambushes callers who reasonably assumed it was safe to retry or compose. Either make the side effect part of the contract (rename, document, return its result) or move it out.
When to extract
The two main signals:
1. Conceptual coherence
When a chunk of nodes does one logical thing, even unreused, it's often worth extracting. Beyond reuse:
- Readability. The caller sees one node ("Parse date") instead of five.
- Testability. Run the sub-workflow on its own with pinned data.
- Replaceability. Swapping implementations doesn't ripple to callers.
Cost: an extra workflow boundary.
For most 5+ node chunks doing one logical thing, extraction is worth it.
1.5 The fire-and-forget audit-log pattern
Audit logging is used here as a concrete illustration of the fire-and-forget stateful pattern. Don't add audit logging to a workflow unless the user asked for it. The pattern itself (fire a sub-workflow async, don't block on it) generalizes to any side observation: metrics, notifications, etc.
A deliberately stateful audit-log sub-workflow invoked with `Execute Workflow`'s `waitForSubWorkflow: false` so the caller doesn't block on the write.

Caller ──→ [Execute Workflow: DB audit log]
             { title: 'Email Confirmation Received',
               description: <serialized data> }
             waitForSubWorkflow: false
  ↓ (caller continues immediately)
──→ [Continue with next step]

The sub-workflow takes a title and description, writes to a logging table (or Slack, or both), returns. The caller doesn't wait. Audit log is a side observation, not the critical path.
When the user has asked for it, fire one at every meaningful state transition ("email confirmation received", "user verified", "processing started", "eligibility decision made") so the timeline reconstructs from logs.
Why it's valuable:
- Observability for free. Per-execution timeline when something goes wrong.
- No coupling. Implementation (DB, Slack, both) can change without touching callers.
- Async by default. `waitForSubWorkflow: false` means the audit doesn't slow the main workflow.
The audit-log workflow is the right kind of stateful sub-workflow. The side effect is the point.
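The non-blocking behavior can be modeled in plain TypeScript (illustrative, not n8n API — `auditSubworkflow` and `mainWorkflowStep` are hypothetical): firing an async call without awaiting it is the same shape as `waitForSubWorkflow: false`.

```ts
// Illustrative model of a fire-and-forget audit write.
const auditLog: string[] = [];

async function auditSubworkflow(title: string, description: string): Promise<void> {
  // Stand-in for the DB/Slack write; it lands some time later.
  await new Promise<void>((resolve) => setTimeout(resolve, 10));
  auditLog.push(`${title}: ${description}`);
}

function mainWorkflowStep(): string {
  // Fire and forget: no await, so the caller is not blocked on the write.
  void auditSubworkflow("Email Confirmation Received", "serialized payload");
  return "caller continues immediately";
}
```

The trade-off matches the text: the caller gets speed and decoupling, but sees no return data from the audit call and cannot react to its failure.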
1.7 The middleware pattern
When a webhook workflow is API-shaped, treat it like one. Sub-workflows become middleware: small stateless functions that run before the main handler and either pass through or short-circuit with a 4xx.
Webhook
→ [Subworkflow: Verify JWT] # decode + validate; 401 on failure
→ [Subworkflow: Rate limit] # check + bump counter; 429 on failure
→ IF (all middleware ok)
→ Main handler logic
→ Respond 200
  → ELSE → Respond with the 4xx the middleware returned

Canonical example: custom JWT auth rolled inside n8n. `Subworkflow: Verify JWT` takes the raw `Authorization` header, decodes, validates signature and expiry, returns `{ ok: true, user_id }` or `{ ok: false, status: 401, message }`. The caller IFs on `ok`, responds early on failure, continues on success.

Why a sub-workflow and not inline: every webhook that needs auth calls the same one. Swap the library, rotate the signing key, or add refresh-token logic in a single place. The reuse target is exact, the contract is small, and the failure response shape is consistent across every API endpoint.

Pairs with `n8n-error-handling` for 4xx/5xx response shapes and `n8n-credentials-and-security` for the underlying secret handling.
2. Repetition pattern
You're about to build something you've built before. Stop. Search.
search_workflows({ query: 'date' })
search_workflows({ query: 'Customer' })
search_workflows({ query: 'Subworkflow:' })

If something matches, use it. If not, build it as a sub-workflow so the next search finds it. The prefix convention (`Subworkflow:`, `Customer:`, etc.) is what makes that work.
Subworkflow:Customer:当你准备构建之前已经构建过的内容时,停止操作,先搜索。
search_workflows({ query: 'date' })
search_workflows({ query: 'Customer' })
search_workflows({ query: 'Subworkflow:' })如果找到匹配的子工作流,就使用它。如果没有,就将其构建为子工作流,以便后续搜索能找到它。前缀约定(、等)是实现这一点的关键。
Subworkflow:Customer:Linear, long workflows are fine when most of the work is in sub-workflows
A workflow can have 20+ nodes and still be readable if it's mostly a linear orchestration of sub-workflow calls and decisions. The shape (audit-log nodes shown only because they're a vivid example of "side observation between real steps", include them only if the user asked for audit logging):
Webhook
→ Audit log (sub-workflow)
→ Validate
→ Audit log (sub-workflow)
→ IF auth ok
→ Look up user (or sub-workflow)
→ Audit log (sub-workflow)
→ Process step 1 (sub-workflow)
→ Audit log (sub-workflow)
→ Process step 2 (sub-workflow)
→ Audit log (sub-workflow)
→ Decide eligibility (sub-workflow)
→ Audit log (sub-workflow)
→ Send notification (sub-workflow)
→ Respond

Each "logical step" is a sub-workflow call. The caller is a long but linear narrative, easy to follow top-to-bottom. Logic lives in the sub-workflows.
This is not the same as a 20-node workflow with 20 inline transformations. That's hard to read. The pattern above is fine because:
- Each node has one purpose (call a specific sub-workflow).
- Sticky notes group sections (per "Readability" in `n8n-workflow-lifecycle`).
- Inspecting a section means opening the sub-workflow it calls. That's encapsulation.
- Orchestration logic at the top level is visible without reading implementations.
If your workflow has 15+ nodes and isn't mostly Execute Workflow calls and branches, extract more.
When NOT to extract
- One HTTP call with no logic. A sub-workflow that's just `Execute Workflow → HTTP Request → return` adds a boundary for nothing. Inline it.
- Tightly coupled to the caller's specific shape. If the chunk takes a deeply nested input that only this caller produces, extracting it just relocates the coupling. Fix the data shape first.
- Performance-critical hot paths. Each sub-workflow call adds latency (small, but real). For high-throughput workflows, profile before adding boundaries.
Search-before-build protocol
When the user describes something multi-step or generic-sounding:
1. search_workflows with relevant queries (e.g. 'Subworkflow', the domain prefix, the operation keyword)
2. If candidates appear, fetch get_workflow_details on the top 1-3
3. Confirm fit by reading the inputs/outputs and (briefly) the body
4. If a fit exists → use it. Tell the user "I found `<name>`. Using that."
5. If no fit exists → build new with the prefix convention so the next search finds it

The "tell the user" step matters. They benefit from knowing what's already in their library.
If a workflow you expect to find isn't appearing, the most common cause is per-workflow MCP access not being enabled. See `references/MCP_ACCESS_PER_WORKFLOW.md` in `n8n-workflow-lifecycle`.
Sub-workflow inputs and outputs
Sub-workflows are triggered by `Execute Workflow Trigger` nodes. The trigger declares the input schema. The caller passes data via `Execute Workflow`, and the sub-workflow returns whatever its last node outputs.

Always use "Define Below" with explicit fields
The `Execute Workflow Trigger` has two input modes. Default to "Define Below" (typed fields). This is the only mode that lets agent tools (via `fromAi()`) and any structured caller pass values in. Without declared fields, the agent has no schema to fill and the sub-workflow can't be wired as a `toolWorkflow` cleanly.

Shape:

```ts
const subTrigger = trigger({
  type: 'n8n-nodes-base.executeWorkflowTrigger',
  config: {
    parameters: {
      workflowInputs: {
        values: [
          { name: 'list_of_ids', type: 'array' },
          { name: 'include_transcript', type: 'boolean' },
          { name: 'session_id', type: 'string' },
        ],
      },
    },
  },
})
```

Each declared input becomes a typed parameter the caller can fill. Inside the workflow, access via `$json.list_of_ids`, etc., or `$('When Executed by Another Workflow').first().json.<field>` from anywhere downstream.

Pick types deliberately (`string`, `number`, `boolean`, `array`, `object`). The model uses these as the required types when filling agent tool parameters, and humans rely on them when wiring callers.
The one exception: passthrough mode for binary
If the sub-workflow needs to receive binary (image, file, PDF), "Define Below" doesn't work because typed fields are JSON only. Switch to passthrough:

```ts
const subTrigger = trigger({
  type: 'n8n-nodes-base.executeWorkflowTrigger',
  config: {
    parameters: {
      inputSource: 'passthrough',
    },
  },
})
```

In passthrough mode, the sub-workflow receives the caller's items as-is, including the `binary` slot. Cost: no typed input schema, so agent tools can't pass parameters through `fromAi()`. Use this mode for sub-workflows called by other workflows (not agents) where binary needs to flow through.

For sub-workflows that need binary AND are called by an agent, see `references/AGENT_TOOL_BINARY.md` in `n8n-binary-and-data` (agent tools can't pass binary directly).
Other conventions
- Document inputs and outputs in the workflow `description`. Field names, types, purpose. The description is what callers (humans and agents) read for the contract.
- Return a consistent shape. For expected failures (e.g., parse error), return `{ success: false, error: '...' }` rather than throwing. Callers can branch without wrapping error outputs.
- Treat the input schema as a contract once it has callers. Adding optional fields is safe. Renaming or removing fields can be done, but only carefully: enumerate every caller (`search_workflows` for the sub-workflow's name + manual scan), migrate them in the same change, and verify with `validate_workflow` and `get_workflow_details` before publishing. A silent break here is hard to detect because n8n won't error on an unrecognized input field. The sub-workflow just sees `undefined` and the caller has no idea.
- Use a final Set / Edit Fields node to shape the return. Optional, sometimes required (when the last computation node carries noise fields), and good practice for sub-workflows even when not strictly required. It makes the return contract explicit at the boundary, so readers see the API by reading one node. This is the legitimate exception to the Set-node antipattern from `n8n-expressions`: the implicit consumer of a sub-workflow's last node is every caller, so the Set earns its place as the explicit API boundary. Name it `Return` or `Return <thing>`.
- Return natural shapes, not storage shapes. A sub-workflow that owns a Data Table, a file in S3, or any storage layer should hide that representation from callers. Arrays return as arrays, objects as objects, dates as ISO strings, regardless of whether the underlying storage was JSON-stringified text or another internal format. The return contract is the interface. The storage layout is implementation detail. Common slip: a sub-workflow has a "fresh" path (data just produced, natural shape) and a "cached" path (data just read from a `_object` column, still stringified). Wrong instinct: stringify the fresh path "to match" the cached path. Right instinct: parse the cached path so both return the natural shape. Callers shouldn't have to know which they got.

For sub-workflows wired as agent tools specifically, see `references/SUBWORKFLOW_AS_TOOL.md` in `n8n-agents`.
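The fresh-vs-cached slip can be shown in plain TypeScript (illustrative, not n8n API — `cacheColumn`, `buildReport`, and `getReport` are hypothetical stand-ins for a sub-workflow that owns stringified storage):

```ts
// Illustrative sketch of "return natural shapes, not storage shapes".
type Report = { ids: number[]; generatedAt: string };

// Hypothetical storage detail: the cache column holds JSON-stringified text.
const cacheColumn: { [key: string]: string } = {};

function buildReport(): Report {
  return { ids: [1, 2, 3], generatedAt: new Date().toISOString() };
}

function getReport(key: string): Report {
  const cached = cacheColumn[key];
  if (cached !== undefined) {
    // Right instinct: PARSE the cached path so both paths return the natural
    // shape. Callers never see the stringified storage format.
    return JSON.parse(cached) as Report;
  }
  const fresh = buildReport();
  cacheColumn[key] = JSON.stringify(fresh); // storage layout stays internal
  return fresh;
}
```

Both code paths hand the caller a real array, so the caller never needs to know whether it hit the cache.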
description - 返回一致的结构。对于预期的失败(例如解析错误),返回而非抛出错误。调用方无需包装错误输出即可进行分支处理。
{ success: false, error: '...' } - 一旦有调用方,就将输入模式视为契约。添加可选字段是安全的。重命名或删除字段需谨慎:枚举所有调用方(通过搜索子工作流名称+手动检查),在同一变更中迁移所有调用方,并在发布前通过
search_workflows+validate_workflow验证。此处的静默故障难以检测,因为n8n不会对未识别的输入字段报错,子工作流只会将其视为get_workflow_details,而调用方毫无察觉。undefined - 使用最终的Set/Edit Fields节点定义返回结构。可选,但有时是必需的(当最后一个计算节点包含冗余字段时),即使不是必需的,对子工作流来说也是良好实践。它在边界处明确了返回契约,因此读者只需查看一个节点即可了解API。这是中Set节点反模式的合理例外:子工作流最后一个节点的隐式消费者是所有调用方,因此Set节点作为明确的API边界是合理的。将其命名为
n8n-expressions或Return。Return <thing> - 返回自然结构,而非存储结构。拥有数据表、S3文件或任何存储层的子工作流应向调用方隐藏存储表示。数组返回为数组,对象返回为对象,日期返回为ISO字符串,无论底层存储是JSON字符串化的文本还是其他内部格式。返回契约是接口,存储布局是实现细节。
常见错误:子工作流有“新鲜”路径(刚生成的数据,自然结构)和“缓存”路径(刚从列读取的数据,仍为字符串化格式)。错误的做法是将新鲜路径字符串化以“匹配”缓存路径。正确的做法是解析缓存路径,使两者都返回自然结构。调用方无需知道获取的是哪种路径的数据。
_object专门作为代理工具的子工作流,详见中的。
n8n-agentsreferences/SUBWORKFLOW_AS_TOOL.mdCalling sub-workflows: Execute Workflow
modes
Execute Workflow调用子工作流:Execute Workflow
模式
Execute WorkflowTwo settings on the caller-side node beyond inputs/workflowId:
- `mode` defaults to `'all'`: the sub-workflow runs once with all N items as input. Items still flow through nodes per-item like any other workflow. Set `mode: 'each'` to run the sub-workflow N separate times, one item per execution. For sub-workflows whose body just processes items normally, the two are equivalent. The split matters when the sub-workflow's body assumes it sees exactly one item (per-run aggregation, "this is THE customer to operate on" logic, a final write that should fire once per input). `mode: 'each'` matches that assumption, `mode: 'all'` breaks it. When you DO need per-item iteration, prefer `mode: 'each'` over a Loop Over Items node inside the sub-workflow.
- `waitForSubWorkflow` defaults to `true`. Setting `options.waitForSubWorkflow: false` fires the call and immediately moves on, and the sub-workflow continues in the background. The caller's downstream sees no return data.

For the polling-after-fire-and-forget pattern, see "Fire-and-forget parallelization" in `references/SUBWORKFLOW_PATTERNS.md`.
Execute Workflow- 默认值为
mode:子工作流运行一次,将所有N个条目作为输入。条目仍会像其他工作流一样逐节点流转。设置'all'会让子工作流独立运行N次,每个条目对应一次执行。对于主体仅正常处理条目的子工作流,两种模式效果相同。当子工作流主体假设每次运行只处理一个条目时(每次运行聚合、“这是要操作的唯一客户”逻辑、应按每个输入触发一次的最终写入操作),两种模式的区别就很重要。mode: 'each'符合该假设,mode: 'each'则会破坏该假设。当需要按条目迭代时,优先使用mode: 'all'而非在子工作流内部使用Loop Over Items节点。mode: 'each' - ****默认值为
waitForSubWorkflow。设置true会触发调用并立即继续执行,子工作流在后台运行。调用方的下游节点不会收到返回数据。options.waitForSubWorkflow: false
mode: 'each'waitForSubWorkflow: false关于即发即弃后的轮询模式,详见中的“即发即弃并行化”部分。
references/SUBWORKFLOW_PATTERNS.mdReference files
| File | Read when |
|---|---|
| `references/NAMING_AND_DISCOVERY.md` | Naming a new sub-workflow, searching for existing ones, the prefix convention |
Anti-patterns
| Anti-pattern | What goes wrong | Fix |
|---|---|---|
| Duplicating the same date-parsing nodes in three workflows | Bug fixes happen in two places, miss the third | Extract to a `Subworkflow:`-prefixed workflow and call it everywhere |
| Building a new sub-workflow without searching | Library grows duplicates, and future searches find both | Always `search_workflows` first |
| Sub-workflow named/described as pure that quietly writes to a log table | Callers can't reason about retry or idempotency, side effect ambushes them | Either make the side effect part of the contract (rename, document, return its result) or move it out |
| Sub-workflow with no `description` | Won't be found in future searches, nobody knows what it does | Set `description` with keywords and the input/output shape |
| Generically named sub-workflow | Name doesn't tell anyone what it does, and doesn't match any prefix-based search | Verb-first prefix name (`Subworkflow: Parse RFC2822 date`) |
| Sub-workflow with no prefix | Won't show up under `search_workflows({ query: 'Subworkflow' })` | Always use a prefix at create time |
| Passthrough input mode for non-binary inputs | No typed schema means agent tools can't fill parameters via `fromAi()` | Use "Define Below" with declared `workflowInputs` |
| Sub-workflow called as an agent tool that expects binary input | Agent tools can't pass binary directly | See `references/AGENT_TOOL_BINARY.md` |
| 30-node workflow with no extraction | Hard to read, hard to test, hard to replace | Extract logical sections into sub-workflows |