Test Documentation — QA Bridge
Take already-validated tests and formalize them in the TMS (Jira, Xray, or equivalent) with full traceability, the right priority, and a clear automation verdict.
Three phases, always in this order: Analyze -> Prioritize (ROI) -> Document. Never skip prioritization: most scenarios should end up Deferred, not automated.
One hard prerequisite: the tests being documented must describe behavior that was already validated ({{jira.status.story.qa_approved}} story, closed bug, or finished exploratory session). The TMS is a documentation and regression-protection tool, not an exploration tool.
Subagent Dispatch Strategy
This skill is compliant with the doctrine in §"Orchestration Mode (Subagent Strategy)". Every dispatch follows the 6-component briefing format defined in `.claude/skills/framework-core/references/briefing-template.md`, and the pattern selected per phase matches the decision guide in `.claude/skills/framework-core/references/dispatch-patterns.md`. Phase 1 (Analyze) and Phase 2 (Prioritize) stay inline because planning and decisions live in the orchestrator; the only Parallel hotspot is bulk TC creation in Phase 3, which is also the only step that branches per TMS modality.
| Phase | Pattern | Subagent role |
|---|---|---|
| Phase 1 — Analyze scope | Single | inline — planning lives in the orchestrator (anti-pattern to delegate) |
| Phase 2 — ROI / Candidate-Manual-Deferred verdict | Single | inline — decisions live in the orchestrator |
| Phase 3 — TMS TC creation (N > 10 TCs) | Parallel | M subagents, chunks of ~5-10 TCs per agent; cap = 10 to avoid Jira/Xray rate limits; each subagent loads (Modality A) or (Modality B) |
| Phase 3 — TMS TC creation (N ≤ 10 TCs) | Single | inline — dispatch overhead is not justified for small batches |
| Phase 3 — Traceability linking (US <-> ATP <-> ATR <-> TCs) | Single | inline — requires aggregated state of all created entities |
| Phase 3 — Final report / coverage matrix | Single | inline — synthesis lives in the orchestrator |
- Concurrency cap = 10 subagents for Parallel TC creation. Jira and Xray APIs both rate-limit at ~10 writes/sec sustained; fanning out wider triggers 429 responses. If a module has >100 TCs, batches per subagent must be larger than 10 each (cap is on subagent count, not chunk size).
- Error protocol: on any subagent failure, STOP, report the partial success state (which TCs landed, which failed, with their issue keys / errors), and present retry / skip / abort options. Do NOT auto-fix or auto-rollback. See `.claude/skills/framework-core/references/orchestration-doctrine.md`.
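The cap-versus-chunk-size arithmetic above can be sketched as follows. This is a minimal illustration, not part of the skill's tooling; the `shard` helper and the `min_chunk` parameter are hypothetical names.

```python
import math

def shard(tc_ids, cap=10, min_chunk=5):
    """Split N TCs across at most `cap` subagents.

    Chunks start near `min_chunk`; once N exceeds cap * 10 the chunks
    simply grow, because the cap limits subagent count, not chunk size.
    """
    n_agents = min(cap, math.ceil(len(tc_ids) / min_chunk))
    size = math.ceil(len(tc_ids) / n_agents)
    return [tc_ids[i:i + size] for i in range(0, len(tc_ids), size)]

shard(list(range(150)))  # 10 chunks of 15 TCs each: the cap holds, chunk size grows
```

With 150 TCs the function returns 10 chunks of 15, matching the rule that a >100-TC module widens the chunks rather than the fan-out.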
Phase 0 — Resolve TMS modality (mandatory gate)
Every project runs in one of two modalities. Resolve it before Phase 1. The same ATP/ATR/TC concepts have different containers in each mode.
The question you MUST answer first
Does this project have Xray installed and licensed on Jira?
A. Yes -> Modality A: Xray on Jira
B. No -> Modality B: Jira-native (no Xray)
How to resolve it without asking (in order)
- Check for . Value (or any Xray CLI) -> Modality A. Value is unset, -only, or matches -> Modality B.
- If is ambiguous, look for a `.context/master-test-plan.md` line such as or .
- If still ambiguous, list existing issue types in the project via `[ISSUE_TRACKER_TOOL] List issue types`. If the project exposes / / / , it is Modality A. Otherwise Modality B.
- Only if all three checks fail, ask the user the question above. Do NOT ask by default — auto-resolve first.
What changes per modality
| Artifact | Modality A (Xray on Jira) | Modality B (Jira-native) |
|---|---|---|
| ATP (Acceptance Test Plan) | Xray issue, named `Test Plan: {{PROJECT_KEY}}-{n}`, linked to US | Story {{jira.acceptance_test_plan_atp}} + comment mirror on the Story. No separate issue. |
| ATR (Acceptance Test Results) | Xray issue with Test Runs per TC, Environment, Begin/End Date, named `Test Results: {{PROJECT_KEY}}-{n}` | Story {{jira.acceptance_test_results_atr}} + comment mirror on the Story. |
| TC (Test Case) | Xray issue (type Manual / Cucumber / Generic) | Jira-native issue type (or with custom type); Description carries the full TC template |
| Test Set / Precondition / Test Plan | First-class Xray issue types | Not available — use labels + Epic grouping |
| Result sync | CI imports JUnit/Cucumber via `[TMS_TOOL] Import Results` -> Test Runs auto-update | Custom script updates Test Status field on each TC + comment with build context |
| CLI tag | resolves to or equivalent | falls through to (acli / Jira MCP) |
Persist the decision
Once resolved, save the modality into for the ticket (if one exists) and treat it as sticky: do not re-resolve mid-session. If you detect drift (e.g. suddenly fails), stop and ask the user before re-resolving.
Reference implementations:
- Modality A concepts + Xray REST/GraphQL/CLI -> `references/xray-platform.md`
- Modality B project setup (Test issue type, Screen Scheme, custom fields) ->
- Both modes side-by-side (field mapping, workflow, Description template) -> `references/jira-test-management.md`
When to use each scope
Pick the scope based on the input, not the output. All four scopes share the same Analyze -> Prioritize -> Document pipeline; only the input source and defaults differ.
| Scope | Input | Typical volume | Default labels | Notes |
|---|---|---|---|---|
| Module-driven | A module of the system explored end-to-end | 20-100+ scenarios | , or | Batch of TCs grouped under the Regression Epic. Most scenarios will be Deferred. |
| Ticket-driven | A QA Approved user story from a sprint | 3-8 scenarios | , plus the test type | Output of a session. ATP/ATR created per US. |
| Bug-driven | A closed bug with a verified fix | 1-2 scenarios | , (usually) | One regression TC per bug. ROI is biased up: "it failed once, it can fail again." |
| Ad-hoc / Exploratory | New scenarios found in exploratory testing | 1-10 scenarios | | Apply the 3 Phase-0 questions harshly; ad-hoc scenarios are often one-time validations. |
If the user gives you a story ID, use ticket-driven. If they give you a bug ID, use bug-driven. If they give you a module name or a session output, use module- or ad-hoc accordingly.
Phase 1 — Analyze
Inputs you must gather
| Source | What to read | Why |
|---|---|---|
| User Story / Epic | Description, ACs, comments, linked issues | Scenario identification, risk signals |
| Closed bugs linked to the story | Summary, root cause, fix area | Prior-bug prioritization rule |
| Exploratory session notes | Validated scenarios, observations | Reuse nomenclature already used |
| Existing ATP (if present) | `.context/PBI/.../acceptance-test-plan.md` or TMS ATP | Scenarios may already exist — do not reinvent |
| Implementation plan / source code | Actual files, APIs, test IDs | Validate design matches implementation before documenting |
Separate real scenarios from cross-cutting characteristics
Cross-cutting traits are validated inside every test, not as separate TCs.
| Cross-cutting (NOT a TC) | Validated by |
|---|---|
| Mobile responsive | Running each test in mobile viewport |
| XSS prevention | Using special-character test data inside tests |
| Performance | Timing assertions inside tests |
| Accessibility | A11y assertions inside UI tests |
| API contract | Response schema checks inside API tests |
| Generic "error handling" | Specific negative-path scenarios |
A real scenario is a user flow: clear business objective, concrete precondition + action, verifiable outcome. Name format: `Validate <CORE> <CONDITIONAL>`.
Source-code validation (mandatory before documenting)
The design in the ATP was written before code existed. Before creating any TC:
- Open the implementation plan (if any) and list the files it touches.
- Grep the actual code for , route handlers, API paths, and text formats.
- Compare the ATP's assumptions against what the code does. If they diverge, correct the TC design and add a Refinement Notes section.
Common discrepancies to check for:
- An API the ATP assumed exists turns out to be SSR/direct DB.
- UI text format in the ATP ("based on N reviews") vs reality ("(N reviews)").
- Hardcoded IDs in the ATP vs variable pattern required in TMS.
Skipping this step is the single most common cause of invalid automated tests later.
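As a minimal sketch of the test-ID check, assuming React-style `data-testid` attributes (the attribute name and the `extract_test_ids` helper are illustrative, not prescribed by this skill):

```python
import re

def extract_test_ids(source_text, pattern=r'data-testid="([^"]+)"'):
    """Collect the test IDs actually present in a component's source,
    so TC steps reference selectors that really exist in the code."""
    return sorted(set(re.findall(pattern, source_text)))

extract_test_ids('<button data-testid="login-submit">Go</button>')  # ['login-submit']
```

Comparing this list against the selectors assumed by the ATP surfaces the "design vs implementation" discrepancies before any TC is created.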
TC identity rule (load-bearing)
A TC is defined by Precondition + Action. All expected results from the same (precondition, action) pair belong to the same TC, not separate TCs.
Same TC (one (precondition, action) pair, many assertions):
- Precondition: valid credentials; Action: submit login; Assertions: redirect + token + welcome (all one TC).

Different TCs (same action, but the preconditions differ):
- Precondition: valid credentials -> TC-A
- Precondition: locked account -> TC-B
- Precondition: invalid credentials -> TC-C
Splitting one (precondition, action) into N "check panel A / check panel B / check panel C" TCs is a textbook anti-pattern. One TC, multiple assertions.
Equivalence Partitioning
Inputs that produce the same output -> one parameterized TC (Scenario Outline + Examples). Inputs that produce different outputs -> separate TCs.
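The rule can be sketched as a grouping over (input, expected-output) pairs; the `partition_cases` helper and the age-validation data are invented for illustration:

```python
from collections import defaultdict

def partition_cases(cases):
    """Group (input, expected_output) pairs: one partition = one parameterized TC."""
    partitions = defaultdict(list)
    for value, expected in cases:
        partitions[expected].append(value)
    return dict(partitions)

# Five raw inputs collapse into three TCs, one per distinct output:
partition_cases([(-1, "error"), (0, "minor"), (17, "minor"), (18, "adult"), (99, "adult")])
# {'error': [-1], 'minor': [0, 17], 'adult': [18, 99]}
```

Each partition becomes one row group in the Scenario Outline's Examples table; values inside a partition are interchangeable test data, not separate TCs.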
Phase 2 — Prioritize (ROI)
Every scenario passes three gates in order. Fail any gate -> Deferred.
Phase 0: The three filter questions
- Does it protect against FUTURE regressions? If the bug was a one-time typo in a stable area, the answer is no. Defer.
- Are there PRIOR bugs in this area? Yes -> prioritize even with moderate ROI ("it failed once, it can fail again").
- Is it an APP-level concern or a FEATURE-level concern? XSS / a11y / performance / responsive are APP-level suites, not per-feature TCs. Defer from this scope.
ROI formula (load-bearing)
ROI = (Frequency x Impact x Stability) / (Effort x Dependencies)
Each factor is scored 1-5 independently:
| Factor | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Frequency (how often run) | Yearly / rarely | Every release | Every sprint | Daily | Every PR / commit |
| Impact (if it fails) | Cosmetic | Minor inconvenience | Degrades UX | Blocks feature | Revenue / core business |
| Stability (of the flow) | Very volatile | Unstable | Moderate | Stable, minor changes | Unchanged for months |
| Effort (to automate) | Trivial | Low (hours) | Moderate (1-2 days) | High (several days) | Very high (week+) |
| Dependencies | None | 1-2 simple | 3-4 | 5+ | Complex externals |
Note: Effort and Dependencies are divisors — higher score = worse. The other three are multipliers.
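The formula is direct to compute; the scores below are invented for illustration:

```python
def roi(frequency, impact, stability, effort, dependencies):
    """All five factors scored 1-5; Effort and Dependencies sit in the denominator."""
    return (frequency * impact * stability) / (effort * dependencies)

roi(5, 5, 4, 2, 1)  # 50.0 -> clear Candidate (ROI > 3.0)
roi(3, 4, 3, 5, 5)  # 1.44 -> low ROI even for a "critical flow", by design
```

The second call shows the divisor effect: high Effort and Dependencies drag an otherwise important flow below the automation threshold, which is the intended behavior.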
Component value bonus
If a TC is reusable across multiple E2E flows: Component Value = Base ROI x (1 + 0.2 x N), where N = the number of E2E flows that consume it. A low-ROI atomic TC can become automate-worthy purely through reuse.
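The reuse bonus in numbers (values invented for illustration):

```python
def component_value(base_roi, n_flows):
    """Reuse bonus: +20% of base ROI per E2E flow that consumes this atomic TC."""
    return base_roi * (1 + 0.2 * n_flows)

component_value(1.2, 8)  # ~3.12 -> crosses the 3.0 Candidate threshold through reuse alone
```

A base ROI of 1.2 would normally land in the Manual/Deferred range; consumed by eight flows it clears the Candidate bar.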
Three outcomes (load-bearing)
Every scenario ends in exactly one of these buckets. There is no fourth.
| Outcome | Triggers it | Where it goes next | TMS status flow |
|---|---|---|---|
| Candidate | ROI > 3.0, OR (ROI 1.5-3.0 AND prior bug), OR critical happy path | Feeds skill | Draft -> In Design -> Ready -> In Review -> Candidate |
| Manual | ROI 0.5-1.5 AND not automatable (human judgment, visual inspection), OR explicitly manual-only | Terminal: manual regression suite | Draft -> In Design -> Ready -> Manual |
| Deferred | ROI < 0.5, OR failed Phase-0 filter, OR one-time validation | Terminal: not in regression. Can be revisited if system changes | Do not create TC in TMS; document as Deferred in the prioritization report |
Rule of thumb: if more than 50% of candidates end up Candidate or Manual, re-apply Phase 0 more strictly. Most scenarios should be Deferred.
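The three-bucket decision can be sketched as a function. This is a simplification: cases the table leaves open (e.g. ROI 1.5-3.0 with no prior bug) default to Deferred here, which matches the "most scenarios should be Deferred" bias; the parameter names are illustrative.

```python
def verdict(roi, prior_bug=False, critical_happy_path=False,
            automatable=True, manual_only=False,
            passed_phase0=True, one_time=False):
    """Map a scored scenario to Candidate / Manual / Deferred."""
    if not passed_phase0 or one_time or roi < 0.5:
        return "Deferred"
    if roi > 3.0 or (1.5 <= roi <= 3.0 and prior_bug) or critical_happy_path:
        return "Candidate"
    if manual_only or (roi <= 1.5 and not automatable):
        return "Manual"
    return "Deferred"

verdict(3.5)                     # 'Candidate'
verdict(2.0, prior_bug=True)     # 'Candidate' (prior-bug rule)
verdict(1.0, automatable=False)  # 'Manual'
verdict(0.3)                     # 'Deferred'
```

Note the gate ordering: the Phase-0 filter and the ROI floor run before any Candidate logic, so a scenario cannot buy its way in on priority alone.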
Phase 3 — Document in TMS
Preflight: Regression Epic
Every documented TC must have a parent Regression Epic (single test repository for the project).
Prerequisite: Load skill before executing commands below.
[ISSUE_TRACKER_TOOL] Search Issues:
project: {{PROJECT_KEY}}
query: type = Epic AND (summary ~ "regression" OR summary ~ "test repository" OR labels = "test-repository")
If none exists, ask the user before creating one with name `{{PROJECT_KEY}} Test Repository` and labels `test-repository, regression`.
Entity model: ATP / ATR / TC
Four entities, always linked US <-> ATP <-> ATR <-> TC:
| Entity | Created | Naming | Main content |
|---|---|---|---|
| US (Story) | Pre-existing | | The requirement |
| ATP | Stage 1 (or now, if missing) | `Test Plan: {{PROJECT_KEY}}-{n}` | Test Analysis + AC-to-TC coverage |
| ATR | Stage 1 (or now, if missing) | `Test Results: {{PROJECT_KEY}}-{n}` | Test Report + execution results |
| TC | Stage 4 (this phase) | `{US_ID}: TC#: Validate <CORE> <CONDITIONAL>` | Precondition + Action + Expected |
Read `references/tms-architecture.md` when creating ATP/ATR/TC for a ticket, checking required links, or validating that a story is fully documented.
Linking order (always)
1. Create ATP -> link to US
2. Create ATR -> link to US
3. Update ATP -> link to ATR (bidirectional plan/results)
4. For each TC: create TC -> link to US + ATP + ATR + AC
Creating a TC before the ATP and ATR exist leaves orphaned references. Fix any broken links with `references/tms-architecture.md` §Traceability Rules.
Creating TCs — modality matrix
| TMS stack | Manual test | Automation-candidate test |
|---|---|---|
| Xray on Jira | `[TMS_TOOL] Create Test: type=Manual, steps=...` then `[ISSUE_TRACKER_TOOL] Update Issue` to paste the complete Description template | `[TMS_TOOL] Create Test: type=Cucumber, gherkin=<high-quality gherkin>` then `[ISSUE_TRACKER_TOOL] Update Issue` with the Description template |
| Native Jira (no Xray) | `[ISSUE_TRACKER_TOOL] Create Issue: issueType=Test, description=<steps table>` | `[ISSUE_TRACKER_TOOL] Create Issue: issueType=Test, description=<gherkin in Description>` |
Always populate Description with the full TC template (Related Story, Priority, ROI, Prior bugs, Test Design gherkin/steps, Variables table, Implementation Code table, Architecture, Available Test IDs, Preconditions, Expected Results). Read `references/jira-test-management.md` when choosing between Xray and native Jira, or when the Description must be filled.
Dispatch: Use the dispatch defined in §Subagent Dispatch Strategy: Parallel when N > 10 TCs (cap = 10 subagents), inline otherwise. The full briefings for both Modality A (Xray via ) and Modality B (Jira-native via ) live in `references/tms-architecture.md` §"Parallel TC creation". The sharding rule, error protocol, and aggregation contract are documented there. The serial flow below is the canonical procedure each subagent runs internally for its assigned chunk.
High-quality Gherkin (for Candidates)
```gherkin
@{priority} @regression @automation-candidate @{US_ID}
Scenario Outline: Validate <core> <conditional>
  """
  Bugs covered: BUG-1, BUG-2
  Related Story: {US_ID}
  """
  # === PRECONDITIONS (tester / script builds them) ===
  Given <entity> exists with <identifier>
  And <entity> has <quantity> <elements> where <quantity> <condition>
  # === ACTION ===
  When the user navigates to "<route>"
  And the user <main_action>
  # === VALIDATIONS ===
  Then <ui_element> is displayed with format "<expected_format>"
  And <additional_validation>
  # === EQUIVALENT PARTITIONS ===
  Examples: Happy path
    | ... |
  Examples: Edge case
    | ... |
```
Rules that always apply:
- Variables, never hardcoded data: not . Include a Variables table with how to obtain each.
- Tags always include: priority (`@critical|@high|@medium|@low`), suite (, if critical path), automation flag (), traceability ().
- Structured comments: , , , `# === EQUIVALENT PARTITIONS ===`.
- Docstring with metadata: related story, bugs covered, ROI.
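A tag-line lint for these rules can be sketched as below. The concrete suite tag names are assumptions for illustration (the original elides them); the authoritative taxonomy lives in `references/tms-conventions.md`.

```python
# Suite tag names here are illustrative placeholders, not the skill's taxonomy.
REQUIRED_TAG_GROUPS = {
    "priority": {"@critical", "@high", "@medium", "@low"},
    "suite": {"@regression", "@smoke"},
    "automation": {"@automation-candidate"},
}

def missing_tag_groups(tag_line):
    """Return the required tag groups that a Gherkin tag line fails to cover."""
    tags = set(tag_line.split())
    return [group for group, allowed in REQUIRED_TAG_GROUPS.items()
            if not tags & allowed]

missing_tag_groups("@high @regression @automation-candidate @PROJ-123")  # []
missing_tag_groups("@regression")  # ['priority', 'automation']
```

Traceability tags (`@{US_ID}`) are project-specific issue keys, so they are deliberately not checked by this generic sketch.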
Workflow transitions
Substrate reference: state and transition names below resolve from `.agents/jira-workflows.json` (manifest at `.agents/jira-required.yaml`). Use `{{jira.status.test_case.<slug>}}` and `{{jira.transition.test_case.<slug>}}` in skill code; the substrate maps the slug to the literal Jira name. See `references/tms-conventions.md` §5 for the full state machine.
Draft --start_design--> In Design --ready_to_run--> Ready --+-- for_manual --> Manual (terminal manual)
+-- automation_review_from_ready --> In Review
|
+-- approve_to_automate --> Candidate (feeds test-automation)
Never jump states. If a TC needs rework, use a transition (e.g. -> in_design).
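The "never jump states" rule amounts to a lookup in a transition table; a minimal sketch using the slugs from the diagram above (the `transition` helper is illustrative):

```python
TRANSITIONS = {
    ("Draft", "start_design"): "In Design",
    ("In Design", "ready_to_run"): "Ready",
    ("Ready", "for_manual"): "Manual",
    ("Ready", "automation_review_from_ready"): "In Review",
    ("In Review", "approve_to_automate"): "Candidate",
}

def transition(state, slug):
    """Apply a workflow transition; anything not declared is an illegal jump."""
    nxt = TRANSITIONS.get((state, slug))
    if nxt is None:
        raise ValueError(f"illegal transition {slug!r} from {state!r}")
    return nxt

transition("Ready", "for_manual")  # 'Manual'
```

Jumping straight from Draft to Candidate raises, which is exactly the guarantee the workflow needs.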
Naming — the one rule that matters
`{US_ID or TS_ID}: TC#: Validate <CORE> <CONDITIONAL>`
- : verb + object — the behavior itself (, , ).
- : the distinguishing condition (, `when password is incorrect`, `when exceeding 5 failed attempts`).
- In code (KATA): decorator and `Should <behavior> when <condition>` in blocks.
Anti-patterns to reject: , , .
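A name builder makes the formula concrete. The `tc_name` helper and the assumption that `TC#` expands to a sequence number (`TC1`, `TC2`, ...) are illustrative, not mandated by this skill:

```python
def tc_name(parent_id, n, core, conditional):
    """Build a TC summary; parent_id is the US key (native Jira) or TS key (Xray)."""
    return f"{parent_id}: TC{n}: Validate {core} {conditional}"

tc_name("PROJ-123", 2, "login is rejected", "when password is incorrect")
# 'PROJ-123: TC2: Validate login is rejected when password is incorrect'
```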
Labels — baseline per TC
Every TC gets at least one scope label and one status label:
- Scope (required, one+): (almost always), (critical path only — aim for 10-20% of suite), , , .
- Status (applied as it moves): , , . and are mutually exclusive; remove once it becomes .
- Priority (optional): , , , .
Full reference in `references/tms-conventions.md` §Labels.
Local cache (Claude Code convention)
After TMS creation, write one markdown file per TC into `.context/PBI/{module}/{story}/tests/{TC-ID}-{slug}.md`. Template in `references/jira-test-management.md` §Local cache. This prevents re-reading the TMS in future sessions and gives an immediate handoff.
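Building the cache path is a one-liner plus a slug; a minimal sketch (the `cache_path` helper and its slug rule are illustrative assumptions):

```python
import re
from pathlib import Path

def cache_path(module, story, tc_id, title):
    """Build the local-cache path for one TC, slugifying its title."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return Path(".context/PBI") / module / story / "tests" / f"{tc_id}-{slug}.md"

cache_path("auth", "PROJ-123", "PROJ-210", "Validate login when credentials are valid")
# .context/PBI/auth/PROJ-123/tests/PROJ-210-validate-login-when-credentials-are-valid.md
```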
- ROI divisors matter: Effort and Dependencies go in the denominator. A "critical flow" with Effort=5 and Dependencies=5 has low ROI by design — that is correct, not a bug in the formula.
- Prior-bug rule overrides ROI thresholds: a scenario tied to a closed bug enters regression even at ROI 1.5-3.0. Source: + `atc-definition-strategy.md` — both agree.
- Cross-cutting is not a TC: "Mobile responsive", "XSS prevention", "Performance" are never TCs on their own. They are validated inside other TCs or in an app-level suite.
- Linking order is not optional: create ATP and ATR BEFORE the first TC. If you create TCs first, you get orphaned references and is the only way out.
- Xray requires two calls: one (registers in Xray), then one `[ISSUE_TRACKER_TOOL] Update Issue` to paste the full Description. Skipping the second call leaves a TC with no readable documentation in Jira.
- Never hardcode UUIDs or emails in Gherkin. Always use with a Variables table and a query showing how to obtain the real value at runtime.
- One (precondition, action) = one TC. Multiple expected results all belong to the same TC. Splitting assertions into separate TCs is the single most-diagnosed anti-pattern in reviews.
- Bug-driven TCs default to Candidate. A closed bug is empirical evidence that the area regresses. Start at "automate" unless automation is technically impossible.
- Source-code validation is mandatory: the ATP was written before code. Grep for , routes, text formats. Log discrepancies in a Refinement Notes section on the TC.
- Most candidates should be Deferred. If a module produces 80 scenarios and 60 end up in regression, re-apply Phase 0 more harshly. Target: a few well-chosen TCs per story (1-3 simple, 3-5 complex).
- Test Plan / Test Set ID (Xray) vs User Story ID (native Jira): the TC prefix depends on stack. In Xray with a Test Set, prefix is the TS ID. In native Jira, prefix is the US ID. Both work — pick one per project and stay consistent.
- Creating ATP/ATR/TC for a story or checking links -> read `references/tms-architecture.md` (entity model, required fields, linking sequence, completeness criteria).
- Naming a TC, filling fields, picking labels, or choosing Gherkin vs Traditional -> read `references/tms-conventions.md` (naming formulas, label taxonomy, workflow state machine, ROI table).
- Working in Jira native or Jira+Xray mode, creating tests via the right tool, or producing the full Description template -> read `references/jira-test-management.md` (mode comparison, Xray issue types, Description template, local cache template, CI/CD sync).
- Fixing broken traceability (TC not linked to US/ATP/ATR, name wrong) -> use the procedure in the Linking Order section above, backed by `references/tms-architecture.md` §Traceability Rules.
- Deciding if a bug deserves a regression TC -> apply Phase 0 question 2 (prior bug = prioritize), then ROI; bug-driven scope defaults to Candidate.
- TMS operations -> load skill for concrete CLI syntax. Issue-tracker operations resolve via per CLAUDE.md Tool Resolution.
Quick reference — pseudocode per modality
Resolve / via §Tool Resolution. The shape of the calls differs by modality — the two blocks below are parallel; pick one based on Phase 0.
Regression epic (both modalities, run once per project)
Prerequisite: Load skill before executing commands below.
[ISSUE_TRACKER_TOOL] Search Issues:
project: {{PROJECT_KEY}}
query: type = Epic AND labels = "test-repository"
If none, ask the user before creating:
[ISSUE_TRACKER_TOOL] Create Issue:
project: {{PROJECT_KEY}}
issueType: Epic
title: "{{PROJECT_KEY}} Test Repository"
labels: test-repository, regression, qa
Modality A — Xray on Jira
Prerequisite: Load and skills before executing commands below.
ATP = Xray Test Plan issue
[TMS_TOOL] Create TestPlan:
project: {{PROJECT_KEY}}
title: Test Plan: {{PROJECT_KEY}}-{n}
tests: [] # filled as TCs are created
[ISSUE_TRACKER_TOOL] Link Issues:
linkType: "tests"
outward: {ATP_KEY}
inward: {STORY_KEY}
ATR = Xray Test Execution issue
[TMS_TOOL] Create Execution:
project: {{PROJECT_KEY}}
title: Test Results: {{PROJECT_KEY}}-{n}
testPlan: {ATP_KEY}
environment: {from .env or session context}
tests: [] # filled at Stage 3 or via CI import
[ISSUE_TRACKER_TOOL] Link Issues:
linkType: "is tested by"
outward: {ATR_KEY}
inward: {STORY_KEY}
TC = Xray Test issue (Cucumber for Candidates; Manual for Manual-only)
[TMS_TOOL] Create Test:
project: {{PROJECT_KEY}}
type: Cucumber
title: {US_ID}: TC#: Validate <CORE> <CONDITIONAL>
labels: regression, automation-candidate, e2e, critical
gherkin: {from high-quality gherkin}
[ISSUE_TRACKER_TOOL] Update Issue:
issue: {TEST_KEY}
description: {full Description template}
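The Create Test call above follows the title convention `{US_ID}: TC#: Validate <CORE> <CONDITIONAL>`. A minimal payload-builder sketch, assuming hypothetical function and field names that merely mirror the pseudocode, not a confirmed [TMS_TOOL] schema:

```python
def build_xray_test(project_key, us_id, tc_number, core, conditional, gherkin,
                    labels=("regression", "automation-candidate", "e2e", "critical")):
    """Assemble the Create Test payload for an automation Candidate."""
    return {
        "project": project_key,
        "type": "Cucumber",                 # Manual-only TCs would use type: Manual
        "title": f"{us_id}: TC{tc_number}: Validate {core} {conditional}",
        "labels": list(labels),
        "gherkin": gherkin,
    }

payload = build_xray_test(
    "DEMO", "DEMO-42", 1,
    core="login succeeds", conditional="with valid credentials",
    gherkin="Given a registered user\nWhen they log in\nThen they see the dashboard",
)
```

Keeping the title assembly in one place makes the naming convention enforceable instead of re-typed per TC.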
Link TC to ATP, ATR, Story
[TMS_TOOL] AddTests:
testPlan: {ATP_KEY}
tests: [{TEST_KEY}]
[TMS_TOOL] AddTests:
execution: {ATR_KEY}
tests: [{TEST_KEY}]
[ISSUE_TRACKER_TOOL] Link Issues:
linkType: "is tested by"
outward: {TEST_KEY}
inward: {STORY_KEY}
CI result flow (Stage 6)
[TMS_TOOL] Import Results:
format: junit # or cucumber, xray-json
file: ./test-results/junit.xml
execution: {ATR_KEY}
Modality B — Jira-native (no Xray)
Prerequisite: Load the skill before executing the commands below.
ATP = Story customfield + comment mirror. NO separate issue.
[ISSUE_TRACKER_TOOL] Update Issue:
issue: {STORY_KEY}
fields:
{{jira.acceptance_test_plan_atp}}: {Test Analysis body}
labels: +shift-left-reviewed
[ISSUE_TRACKER_TOOL] Add Comment:
issue: {STORY_KEY}
body: |
=== Acceptance Test Plan ({{PROJECT_KEY}}-{n}) ===
{Test Analysis body — byte-for-byte mirror of {{jira.acceptance_test_plan_atp}}}
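The byte-for-byte mirror requirement above is easiest to guarantee by deriving both writes from a single body string. A sketch with illustrative names (the field key is a placeholder for {{jira.acceptance_test_plan_atp}}):

```python
def build_atp_updates(story_key, plan_ref, atp_body):
    """Build the customfield update and its mirror comment from one source string."""
    field_update = {
        "issue": story_key,
        "fields": {"acceptance_test_plan_atp": atp_body},
        "labels": ["+shift-left-reviewed"],
    }
    comment = {
        "issue": story_key,
        "body": f"=== Acceptance Test Plan ({plan_ref}) ===\n{atp_body}",
    }
    return field_update, comment

field_update, comment = build_atp_updates("DEMO-42", "DEMO-7", "Scenario 1: ...")
# Strip the header line to recover the mirrored body.
mirrored = comment["body"].split("===\n", 1)[1]
```

Because both payloads are rendered from `atp_body`, the field and the comment cannot drift apart; the same pattern applies to the ATR writes below.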
ATR = Story customfield + comment mirror. NO separate issue.
[ISSUE_TRACKER_TOOL] Update Issue:
issue: {STORY_KEY}
fields:
{{jira.acceptance_test_results_atr}}: {Test Report body}
[ISSUE_TRACKER_TOOL] Add Comment:
issue: {STORY_KEY}
body: |
=== Acceptance Test Results ({{PROJECT_KEY}}-{n}) ===
{Test Report body — byte-for-byte mirror of {{jira.acceptance_test_results_atr}}}
TC = Jira-native Test issue (custom issue type configured per jira-setup.md)
[ISSUE_TRACKER_TOOL] Create Issue:
project: {{PROJECT_KEY}}
issueType: Test # or Task with a Test Type custom field
summary: {US_ID}: TC#: Validate <CORE> <CONDITIONAL>
priority: {Critical|High|Medium|Low}
labels: [regression, automation-candidate, e2e, critical]
epic: {REGRESSION_EPIC_KEY}
[ISSUE_TRACKER_TOOL] Update Issue:
issue: {TEST_KEY}
description: {full Description template — includes Gherkin if Candidate}
fields:
Test Status: Draft # custom field per jira-setup.md
[ISSUE_TRACKER_TOOL] Link Issues:
linkType: "is tested by"
outward: {TEST_KEY}
inward: {STORY_KEY}
CI result flow (Stage 6) — custom script, no auto-import
for each {TEST_KEY} in run:
[ISSUE_TRACKER_TOOL] Update Issue:
issue: {TEST_KEY}
fields:
Test Status: {PASSED|FAILED|BLOCKED}
[ISSUE_TRACKER_TOOL] Add Comment:
issue: {TEST_KEY}
body: "Run {date}: {result}. Env: {env}. CI: {url}"
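The custom-script loop above has to derive `{PASSED|FAILED|BLOCKED}` from the CI report itself. A minimal sketch parsing a JUnit XML file, assuming the testcase `name` attribute carries the Jira TEST_KEY (that mapping is a convention you must enforce in your test runner, not a JUnit guarantee):

```python
import xml.etree.ElementTree as ET

JUNIT = """<testsuite>
  <testcase name="DEMO-101"/>
  <testcase name="DEMO-102"><failure message="boom"/></testcase>
  <testcase name="DEMO-103"><skipped/></testcase>
</testsuite>"""

def junit_to_updates(xml_text, date, env, ci_url):
    """Map each JUnit testcase to a Test Status field value and a run comment."""
    updates = {}
    for case in ET.fromstring(xml_text).iter("testcase"):
        if case.find("failure") is not None or case.find("error") is not None:
            status = "FAILED"
        elif case.find("skipped") is not None:
            status = "BLOCKED"              # skipped tests reported as blocked
        else:
            status = "PASSED"
        updates[case.get("name")] = {
            "fields": {"Test Status": status},
            "comment": f"Run {date}: {status}. Env: {env}. CI: {ci_url}",
        }
    return updates

updates = junit_to_updates(JUNIT, "2024-05-01", "staging", "https://ci.example/run/1")
```

Each entry in `updates` then drives one Update Issue plus one Add Comment call, exactly as in the pseudocode loop.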
Workflow transition (both modalities — same state machine)
Prerequisite: Load the skill before executing the commands below.
[ISSUE_TRACKER_TOOL] Transition Issue:
issue: {TEST_KEY}
transition: {{jira.transition.test_case.start_design}} # Draft -> In Design
# later: {{jira.transition.test_case.ready_to_run}} # In Design -> Ready
# later: {{jira.transition.test_case.automation_review_from_ready}} # Ready -> In Review
# later: {{jira.transition.test_case.approve_to_automate}} # In Review -> Candidate
# OR: {{jira.transition.test_case.for_manual}} # Ready -> Manual
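The transition comments above imply a small state machine. A sketch using the short suffixes of the {{jira.transition.test_case.*}} placeholders as transition names — the table is read directly from those comments, and any other move is rejected:

```python
# (state, transition) -> next state, per the Draft -> ... -> Candidate/Manual flow.
TRANSITIONS = {
    ("Draft", "start_design"): "In Design",
    ("In Design", "ready_to_run"): "Ready",
    ("Ready", "automation_review_from_ready"): "In Review",
    ("In Review", "approve_to_automate"): "Candidate",
    ("Ready", "for_manual"): "Manual",
}

def transition(state, name):
    """Apply one workflow transition, refusing anything outside the state machine."""
    nxt = TRANSITIONS.get((state, name))
    if nxt is None:
        raise ValueError(f"illegal transition {name!r} from {state!r}")
    return nxt

# Candidate path: Draft -> In Design -> Ready -> In Review -> Candidate
state = "Draft"
for step in ("start_design", "ready_to_run", "automation_review_from_ready",
             "approve_to_automate"):
    state = transition(state, step)
```

Encoding the legal moves as data makes the "same state machine in both modalities" claim checkable before any Transition Issue call is fired.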