# /imperatives
Extract atomic imperatives (MUST/SHOULD/MAY) from markdown instruction files into JSONL.
## Usage

```
/imperatives                             # default: ai-workspace/rules/*.md + AGENTS.md
/imperatives .claude/rules/*.md          # specific globs
/imperatives --output imperatives.jsonl  # write to file
```

## Steps
- **Resolve files.** Default: `ai-workspace/rules/*.md` and `AGENTS.md`. Expand globs to absolute paths. Zero matches → error, stop.

- **Pass 1 — Script extraction (fast, deterministic).**

  ```bash
  node --import tsx scripts/extract-imperatives.ts <files...> --output /tmp/imperatives-pass1.jsonl
  ```

  Keyword-based: catches RFC 2119 terms + imperative verbs. ~80% coverage, <1s.

- **Pass 2 — Subagent reasoning (catches implicit imperatives).** For each file, dispatch a subagent to identify imperatives the regex missed — contextual rules, implicit constraints, prose-embedded obligations.

  Subagent prompt:

  > Read `<file>`. Here are the imperatives already extracted by the script for this file:
  >
  > `<pass 1 JSONL lines where source.file matches>`
  >
  > Identify additional imperative statements the regex missed. Look for:
  >
  > - Implicit obligations hidden in prose (e.g., "The workflow is a menu, not a pipeline" implies MUST NOT enforce order)
  > - Contextual constraints (e.g., section context that makes a statement imperative)
  > - Conditional rules not triggered by keyword patterns
  >
  > For each new imperative found, output one JSON line (same schema — id, level, polarity, subject, predicate, when, source, tool_scope, tags, raw). Do NOT duplicate entries already in pass 1. If none found, output nothing.

  Use `subagent_type: "explorer"` (Haiku) for most files. If a file had zero pass-1 imperatives OR contains dense prose (>50 lines without a keyword match), escalate that file to Sonnet for deeper reasoning.

- **Merge passes.** Concatenate pass 1 + pass 2 outputs. Deduplicate by `raw` text. Write final output to `--output <path>` or present inline.

- **Present summary.** Report in a table:
  - Total count (pass 1 baseline + pass 2 additions)
  - Breakdown by `level` and `tool_scope`
  - Files with zero imperatives after both passes
  - Pass 2 additions highlighted separately

- **Downstream callers.** If invoked by `/policy-algebra` or `/distill`, return the JSONL path directly — skip the summary.
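Each extracted imperative is one JSONL record following the schema named in the steps above. A hypothetical example line (all values are invented for illustration; only the field names, and the `source.file` subfield referenced in the pass-2 prompt, come from this document — the exact shape of `source` is an assumption):

```json
{"id": "agents-012", "level": "MUST", "polarity": "negative", "subject": "workflow", "predicate": "enforce step order", "when": "always", "source": {"file": "AGENTS.md", "line": 12}, "tool_scope": "all", "tags": ["workflow"], "raw": "The workflow is a menu, not a pipeline."}
```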
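The keyword matching that pass 1 performs can be sketched as follows. This is a minimal illustration, not the real `scripts/extract-imperatives.ts`: the term list, the emitted field subset, and the line-by-line granularity are all assumptions.

```typescript
// RFC 2119 requirement terms, negated forms listed first so the
// alternation matches "MUST NOT" before the bare "MUST".
const RFC2119 = /\b(MUST NOT|MUST|SHALL NOT|SHALL|SHOULD NOT|SHOULD|MAY)\b/;

interface Imperative {
  level: "MUST" | "SHOULD" | "MAY";
  polarity: "positive" | "negative";
  raw: string;
}

// Scan markdown line by line; emit one record per line containing a term.
function extractImperatives(markdown: string): Imperative[] {
  const out: Imperative[] = [];
  for (const line of markdown.split("\n")) {
    const m = line.match(RFC2119);
    if (!m) continue;
    const term = m[1];
    out.push({
      // SHALL is treated as a synonym of MUST, per RFC 2119.
      level: term.startsWith("MUST") || term.startsWith("SHALL")
        ? "MUST"
        : term.startsWith("SHOULD")
          ? "SHOULD"
          : "MAY",
      polarity: term.endsWith("NOT") ? "negative" : "positive",
      raw: line.trim(),
    });
  }
  return out;
}
```

Being pure regex-over-lines, this kind of pass is fast and deterministic, which is exactly why the prose-level cases are left to pass 2.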
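The merge step reduces to a first-wins dedup keyed on `raw`. A sketch, assuming duplicates match exactly on `raw` text:

```typescript
// Concatenate pass-1 and pass-2 JSONL lines, keeping the first record
// seen for each distinct `raw` value, so pass-1 entries win on ties.
function mergePasses(pass1: string[], pass2: string[]): string[] {
  const seen = new Set<string>();
  const merged: string[] = [];
  for (const line of [...pass1, ...pass2]) {
    const { raw } = JSON.parse(line) as { raw: string };
    if (seen.has(raw)) continue; // duplicate of an earlier entry
    seen.add(raw);
    merged.push(line);
  }
  return merged;
}
```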
## Failure modes
| Condition | Behavior |
|---|---|
| No files matched | Error message. Stop. |
| File not found | Script warns on stderr, continues with remaining files. |
| Zero imperatives | Report zero. Not an error. |
| Script exits non-zero | Surface stderr to user. |
## Cross-tool notes
- Codex / Cursor: run the script directly — it's tool-agnostic.