# expert-panel


## Preamble (runs on skill start)


```bash
# Version check (silent if up to date)
python3 telemetry/version_check.py 2>/dev/null || true

# Telemetry opt-in (first run only, then remembers your choice)
python3 telemetry/telemetry_init.py 2>/dev/null || true
```

> **Privacy:** This skill logs usage locally to `~/.ai-marketing-skills/analytics/`. Remote telemetry is opt-in only. No code, file paths, or repo content is ever collected. See `telemetry/README.md`.

---
---

## Expert Panel


General-purpose scoring and iterative improvement engine. Auto-assembles the right experts for whatever is being evaluated, scores it, and loops until 90+.


## Step 1: Intake — Understand What's Being Scored


Collect or infer from context:

1. **Content/artifact** — the thing(s) to score (paste, file path, or URL)
2. **Content type** — copy, sequence, landing page, strategy, title, chart, candidate eval, etc.
3. **Offer context** — what's being sold/promoted? To whom? What domain/industry?
4. **Variants** — are there multiple versions to compare? (A/B/C)
5. **Source skill** — is this output from another skill (e.g., cold-outbound-optimizer)? If yes, note the source for feedback-to-source routing in Step 6.

If context is obvious from the conversation, don't ask — just proceed.


## Step 2: Auto-Assemble the Expert Panel


Build a panel of 7–10 experts tailored to the content type and domain.

### Assembly rules


1. **Start with content-type experts.** Read the `experts/` directory for pre-built panels matching the content type. If an exact match exists (e.g., `experts/linkedin.md` for a LinkedIn post), use it as the base.
2. **Add domain/offer experts.** Based on the offer context, add 1–3 experts who understand the specific industry or domain. Examples:
   - Scoring bakery marketing → add Food & Beverage Marketing Expert
   - Scoring a SaaS landing page → add SaaS Conversion Expert
   - Scoring recruiting outreach → add Agency Recruiter + Talent Market Expert
   - Scoring medical device copy → add Healthcare Compliance Expert
3. **Always include these two:**
   - **AI Writing Detector** — see `experts/humanizer.md`. Weight: 1.5x. Non-negotiable.
   - **Brand Voice Match** — checks alignment with the configured brand voice and known rejection patterns from `references/patterns.md` (if present).
4. **Check learned patterns.** If `references/patterns.md` exists, read it. If any patterns apply to this content type, brief the panel on them. Dock points for known-bad patterns.
5. **Cap at 10 experts.** If you have more than 10, merge overlapping roles.
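The assembly rules above can be sketched as a small helper. This is a minimal illustration, not part of the skill itself; the function signature and the example expert names are assumptions:

```python
def assemble_panel(content_experts, domain_experts, cap=10):
    """Combine content-type and domain experts with the two mandatory roles.

    Returns (name, weight) pairs; the AI Writing Detector carries a 1.5x weight.
    """
    mandatory = [("AI Writing Detector", 1.5), ("Brand Voice Match", 1.0)]
    optional = [(name, 1.0) for name in content_experts + domain_experts]
    # Cap at 10 experts total; the mandatory pair is never dropped.
    optional = optional[: cap - len(mandatory)]
    return optional + mandatory

panel = assemble_panel(
    content_experts=["LinkedIn Ghostwriter", "B2B Copy Chief"],
    domain_experts=["SaaS Conversion Expert"],
)
```

Note the cap here simply truncates the optional list; merging overlapping roles, as rule 5 asks, is left to judgment.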

### Panel output format


List each expert with: Name, lens/focus, what they check.


## Step 3: Select Scoring Rubric


Choose the appropriate rubric from `scoring-rubrics/`:

| Content type | Rubric file |
|---|---|
| Blog, social, email, newsletter, scripts | `scoring-rubrics/content-quality.md` |
| Strategy, recommendations, analysis | `scoring-rubrics/strategic-quality.md` |
| Landing pages, ads, CTAs | `scoring-rubrics/conversion-quality.md` |
| Charts, data viz, infographics | `scoring-rubrics/visual-quality.md` |
| Candidate evaluations | `scoring-rubrics/evaluation-quality.md` |
| Other | Synthesize a rubric from the two closest matches |

Read the selected rubric file for detailed criteria and point allocation.

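The rubric mapping above is mechanical enough to sketch as a lookup. The keyword lists are illustrative, and `None` stands in for the "synthesize from the two closest matches" fallback:

```python
RUBRICS = {
    "content": "scoring-rubrics/content-quality.md",
    "strategic": "scoring-rubrics/strategic-quality.md",
    "conversion": "scoring-rubrics/conversion-quality.md",
    "visual": "scoring-rubrics/visual-quality.md",
    "evaluation": "scoring-rubrics/evaluation-quality.md",
}

def select_rubric(content_type):
    """Map a content type to its rubric file; None means no exact match."""
    keywords = {
        "blog": "content", "social": "content", "email": "content",
        "newsletter": "content", "script": "content",
        "strategy": "strategic", "recommendation": "strategic", "analysis": "strategic",
        "landing page": "conversion", "ad": "conversion", "cta": "conversion",
        "chart": "visual", "infographic": "visual",
        "candidate": "evaluation",
    }
    key = keywords.get(content_type.lower())
    return RUBRICS[key] if key else None
```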

## Step 4: Score — Recursive Loop Until 90+


Target: 90/100 across all experts. Non-negotiable. Max 3 rounds.

Each round produces:


```
## Round [N] — Score: [AVG]/100

| Expert | Score | Key Feedback |
|---|---|---|
| [Name] | [0-100] | [One-line rationale] |
| ... | ... | ... |

Aggregate: [weighted average — humanizer at 1.5x]
Top 3 weaknesses: [ranked]
Changes made: [specific edits addressing each weakness]
```

Then the revised content/artifact.

### Rules


- Scores must be brutally honest. No padding to 90.
- The humanizer score is weighted 1.5x in the aggregate.
- If aggregate < 90: identify the top 3 weaknesses → revise → next round.
- If aggregate ≥ 90: finalize and proceed to output.
- After 3 rounds, if still < 90: return the best version with its honest score plus a note on what's holding it back.
- Show ALL rounds in the output — the iteration trail is part of the value.
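The loop and weighting rules above can be sketched as follows. `run_panel` and `revise` are placeholders for the panel's actual work, and all names are illustrative:

```python
HUMANIZER_WEIGHT = 1.5
TARGET, MAX_ROUNDS = 90, 3

def aggregate(scores):
    """Weighted average of expert scores, with the humanizer at 1.5x."""
    weights = {name: HUMANIZER_WEIGHT if name == "AI Writing Detector" else 1.0
               for name in scores}
    return sum(scores[n] * weights[n] for n in scores) / sum(weights.values())

def score_loop(artifact, run_panel, revise):
    """Iterate until the weighted aggregate reaches 90, for at most 3 rounds.

    run_panel(artifact) -> dict of expert name -> 0-100 score
    revise(artifact, scores) -> improved artifact
    Returns the final artifact and the (round, aggregate) history.
    """
    history = []
    for round_no in range(1, MAX_ROUNDS + 1):
        scores = run_panel(artifact)
        avg = aggregate(scores)
        history.append((round_no, avg))
        if avg >= TARGET:
            break
        artifact = revise(artifact, scores)
    return artifact, history
```

After the loop, a below-target final aggregate corresponds to the "best version with honest score" outcome rather than a failure.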

### Variant comparison mode


When scoring multiple variants (A/B/C):

- Score each variant independently through the full panel.
- After scoring, rank variants by aggregate score.
- If the top variant is < 90, iterate on the best one (don't iterate all of them).


## Step 5: Output Format


### Winner + Score (always at top)


```
🏆 Result: [SCORE]/100 — [PASS ✅ | NEEDS WORK ⚠️]

[Final content/artifact here]

Iterations: [N] rounds
Panel: [Expert names, comma-separated]
```

If variants: show the winner first, then runner-up scores.

```
🏆 Winner: Variant [X] — [SCORE]/100

[Winning content]

Runner-up scores:
- Variant A: 87/100
- Variant B: 82/100
- Variant C: 91/100 ← Winner
```

### Feedback History (below the result)


Show full scoring rounds:

---
<details>
<summary>📊 Scoring History (N rounds)</summary>

[All round tables from Step 4]

</details>


## Step 6: Feedback-to-Source (When Scoring Another Skill's Output)


When the scored content came from another skill, generate a Source Improvement Brief:

```
🔁 Feedback for [Source Skill]

What scored low:
- [Pattern]: [Specific example from this content]

Suggested skill improvements:
- [Concrete change to the source skill's process/rubric/prompt]

Patterns to add to source skill:
- [Any recurring weakness that should become a rule]
```

This brief can be used to update the source skill's SKILL.md or rubrics.

---
---

## Step 7: Memory — Learn from Approvals and Rejections


After the user approves or rejects panel output:

### On approval (score ≥ 90, user accepts)


Note what worked. No action needed unless a new positive pattern emerges.

### On rejection (user overrides the panel or rejects 90+ content)


1. Ask why (or infer from context).
2. Add a new pattern to `references/patterns.md` using this format:

   ```markdown
   ## [Pattern Name]

   - Type: rejection | preference | override
   - Content types: [which types this applies to]
   - Rule: [What to always/never do]
   - Example: [The specific instance that triggered this]
   - Date: [YYYY-MM-DD]
   - Point dock: [-N points when detected]
   ```

3. Confirm: "Added pattern: [one-line summary]. Panel will dock [N] points for this going forward."

### Pattern enforcement


Every scoring round, check `references/patterns.md` against the content. Apply point docks before expert scoring begins. This means known-bad patterns are penalized even if individual experts miss them.

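Assuming each learned pattern carries a detection regex and a point dock (the field names and the example patterns below are invented for illustration; the skill does not prescribe a parse format), the pre-scoring dock pass might look like:

```python
import re

# Illustrative entries, as they might be parsed from references/patterns.md
PATTERNS = [
    {"name": "Hype adjective", "regex": r"\b(revolutionary|game-chang\w*)\b", "dock": 5},
    {"name": "Stacked exclamations", "regex": r"!{2,}", "dock": 3},
]

def apply_docks(content):
    """Return total points to deduct and the names of the patterns that matched."""
    hits = [p for p in PATTERNS if re.search(p["regex"], content, re.IGNORECASE)]
    return sum(p["dock"] for p in hits), [p["name"] for p in hits]

docks, matched = apply_docks("A revolutionary tool your team will love!!")
```

Running the dock pass before the panel scores keeps the penalty deterministic, independent of whether any individual expert notices the pattern.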

## Reference Files


| File | Purpose | When to read |
|---|---|---|
| `experts/humanizer.md` | AI writing detection rubric (24 patterns) | Every scoring run |
| `experts/[domain].md` | Pre-built expert panels for common domains | When domain matches |
| `scoring-rubrics/content-quality.md` | Content scoring rubric | Content scoring |
| `scoring-rubrics/strategic-quality.md` | Strategy scoring rubric | Strategy scoring |
| `scoring-rubrics/conversion-quality.md` | Landing page/ad/CTA rubric | Conversion scoring |
| `scoring-rubrics/visual-quality.md` | Chart/data viz/infographic rubric | Visual scoring |
| `scoring-rubrics/evaluation-quality.md` | Candidate/assessment rubric | Eval scoring |
| `references/patterns.md` | Learned rejection patterns | Every scoring run |
| `references/expert-assembly.md` | Domain-expert examples for auto-assembly | When building unfamiliar panels |