# expert-panel
## Preamble (runs on skill start)
```bash
# Version check (silent if up to date)
python3 telemetry/version_check.py 2>/dev/null || true

# Telemetry opt-in (first run only, then remembers your choice)
python3 telemetry/telemetry_init.py 2>/dev/null || true
```
> **Privacy:** This skill logs usage locally to `~/.ai-marketing-skills/analytics/`. Remote telemetry is opt-in only. No code, file paths, or repo content is ever collected. See `telemetry/README.md`.
---

## Expert Panel
General-purpose scoring and iterative improvement engine. Auto-assembles the
right experts for whatever is being evaluated, scores it, and loops until 90+.
## Step 1: Intake — Understand What's Being Scored
Collect or infer from context:
- Content/artifact — The thing(s) to score (paste, file path, or URL)
- Content type — Copy, sequence, landing page, strategy, title, chart, candidate eval, etc.
- Offer context — What's being sold/promoted? To whom? What domain/industry?
- Variants — Are there multiple versions to compare? (A/B/C)
- Source skill — Is this output from another skill? (e.g., cold-outbound-optimizer) If yes, note the source for feedback-to-source routing in Step 6.
If context is obvious from the conversation, don't ask — just proceed.
## Step 2: Auto-Assemble the Expert Panel
Build a panel of 7–10 experts tailored to the content type and domain.
### Assembly rules
- Start with content-type experts. Read the `experts/` directory for pre-built panels matching the content type. If an exact match exists (e.g., `experts/linkedin.md` for a LinkedIn post), use it as the base.
- Add domain/offer experts. Based on the offer context, add 1–3 experts who understand the specific industry or domain. Examples:
  - Scoring bakery marketing → add Food & Beverage Marketing Expert
  - Scoring a SaaS landing page → add SaaS Conversion Expert
  - Scoring recruiting outreach → add Agency Recruiter + Talent Market Expert
  - Scoring medical device copy → add Healthcare Compliance Expert
- Always include these two:
  - AI Writing Detector — see `experts/humanizer.md`. Weight: 1.5x. Non-negotiable.
  - Brand Voice Match — checks alignment with the configured brand voice and known rejection patterns from `references/patterns.md` (if present).
- Check learned patterns. If `references/patterns.md` exists, read it. If any patterns apply to this content type, brief the panel on them. Dock points for known-bad patterns.
- Cap at 10 experts. If you have more than 10, merge overlapping roles.
### Panel output format
List each expert with: Name, lens/focus, what they check.
## Step 3: Select Scoring Rubric
Choose the appropriate rubric from `scoring-rubrics/`:

| Content type | Rubric file |
|---|---|
| Blog, social, email, newsletter, scripts | |
| Strategy, recommendations, analysis | |
| Landing pages, ads, CTAs | |
| Charts, data viz, infographics | |
| Candidate evaluations | |
| Other | Synthesize a rubric from the two closest matches |

Read the selected rubric file for detailed criteria and point allocation.
## Step 4: Score — Recursive Loop Until 90+
Target: 90/100 across all experts. Non-negotiable. Max 3 rounds.
Each round produces:
```
Round [N] — Score: [AVG]/100

| Expert | Score | Key Feedback |
|---|---|---|
| [Name] | [0-100] | [One-line rationale] |
| ... | ... | ... |

Aggregate: [weighted average — humanizer at 1.5x]
Top 3 weaknesses: [ranked]
Changes made: [specific edits addressing each weakness]
```

Then the revised content/artifact.

### Rules
- Scores must be brutally honest. No padding to 90.
- Humanizer score weighted 1.5x in the aggregate.
- If aggregate < 90: identify top 3 weaknesses → revise → next round.
- If aggregate ≥ 90: finalize and proceed to output.
- After 3 rounds, if still < 90: return best version with honest score + note on what's holding it back.
- Show ALL rounds in output — the iteration trail is part of the value.
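The loop above can be sketched in a few lines. This is a minimal illustration, not part of the skill: the expert names and scores are made up, and only the 1.5x humanizer weighting and the 90-or-3-rounds stopping rule come from the rules above.

```python
HUMANIZER = "AI Writing Detector"

def aggregate(scores: dict[str, float]) -> float:
    """Weighted average of expert scores; the humanizer counts 1.5x."""
    weight = lambda name: 1.5 if name == HUMANIZER else 1.0
    total = sum(weight(n) * s for n, s in scores.items())
    return total / sum(weight(n) for n in scores)

# Example round: (1.5 * 80 + 90 + 88) / 3.5 is roughly 85.14, so another round is needed.
round_scores = {HUMANIZER: 80.0, "SaaS Conversion Expert": 90.0, "Brand Voice Match": 88.0}

for round_no in range(1, 4):               # max 3 rounds
    if aggregate(round_scores) >= 90:      # finalize and proceed to output
        break
    # otherwise: identify top 3 weaknesses, revise, and re-score next round
```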
### Variant comparison mode
When scoring multiple variants (A/B/C):
- Score each variant independently through the full panel.
- After scoring, rank variants by aggregate score.
- If top variant is < 90, iterate on the best one (don't iterate all of them).
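As a sketch, variant handling reduces to ranking by aggregate score and iterating only the top one. The scores and the `pick_winner` helper here are illustrative, not part of the skill.

```python
def pick_winner(variant_scores: dict[str, float]) -> str:
    """Rank variants by aggregate score and return the best one."""
    return max(variant_scores, key=lambda v: variant_scores[v])

variant_scores = {"A": 87.0, "B": 82.0, "C": 91.0}
winner = pick_winner(variant_scores)           # "C"
needs_iteration = variant_scores[winner] < 90  # False: the winner already clears 90
```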
## Step 5: Output Format
### Winner + Score (always at top)
```
🏆 Result: [SCORE]/100 — [PASS ✅ | NEEDS WORK ⚠️]

[Final content/artifact here]

Iterations: [N] rounds
Panel: [Expert names, comma-separated]
```

If variants: show winner first, then runner-up scores.
```
🏆 Winner: Variant [X] — [SCORE]/100

[Winning content]

Runner-up scores
- Variant A: 87/100
- Variant B: 82/100
- Variant C: 91/100 ← Winner
```
### Feedback History (below the result)
Show full scoring rounds.
<details>
<summary>📊 Scoring History (N rounds)</summary>

[All round tables from Step 4]

</details>

---

## Step 6: Feedback-to-Source (When Scoring Another Skill's Output)
When the scored content came from another skill, generate a Source Improvement Brief:
```
🔁 Feedback for [Source Skill]

What scored low
- [Pattern]: [Specific example from this content]

Suggested skill improvements
- [Concrete change to the source skill's process/rubric/prompt]

Patterns to add to source skill
- [Any recurring weakness that should become a rule]
```

This brief can be used to update the source skill's SKILL.md or rubrics.

---

## Step 7: Memory — Learn from Approvals and Rejections
After the user approves or rejects panel output:
### On approval (score ≥ 90, user accepts)
Note what worked. No action needed unless a new positive pattern emerges.
### On rejection (user overrides the panel or rejects 90+ content)
1. Ask why (or infer from context).
2. Add a new pattern to `references/patterns.md` using this format:

   ```markdown
   [Pattern Name]
   - Type: rejection | preference | override
   - Content types: [which types this applies to]
   - Rule: [What to always/never do]
   - Example: [The specific instance that triggered this]
   - Date: [YYYY-MM-DD]
   - Point dock: [-N points when detected]
   ```

3. Confirm: "Added pattern: [one-line summary]. Panel will dock [N] points for this going forward."

### Pattern enforcement
Every scoring round, check `references/patterns.md` against the content. Apply point docks before expert scoring begins. This means known-bad patterns are penalized even if individual experts miss them.
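That pre-pass can be sketched as follows. The pattern fields mirror the Step 7 entry format, and the substring check is a naive stand-in for real pattern detection, which would be semantic.

```python
def apply_docks(base_score: float, content: str, patterns: list[dict]) -> float:
    """Subtract point docks for known-bad patterns before expert scoring begins."""
    score = base_score
    for p in patterns:
        if p["example"].lower() in content.lower():  # naive stand-in for detection
            score -= p["dock"]
    return max(score, 0.0)

patterns = [
    # Shape follows the references/patterns.md entry format from Step 7.
    {"name": "Buzzword opener", "example": "in today's fast-paced world", "dock": 5},
]
docked = apply_docks(100.0, "In today's fast-paced world, bakeries...", patterns)  # 95.0
```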
## Reference Files
| File | Purpose | When to read |
|---|---|---|
| `experts/humanizer.md` | AI writing detection rubric (24 patterns) | Every scoring run |
| `experts/` | Pre-built expert panels for common domains | When domain matches |
| | Content scoring rubric | Content scoring |
| | Strategy scoring rubric | Strategy scoring |
| | Landing page/ad/CTA rubric | Conversion scoring |
| | Chart/data viz/infographic rubric | Visual scoring |
| | Candidate/assessment rubric | Eval scoring |
| `references/patterns.md` | Learned rejection patterns | Every scoring run |
| | Domain-expert examples for auto-assembly | When building unfamiliar panels |