SEO Maker

@rules/seo-workflow.md
@rules/validation.md
<purpose>Audit and improve a project's search visibility across traditional search engines and AI answer engines.
- Audit website or project SEO in a systematic way.
- Cover on-page SEO, technical SEO, content SEO, and Core Web Vitals.
- Evaluate AEO readiness for featured snippets, voice search, and direct-answer surfaces.
- Evaluate GEO readiness for citation likelihood in generative AI responses.
- Evaluate LLMO readiness for AI crawler access, freshness, and model-readable context.
- Save prioritized recommendations and evidence under `.hypercore/seo-maker/[slug]/`.
- Update existing reports so SEO improvement history remains traceable.
- If the user asks for highest score, max score, maximum score, perfect score, or continuous improvement, run an audit -> fix/recommend -> re-audit loop and keep the best result.
</purpose>
<routing_rule>
Use `seo-maker` when the main outcome is an SEO/AEO/GEO/LLMO audit, optimization report, or evidence-backed search visibility improvement plan.
Route neighboring work elsewhere:
- Page or product UI design: use `designer` or the relevant frontend design skill.
- Competitor or market research without site audit: use `research`.
- Pre-release build and deployment checks: use `pre-deploy`.
- Pure performance engineering without search context: use the relevant performance or optimization workflow.
- Broad AI search trend research without a target site or content set: use `research`.
</routing_rule>
<trigger_conditions>
Positive examples:
- "Audit this site's SEO."
- "Check metadata and structured data."
- "Create an SEO audit report."
- "Review search-engine optimization status and give improvement recommendations."
- "Summarize how to improve Core Web Vitals scores."
- "Optimize our content so AI search engines can cite it."
- "Check whether ChatGPT or Perplexity can surface our brand."
- "Analyze this site from AEO and GEO perspectives."
- "Keep iterating fixes until the SEO score is as high as possible."
- "Audit, fix, and re-verify until the search optimization score is close to perfect."
Negative examples:
- "Design this landing page." -> use `designer`.
- "Research competitor market positioning." -> use `research`.
- "Check the pre-deploy checklist." -> use `pre-deploy`.
Boundary examples:
- "Optimize this page's performance."
  Use `seo-maker` only when performance is evaluated through SEO/Core Web Vitals impact.
- "Research AI search trends."
  Use `seo-maker` only when the output is tied to a target site, page, or content inventory.
</trigger_conditions>
<modes>
| Situation | Mode |
|---|---|
| Full SEO audit for a new project or site | create |
| On-page SEO review for a specific page | create |
| Add a new analysis to an existing SEO report | update |
| Focused Core Web Vitals or technical SEO analysis | create |
| Re-check after SEO improvements | update |
| Iterative improvement toward best or perfect score | optimize |
| AEO/GEO citation readiness analysis | create |
| Add AEO/GEO analysis to an existing report | update |
</modes>
<supported_targets>
- Metadata and SEO elements in HTML pages and Next.js/React components.
- `robots.txt`, `sitemap.xml`, `llms.txt`, canonical tags, and structured data.
- Core Web Vitals signals such as LCP, INP, and CLS.
- `<head>` elements including title, meta description, Open Graph, and Twitter Card.
- Heading hierarchy from `h1` through `h6`.
- Image alt text and internal link structure.
- Schema.org JSON-LD markup, including AI trust signals.
- AEO elements such as Q&A formats, direct-answer structure, and featured-snippet optimization.
- GEO elements such as citable sentence structure, statistics with sources, and entity authority.
- LLMO elements such as `llms.txt`, AI crawler access, and content freshness.
</supported_targets>
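The `<head>`, heading, and alt-text checks above can be sketched with Python's standard-library HTML parser. This is a minimal illustration, not the skill's actual scanner; the class and field names are invented for the example, and a real audit should use a render-aware tool, since SPA pages may inject metadata at runtime.

```python
from html.parser import HTMLParser

class OnPageScan(HTMLParser):
    """Collect a few of the on-page signals this skill audits."""

    def __init__(self):
        super().__init__()
        self.signals = {"title": None, "meta_description": None,
                        "headings": [], "images_missing_alt": 0}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and a.get("name") == "description":
            self.signals["meta_description"] = a.get("content")
        elif tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.signals["headings"].append(tag)  # order reveals hierarchy gaps
        elif tag == "img" and not a.get("alt"):
            self.signals["images_missing_alt"] += 1

    def handle_data(self, data):
        if self._in_title:
            self.signals["title"] = (self.signals["title"] or "") + data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

scan = OnPageScan()
scan.feed(
    "<html><head><title>Acme</title>"
    '<meta name="description" content="Widgets">'
    "</head><body><h1>Hi</h1><img src='x.png'></body></html>"
)
```

The collected `signals` dict then feeds the on-page findings: a missing title or description becomes a `warning`, a heading order that skips levels becomes an `info` finding.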
<complexity_routing>
| Complexity | Signals | Handling |
|---|---|---|
| Simple | Single-page review, one SEO element, quick metadata audit | Direct: write `report.md` and `sources.md` |
| Complex | Full-site audit, many pages, technical SEO plus content SEO plus Core Web Vitals, competitor comparison | Tracked: use `flow.json` |
Before starting, record:

```text
Complexity: [simple/complex] — [one-line reason]
Mode: [create/update/optimize]
Target: [site/page/project path]
Proof surface: [commands, browser checks, web sources, or local files]
```
</complexity_routing>
<universal_intake>
Before scoring any project, classify the audit context so this skill works across stacks:
- `target_type`: `live-url`, `local-static`, `nextjs`, `react-spa`, `docs-site`, `ecommerce`, `blog`, or `app-with-marketing-pages`
- `access_level`: live URL, local files only, Search Console available, analytics available, field Core Web Vitals available, or AI citation probe available
- `allowed_action`: `audit-only`, `recommend`, `edit-code`, or `optimize-loop`
- `measurement_confidence`: lower confidence when live URL, Search Console, field Core Web Vitals, or AI citation probes are unavailable
Do not hide missing evidence. If a recommendation is based on static files, lab data, synthetic probes, or heuristics, label it that way in `results.json`.
</universal_intake>
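A concrete intake record helps make the four fields above unambiguous. The values below are illustrative, and the `evidence_basis` key is an assumption about how the "label it that way" requirement might be serialized; the authoritative schema is references/artifact-spec.md.

```python
import json

# Hypothetical intake record for a local-files-only audit.
intake = {
    "target_type": "nextjs",
    "access_level": ["local files only"],  # no live URL, no Search Console
    "allowed_action": "audit-only",
    "measurement_confidence": "low",       # static files only -> lower confidence
    "evidence_basis": "static-files",      # assumed label, per the no-hidden-evidence rule
}
serialized = json.dumps(intake, indent=2)
```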
<artifact_contract>
Create or update `.hypercore/seo-maker/[slug]/`.
Expected files:

```text
.hypercore/seo-maker/[slug]/
├── dashboard.html   # Browser-readable dashboard
├── results.json     # Structured audit results
├── results.js       # File URL fallback for browser rendering
├── report.md        # Markdown report
├── sources.md       # Source and evidence log
└── flow.json        # Required for complex or optimize mode
```

For simple mode, `report.md` and `sources.md` are the minimum. For complex or optimize mode, all files are expected.
Follow references/artifact-spec.md for the file schema.
Render order:
- Gather evidence and write/update `results.json`.
- Generate `results.js` for direct local browser viewing.
- Render `dashboard.html` from the current results.
- Write `report.md` and `sources.md` with links or file references.
</artifact_contract>
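The `results.js` fallback exists because a `dashboard.html` opened via a `file://` URL typically cannot `fetch()` a sibling JSON file, while a `<script src="results.js">` tag loads fine. A minimal generation step might look like this; the `window.__SEO_RESULTS__` global name is an assumption and must match whatever the dashboard actually reads.

```python
import json
from pathlib import Path

def write_results_js(slug_dir: str) -> None:
    """Wrap results.json in a JS assignment so dashboard.html can load
    the data via a script tag even from a file:// URL."""
    d = Path(slug_dir)
    data = json.loads((d / "results.json").read_text(encoding="utf-8"))
    # Global name is illustrative; keep it in sync with dashboard.html.
    js = "window.__SEO_RESULTS__ = " + json.dumps(data, indent=2) + ";\n"
    (d / "results.js").write_text(js, encoding="utf-8")
```

Running this after every `results.json` update keeps the render order's step 2 trivial and idempotent.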
<workflow>
| Phase | Task | Output |
|---|---|---|
| 0 | Determine target, mode, complexity, proof surface, and universal intake fields | Execution brief |
| 1 | Establish measurement methods and confidence limits | |
| 2 | Collect evidence from local code, pages, browser checks, and web sources | Evidence log |
| 3 | Audit technical SEO, platform policy, AEO, GEO, LLMO, Core Web Vitals, and structured data | Structured findings |
| 4 | Separate official requirements from field/tool/lab/synthetic/heuristic findings | Evidence-graded findings |
| 5 | Prioritize issues by impact, confidence, effort, and source tier | Recommendation set |
| 6 | Write artifacts and dashboard | |
| 7 | If optimize mode, apply or recommend fixes and re-audit | Best verified result |
| 8 | Summarize score, wins, confidence limits, risks, and next actions | Final report |
</workflow>
<audit_dimensions>
Check these dimensions when relevant to the target:
- Technical SEO: crawlability, indexability, canonicalization, sitemap, robots directives, response status, redirects, and duplicate pages.
- Platform policy: Googlebot, Google-Extended, OAI-SearchBot, GPTBot, ChatGPT-User, PerplexityBot/ClaudeBot when present, snippet controls, X-Robots-Tag, and optional `llms.txt`.
- On-page SEO: title, description, heading hierarchy, keyword alignment, URL readability, and internal links.
- Content SEO: intent match, depth, freshness, topical coverage, uniqueness, and readability.
- Core Web Vitals: LCP, INP, CLS, render-blocking resources, image sizing, and interaction latency.
- Structured data: JSON-LD validity, Schema.org fit, visible-content parity, entity identifiers, breadcrumbs, FAQs, products, articles, or organization markup. Do not imply structured data guarantees rich results or AI citations.
- AEO: concise visible answer blocks, Q&A structure, snippet-ready summaries, voice-search phrasing, and direct-answer clarity. Treat fixed answer lengths as heuristic.
- GEO: citable claims, statistics with sources, entity authority, author or brand trust signals, and content that AI systems can quote safely.
- LLMO: optional `llms.txt`, AI crawler access, clean markdown or semantic HTML, clear entity relationships, and updated canonical content. Missing `llms.txt` is not critical by default.
</audit_dimensions>
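The platform-policy dimension can be probed with a simple `robots.txt` scan for the crawlers listed above. This is a heuristic sketch: it only checks exact-name user-agent groups for a full-site `Disallow: /`, and ignores wildcard groups, per-path rules, and `X-Robots-Tag` headers, which a real audit must also cover (RFC 9309 defines the full group-merging semantics).

```python
def audit_ai_crawler_policy(robots_txt: str) -> dict:
    """Report which AI-related crawlers robots.txt blocks site-wide."""
    bots = ["Googlebot", "Google-Extended", "OAI-SearchBot", "GPTBot",
            "ChatGPT-User", "PerplexityBot", "ClaudeBot"]
    groups, agents, rules = [], [], []
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and blanks
        if ":" not in line:
            continue
        field, value = (p.strip() for p in line.split(":", 1))
        if field.lower() == "user-agent":
            if rules:                          # a rule line ended the previous group
                groups.append((agents, rules))
                agents, rules = [], []
            agents.append(value)
        elif field.lower() in ("allow", "disallow"):
            rules.append((field.lower(), value))
    if agents or rules:
        groups.append((agents, rules))
    blocked = {a for grp_agents, grp_rules in groups
               for a in grp_agents if ("disallow", "/") in grp_rules}
    return {bot: ("blocked" if bot in blocked else "allowed") for bot in bots}
```

A `blocked` GPTBot or PerplexityBot would surface as a platform-policy finding with `observed-file` as its source tier.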
<scoring>
Use a transparent 100-point score when enough evidence exists:
- Technical SEO: 20
- On-page SEO: 20
- Content SEO: 15
- Core Web Vitals: 15
- Structured data: 10
- AEO readiness: 10
- GEO/LLMO readiness: 10
If evidence is incomplete, mark affected categories as `unknown` instead of inventing certainty.
Each finding should include:
- Severity: `critical`, `warning`, or `info` (use impact/effort fields for prioritization beyond severity).
- Confidence: high, medium, or low.
- `evidence_grade`: `official`, `field`, `tool`, `lab`, `synthetic`, or `heuristic`.
- `measurement_method`: scan, tool, probe, source, or command used.
- `source_tier`: `official-doc`, `observed-file`, `field-data`, `tool-output`, `synthetic-probe`, or `research-backed-heuristic`.
- Evidence: command output, URL, local file path, browser observation, or saved probe result.
- Recommendation: specific action and expected impact.
- Owner surface: code, content, infrastructure, analytics, or external platform.
</scoring>
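One way to turn the rubric above into a number is sketched below. The weights come from the category list; excluding `unknown` categories from the denominator, rather than scoring them as zero, is an interpretation of the "do not invent certainty" rule and not a prescribed formula, and the key names are illustrative.

```python
WEIGHTS = {
    "technical_seo": 20, "onpage_seo": 20, "content_seo": 15,
    "core_web_vitals": 15, "structured_data": 10,
    "aeo_readiness": 10, "geo_llmo_readiness": 10,
}

def overall_score(category_scores: dict) -> dict:
    """Combine per-category results into a 0-100 score.

    `category_scores` maps category -> fraction earned (0.0-1.0) or the
    string "unknown". Unknowns are excluded and reported, so the score
    reflects only what was actually measured."""
    earned = possible = 0
    unknowns = []
    for cat, weight in WEIGHTS.items():
        value = category_scores.get(cat, "unknown")
        if value == "unknown":
            unknowns.append(cat)
            continue
        earned += value * weight
        possible += weight
    score = round(100 * earned / possible, 1) if possible else None
    return {"score": score, "points_counted": possible, "unknown": unknowns}
```

Reporting `points_counted` alongside the score makes it obvious when a high number rests on thin evidence.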
<optimize_loop>
Use optimize mode when the user requests a maximum score, perfect score, continuous iteration, or "keep fixing until it passes" behavior.
Loop rules:
- Run a baseline audit and write the score.
- Pick the highest-impact fix or recommendation with the best confidence/effort ratio.
- Apply safe local code/content fixes when they are in scope; otherwise record an actionable recommendation.
- Re-run the relevant audit checks.
- Keep the change only if the score or verified evidence improves without regression.
- Stop when the score target is met, no safe local fixes remain, or further work requires external credentials or business decisions.
Do not fake a perfect score. If external evidence is unavailable, report the unknowns and the best verified score.
</optimize_loop>
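The loop rules above can be summarized as a small control skeleton. All four callables are placeholders for this skill's real audit checks and edits; the skeleton only shows the keep-on-improvement, revert-on-regression discipline.

```python
def optimize_loop(audit, candidate_fixes, apply, revert, target=100, max_iters=10):
    """Audit -> fix -> re-audit loop that keeps only verified improvements.

    `audit` returns a numeric score; `candidate_fixes` yields fixes
    ordered by expected impact vs. effort; `apply`/`revert` perform and
    undo a fix."""
    best = audit()                         # baseline audit first
    history = [("baseline", best)]
    for fix in candidate_fixes:
        if best >= target or len(history) > max_iters:
            break
        apply(fix)
        score = audit()                    # re-run the relevant checks
        if score > best:                   # keep only verified improvements
            best = score
            history.append((fix, score))
        else:                              # regression or no change: roll back
            revert(fix)
    return best, history                   # best verified result, never a faked score
```

A quick simulation with numeric "fixes" shows the rollback behavior: a fix that lowers the score is reverted, and the loop ends with the best verified state.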
<validation>
At completion, `.hypercore/seo-maker/[slug]/` should contain:
- `results.json` with structured audit results, and status `complete` for complex or optimize mode.
- `dashboard.html` rendered from the latest results when dashboard output is expected.
- `results.js` for local browser fallback when dashboard output is expected.
- `report.md` with prioritized findings, score, and recommendations.
- `sources.md` with the evidence log.
Validate:
- Every critical or warning finding has evidence.
- Recommendations are specific enough for an engineer, marketer, or content owner to act on.
- Scores are derived from observed evidence, not assumptions.
- Google AI features are not described as requiring special schema, AI text files, or magic markup.
- FAQPage recommendations distinguish Google rich-result eligibility from answer-friendly visible FAQ content.
- Unknowns are explicitly marked.
- Optimize mode records baseline score, changes/recommendations, re-audit evidence, and the best verified result.
</validation>