seo-maker

@rules/seo-workflow.md @rules/validation.md

SEO Maker

Audit and improve a project's search visibility across traditional search engines and AI answer engines.
<purpose>
  • Audit website or project SEO in a systematic way.
  • Cover on-page SEO, technical SEO, content SEO, and Core Web Vitals.
  • Evaluate AEO readiness for featured snippets, voice search, and direct-answer surfaces.
  • Evaluate GEO readiness for citation likelihood in generative AI responses.
  • Evaluate LLMO readiness for AI crawler access, freshness, and model-readable context.
  • Save prioritized recommendations and evidence under `.hypercore/seo-maker/[slug]/`.
  • Update existing reports so SEO improvement history remains traceable.
  • If the user asks for the highest, maximum, or perfect score, or for continuous improvement, run an audit -> fix/recommend -> re-audit loop and keep the best result.
</purpose>
<routing_rule>
Use `seo-maker` when the main outcome is an SEO/AEO/GEO/LLMO audit, an optimization report, or an evidence-backed search-visibility improvement plan.
Route neighboring work elsewhere:
  • Page or product UI design: use `designer` or the relevant frontend design skill.
  • Competitor or market research without a site audit: use `research`.
  • Pre-release build and deployment checks: use `pre-deploy`.
  • Pure performance engineering without search context: use the relevant performance or optimization workflow.
  • Broad AI search trend research without a target site or content set: use `research`.
</routing_rule>
<trigger_conditions>
Positive examples:
  • "Audit this site's SEO."
  • "Check metadata and structured data."
  • "Create an SEO audit report."
  • "Review search-engine optimization status and give improvement recommendations."
  • "Summarize how to improve Core Web Vitals scores."
  • "Optimize our content so AI search engines can cite it."
  • "Check whether ChatGPT or Perplexity can surface our brand."
  • "Analyze this site from AEO and GEO perspectives."
  • "Keep iterating fixes until the SEO score is as high as possible."
  • "Audit, fix, and re-verify until the search optimization score is close to perfect."
Negative examples:
  • "Design this landing page." -> use `designer`.
  • "Research competitor market positioning." -> use `research`.
  • "Check the pre-deploy checklist." -> use `pre-deploy`.
Boundary examples:
  • "Optimize this page's performance." Use `seo-maker` only when performance is evaluated through its SEO/Core Web Vitals impact.
  • "Research AI search trends." Use `seo-maker` only when the output is tied to a target site, page, or content inventory.
</trigger_conditions>
<modes>
| Situation | Mode |
| --- | --- |
| Full SEO audit for a new project or site | create |
| On-page SEO review for a specific page | create |
| Add a new analysis to an existing SEO report | update |
| Focused Core Web Vitals or technical SEO analysis | create |
| Re-check after SEO improvements | update |
| Iterative improvement toward a best or perfect score | optimize |
| AEO/GEO citation readiness analysis | create |
| Add AEO/GEO analysis to an existing report | update |
</modes>
<supported_targets>
  • Metadata and SEO elements in HTML pages and Next.js/React components.
  • `robots.txt`, `sitemap.xml`, `llms.txt`, canonical tags, and structured data.
  • Core Web Vitals signals such as LCP, INP, and CLS.
  • `<head>` elements including title, meta description, Open Graph, and Twitter Card.
  • Heading hierarchy from `h1` through `h6`.
  • Image alt text and internal link structure.
  • Schema.org JSON-LD markup, including AI trust signals.
  • AEO elements such as Q&A formats, direct-answer structure, and featured-snippet optimization.
  • GEO elements such as citable sentence structure, statistics with sources, and entity authority.
  • LLMO elements such as `llms.txt`, AI crawler accessibility, and content freshness.
</supported_targets>
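To make the Schema.org JSON-LD target concrete, here is a minimal sketch of Organization markup built in Python. Every value (the company name, the `example.com` domain, the profile URLs) is a placeholder assumption, not part of this skill's contract.

```python
import json

# Hypothetical Organization markup; every value below is a placeholder.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com",
    "logo": "https://example.com/logo.png",
    "sameAs": [
        "https://github.com/example",
        "https://www.linkedin.com/company/example",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(organization, indent=2)
```

An auditor would validate markup like this for JSON-LD well-formedness and for parity with visible page content, without implying it guarantees rich results.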
<complexity_routing>
| Complexity | Signals | Handling |
| --- | --- | --- |
| Simple | Single-page review, one SEO element, quick metadata audit | Direct: write `report.md` immediately |
| Complex | Full-site audit, many pages, technical SEO plus content SEO plus Core Web Vitals, competitor comparison | Tracked: use `flow.json` for phase tracking |

Before starting, record:

```text
Complexity: [simple/complex] — [one-line reason]
Mode: [create/update/optimize]
Target: [site/page/project path]
Proof surface: [commands, browser checks, web sources, or local files]
```
</complexity_routing>
<universal_intake>
Before scoring any project, classify the audit context so this skill works across stacks:
  • `target_type`: `live-url`, `local-static`, `nextjs`, `react-spa`, `docs-site`, `ecommerce`, `blog`, or `app-with-marketing-pages`.
  • `access_level`: live URL, local files only, Search Console available, analytics available, field Core Web Vitals available, or AI citation probe available.
  • `allowed_action`: `audit-only`, `recommend`, `edit-code`, or `optimize-loop`.
  • `measurement_confidence`: lower confidence when a live URL, Search Console, field Core Web Vitals, or AI citation probes are unavailable.
Do not hide missing evidence. If a recommendation is based on static files, lab data, synthetic probes, or heuristics, label it that way in `results.json`.
</universal_intake>
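The intake fields above can be captured as a small record before scoring begins. This is only a sketch; the concrete values and the plain-dict representation are illustrative assumptions.

```python
# A hypothetical intake record using the field names defined above.
intake = {
    "target_type": "nextjs",                # one of the listed target types
    "access_level": ["local files only"],   # evidence sources actually available
    "allowed_action": "recommend",          # audit-only | recommend | edit-code | optimize-loop
    "measurement_confidence": "medium",
}

# Lower confidence whenever no live evidence source is available.
live_sources = ("live URL", "Search Console available",
                "field Core Web Vitals available")
has_live_evidence = any(s in intake["access_level"] for s in live_sources)
if not has_live_evidence:
    intake["measurement_confidence"] = "low"
```

Recording the intake this way makes the confidence downgrade auditable instead of implicit.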
<artifact_contract>
Create or update `.hypercore/seo-maker/[slug]/`.
Expected files:

```text
.hypercore/seo-maker/[slug]/
├── dashboard.html      # Browser-readable dashboard
├── results.json        # Structured audit results
├── results.js          # File URL fallback for browser rendering
├── report.md           # Markdown report
├── sources.md          # Source and evidence log
└── flow.json           # Required for complex or optimize mode
```

For simple mode, `report.md` and `sources.md` are the minimum. For complex or optimize mode, all files are expected.
Follow references/artifact-spec.md for the file schema.
Render order:
  1. Gather evidence and write or update `results.json`.
  2. Generate `results.js` for direct local browser viewing.
  3. Render `dashboard.html` from the current results.
  4. Write `report.md` and `sources.md` with links or file references.
</artifact_contract>
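One way to produce the `results.js` fallback is to wrap the JSON in a global assignment, so `dashboard.html` can load it with a plain `<script>` tag from a `file://` URL where `fetch` of local JSON is typically blocked. This is a sketch; the `SEO_RESULTS` global name is an assumption, not mandated by this spec.

```python
import json
import tempfile
from pathlib import Path

def write_results_js(results: dict, out_dir: str) -> Path:
    """Write results.json plus a results.js wrapper that dashboard.html
    can read via a plain <script> tag when opened from the filesystem."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "results.json").write_text(json.dumps(results, indent=2))
    js = "window.SEO_RESULTS = " + json.dumps(results) + ";\n"
    path = out / "results.js"
    path.write_text(js)
    return path

# Demo run in a temporary directory with placeholder results.
demo_dir = tempfile.mkdtemp()
path = write_results_js({"score": 72, "status": "complete"}, demo_dir)
```

Keeping `results.json` canonical and regenerating `results.js` from it avoids the two files drifting apart.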
<workflow>
| Phase | Task | Output |
| --- | --- | --- |
| 0 | Determine target, mode, complexity, proof surface, and universal intake fields | Execution brief |
| 1 | Establish measurement methods and confidence limits | `measurement_methods` |
| 2 | Collect evidence from local code, pages, browser checks, and web sources | Evidence log |
| 3 | Audit technical SEO, platform policy, AEO, GEO, LLMO, Core Web Vitals, and structured data | Structured findings |
| 4 | Separate official requirements from field/tool/lab/synthetic/heuristic findings | Evidence-graded findings |
| 5 | Prioritize issues by impact, confidence, effort, and source tier | Recommendation set |
| 6 | Write artifacts and dashboard | `.hypercore/seo-maker/[slug]/` |
| 7 | If optimize mode, apply or recommend fixes and re-audit | Best verified result |
| 8 | Summarize score, wins, confidence limits, risks, and next actions | Final report |
</workflow>
<audit_dimensions>
Check these dimensions when relevant to the target:
  • Technical SEO: crawlability, indexability, canonicalization, sitemap, robots directives, response status, redirects, and duplicate pages.
  • Platform policy: Googlebot, Google-Extended, OAI-SearchBot, GPTBot, ChatGPT-User, PerplexityBot/ClaudeBot when present, snippet controls, X-Robots-Tag, and optional `llms.txt`.
  • On-page SEO: title, description, heading hierarchy, keyword alignment, URL readability, and internal links.
  • Content SEO: intent match, depth, freshness, topical coverage, uniqueness, and readability.
  • Core Web Vitals: LCP, INP, CLS, render-blocking resources, image sizing, and interaction latency.
  • Structured data: JSON-LD validity, Schema.org fit, visible-content parity, entity identifiers, breadcrumbs, FAQs, products, articles, or organization markup. Do not imply structured data guarantees rich results or AI citations.
  • AEO: concise visible answer blocks, Q&A structure, snippet-ready summaries, voice-search phrasing, and direct-answer clarity. Treat fixed answer lengths as heuristic.
  • GEO: citable claims, statistics with sources, entity authority, author or brand trust signals, and content that AI systems can quote safely.
  • LLMO: optional `llms.txt`, AI crawler access, clean markdown or semantic HTML, clear entity relationships, and updated canonical content. A missing `llms.txt` is not critical by default.
</audit_dimensions>
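The platform-policy dimension can be probed directly with the standard library's robots parser. The sample `robots.txt` and URLs below are hypothetical; a real audit would fetch the site's own file and record each result as evidence.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks GPTBot from one section only.
robots_txt = """\
User-agent: GPTBot
Disallow: /private/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Record per-crawler access as audit evidence.
gptbot_private = parser.can_fetch("GPTBot", "https://example.com/private/page")
gptbot_blog = parser.can_fetch("GPTBot", "https://example.com/blog/post")
```

The same check can be repeated per crawler (Googlebot, OAI-SearchBot, PerplexityBot, and so on) to build the per-bot access matrix the platform-policy audit needs.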
<scoring>
Use a transparent 100-point score when enough evidence exists:
  • Technical SEO: 20
  • On-page SEO: 20
  • Content SEO: 15
  • Core Web Vitals: 15
  • Structured data: 10
  • AEO readiness: 10
  • GEO/LLMO readiness: 10
If evidence is incomplete, mark affected categories as `unknown` instead of inventing certainty.
Each finding should include:
  • Severity: `critical`, `warning`, or `info` (use impact/effort fields for prioritization beyond severity).
  • Confidence: high, medium, or low.
  • `evidence_grade`: `official`, `field`, `tool`, `lab`, `synthetic`, or `heuristic`.
  • `measurement_method`: the scan, tool, probe, source, or command used.
  • `source_tier`: `official-doc`, `observed-file`, `field-data`, `tool-output`, `synthetic-probe`, or `research-backed-heuristic`.
  • Evidence: command output, URL, local file path, browser observation, or saved probe result.
  • Recommendation: a specific action and its expected impact.
  • Owner surface: code, content, infrastructure, analytics, or external platform.
</scoring>
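When some categories are `unknown`, one defensible approach is to total only the evidence-backed categories and report the rest explicitly rather than scoring them at zero. The normalization choice and category keys below are a sketch, not mandated by the rubric.

```python
# Category weights from the 100-point rubric above.
WEIGHTS = {
    "technical_seo": 20, "on_page_seo": 20, "content_seo": 15,
    "core_web_vitals": 15, "structured_data": 10,
    "aeo_readiness": 10, "geo_llmo_readiness": 10,
}

def score(findings: dict) -> dict:
    """findings maps category -> fraction earned (0.0-1.0) or "unknown"."""
    known = {c: v for c, v in findings.items() if v != "unknown"}
    earned = sum(WEIGHTS[c] * v for c, v in known.items())
    return {
        "earned": round(earned, 1),
        "out_of": sum(WEIGHTS[c] for c in known),  # evidence-backed points only
        "unknown": sorted(set(WEIGHTS) - set(known)),
    }

result = score({
    "technical_seo": 0.9, "on_page_seo": 0.8, "content_seo": 1.0,
    "core_web_vitals": "unknown", "structured_data": 0.5,
    "aeo_readiness": "unknown", "geo_llmo_readiness": 0.6,
})
```

Reporting `earned` against a reduced `out_of` keeps the score honest: unknown categories neither inflate nor silently deflate it.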
<optimize_loop>
Use optimize mode when the user requests a maximum score, perfect score, continuous iteration, or "keep fixing until it passes" behavior.
Loop rules:
  1. Run a baseline audit and write the score.
  2. Pick the highest-impact fix or recommendation with the best confidence/effort ratio.
  3. Apply safe local code/content fixes when they are in scope; otherwise record an actionable recommendation.
  4. Re-run the relevant audit checks.
  5. Keep the change only if the score or verified evidence improves without regression.
  6. Stop when the score target is met, no safe local fixes remain, or further work requires external credentials or business decisions.
Do not fake a perfect score. If external evidence is unavailable, report the unknowns and the best verified score.
</optimize_loop>
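The loop rules above can be sketched as a greedy keep-if-improved loop. `audit` and the fix callables here are toy stand-ins for real checks and edits; a real run would order fixes by confidence/effort ratio as rule 2 requires.

```python
def optimize(audit, fixes, target=100, max_rounds=10):
    """Greedy audit -> fix -> re-audit loop that keeps only verified gains.
    `audit` returns a score; each fix is a pair of (apply, revert) callables,
    assumed pre-sorted with the highest-impact fix first."""
    best = audit()
    history = [best]
    for _ in range(max_rounds):
        if best >= target or not fixes:
            break
        apply_fix, revert_fix = fixes.pop(0)
        apply_fix()
        new_score = audit()            # re-run the relevant checks
        if new_score > best:
            best = new_score           # keep the verified improvement
        else:
            revert_fix()               # discard regressions
        history.append(best)
    return best, history

# Toy example: a mutable "site" whose score rises with each fixed issue.
site = {"fixed": 0}
audit = lambda: 90 + site["fixed"] * 5
good_fix = (lambda: site.update(fixed=site["fixed"] + 1),
            lambda: site.update(fixed=site["fixed"] - 1))
best, history = optimize(audit, [good_fix], target=95)
```

The recorded `history` doubles as the baseline-plus-re-audit evidence that optimize mode must report.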
<validation>
At completion, `.hypercore/seo-maker/[slug]/` should contain:
  • `results.json` with structured audit results and status `complete` for complex or optimize mode.
  • `dashboard.html` rendered from the latest results when dashboard output is expected.
  • `results.js` for local browser fallback when dashboard output is expected.
  • `report.md` with prioritized findings, score, and recommendations.
  • `sources.md` with the evidence log.
Validate:
  • Every critical or warning finding has evidence.
  • Recommendations are specific enough for an engineer, marketer, or content owner to act on.
  • Scores are derived from observed evidence, not assumptions.
  • Google AI features are not described as requiring special schema, AI text files, or magic markup.
  • FAQPage recommendations distinguish Google rich-result eligibility from answer-friendly visible FAQ content.
  • Unknowns are explicitly marked.
  • Optimize mode records baseline score, changes/recommendations, re-audit evidence, and the best verified result.
</validation>