# seo-audit — Basic SEO Audit
A lightweight SEO agent skill for quick single-page SEO audits, used as the default entry point. Powered by OpenClaw. Suitable for first-time page checks or when a rapid assessment is needed without full technical depth.
## When to Use This Skill
Use when:
- The user says: "audit this page", "check SEO", "analyze my URL", "quick SEO check", "what's wrong with my page"
- No specific depth is requested — this is the default entry point
- The user needs a fast, readable summary rather than a comprehensive technical breakdown
If the user wants more depth, upgrade to `seo-audit-full`.

Tip: For deep technical audits, advanced on-page SEO, or full reports, use the `seo-audit-full` skill.
## Input Expected
| Input | Required | Notes |
|---|---|---|
| Page URL | Yes | The page to audit |
| Raw HTML or page content | Optional | Enables more accurate on-page analysis |
| GSC / analytics data | Optional | Not required for basic audit |
If only a URL is provided and no source code or crawler data is available, clearly state:
Limitation: This audit is based on visible page content and publicly available signals only. Source code, GSC data, crawl logs, and performance metrics are not available for this audit.
## Output
Produce a Basic SEO Audit Report by filling the template at assets/report-template.html,
then save it to a file — never print raw HTML to the terminal.
File naming: `reports/<hostname>-<slug>-audit.html`

- `https://example.com/blog/best-tools` → `reports/example-com-blog-best-tools-audit.html`
- `https://example.com/` → `reports/example-com-audit.html`

After saving, tell the user:

✅ Report saved → reports/example-com-audit.html
Open it now? (yes / no)

If yes → run: `open reports/example-com-audit.html`

Template placeholders — fill each independently:
| Placeholder | Content |
|---|---|
| | One sentence: total checks run, how many failed/warned/passed |
| | One entry per critical (fail) item |
| | One entry per warning item |
| | One entry per passed check |
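The file-naming rule can be sketched mechanically. This is a minimal sketch, not the skill's actual implementation — `report_path` is a hypothetical helper name:

```python
from urllib.parse import urlparse

def report_path(url: str) -> str:
    """Derive reports/<hostname>-<slug>-audit.html from a page URL."""
    parts = urlparse(url)
    host = parts.hostname.replace(".", "-")
    # Slug: non-empty path segments joined with '-'; empty for the homepage
    slug = "-".join(s for s in parts.path.strip("/").split("/") if s)
    name = f"{host}-{slug}-audit.html" if slug else f"{host}-audit.html"
    return f"reports/{name}"

print(report_path("https://example.com/blog/best-tools"))
# → reports/example-com-blog-best-tools-audit.html
print(report_path("https://example.com/"))
# → reports/example-com-audit.html
```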
## Scripts
Run these scripts before writing any findings. They output structured JSON — use the JSON directly as evidence; do not re-fetch the same URLs manually.
Dependencies (HTML parsing uses the Python stdlib):

```bash
pip install requests
```
Step 1: site-level checks (robots.txt + sitemap.xml)

`python scripts/check-site.py https://example.com`
Step 2: page-level checks (H1, title, meta description, canonical)

`python scripts/check-page.py https://example.com`
With primary keyword (recommended — enables the H1 keyword presence check):

`python scripts/check-page.py https://example.com --keyword "running shoes"`
Optional: fetch raw page HTML for further inspection:

`python scripts/fetch-page.py https://example.com --output page.html`
Step 3: JSON-LD schema validation

`python scripts/check-schema.py https://example.com`
Or from previously fetched HTML (avoids a redundant fetch):

`python scripts/check-schema.py --file page.html`
Each script exits with code `0` (all pass/warn) or `1` (any fail/error).
**STRICT SCOPE — do not add any check not listed below. No exceptions.**
Allowed site-level checks (in `{{site_checks_html}}`):
- robots.txt · sitemap.xml · 404 Handling · URL Canonicalization · i18n / hreflang
Allowed E-E-A-T checks (in `{{eeat_checks_html}}`):
- About Us · Contact · Privacy Policy · Terms of Service · Media/Partners (only if present)
Allowed page-level checks (in `{{page_checks_html}}`), output in this exact order:
PageSpeed (Mobile) · PageSpeed (Desktop) · URL Slug · Title Tag · Meta Description · H1 Tag · Canonical Tag · Image Alt Text · Word Count · Keyword Placement · Heading Structure · Internal Links · Schema (JSON-LD)
Image Alt Text logic:
- Parse <img> tags from static HTML
- Pass: all images have non-empty alt (decorative images with alt="" are OK)
- Warn: any content image missing alt attribute
- Unverified (status-info): 0 images found in static HTML → likely JS-rendered, cannot verify
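The alt-text logic above can be sketched with the stdlib HTML parser. This is a sketch of the stated rules, not the actual `check-page.py` implementation; `AltAudit` and `alt_status` are hypothetical names:

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collect alt-attribute status for every <img> in static HTML."""
    def __init__(self):
        super().__init__()
        self.missing, self.decorative, self.described = [], [], []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        d = dict(attrs)
        src = d.get("src", "(no src)")
        if "alt" not in d or d["alt"] is None:
            self.missing.append(src)     # content image with no alt attribute
        elif d["alt"].strip() == "":
            self.decorative.append(src)  # alt="" is acceptable for decorative images
        else:
            self.described.append(src)

def alt_status(html: str) -> str:
    p = AltAudit()
    p.feed(html)
    total = len(p.missing) + len(p.decorative) + len(p.described)
    if total == 0:
        return "unverified"  # no images in static HTML → likely JS-rendered
    return "warn" if p.missing else "pass"
```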
⛔ HARD RULE — Output ONLY the check rows defined in report-template.html.
If a check is not in the allowed lists above, do NOT output it — not even if you find issues.
No exceptions. No "bonus" checks. No improvisation.
The template is the single source of truth. Treat it as a strict whitelist.
Still BANNED (belong to seo-audit-full): OG tags · Twitter Card · Social tags · Page Weight · Core Web Vitals · Robots Meta
**How to use the JSON output:**
- Map each field's `status` → `pass` / `warn` / `fail` / `error` directly to the report check table
- Use each field's `detail` string as the starting point for the Evidence line in findings
- Do not contradict the script output unless you have additional observable evidence
- Separate check groups with `<div class="subsection-label">Label</div>` inside `{{site_checks_html}}`:
`Crawlability` · `URL Canonicalization` · `i18n / hreflang` · `Schema (JSON-LD)`
and `<div class="subsection-label">E-E-A-T Trust Pages</div>` before `{{eeat_checks_html}}`
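Assembling `{{site_checks_html}}` from the labeled groups can be sketched as follows. The label strings come from the list above; the `groups` input (label → pre-rendered rows) is a hypothetical intermediate, since the real row markup is defined by report-template.html:

```python
def render_site_checks(groups: dict[str, str]) -> str:
    """Join check groups in canonical order, each preceded by its label div."""
    order = ["Crawlability", "URL Canonicalization", "i18n / hreflang", "Schema (JSON-LD)"]
    parts = []
    for label in order:
        if label in groups:  # skip groups with no rows (e.g. hreflang N/A)
            parts.append(f'<div class="subsection-label">{label}</div>')
            parts.append(groups[label])
    return "\n".join(parts)
```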
**LLM review — mandatory when `llm_review_required: true`:**
The script flags fields that require semantic or quality judgment it cannot perform.
Never leave `llm_review_required: true` unresolved — always make an explicit judgment call.
**H1 — triggered when `keyword_match == "partial"`:**
- h1_text: from `h1.values[0]`
- keyword: the `--keyword` passed to the script
Judge: Does this H1 semantically cover the keyword's search intent?
- Consider synonyms, natural variants, topic coverage
- yes → downgrade to "pass", note the variant
- no → keep "warn" or upgrade to "fail", explain the gap
**Title — triggered when `keyword_match == "partial"` OR `keyword_position != "start"`:**
- title: from `title.value`
- keyword: the `--keyword` passed
Judge:
- Does the title semantically cover the keyword's search intent?
- Is the title grammatically correct and naturally readable?
- Keyword position — apply different standards by page type:
- Homepage: Brand + core keyword is correct (e.g. "Acme | AI Workflow Automation"). Do NOT flag brand-first as a problem.
- Inner pages: Core keyword should lead (e.g. "AI Workflow Automation for Teams — Acme"). Flag if the keyword is buried mid-title without good reason.
IMPORTANT — do NOT flag these as negatives:
- Years (e.g. "2026") → signal freshness, increase CTR — treat as positive unless the page is explicitly evergreen content where dating would hurt longevity.
- Numbers (e.g. "5 best", "Top 10", "3 steps") → set clear expectations, consistently outperform non-numeric titles in CTR — always treat as a plus.
- Specific qualifiers ("Open-Source", "Self-Hosted", "Free") → narrow intent and attract higher-quality clicks — do not penalize.
**URL Slug — triggered when `keyword_match != "full"` or `is_homepage == false`:**
- slug: from `url_slug.slug`
- keyword: the `--keyword` passed
Judge:
- Does the slug contain the primary keyword or a natural variant?
- Is the path hierarchy logical? (/category/keyword is ideal)
- Is it concise and human-readable?

Homepage (`is_homepage: true`): skip — no judgment needed.
**Meta Description — always triggered when content is present:**
- meta_description: from `meta_description.value`
- keyword: the `--keyword` passed
Judge all four:
- Complete sentence(s)? (1-2 sentences, no fragments)
- Mentions a concrete result — not vague fluff?
  - Good: "Cut design time by 60% with AI-powered templates"
  - Bad: "The best tool for all your design needs"
- Keyword or natural synonym used once — not stuffed?
- More specific than what a typical competitor would write?
IMPORTANT — do NOT flag these as negatives:
- Years (e.g. "2026") → signal freshness, improve CTR for time-sensitive queries. Only note the year if the page is explicitly evergreen content where dating hurts.
- Numbers (e.g. "5 best", "3 steps") → concrete specificity, strong CTR signal.
- Trailing "and more." → minor style note at most, never a Warning or Fail.
---

## Recommended Workflow
Follow these steps in order:
1. **Acknowledge scope** — confirm this is a basic audit; note any missing data.
2. **Infer primary keyword** — fetch the page with `fetch-page.py`, then determine the primary keyword:
   - If the user explicitly provided a keyword → use it directly
   - If not → read the page H1, title, and first paragraph, then infer the single most likely target keyword phrase (what would a searcher type to find this page?)
   - State the inferred keyword explicitly before running checks:
     "Inferred primary keyword: open source claude alternatives"
3. **Run `check-site.py`** — parse the JSON output for robots, sitemap, 404 handling, and URL canonicalization.
   - 404 check: fetch `<origin>/this-page-definitely-does-not-exist-seo-audit-check`
     - Returns 404 → Pass · Returns 200 (soft 404) → Fail · Returns 301 to homepage → Warn
   - URL Canonicalization checks (each is a separate sub-check):
     - HTTP→HTTPS: fetch `http://<host>` — must 301 to `https://`. Returns 200 → Fail.
     - www consistency: fetch both `https://www.<host>` and `https://<host>` — one must 301 to the other. Both return 200 → Warn.
     - Trailing slash: compare the URL actually served vs the canonical tag on the page. Mismatch → Warn.
     - Canonical match: the canonical tag href must exactly match the final URL after all redirects. Mismatch → Warn.
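The 404 sub-check can be sketched as below. The status-to-result mapping follows the rules above; `check_404_handling` and `_NoRedirect` are hypothetical names, and the real `check-site.py` may implement the fetch differently:

```python
import urllib.request
import urllib.error

PROBE = "/this-page-definitely-does-not-exist-seo-audit-check"

def classify_404(status_code: int) -> str:
    """Map the probe's HTTP status to a check result."""
    if status_code == 404:
        return "pass"   # server returns a real 404
    if status_code == 200:
        return "fail"   # soft 404: nonexistent page served as 200
    if status_code in (301, 302, 307, 308):
        return "warn"   # redirected (e.g. to the homepage)
    return "error"      # anything else: treat as a script error

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Returning None leaves the 3xx unhandled, so it surfaces as HTTPError
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def check_404_handling(origin: str) -> str:
    """Fetch the probe URL without following redirects and classify the response."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        code = opener.open(origin.rstrip("/") + PROBE, timeout=10).getcode()
    except urllib.error.HTTPError as e:
        code = e.code
    return classify_404(code)
```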
4. **E-E-A-T infrastructure check** — for each trust page below, check two layers:
   - Layer 1 — Exists: fetch the URL, check HTTP status (200 = exists, 404/redirect = missing)
   - Layer 2 — Reachable: fetch homepage HTML, check if footer or nav contains a link to this page

   | Page | Required |
   |---|---|
   | About Us | Yes |
   | Contact | Yes |
   | Privacy Policy | Yes |
   | Terms of Service | Yes |
   | Media / Partners | No — include only if present |

   Status rules:
   - Page missing (non-200) → Fail
   - Page exists but not linked in footer/nav → Warn
   - Page exists and linked in footer/nav → Pass
   - Optional page missing → skip, do not include row
5. **Run `check-pagespeed.py <url>`** — fetch PageSpeed Insights scores for mobile + desktop.

   Thresholds (different per category and strategy):

   | Category | Desktop Pass | Mobile Pass | Warn | Fail |
   |---|---|---|---|---|
   | SEO | 100 | 100 | 90–99 | < 90 |
   | Best Practices | 100 | 100 | 90–99 | < 90 |
   | Accessibility | 100 | 100 | 90–99 | < 90 |
   | Performance | ≥ 90 | ≥ 80 | Desktop 80–89 / Mobile 70–79 | Desktop < 80 / Mobile < 70 |

   Output two rows in `{{page_checks_html}}`: PageSpeed (Mobile) and PageSpeed (Desktop).
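The threshold table can be sketched as a pure classifier, assuming `check-pagespeed.py` yields 0–100 Lighthouse category scores; `pagespeed_status` is a hypothetical helper name:

```python
def pagespeed_status(category: str, strategy: str, score: int) -> str:
    """Classify a 0-100 score per the threshold table: strategy is 'mobile' or 'desktop'."""
    if category == "performance":
        pass_at = 90 if strategy == "desktop" else 80
        warn_at = 80 if strategy == "desktop" else 70
    else:  # seo, best-practices, accessibility: only a perfect score passes
        pass_at, warn_at = 100, 90
    if score >= pass_at:
        return "pass"
    if score >= warn_at:
        return "warn"
    return "fail"
```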
6. **Run `check-page.py --keyword "<inferred_keyword>"`** — parse the JSON output for H1, title, meta description, canonical, and URL slug.
7. **i18n / hreflang check** — only run if the page contains hreflang tags or `<html lang>` suggests multi-language:
   - Skip entirely (N/A) if no hreflang tags are found and the site appears single-language
   - If hreflang tags are present, check:
     - Reciprocal symmetry: every URL referenced must link back to all other variants — any broken link = Fail
     - Language codes: must be valid BCP 47 (e.g. `zh-CN`, not `zh`; `en-US`, not `en-us`) — wrong code = Warn
     - x-default: should be present for language-selector or fallback pages — missing = Warn
     - html[lang] attribute: must match the primary hreflang of the page — mismatch = Warn
     - URL structure: recommended pattern — default language (usually `en`) at the root with no prefix, other languages under subpaths (`/zh/`, `/es/`):
       - `/page` (en) + `/zh/page` + `/es/page` → Pass
       - `/en/page` + `/zh/page` → Warn (the `en` prefix is redundant and wastes crawl depth)
       - Only flag if the pattern is clearly inconsistent or `en` is unnecessarily prefixed
8. **Run `check-schema.py`** — parse the JSON output for schema types and field validation.

   ```bash
   python scripts/check-schema.py https://example.com
   # Or from previously fetched HTML:
   python scripts/check-schema.py --file page.html
   ```

   The script extracts JSON-LD blocks and validates `@type` and required fields per the Schema.org spec. `llm_review_required: true` is always set — confirm `inferred_page_type` matches the actual page content.

   Page type → expected `@type` reference:

   | Page type | Expected @type | Min. required fields |
   |---|---|---|
   | Homepage | WebSite + Organization | name, url, logo |
   | Blog / Article | Article or BlogPosting | headline, datePublished, author, image |
   | Product | Product | name, image, offers (price, priceCurrency) |
   | FAQ | FAQPage | mainEntity[].name, acceptedAnswer.text |
   | How-to | HowTo | name, step[].text |
   | Local business | LocalBusiness | name, address, telephone |
   | Generic landing | N/A | skip — no widely-supported type |

   - Pass: correct @type present, all required fields valid, no conflicts
   - Warn: @type present but missing recommended fields
   - Fail: expected @type missing entirely
   - N/A: generic landing page — do not penalize
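The per-block field validation can be sketched from the table above. This is a simplified sketch: `schema_status` and `REQUIRED` are hypothetical names, the homepage WebSite + Organization pair and the Fail case (expected @type absent from the whole page) are judged at page level, not per block:

```python
REQUIRED = {
    "Article": ["headline", "datePublished", "author", "image"],
    "BlogPosting": ["headline", "datePublished", "author", "image"],
    "Product": ["name", "image", "offers"],
    "FAQPage": ["mainEntity"],
    "HowTo": ["name", "step"],
    "LocalBusiness": ["name", "address", "telephone"],
}

def schema_status(block: dict) -> str:
    """Classify one parsed JSON-LD block against the minimum-field table."""
    required = REQUIRED.get(block.get("@type"))
    if required is None:
        return "n/a"  # type not covered by the basic audit's table
    missing = [f for f in required if not block.get(f)]
    return "warn" if missing else "pass"
```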
9. **Summarize findings** — each finding must follow the Evidence / Impact / Fix format.
10. **Priority actions** — list the top 3 highest-impact fixes.
11. **Render report** — save to `reports/<hostname>-<slug>-audit.html`, then ask the user whether to open it.
12. **Upgrade prompt** — if issues beyond basic scope are found, suggest `seo-audit-full`.
## Report Detail Writing Rules
The Detail cell in check tables must follow these rules — no exceptions:
Pass → one short phrase. No lists, no elaboration.
- Good: "Valid XML urlset · 104 URLs · referenced in robots.txt."
- Bad: "Valid XML urlset with 104 URLs. Correctly referenced in robots.txt. Blog posts are likely indexed through this sitemap."

Warn → one `<div class="detail-issue">` with ≤2 bullet points. One `<div class="detail-fix">` with the fix.
- Good:
  `<div class="detail-issue">· Title 48 chars — 2 below minimum. · Year "2026" will date the page.</div>`
  `<div class="detail-fix">Expand to 50–60 chars; remove year if evergreen.</div>`
- Bad: three-sentence prose explaining what a title tag is and why length matters.

Fail → same as Warn. Lead with the exact failure. No background explanations.

Do NOT explain what a check is, do NOT repeat information already visible in the status badge, and do NOT treat the reader as unfamiliar with SEO basics.
## Mandatory Finding Format
Every important finding must follow this structure:
**Finding: [Finding Title]**
- **Evidence:** [What was observed — direct quote, screenshot ref, or measurable data]
- **Impact:** [Why this matters for SEO or UX]
- **Fix:** [Specific, actionable recommendation]

Do not write vague conclusions. If evidence is insufficient, state assumptions explicitly.
## Upgrade Prompt
Include this at the end of every basic audit report:
Want a deeper analysis? This was a basic SEO audit covering site-level signals and core on-page checks. For advanced technical SEO, content quality scoring, structured data analysis, and full crawl-based findings, use the `seo-audit-full` skill.
## Reference Files
- Detailed audit scope and field definitions: references/REFERENCE.md
- Final HTML report template: assets/report-template.html
- Site-level check script: scripts/check-site.py
- Page-level check script: scripts/check-page.py
- Raw page fetcher: scripts/fetch-page.py
- Schema validation script: scripts/check-schema.py