optimise-seo
Optimise SEO
No visual redesigns or layout changes. Allowed: metadata, structured data, semantic HTML, internal links, alt text, sitemap/robots, performance tuning.
Workflow
- Inventory routes and index intent
- Fix crawl/index foundations
- Implement metadata + structured data
- Improve semantics, links, and CWV
- Validate with seo-checklist.md and document changes
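For the metadata step in Next.js, per-page metadata typically comes from a `generateMetadata` export. A minimal sketch, with placeholder URLs and copy, and a local stand-in for the `Metadata` type so the snippet is self-contained:

```typescript
// Local approximation of Next.js's Metadata shape; in a real app you
// would `import type { Metadata } from "next"` instead.
type PageMetadata = {
  title: string;
  description: string;
  alternates: { canonical: string };
  openGraph: { title: string; description: string; url: string; type: string };
  twitter: { card: string; title: string; description: string };
};

// Hypothetical helper: builds consistent metadata for one route so that
// title, description, canonical, OG, and Twitter tags never drift apart.
function buildMetadata(
  path: string,
  title: string,
  description: string
): PageMetadata {
  const url = `https://site.com${path}`; // assumption: your canonical origin
  return {
    title,
    description,
    alternates: { canonical: url },
    openGraph: { title, description, url, type: "website" },
    twitter: { card: "summary_large_image", title, description },
  };
}
```

In a page file you would return this object from `generateMetadata` (or export it as `metadata` for static routes).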
Must-have
- Sitemap (app/sitemap.ts) and robots (app/robots.ts)
- Canonicals consistent on every page
- Unique titles + descriptions
- OpenGraph + Twitter Card tags
- JSON-LD: Organization, WebSite, BreadcrumbList (+ Article/Product/FAQ as needed)
- One h1 and logical heading hierarchy
- Alt text, internal links, CWV targets, mobile/desktop parity
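The first two must-haves can be sketched as `app/sitemap.ts` and `app/robots.ts` route handlers. This sketch approximates the `MetadataRoute` types from `next` locally so it stands alone; the routes and crawl rules are placeholders:

```typescript
// Sketch of app/sitemap.ts; in a real app the return type would be
// MetadataRoute.Sitemap imported from "next".
type SitemapEntry = {
  url: string;
  lastModified: Date;
  changeFrequency: "daily" | "weekly" | "monthly";
  priority: number;
};

const BASE = "https://site.com"; // assumption: your production origin

function sitemap(): SitemapEntry[] {
  const routes = ["", "/about", "/blog"]; // assumption: your indexable routes
  return routes.map((path) => ({
    url: `${BASE}${path}`,
    lastModified: new Date(),
    changeFrequency: "weekly",
    priority: path === "" ? 1 : 0.7,
  }));
}

// Sketch of app/robots.ts: allow crawling, block non-content paths,
// and point crawlers at the sitemap.
function robots() {
  return {
    rules: [{ userAgent: "*", allow: "/", disallow: ["/api/"] }],
    sitemap: `${BASE}/sitemap.xml`,
  };
}
```

In the real files these would be `export default` functions; Next.js serves them at `/sitemap.xml` and `/robots.txt`.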
Programmatic SEO (pages at scale)
- Validate demand for a repeatable pattern before generating pages
- Require unique value per page and defensible data
- Clean subfolder URLs, hubs/spokes, and breadcrumbs
- Index only strong pages; monitor indexation and cannibalization
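"Index only strong pages" can be enforced at generation time by deciding the robots directive per page. A hedged sketch; the quality fields and thresholds below are illustrative, not recommendations:

```typescript
// Hypothetical record for one programmatic page candidate.
type PageCandidate = {
  slug: string;
  uniqueDataPoints: number; // defensible, page-specific facts
  wordCount: number;
};

// Emit an index or noindex robots directive per page, so thin
// candidates stay out of the index while still being crawlable.
function robotsDirective(
  page: PageCandidate
): "index,follow" | "noindex,follow" {
  const hasUniqueValue = page.uniqueDataPoints >= 3 && page.wordCount >= 300; // illustrative thresholds
  return hasUniqueValue ? "index,follow" : "noindex,follow";
}
```

The directive would feed the `robots` field of each generated page's metadata, and the same predicate can gate whether the page enters the sitemap at all.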
SEO audit (triage order)
- Crawl/index: robots, sitemap, noindex, canonicals, redirects, soft 404s
- Technical: HTTPS, CWV, mobile parity
- On-page/content: titles/H1, internal links, remove or noindex thin pages
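One audit check that is easy to automate is canonical consistency: the canonical tag on each fetched page should point back at the URL it was fetched from. A simplified sketch (the regex assumes `rel` precedes `href`; a real audit should use an HTML parser):

```typescript
// Extract the canonical URL from an HTML string and compare it with the
// URL the page was fetched from, ignoring trailing-slash differences.
function canonicalMatches(html: string, fetchedUrl: string): boolean {
  const m = html.match(/<link[^>]+rel="canonical"[^>]+href="([^"]+)"/);
  if (!m) return false; // a missing canonical counts as a failure
  const normalize = (u: string) => u.replace(/\/$/, "");
  return normalize(m[1]) === normalize(fetchedUrl);
}
```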
Don't
- Over-generate thin pages or doorway pages
- Omit or conflict canonicals
- Block crawlers unintentionally
- Rely on JS-only rendering without SSR/SSG
Resources
- nextjs-implementation.md — implementation patterns for steps 2-4
- seo-checklist.md — pass/fail validation during step 5
Validation
验证
```bash
curl -I https://site.com
curl -s https://site.com/robots.txt
curl -s https://site.com/sitemap.xml | head -n 20
curl -s https://site.com/page | rg -n 'rel="canonical"|property="og:|name="twitter:'
lighthouse https://site.com --output=json --output-path=report.json
```
- Validate JSON-LD with Rich Results Test per URL.
- Report remaining blockers with exact URLs and owner/action.
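The JSON-LD payloads named in the Must-have list can be built as plain objects and serialized into a `<script type="application/ld+json">` tag. A minimal sketch, with placeholder names and URLs:

```typescript
// Organization JSON-LD; all names and URLs below are placeholders.
function organizationJsonLd() {
  return {
    "@context": "https://schema.org",
    "@type": "Organization",
    name: "Example Co",
    url: "https://site.com",
    logo: "https://site.com/logo.png",
  };
}

// BreadcrumbList JSON-LD built from an ordered list of crumbs;
// position is 1-based per the schema.org definition.
function breadcrumbJsonLd(crumbs: { name: string; url: string }[]) {
  return {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    itemListElement: crumbs.map((c, i) => ({
      "@type": "ListItem",
      position: i + 1,
      name: c.name,
      item: c.url,
    })),
  };
}
```

`JSON.stringify` the result into the script tag, then paste each page's rendered payload into the Rich Results Test as part of step 5.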