audit-website


Website Audit Skill


Audit websites for SEO, technical, content, performance, and security issues using the squirrelscan CLI.
squirrelscan provides a CLI tool, squirrel, available for macOS, Windows, and Linux. It carries out extensive website auditing by emulating a browser and a search crawler, and by analyzing the website's structure and content against over 230 rules.
It provides a list of issues as well as suggestions on how to fix them.

Links


You can look up the docs for any rule with this template:
example:

What This Skill Does


This skill enables AI agents to audit websites against over 230 rules in 21 categories, including:
  • SEO issues: Meta tags, titles, descriptions, canonical URLs, Open Graph tags
  • Technical problems: Broken links, redirect chains, page speed, mobile-friendliness
  • Performance: Page load time, resource usage, caching
  • Content quality: Heading structure, image alt text, content analysis
  • Security: Leaked secrets, HTTPS usage, security headers, mixed content
  • Accessibility: Alt text, color contrast, keyboard navigation
  • Usability: Form validation, error handling, user flow
  • Links: Checks for broken internal and external links
  • E-E-A-T: Expertise, Experience, Authority, Trustworthiness
  • User Experience: User flow, error handling, form validation
  • Mobile: Checks for mobile-friendliness, responsive design, touch-friendly elements
  • Crawlability: Checks for crawlability, robots.txt, sitemap.xml and more
  • Schema: Schema.org markup, structured data, rich snippets
  • Legal: Compliance with legal requirements, privacy policies, terms of service
  • Social: Open Graph, Twitter Cards, and validation of schemas, snippets, etc.
  • URL Structure: Length, hyphens, keywords
  • Keywords: Keyword stuffing
  • Content: Content structure, headings
  • Images: Alt text, color contrast, image size, image format
  • Local SEO: NAP consistency, geo metadata
  • Video: VideoObject schema, accessibility
and more
The audit crawls the website, analyzes each page against audit rules, and returns a comprehensive report with:
  • Overall health score (0-100)
  • Category breakdowns (core SEO, technical SEO, content, security)
  • Specific issues with affected URLs
  • Broken link detection
  • Actionable recommendations
  • Rule severity levels (error, warning, notice), each with a rank between 1 and 10

When to Use


Use this skill when you need to:
  • Analyze a website's health
  • Debug technical SEO issues
  • Fix all of the issues mentioned above
  • Check for broken links
  • Validate meta tags and structured data
  • Generate site audit reports
  • Compare site health before/after changes
  • Improve website performance, accessibility, SEO, security and more.
You should re-audit as often as possible to ensure your website remains healthy and performs well.

Prerequisites


This skill requires the squirrel CLI to be installed and available in your PATH.

Installation


If squirrel is not already installed, you can install it using:

```bash
curl -fsSL https://squirrelscan.com/install | bash
```

This will:
  • Download the latest release binary
  • Install to `~/.local/share/squirrel/releases/{version}/`
  • Create a symlink at `~/.local/bin/squirrel`
  • Initialize settings at `~/.squirrel/settings.json`

If `~/.local/bin` is not in your PATH, add it to your shell configuration:

```bash
export PATH="$HOME/.local/bin:$PATH"
```

Windows Installation


Install using PowerShell:

```powershell
irm https://squirrelscan.com/install.ps1 | iex
```

This will:
  • Download the latest release binary
  • Install to `%LOCALAPPDATA%\squirrel\`
  • Add squirrel to your PATH

If using Command Prompt, you may need to restart your terminal for PATH changes to take effect.

Verify Installation


Check that squirrel is installed and accessible:

```bash
squirrel --version
```

Setup


Running `squirrel init` will set up a `squirrel.toml` configuration file in the current directory.
Each project should have a squirrel project name for the database. By default this is the name of the website you audit, but you can set it yourself so that all audits for a project go into one database.
You do this either on init with:

```bash
squirrel init --project-name my-project
```

or with aliases:

```bash
squirrel init -n my-project

# overwrite existing config
squirrel init -n my-project --force
```

or with config:

```bash
squirrel config set project.name my-project
```

If there is no squirrel.toml in the directory you're running from, CREATE ONE with `squirrel init` and specify the `-n` parameter for a project name (infer this).
The project name is used to identify the project in the database and to generate the database name.
It is stored in `~/.squirrel/projects/<project-name>`.

Usage


Intro


There are three processes that you can run, and they're all cached in the local project database:
  • crawl - subcommand to run, refresh, or continue a crawl
  • analyze - subcommand to analyze the crawl results
  • report - subcommand to generate a report in the desired format (llm, text, console, html, etc.)

The `audit` command is a wrapper around these three processes and runs them sequentially:

```bash
squirrel audit https://example.com --format llm
```

You should ALWAYS prefer the `llm` format option - it was made for you and provides an exhaustive and compact output format.
The FIRST SCAN should be a surface scan: a quick, shallow scan of the website that gathers basic information such as its structure, content, and technology stack. It runs quickly and without impacting the website's performance.
The SECOND SCAN should be a deep scan: a thorough, detailed scan of the website that gathers more information, such as its security, performance, and accessibility. It takes longer and may impact the website's performance.
If the user doesn't provide a website to audit, infer the possibilities from the local directory and environment variables (e.g. linked Vercel projects, references in memory or the code).
If the directory you're running from provides a way to run or restart a local dev server, run the audit against that.
If you discover more than one candidate website to audit, prompt the user to choose which one to audit.
If there is no website to discover, either local or on the web, ask the user which URL they would like to audit.
You should PREFER to audit live websites - only there do we get a TRUE representation of the website and its performance or rendering issues.
If you have both local and live websites to audit, prompt the user to choose which one to audit and SUGGEST they choose live.
You can apply fixes from an audit of the live site against the local code.
When planning, scope tasks so they can run concurrently as sub-agents to speed up fixes.
When implementing fixes, take advantage of subagents to speed up implementation.
Run typechecking and formatting against generated code when you finish, if available in the environment (ruff for Python, biome and tsc for TypeScript, etc.)
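That final checking step can be sketched with a small shell guard so that missing tools don't abort the run (a sketch only; the tool names are the examples mentioned above, and the exact invocations depend on the project):

```shell
# Run a tool only if it is on PATH; otherwise note the skip.
run_if_available() {
  if command -v "$1" >/dev/null 2>&1; then
    "$@"
  else
    echo "skipped: $1 (not installed)"
  fi
}

# Typecheck/format with whatever the environment provides
run_if_available ruff --version
run_if_available tsc --version
run_if_available biome --version
```

In a real project you would replace `--version` with the actual check commands (e.g. `ruff check .`, `tsc --noEmit`).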

Basic Workflow


The audit process is two steps:
  1. Run the audit (saves to database, shows console output)
  2. Export the report in the desired format

```bash
# Step 1: Run audit (default: console output)
squirrel audit https://example.com

# Step 2: Export as LLM format
squirrel report <audit-id> --format llm
```

Regression Diffs


When you need to detect regressions between audits, use diff mode:

```bash
# Compare current report against a baseline audit ID
squirrel report --diff <audit-id> --format llm

# Compare latest domain report against a baseline domain
squirrel report --regression-since example.com --format llm
```

Diff mode supports `console`, `text`, `json`, `llm`, and `markdown`. `html` and `xml` are not supported.

Running Audits


When running an audit:
  1. Fix ALL issues - critical, high, medium, and low priority
  2. Don't stop early - continue until score target is reached (see Score Targets below)
  3. Parallelize fixes - use subagents for bulk content edits (alt text, headings, descriptions)
  4. Iterate - fix batch → re-audit → fix remaining → re-audit → until done
  5. Only pause for human judgment - broken links may need manual review; everything else should be fixed automatically
  6. Show before/after - present score comparison only AFTER all fixes are complete
IMPORTANT: Fix ALL issues, don't stop early.
  • Iteration Loop: After fixing a batch of issues, re-audit and continue fixing until:
    • Score reaches target (typically 85+), OR
    • Only issues requiring human judgment remain (e.g., "should this link be removed?")
  • Treat all fixes equally: Code changes (`*.tsx`, `*.ts`) and content changes (`*.md`, `*.mdx`, `*.html`) are equally important. Don't stop after code fixes.
  • Parallelize content fixes: For issues affecting multiple files:
    • Spawn subagents to fix in parallel
    • Example: 7 files need alt text → spawn 1-2 agents to fix all
    • Example: 30 files have heading issues → spawn agents to batch edit
  • Don't ask, act: Don't pause to ask "should I continue?" - proceed autonomously until complete.
  • Completion criteria:
    • ✅ All errors fixed
    • ✅ All warnings fixed (or documented as requiring human review)
    • ✅ Re-audit confirms improvements
    • ✅ Before/after comparison shown to user
    • ✅ Site is complete and fixed (scores above 95 with full coverage)
Run multiple audits to ensure completeness and fix quality. Prompt the user to deploy fixes if auditing a live production, preview, staging or test environment.

Score Targets


| Starting Score | Target Score | Expected Work |
|---|---|---|
| < 50 (Grade F) | 75+ (Grade C) | Major fixes |
| 50-70 (Grade D) | 85+ (Grade B) | Moderate fixes |
| 70-85 (Grade C) | 90+ (Grade A) | Polish |
| > 85 (Grade B+) | 95+ | Fine-tuning |

A site is only considered COMPLETE and FIXED when scores are above 95 (Grade A) with coverage set to FULL (`--coverage full`).
Don't stop until target is reached.

Issue Categories


| Category | Fix Approach | Parallelizable |
|---|---|---|
| Meta tags/titles | Edit page components or metadata.ts | No |
| Structured data | Add JSON-LD to page templates | No |
| Missing H1/headings | Edit page components + content files | Yes (content) |
| Image alt text | Edit content files | Yes |
| Heading hierarchy | Edit content files | Yes |
| Short descriptions | Edit content frontmatter | Yes |
| HTTP→HTTPS links | Bulk sed/replace in content | Yes |
| Broken links | Manual review (flag for user) | No |

For parallelizable fixes: Spawn subagents with specific file assignments.
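The HTTP→HTTPS row is the most mechanical of these fixes; a minimal sketch with `find` and `sed` follows (the content path is a placeholder, and you should review the diff before committing):

```shell
# Illustrative content tree
mkdir -p /tmp/site/content
printf '[home](http://example.com/) and [docs](https://example.com/docs)\n' \
  > /tmp/site/content/page.md

# Bulk-replace http:// with https:// across markdown content
# (GNU sed shown; BSD/macOS sed needs: sed -i '' ...)
find /tmp/site/content -name '*.md' -exec sed -i 's|http://|https://|g' {} +

cat /tmp/site/content/page.md
```

Scope the pattern more tightly (e.g. to known domains) if some links must stay on plain HTTP.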

Content File Fixes


Many issues require editing content files (`*.md`, `*.mdx`). These are equally important as code fixes:
  • Image alt text: Edit markdown image tags to add descriptions
  • Heading hierarchy: Change `###` to `##` where H2 is skipped
  • Meta descriptions: Extend `excerpt` in frontmatter to 120+ chars
  • HTTP links: Replace `http://` with `https://` in all links

For 5+ files needing the same fix type, spawn a subagent:

```
Task: Fix missing alt text in 6 posts
Files: [list of files]
Pattern: Find `![](` or `<img src=` without alt, add descriptive text
```
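To build the file list for such a task, the empty-alt pattern can be located with a plain grep (the paths here are illustrative):

```shell
# Sample post: one image missing alt text, one with it
mkdir -p /tmp/content/blog
cat > /tmp/content/blog/post-1.md <<'EOF'
# My post
![](images/chart.png)
![A labelled diagram](images/diagram.png)
EOF

# Markdown images with empty alt text match the literal "![]("
grep -rn '!\[\](' /tmp/content/blog
```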
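The short-description fix can be triaged the same way; this awk sketch flags an `excerpt` under the 120-character threshold mentioned above (the frontmatter layout is illustrative):

```shell
# Sample frontmatter with a too-short excerpt
cat > /tmp/post.md <<'EOF'
---
title: Hello
excerpt: Too short.
---
EOF

# Print files whose excerpt is under 120 characters
awk -F': ' '/^excerpt:/ { if (length($2) < 120) print FILENAME ": excerpt only " length($2) " chars" }' /tmp/post.md
```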

Parallelizing Fixes with Subagents


Use the Task tool to spawn subagents for parallel fixes. Critical rules:
  1. Multiple Task calls in ONE message = parallel execution
  2. Sequential Task calls = slower, only when fixes have dependencies
  3. Each subagent gets a focused scope - don't overload with too many files
When to parallelize:
  • 5+ files need same fix type (alt text, headings, meta descriptions)
  • Fixes have no dependencies on each other
  • Files are independent (not importing from each other)
Subagent prompt structure:

```
Fix [issue type] in the following files:
- path/to/file1.md
- path/to/file2.md
- path/to/file3.md

Pattern: [what to find]
Fix: [what to change]

Do not ask for confirmation. Make all changes and report what was fixed.
```
Example - parallel alt text fixes:
When audit shows 12 files missing alt text, spawn 2-3 subagents in a SINGLE message:
```
[Task tool call 1]
subagent_type: "general-purpose"
prompt: |
  Fix missing image alt text in these files:
  - content/blog/post-1.md
  - content/blog/post-2.md
  - content/blog/post-3.md
  - content/blog/post-4.md

  Find images without alt text (![](path) or <img without alt=).
  Add descriptive alt text based on image filename and context.
  Do not ask for confirmation.

[Task tool call 2]
subagent_type: "general-purpose"
prompt: |
  Fix missing image alt text in these files:
  - content/blog/post-5.md
  - content/blog/post-6.md
  - content/blog/post-7.md
  - content/blog/post-8.md

  [same instructions...]

[Task tool call 3]
subagent_type: "general-purpose"
prompt: |
  Fix missing image alt text in these files:
  - content/blog/post-9.md
  - content/blog/post-10.md
  - content/blog/post-11.md
  - content/blog/post-12.md

  [same instructions...]
```
Example - parallel heading fixes:
```
[Task tool call 1]
Fix H1/H2 heading hierarchy in: docs/guide-1.md, docs/guide-2.md, docs/guide-3.md
Change ### to ## where H2 is skipped. Ensure single H1 per page.

[Task tool call 2]
Fix H1/H2 heading hierarchy in: docs/guide-4.md, docs/guide-5.md, docs/guide-6.md
[same instructions...]
```
Batch sizing:
  • 3-5 files per subagent (optimal)
  • Max 10 files per subagent
  • Spawn 2-4 subagents for parallel work
DO NOT parallelize:
  • Shared component edits (layout.tsx, metadata.ts)
  • JSON-LD schema changes (single source of truth)
  • Config file edits (may conflict)
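Heading-hierarchy batches like the example above are easier to assign if you first list offending files; a sketch that flags files with more than one H1 (paths are placeholders):

```shell
# Illustrative docs: guide-1 has two H1s, guide-2 is fine
mkdir -p /tmp/docs
printf '# One\n# Two\n' > /tmp/docs/guide-1.md
printf '# Only\n## Sub\n' > /tmp/docs/guide-2.md

# List files with more than one top-level heading
for f in /tmp/docs/*.md; do
  n=$(grep -c '^# ' "$f")
  if [ "$n" -gt 1 ]; then echo "$f: $n H1s"; fi
done
```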

Advanced Options


Audit more pages:

```bash
squirrel audit https://example.com --max-pages 200
```

Force fresh crawl (ignore cache):

```bash
squirrel audit https://example.com --refresh
```

Resume interrupted crawl:

```bash
squirrel audit https://example.com --resume
```

Verbose output for debugging:

```bash
squirrel audit https://example.com --verbose
```

Common Options


Audit Command Options


| Option | Alias | Description | Default |
|---|---|---|---|
| `--format <fmt>` | `-f <fmt>` | Output format: console, text, json, html, markdown, llm | console |
| `--coverage <mode>` | `-C <mode>` | Coverage mode: quick, surface, full | surface |
| `--max-pages <n>` | `-m <n>` | Maximum pages to crawl (max 5000) | varies by coverage |
| `--output <path>` | `-o <path>` | Output file path | - |
| `--refresh` | `-r` | Ignore cache, fetch all pages fresh | false |
| `--resume` | - | Resume interrupted crawl | false |
| `--verbose` | `-v` | Verbose output | false |
| `--debug` | - | Debug logging | false |
| `--trace` | - | Enable performance tracing | false |
| `--project-name <name>` | `-n <name>` | Override project name | from config |

Coverage Modes


Choose a coverage mode based on your audit needs:

| Mode | Default Pages | Behavior | Use Case |
|---|---|---|---|
| `quick` | 25 | Seed + sitemaps only, no link discovery | CI checks, fast health check |
| `surface` | 100 | One sample per URL pattern | General audits (default) |
| `full` | 500 | Crawl everything up to limit | Deep analysis |

Surface mode is smart - it detects URL patterns like `/blog/{slug}` or `/products/{id}` and only crawls one sample per pattern. This makes it efficient for sites with many similar pages (blogs, e-commerce).

```bash
# Quick health check (25 pages, no link discovery)
squirrel audit https://example.com -C quick --format llm

# Default surface audit (100 pages, pattern sampling)
squirrel audit https://example.com --format llm

# Full comprehensive audit (500 pages)
squirrel audit https://example.com -C full --format llm

# Override page limit for any mode
squirrel audit https://example.com -C surface -m 200 --format llm
```

**When to use each mode:**
- `quick`: CI pipelines, daily health checks, monitoring
- `surface`: Most audits - covers unique templates efficiently
- `full`: Before launches, comprehensive analysis, deep dives
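squirrel's pattern detection is internal, but the sampling idea behind surface mode can be pictured as bucketing URLs into templates and deduplicating (the sed rules here are illustrative, not squirrel's actual detector):

```shell
# Collapse concrete URLs into patterns, then dedupe; surface mode
# crawls roughly one sample per resulting bucket.
printf '%s\n' \
  /blog/hello-world /blog/another-post /products/42 /products/7 /about \
  | sed -E -e 's|^/blog/[^/]+$|/blog/{slug}|' -e 's|^/products/[0-9]+$|/products/{id}|' \
  | sort -u
```

Five URLs collapse into three buckets, which is why surface mode covers template variety with far fewer pages.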

Report Command Options


| Option | Alias | Description |
|---|---|---|
| `--list` | `-l` | List recent audits |
| `--severity <level>` | - | Filter by severity: error, warning, all |
| `--category <cats>` | - | Filter by categories (comma-separated) |
| `--format <fmt>` | `-f <fmt>` | Output format: console, text, json, html, markdown, xml, llm |
| `--output <path>` | `-o <path>` | Output file path |
| `--input <path>` | `-i <path>` | Load from JSON file (fallback mode) |

Config Subcommands


| Command | Description |
|---|---|
| `config show` | Show current config |
| `config set <key> <value>` | Set config value |
| `config path` | Show config file path |
| `config validate` | Validate config file |

Other Commands


| Command | Description |
|---|---|
| `squirrel feedback` | Send feedback to squirrelscan team |
| `squirrel skills install` | Install Claude Code skill |
| `squirrel skills update` | Update Claude Code skill |

Self Commands


Self-management commands under `squirrel self`:

| Command | Description |
|---|---|
| `self install` | Bootstrap local installation |
| `self update` | Check and apply updates |
| `self completion` | Generate shell completions |
| `self doctor` | Run health checks |
| `self version` | Show version information |
| `self settings` | Manage CLI settings |
| `self uninstall` | Remove squirrel from the system |

Output Formats


Console Output (default)


The `audit` command shows human-readable console output by default, with colored output and progress indicators.

LLM Format


To get LLM-optimized output, use the `report` command with `--format llm`:

```bash
squirrel report <audit-id> --format llm
```

The LLM format is a compact XML/text hybrid optimized for token efficiency (40% smaller than verbose XML):
  • Summary: Overall health score and key metrics
  • Issues by Category: Grouped by audit rule category (core SEO, technical, content, security)
  • Broken Links: List of broken external and internal links
  • Recommendations: Prioritized action items with fix suggestions

See OUTPUT-FORMAT.md for the detailed format specification.

Examples


Example 1: Quick Site Audit with LLM Output


```bash
# User asks: "Check squirrelscan.com for SEO issues"
squirrel audit https://squirrelscan.com --format llm
```

Example 2: Deep Audit for Large Site


```bash
# User asks: "Do a thorough audit of my blog with up to 500 pages"
squirrel audit https://myblog.com --max-pages 500 --format llm
```

Example 3: Fresh Audit After Changes


```bash
# User asks: "Re-audit the site and ignore cached results"
squirrel audit https://example.com --refresh --format llm
```

Example 4: Two-Step Workflow (Reuse Previous Audit)


```bash
# First run an audit
squirrel audit https://example.com

# Note the audit ID from output (e.g., "a1b2c3d4")

# Later, export in a different format
squirrel report a1b2c3d4 --format llm
```

Output


On completion, give the user a summary of all of the changes you made.

Troubleshooting


squirrel command not found


If you see this error, squirrel is not installed or not in your PATH.
Solution:
  1. Install squirrel: `curl -fsSL https://squirrelscan.com/install | bash`
  2. Add to PATH: `export PATH="$HOME/.local/bin:$PATH"`
  3. Verify: `squirrel --version`

Permission denied


If squirrel is not executable:

```bash
chmod +x ~/.local/bin/squirrel
```

Crawl timeout or slow performance


For very large sites, the audit may take several minutes. Use `--verbose` to see progress:

```bash
squirrel audit https://example.com --format llm --verbose
```

Invalid URL


Ensure the URL includes the protocol (http:// or https://):

```bash
# ✗ Wrong
squirrel audit example.com

# ✓ Correct
squirrel audit https://example.com
```

How It Works


  1. Crawl: Discovers and fetches pages starting from the base URL
  2. Analyze: Runs audit rules on each page
  3. External Links: Checks external links for availability
  4. Report: Generates an LLM-optimized report with findings

The audit is stored in a local database and can be retrieved later with `squirrel report` commands.

Additional Resources
