security-audit-owasp-top-10


OWASP Top 10 2025 Security Audit


Goal


Systematic codebase audit against the OWASP Top 10 2025 framework. Produces a structured severity-rated report with evidence-backed findings. Emphasizes semantic code understanding over regex pattern matching — grep patterns are starting points, real analysis happens by reading and reasoning about code context.

When to use


  • "Run an OWASP audit on this codebase"
  • "Check for OWASP Top 10 vulnerabilities"
  • "Security audit against OWASP 2025"
  • "Audit A01 and A03 only"
  • "Check this repo for common vulnerabilities"
  • "OWASP security review"

When not to use


  • PR code review (use review skill)
  • npm audit / pip audit / dependency scanning only
  • SOC2 / ISO 27001 compliance checks (use drata-api skill)
  • General security questions without audit intent
  • Threat modeling (use security-threat-model skill)
  • Single-file code review

Inputs


  • Codebase accessible via Glob/Grep/Read
  • Optional: specific OWASP categories (e.g. "A01 and A03 only")
  • Optional: focus areas or known concerns

Outputs


  • Markdown report inline (write to file only if user requests)
  • Executive summary table
  • Per-category findings with severity/confidence/evidence/remediation
  • Summary statistics and next steps

Workflow


Phase 1: Reconnaissance


Classify the project before auditing. Use Glob/Read on key files:
Glob: package.json, requirements.txt, go.mod, Cargo.toml, *.tf, *.csproj, pom.xml
Glob: Dockerfile*, docker-compose*, .github/workflows/*
Glob: **/routes/*, **/api/*, **/controllers/*, **/handlers/*
Read: README.md (first 100 lines), main entry points
Determine:
  • Project type: web-app | api | iac | library | cli | mobile | monorepo
  • Languages/frameworks: e.g. Node/Express, Python/Django, Go/Gin, Terraform/AWS
  • Deployment model: serverless, containers, VMs, static hosting
  • Data sensitivity signals: auth, PII, payments, healthcare, crypto
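The classification step above can be sketched as a small helper. The manifest names and the decision order here are illustrative assumptions, not the skill's prescribed logic:

```python
# Sketch of the Phase 1 classification step (illustrative only).
from pathlib import Path

# Assumed mapping from manifest file to primary language ecosystem.
MANIFEST_HINTS = {
    "package.json": "node",
    "requirements.txt": "python",
    "go.mod": "go",
    "Cargo.toml": "rust",
    "pom.xml": "java",
}

def classify(root: str) -> dict:
    root_path = Path(root)
    langs = [lang for name, lang in MANIFEST_HINTS.items()
             if (root_path / name).exists()]
    has_iac = any(root_path.glob("*.tf"))
    has_routes = (any(root_path.glob("**/routes/*"))
                  or any(root_path.glob("**/api/*")))
    # Simplified decision order: IaC-only repos, then routed apps, else library.
    if has_iac and not langs:
        project_type = "iac"
    elif has_routes:
        project_type = "web-app"
    else:
        project_type = "library"
    return {"type": project_type, "languages": langs}
```

A real pass would also read README.md and entry points before committing to a type; this sketch only covers the Glob signals.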

Phase 2: Relevance Filtering


Use project type to set audit depth per category. Prevents nonsensical checks (e.g. SQL injection on Terraform).
| Category | web-app | api | iac | library | cli | mobile |
| --- | --- | --- | --- | --- | --- | --- |
| A01 Broken Access Control | Full | Full | Full | Light | Light | Full |
| A02 Security Misconfiguration | Full | Full | Full | Light | Light | Full |
| A03 Supply Chain Failures | Full | Full | Light | Full | Full | Full |
| A04 Cryptographic Failures | Full | Full | Light | Full | Light | Full |
| A05 Injection | Full | Full | Skip | Light | Full | Full |
| A06 Insecure Design | Full | Full | Light | Light | Light | Full |
| A07 Authentication Failures | Full | Full | Skip | Light | Skip | Full |
| A08 Data Integrity Failures | Full | Full | Light | Full | Light | Full |
| A09 Logging & Alerting | Full | Full | Light | Light | Light | Full |
| A10 Exceptional Conditions | Full | Full | Light | Full | Full | Full |
  • Full: run all grep patterns + semantic analysis
  • Light: grep patterns only, flag but don't deep-dive
  • Skip: mention as not applicable in report, move on
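The matrix can be encoded as a plain lookup table. This sketch spells out only the two rows containing Skip cells and defaults everything else to Full, which is the most common cell:

```python
# Partial encoding of the relevance matrix (illustrative; only the rows
# with Skip entries are spelled out here).
DEPTH = {
    "A05": {"web-app": "Full", "api": "Full", "iac": "Skip",
            "library": "Light", "cli": "Full", "mobile": "Full"},
    "A07": {"web-app": "Full", "api": "Full", "iac": "Skip",
            "library": "Light", "cli": "Skip", "mobile": "Full"},
}

def audit_depth(category: str, project_type: str) -> str:
    # Categories not listed default to Full.
    return DEPTH.get(category, {}).get(project_type, "Full")
```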

Phase 3: Evidence Gathering


For each relevant category, run Grep/Glob patterns from CATEGORIES.md. Use parallel tool calls for independent categories. Collect file paths and matching lines as evidence.
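A minimal sketch of this phase, assuming two stand-in regexes (the real patterns live in CATEGORIES.md and are not reproduced here):

```python
# Illustrative Phase 3 pass over Python sources for two A05-style patterns.
# The regexes are stand-ins, not the actual CATEGORIES.md pattern set.
import re
from pathlib import Path

PATTERNS = {
    "sql-fstring": re.compile(r"execute\(\s*f[\"']"),   # f-string built SQL
    "shell-true": re.compile(r"shell\s*=\s*True"),      # subprocess with shell
}

def gather_evidence(root: str) -> list[dict]:
    findings = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    # Keep path, line number, pattern id, and the matching
                    # line itself as evidence for Phase 4.
                    findings.append({"file": str(path), "line": lineno,
                                     "pattern": name, "evidence": line.strip()})
    return findings
```

Every hit here is only a candidate; Phase 4 decides whether it is a real finding.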

Phase 4: Semantic Analysis


For each flagged file/pattern:
  1. Read surrounding code context (not just matched line)
  2. Determine if finding is true positive or false positive
  3. Check for existing mitigations (guards, validators, middleware)
  4. Assign severity: Critical / High / Medium / Low
  5. Assign confidence: High / Medium / Low
  6. Note specific remediation
Severity criteria:
  • Critical: exploitable without authentication, leads to data breach or RCE
  • High: exploitable with low-privilege access, significant data exposure
  • Medium: requires specific conditions, limited exposure
  • Low: defense-in-depth issue, minimal direct impact

Phase 5: Report Generation


Use this template:

OWASP Top 10 2025 Security Audit Report


Project: <name>
Type: <project-type> | Languages: <langs> | Date: <date>
Scope: <full audit | partial: categories listed>

Executive Summary


| Severity | Count |
| --- | --- |
| Critical | N |
| High | N |
| Medium | N |
| Low | N |
| Info | N |
<1-3 sentence summary of key findings and overall posture>

Findings


A0X: <Category Name> — <PASS | FINDINGS | N/A>


Relevance: Full | Light | Skip

Finding X.1: <title>


  • Severity: Critical | High | Medium | Low
  • Confidence: High | Medium | Low
  • Location: path/to/file:line
  • Evidence: <code snippet or pattern match>
  • Issue: <what's wrong and why it matters>
  • Remediation: <specific fix>
(repeat per finding)


Summary Table


| Category | Status | Critical | High | Medium | Low |
| --- | --- | --- | --- | --- | --- |
| A01 | ... | ... | ... | ... | ... |
(all 10 categories)

Methodology


  • Automated pattern matching via Grep/Glob
  • Semantic code analysis of flagged locations
  • False positive filtering based on code context
  • Project-type relevance filtering applied
  • 通过Grep/Glob进行自动化模式匹配
  • 对标记位置进行代码语义分析
  • 根据代码上下文过滤假阳性结果
  • 应用了基于项目类型的相关性过滤

Next Steps


  1. <prioritized remediation actions>
  2. <recommended tooling or processes>
  3. <categories needing deeper manual review>
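The executive-summary counts in the template above can be assembled mechanically from the collected findings. A sketch, assuming each finding is a dict with a severity key:

```python
# Render the executive-summary table from collected findings (illustrative;
# assumes findings are dicts carrying a "severity" key).
from collections import Counter

SEVERITIES = ("Critical", "High", "Medium", "Low", "Info")

def executive_summary(findings) -> str:
    counts = Counter(f["severity"] for f in findings)
    rows = ["| Severity | Count |", "| --- | --- |"]
    # Emit every severity row, including zero counts, so the table shape
    # is stable across audits.
    rows += [f"| {sev} | {counts.get(sev, 0)} |" for sev in SEVERITIES]
    return "\n".join(rows)
```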

Partial Audit Support


When user requests specific categories (e.g. "audit A01 and A03 only"):
  1. Skip Phase 2 (relevance filtering)
  2. Minimal Phase 1 — detect language/framework only
  3. Run only requested categories through Phases 3-5
  4. Report covers only requested categories, notes others as out-of-scope

Decision Points


  • Large codebase (>500 files): Focus on entry points, auth boundaries, and config files. Note coverage limitations in report.
  • Monorepo: Run Phase 1 per sub-project. Produce per-project section or single unified report based on user preference.
  • No findings: Report as clean audit with methodology notes. Don't fabricate issues.

Validation Checklist


  • All 10 categories addressed (or noted as N/A/out-of-scope)
  • Every finding has severity + confidence + location + evidence + remediation
  • False positives filtered (not just raw grep output)
  • Project type correctly identified and relevance matrix applied
  • Report uses consistent severity definitions
  • Executive summary accurately reflects findings

References


  • Category details, grep patterns, semantic checks: CATEGORIES.md
  • Evaluation prompts: EVALUATIONS.md