owasp-llm-top-10


OWASP Top 10 for LLM Applications


This skill encodes the OWASP Top 10 for Large Language Model Applications for secure LLM/GenAI design and review. References are loaded per risk. Based on OWASP Top 10 for LLM Applications 2025.

When to Read Which Reference


| Risk | Read |
| --- | --- |
| LLM01 Prompt Injection | references/llm01-prompt-injection.md |
| LLM02 Sensitive Information Disclosure | references/llm02-sensitive-information-disclosure.md |
| LLM03 Training Data & Supply Chain | references/llm03-training-data-supply-chain.md |
| LLM04 Data and Model Poisoning | references/llm04-data-model-poisoning.md |
| LLM05 Improper Output Handling | references/llm05-improper-output-handling.md |
| LLM06 Excessive Agency | references/llm06-excessive-agency.md |
| LLM07 System Prompt Leakage | references/llm07-system-prompt-leakage.md |
| LLM08 Vector and Embedding Weaknesses | references/llm08-vector-embedding-weaknesses.md |
| LLM09 Misinformation | references/llm09-misinformation.md |
| LLM10 Unbounded Consumption | references/llm10-unbounded-consumption.md |

Quick Patterns


  • Treat all user and external input as untrusted.
  • Validate and sanitize LLM outputs before use (XSS, SSRF, RCE).
  • Limit agency and tool use; protect system prompts and RAG data.
  • Apply rate limits and cost controls.
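The rate-limit and cost-control pattern above (LLM10 Unbounded Consumption) can be sketched as a per-user guard. This is a minimal illustration, not part of the OWASP material: the class name, window size, and limits are hypothetical defaults.

```python
import time
from collections import deque

class RequestBudget:
    """Sliding-window request rate limit plus a running token budget.

    Hypothetical helper: the limits below are illustrative defaults,
    not OWASP-prescribed values.
    """

    def __init__(self, max_requests=30, window_seconds=60, max_tokens=50_000):
        self.max_requests = max_requests
        self.window = window_seconds
        self.max_tokens = max_tokens
        self.timestamps = deque()
        self.tokens_used = 0

    def allow(self, estimated_tokens):
        now = time.monotonic()
        # Drop request timestamps that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            return False  # too many requests in the current window
        if self.tokens_used + estimated_tokens > self.max_tokens:
            return False  # token budget cap reached
        self.timestamps.append(now)
        self.tokens_used += estimated_tokens
        return True
```

A real deployment would track budgets per user or API key and reset token counts on a billing interval; the check itself stays this simple.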

Quick Reference / Examples


| Task | Approach |
| --- | --- |
| Prevent prompt injection | Use delimiters, validate input, separate system/user context. See LLM01. |
| Protect sensitive data | Filter PII from training/prompts, apply output guards. See LLM02. |
| Validate LLM output | Sanitize before rendering (XSS) or executing (RCE). See LLM05. |
| Limit agency | Require human approval for destructive actions; scope tool permissions. See LLM06. |
| Control costs | Apply token limits, rate limiting, and budget caps. See LLM10. |
Safe - delimiter and input validation:

```python
system_prompt = """You are a helpful assistant.
<user_input>
{sanitized_user_input}
</user_input>
Answer based only on the user input above."""
```
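The `sanitized_user_input` placeholder assumes some pre-processing has already run. One minimal, hypothetical sanitizer (the function name and length cap are illustrative) neutralizes the delimiter tags so user text cannot close or reopen the `<user_input>` block:

```python
import re

def sanitize_user_input(text: str) -> str:
    """Hypothetical sanitizer: strip literal <user_input>/</user_input>
    tags (case-insensitive) and bound the input length."""
    text = re.sub(r"</?\s*user_input\s*>", "", text, flags=re.IGNORECASE)
    return text[:4000]  # cap prompt size as a simple cost/abuse control
```

Stripping only these tags is not a complete injection defense; it just closes the specific hole the delimiter pattern introduces.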
Unsafe - direct concatenation (injection risk):

```python
prompt = f"Answer this question: {user_input}"  # User can inject instructions
```
Output sanitization before rendering:

```python
import html
safe_output = html.escape(llm_response)  # Prevent XSS if rendering in browser
```
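The "limit agency" row (LLM06) can also be sketched in code. This is an illustrative dispatcher, not a real tool-router API: the tool names, registry shape, and `confirm` callback are all assumptions.

```python
# Hypothetical agency guard: destructive tool calls require an explicit
# human confirmation, and unregistered tools are rejected outright.
DESTRUCTIVE_TOOLS = {"delete_file", "send_email", "execute_shell"}

def run_tool(name, args, confirm, registry):
    """Dispatch a model-requested tool call.

    `registry` maps tool names to callables; `confirm(name, args)` is a
    human-approval callback returning True/False.
    """
    if name not in registry:
        raise PermissionError(f"unknown tool: {name}")
    if name in DESTRUCTIVE_TOOLS and not confirm(name, args):
        return {"status": "denied", "tool": name}
    return {"status": "ok", "result": registry[name](**args)}
```

Scoping the registry per request (only the tools this task needs) limits blast radius even when the approval step is bypassed or automated.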

Workflow


Load the reference for the risk you are addressing. See OWASP Top 10 for LLM Applications and genai.owasp.org for the official list.