Security guidelines for LLM applications based on OWASP Top 10 for LLM 2025. Use when building LLM apps, reviewing AI security, implementing RAG systems, or asking about LLM vulnerabilities like "prompt injection" or "check LLM security".
```shell
npx skill4agent add semgrep/skills llm-security
```

Rules:

- rules/prompt-injection.md
- rules/sensitive-disclosure.md
- rules/supply-chain.md
- rules/data-poisoning.md
- rules/output-handling.md
- rules/excessive-agency.md
- rules/system-prompt-leakage.md
- rules/vector-embedding.md
- rules/misinformation.md
- rules/unbounded-consumption.md
- rules/_sections.md

| Vulnerability | Key Prevention |
|---|---|
| Prompt Injection | Input validation, output filtering, privilege separation |
| Sensitive Disclosure | Data sanitization, access controls, encryption |
| Supply Chain | Verify models, SBOM, trusted sources only |
| Data Poisoning | Data validation, anomaly detection, sandboxing |
| Output Handling | Treat LLM as untrusted, encode outputs, parameterize queries |
| Excessive Agency | Least privilege, human-in-the-loop, minimize extensions |
| System Prompt Leakage | No secrets in prompts, external guardrails |
| Vector/Embedding | Access controls, data validation, monitoring |
| Misinformation | RAG, fine-tuning, human oversight, cross-verification |
| Unbounded Consumption | Rate limiting, input validation, resource monitoring |
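As an illustration of the Output Handling row above, here is a minimal Python sketch (function and table names are hypothetical) of treating LLM output as untrusted: HTML-encode it before rendering, and pass LLM-derived values into SQL only through parameterized queries so injected text stays data rather than code.

```python
import html
import sqlite3

def render_llm_output(text: str) -> str:
    # Treat model output like untrusted user input: encode it
    # before embedding in HTML to prevent injected markup/XSS.
    return html.escape(text)

def lookup_order(conn: sqlite3.Connection, order_id: str):
    # Never interpolate LLM-derived values into SQL strings;
    # a parameterized query keeps injected text as a literal value.
    cur = conn.execute("SELECT status FROM orders WHERE id = ?", (order_id,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('42', 'shipped')")

malicious = "42'; DROP TABLE orders; --"
print(render_llm_output("<script>alert(1)</script>"))  # escaped, not executable
print(lookup_order(conn, malicious))  # None: treated as a literal id, no injection
print(lookup_order(conn, "42"))
```

The same principle applies to any downstream sink (shell commands, file paths, templates): encode or parameterize for that sink, never concatenate raw model output.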