Loading...
Loading...
AI/LLM application security testing — prompt injection, jailbreaking, data exfiltration, and insecure output handling per OWASP LLM Top 10.
npx skill4agent add jd-opensource/joysafeter pentest-ai-llm-security| Category | Test Focus | Status |
|---|---|---|
| LLM01 Prompt Injection | Direct and indirect injection | ✅ |
| LLM02 Sensitive Information Disclosure | Data exfiltration, PII leakage | ✅ |
| LLM03 Supply Chain | Model provenance, plugin trust | ✅ |
| LLM04 Data and Model Poisoning | Training data integrity | ✅ |
| LLM05 Improper Output Handling | XSS/SQLi via LLM output | ✅ |
| LLM06 Excessive Agency | Unauthorized tool use | ✅ |
| LLM07 System Prompt Leakage | System prompt extraction | ✅ |
| LLM08 Vector and Embedding Weaknesses | RAG poisoning | ✅ |
| LLM09 Misinformation | Hallucination exploitation | ✅ |
| LLM10 Unbounded Consumption | Resource exhaustion | ✅ |
| Category | Tools | Purpose |
|---|---|---|
| LLM Scanning | Garak, rebuff | Automated prompt injection testing |
| API Interception | Burp Suite, mitmproxy | LLM API request/response capture |
| Prompt Fuzzing | Custom Python scripts | Payload generation and testing |
| Output Analysis | Browser DevTools, Burp | Insecure output rendering detection |
references/tools.mdreferences/workflows.md