Detect crisis signals in user content using NLP, mental health sentiment analysis, and safe intervention protocols. Implements suicidal ideation detection, automated escalation, and crisis resource integration. Use for mental health apps, recovery platforms, and support communities. Activate on "crisis detection", "suicide prevention", "mental health NLP", "intervention protocol". NOT for general sentiment analysis, medical diagnosis, or replacing professional help.
```shell
npx skill4agent add erichowens/some_claude_skills crisis-detection-intervention-ai
```

```
Detected concerning content?
├── Immediate danger? → Escalate to crisis counselor + show 988
├── Suicidal ideation? → Flag for review + show resources
├── Substance relapse? → Connect to sponsor + resources
├── Self-harm mention? → Gentle check-in + resources
└── General distress? → Supportive response + resources
```

| Model | Best For | Accuracy | Latency |
|---|---|---|---|
| MentalBERT | Mental health text | 89% | 50ms |
| GPT-4 + Few-shot | Crisis detection | 92% | 200ms |
| RoBERTa-Mental | Depression detection | 87% | 40ms |
| Custom Fine-tuned BERT | Domain-specific | 90%+ | 60ms |
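The latency/accuracy trade-off in the table suggests a cascade: run the fast local model on every message and only pay the slower model's latency when the first pass is uncertain. A minimal sketch — `classifyFast`, `classifyLLM`, and the confidence threshold are illustrative stubs, not part of this skill:

```typescript
interface Classification {
  label: string;
  score: number;
}

// Stubs standing in for a fast local model (~50ms) and an LLM (~200ms).
async function classifyFast(text: string): Promise<Classification> {
  return /suicide/i.test(text)
    ? { label: 'suicidal_ideation', score: 0.65 }
    : { label: 'safe', score: 0.95 };
}

async function classifyLLM(text: string): Promise<Classification> {
  return /suicide/i.test(text)
    ? { label: 'suicidal_ideation', score: 0.92 }
    : { label: 'safe', score: 0.97 };
}

// Only escalate to the slower, more accurate model when the fast model is unsure.
async function classifyCascade(text: string): Promise<Classification> {
  const fast = await classifyFast(text);
  return fast.score >= 0.9 ? fast : classifyLLM(text);
}
```

For crisis detection the threshold should err toward escalation: a borderline fast-model score is exactly the case where the more accurate model is worth 150ms.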
**❌ Generic sentiment misses mental health signals**

```typescript
const sentiment = analyzeSentiment(text);
if (sentiment.score < -0.5) {
  alertCrisis(); // Too broad!
}
```

**✅ Mental health-specific model**

```typescript
import { pipeline } from '@huggingface/transformers';

// A classifier fine-tuned on mental health text; its classification head
// defines the label set (e.g. suicidal_ideation, self_harm,
// substance_relapse, safe) — text-classification pipelines do not take
// a labels option at call time.
const detector = await pipeline('text-classification', 'mental/bert-base-uncased');
const result = await detector(text);

if (result[0].label === 'suicidal_ideation' && result[0].score > 0.8) {
  await escalateToCrisisCounselor({
    text,
    confidence: result[0].score,
    timestamp: Date.now()
  });

  // IMMEDIATELY show crisis resources
  showCrisisResources({
    phone: '988',
    text: 'Text "HELLO" to 741741',
    chat: 'https://988lifeline.org/chat'
  });
}
```

**❌ AI auto-responds to crisis**
```typescript
if (isCrisis(text)) {
  await sendMessage(userId, "I'm concerned about you. Are you okay?");
}
```

**✅ Flag for human review, show resources**

```typescript
if (isCrisis(text)) {
  // 1. Flag for counselor review
  await flagForReview({
    userId,
    text,
    severity: 'high',
    detectedAt: Date.now(),
    requiresImmediate: true
  });

  // 2. Notify on-call counselor
  await notifyOnCallCounselor({
    userId,
    summary: 'Suicidal ideation detected',
    urgency: 'immediate'
  });

  // 3. Show resources (no AI message)
  await showInAppResources({
    type: 'crisis_support',
    resources: [
      { name: '988 Suicide & Crisis Lifeline', link: 'tel:988' },
      { name: 'Crisis Text Line', link: 'sms:741741' },
      { name: 'Chat Now', link: 'https://988lifeline.org/chat' }
    ]
  });

  // 4. DO NOT send an automated "are you okay" message
}
```

AI Detection → Flag → On-call counselor notified → Human reaches out

**❌ Just flags, no immediate help**
```typescript
if (isCrisis(text)) {
  await logCrisisEvent(userId, text);
  // User left with no resources
}
```

**✅ Immediate resources + escalation**

```typescript
if (isCrisis(text)) {
  // Show resources IMMEDIATELY (blocking modal)
  await showCrisisModal({
    title: 'Resources Available',
    resources: [
      {
        name: '988 Suicide & Crisis Lifeline',
        description: 'Free, confidential support 24/7',
        action: 'tel:988',
        type: 'phone'
      },
      {
        name: 'Crisis Text Line',
        description: 'Text support with a trained counselor',
        action: 'sms:741741',
        message: 'HELLO',
        type: 'text'
      },
      {
        name: 'Chat with a counselor',
        description: 'Online chat support',
        action: 'https://988lifeline.org/chat',
        type: 'web'
      }
    ],
    dismissible: true, // User can close, but resources are shown first
    analytics: { event: 'crisis_resources_shown', source: 'ai_detection' }
  });

  // Then flag for follow-up
  await flagForReview({ userId, text, severity: 'high' });
}
```

**❌ Plain text storage**
```typescript
await db.logs.insert({
  userId: user.id,
  type: 'crisis',
  content: text, // Stored in plain text!
  timestamp: Date.now()
});
```

**✅ Encrypted, access-logged, auto-deleted**

```typescript
import { encrypt, decrypt } from './encryption';

const crisisEvent = {
  id: generateId(),
  userId: hashUserId(user.id),    // Hash, not plain ID
  contentHash: hashContent(text), // For deduplication only
  encryptedContent: encrypt(text, process.env.CRISIS_DATA_KEY),
  detectedAt: Date.now(),
  reviewedAt: null,
  reviewedBy: null,
  autoDeleteAt: Date.now() + (30 * 24 * 60 * 60 * 1000), // 30 days
  accessLog: []
};
await db.crisisEvents.insert(crisisEvent);

// Log every access to the encrypted record
await logAccess({
  eventId: crisisEvent.id,
  accessedBy: counselorId,
  accessedAt: Date.now(),
  reason: 'Review for follow-up',
  ipAddress: hashedIp
});

// Auto-delete after the retention period
schedule.daily(() => {
  db.crisisEvents.deleteMany({
    autoDeleteAt: { $lt: Date.now() }
  });
});
```

**❌ Flags crisis but no escalation process**
```typescript
if (isCrisis(text)) {
  await db.flags.insert({ userId, text, flaggedAt: Date.now() });
  // Now what? Who responds?
}
```

**✅ Clear escalation protocol**

```typescript
enum CrisisSeverity {
  LOW = 'low',            // Distress, no immediate danger
  MEDIUM = 'medium',      // Self-harm thoughts, no plan
  HIGH = 'high',          // Suicidal ideation with plan
  IMMEDIATE = 'immediate' // Imminent danger
}

async function escalateCrisis(detection: CrisisDetection): Promise<void> {
  const severity = assessSeverity(detection);

  switch (severity) {
    case CrisisSeverity.IMMEDIATE:
      // Notify on-call counselor (push notification)
      await notifyOnCall({
        userId: detection.userId,
        severity,
        requiresResponse: 'immediate',
        text: detection.text
      });
      // Page the backup on-call if no response within 5 minutes
      setTimeout(async () => {
        if (!await hasResponded(detection.id)) {
          await notifyBackupOnCall(detection);
        }
      }, 5 * 60 * 1000);
      // Show 988 modal (blocking)
      await show988Modal(detection.userId);
      break;

    case CrisisSeverity.HIGH:
      // Notify on-call counselor (email + push)
      await notifyOnCall({ userId: detection.userId, severity, requiresResponse: '1 hour' });
      // Show crisis resources
      await showCrisisResources(detection.userId);
      break;

    case CrisisSeverity.MEDIUM:
      // Add to review queue for the next business day
      await addToReviewQueue({ detectionId: detection.id, priority: 'high' });
      // Suggest self-help resources
      await suggestResources(detection.userId, 'coping_strategies');
      break;

    case CrisisSeverity.LOW:
      // Add to review queue
      await addToReviewQueue({ detectionId: detection.id, priority: 'normal' });
      break;
  }

  // Always log for audit
  await logEscalation({
    detectionId: detection.id,
    severity,
    actions: ['notified_on_call', 'showed_resources'],
    timestamp: Date.now()
  });
}
```

```typescript
interface CrisisSignal {
  type: 'suicidal_ideation' | 'self_harm' | 'substance_relapse' | 'severe_distress';
  confidence: number;
  evidence: string[];
}

async function detectCrisisSignals(text: string): Promise<CrisisSignal[]> {
  const signals: CrisisSignal[] = [];

  // Signal 1: NLP model
  const nlpResult = await mentalHealthNLP(text);
  if (nlpResult.score > 0.75) {
    signals.push({
      type: nlpResult.label,
      confidence: nlpResult.score,
      evidence: ['NLP model detection']
    });
  }

  // Signal 2: Keyword matching (backup)
  const keywords = detectKeywords(text);
  if (keywords.length > 0) {
    signals.push({
      type: 'suicidal_ideation',
      confidence: 0.6,
      evidence: keywords
    });
  }

  // Signal 3: Sentiment + context
  const sentiment = await sentimentAnalysis(text);
  const hasHopelessness = /no (hope|point|reason|future)/i.test(text);
  if (sentiment.score < -0.8 && hasHopelessness) {
    signals.push({
      type: 'severe_distress',
      confidence: 0.7,
      evidence: ['Extreme negative sentiment + hopelessness language']
    });
  }

  return signals;
}
```

```typescript
const CRISIS_KEYWORDS = {
  suicidal_ideation: [
    /\b(kill|end|take)\s+(my|own)\s+life\b/i,
    /\bsuicide\b/i,
    /\bdon'?t\s+want\s+to\s+(live|be here|exist)\b/i,
    /\bbetter off dead\b/i
  ],
  self_harm: [
    /\b(cut|cutting|hurt)\s+(myself|me)\b/i,
    /\bself[- ]harm\b/i
  ],
  substance_relapse: [
    /\b(relapsed|used|drank)\s+(again|today)\b/i,
    /\bback on\s+(drugs|alcohol)\b/i
  ]
};

function detectKeywords(text: string): string[] {
  const matches: string[] = [];
  for (const [type, patterns] of Object.entries(CRISIS_KEYWORDS)) {
    for (const pattern of patterns) {
      if (pattern.test(text)) {
        matches.push(type);
      }
    }
  }
  return [...new Set(matches)]; // Deduplicate
}
```

```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

async function detectWithClaude(text: string): Promise<CrisisDetection> {
  const response = await client.messages.create({
    model: 'claude-3-5-sonnet-20241022',
    max_tokens: 200,
    messages: [{
      role: 'user',
      content: `You are a mental health crisis detection system. Analyze this text for crisis signals.

Text: "${text}"

Respond in JSON:
{
  "is_crisis": boolean,
  "severity": "none" | "low" | "medium" | "high" | "immediate",
  "signals": ["suicidal_ideation" | "self_harm" | "substance_relapse"],
  "confidence": 0.0-1.0,
  "reasoning": "brief explanation"
}

Examples:
- "I'm thinking about ending it all" → { "is_crisis": true, "severity": "high", "signals": ["suicidal_ideation"], "confidence": 0.95 }
- "I relapsed today, feeling ashamed" → { "is_crisis": true, "severity": "medium", "signals": ["substance_relapse"], "confidence": 0.9 }
- "Had a tough day at work" → { "is_crisis": false, "severity": "none", "signals": [], "confidence": 0.95 }`
    }]
  });

  // Narrow the content block type before parsing
  const block = response.content[0];
  if (block.type !== 'text') throw new Error('Unexpected response content type');
  return JSON.parse(block.text) as CrisisDetection;
}
```

□ Mental health-specific NLP model (not generic sentiment)
□ Human review required before automated action
□ Crisis resources shown IMMEDIATELY (988, text line)
□ Clear escalation protocol (severity-based)
□ Encrypted storage of crisis content
□ Access logging for all crisis data access
□ Auto-deletion after retention period (30 days)
□ On-call counselor notification system
□ Backup notification if no response
□ False positive tracking (improve model)
□ Regular model evaluation with experts
□ Ethics review board approval

| Scenario | Appropriate? |
|---|---|
| Journaling app for recovery | ✅ Yes - monitor for relapses |
| Support group chat | ✅ Yes - flag concerning posts |
| Therapy platform messages | ✅ Yes - assist therapists |
| Public social media | ❌ No - privacy concerns |
| Replace human counselors | ❌ Never - AI assists, doesn't replace |
| Medical diagnosis | ❌ Never - not qualified |
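The encrypted-storage pattern earlier imports `encrypt`/`decrypt` from `./encryption` without showing that module. One possible implementation is a thin wrapper over Node's built-in `crypto` with AES-256-GCM; the key derivation and `iv:tag:ciphertext` payload layout below are assumptions for this sketch, not the skill's actual module:

```typescript
import { createCipheriv, createDecipheriv, createHash, randomBytes } from 'node:crypto';

// Derive a 32-byte key from the configured secret (assumed format).
function deriveKey(secret: string): Buffer {
  return createHash('sha256').update(secret).digest();
}

// Returns iv:authTag:ciphertext, hex-encoded. GCM authenticates as well
// as encrypts, so tampered records fail to decrypt.
export function encrypt(plaintext: string, secret: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', deriveKey(secret), iv);
  const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return [iv, cipher.getAuthTag(), encrypted].map(b => b.toString('hex')).join(':');
}

export function decrypt(payload: string, secret: string): string {
  const [iv, tag, data] = payload.split(':').map(h => Buffer.from(h, 'hex'));
  const decipher = createDecipheriv('aes-256-gcm', deriveKey(secret), iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
}
```

In production the key should come from a secrets manager rather than a raw environment variable, and key rotation needs a plan for re-encrypting retained records.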
/references/mental-health-nlp.md/references/intervention-protocols.md/references/crisis-resources.mdscripts/crisis_detector.tsscripts/model_evaluator.ts
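The checklist calls for false-positive tracking and regular model evaluation with experts. A script like `scripts/model_evaluator.ts` could compute precision/recall over an expert-labeled set; the helper below is a generic sketch of that idea, not the repo's actual script:

```typescript
interface LabeledExample {
  text: string;
  isCrisis: boolean; // expert label
}

interface EvalMetrics {
  precision: number;
  recall: number;
  f1: number;
}

// Compare model predictions against expert labels. False negatives are
// the dangerous error class here, so recall matters most.
function evaluate(
  examples: LabeledExample[],
  predict: (text: string) => boolean
): EvalMetrics {
  let tp = 0, fp = 0, fn = 0;
  for (const ex of examples) {
    const predicted = predict(ex.text);
    if (predicted && ex.isCrisis) tp++;
    else if (predicted && !ex.isCrisis) fp++; // false positive: track to tune model
    else if (!predicted && ex.isCrisis) fn++; // false negative: missed crisis
  }
  const precision = tp / ((tp + fp) || 1);
  const recall = tp / ((tp + fn) || 1);
  const f1 = precision + recall === 0 ? 0 : (2 * precision * recall) / (precision + recall);
  return { precision, recall, f1 };
}
```

Running this on every model update, with thresholds reviewed by clinicians, turns "regular model evaluation" from a checklist item into a gate.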