SOC Compass API
The agent acts as the SOC analyst: reading workspace context, formulating SIEM queries, asking the user to execute them, analyzing results, and writing verdicts to the SOC Compass platform.
How to call the API
ALWAYS use `curl` via the Bash tool. Do not use WebFetch, fetch(), or any other HTTP client.

```bash
API="https://astute-cormorant-480.convex.site/api/v1"
KEY="<user-provided-api-key>"
curl -s "$API/ENDPOINT" -H "Authorization: Bearer $KEY"
```

Key format: `soc_sk_<32hex>`. The user provides this when invoking the skill.
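Since every endpoint below repeats the same `curl` incantation, a small wrapper can reduce typos. This is a hypothetical sketch, not part of the skill: `sc_api` and the placeholder key are assumptions, and `DRY_RUN=1` prints the command instead of calling the API so the snippet runs offline.

```shell
# Hypothetical wrapper around the curl pattern above. sc_api is a sketch,
# not a documented helper; swap DRY_RUN=0 to actually send requests.
API="https://astute-cormorant-480.convex.site/api/v1"
KEY="soc_sk_0123456789abcdef0123456789abcdef"   # placeholder in soc_sk_<32hex> format
DRY_RUN=1

sc_api() {  # usage: sc_api METHOD PATH [JSON_BODY]
  local method="$1" path="$2" body="${3:-}"
  local -a cmd=(curl -s -X "$method" "$API$path" -H "Authorization: Bearer $KEY")
  [ -n "$body" ] && cmd+=(-H "Content-Type: application/json" -d "$body")
  if [ "$DRY_RUN" = 1 ]; then printf '%s\n' "${cmd[*]}"; else "${cmd[@]}"; fi
}

sc_api GET "/workspaces/ws123"
sc_api PATCH "/queue/q1/claim"
```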
Posting multi-line content (Windows compatibility)
Reports contain Windows paths like `C:\Users\luke.s\AppData\...\T\0`, where escape sequences such as `\0` break Node.js template literals. Use this two-step file-based method instead:

Step 1: Write report to file using heredoc (handles all escaping including backslashes)

```bash
cat > "$TEMP/report.txt" << 'ENDOFREPORT'
Your report with C:\paths\and\backslashes goes here...
ENDOFREPORT
```
Step 2: Read file and JSON-stringify with Node.js (use cygpath for Windows paths)

```bash
REPORT_PATH="$(cygpath -w "$TEMP/report.txt")"
PAYLOAD_PATH="$(cygpath -w "$TEMP/payload.json")"
node -e "
const fs = require('fs');
const content = fs.readFileSync(process.argv[1], 'utf8');
fs.writeFileSync(process.argv[2], JSON.stringify({role: 'assistant', content}));
" "$REPORT_PATH" "$PAYLOAD_PATH"
```
Step 3: Post using the JSON file

```bash
curl -s -X POST "$API/conversations/$CONV/messages" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d @"$PAYLOAD_PATH"
```
**CRITICAL Windows notes:**
- **NEVER** use Node.js template literals (backticks) for content with Windows paths — `\0` triggers "Legacy octal escape" errors
- **NEVER** use `/tmp/` paths with Node.js on Windows — Node.js resolves `/tmp/` as `C:\tmp\` which doesn't exist. Always use `$TEMP` with `cygpath -w` to convert to Windows paths
- The heredoc with `'ENDOFREPORT'` (single-quoted delimiter) prevents ALL bash escaping — safe for any content
CRITICAL: Schema discovery is MANDATORY
You MUST discover the SIEM schema BEFORE writing ANY investigation query. Do NOT guess index names, sourcetypes, or field names. Every SIEM instance is different. If you skip this step, your queries WILL fail.
The schema tells you:
- What indexes exist (e.g., `corp`, `main`, `wineventlog`)
- What sourcetypes exist (e.g., `WinEventLog`, `_json`, `xmlwineventlog`)
- What fields are available and their exact names (e.g., `EventCode` vs `event.code`)
- How many events each field/index contains

Without the schema, you are blind. ALWAYS get the schema first.
Schema is per-workspace (same SIEM instance). If you already have it from a prior conversation in the same workspace, you do NOT need to re-ask. Save it to context on first discovery.
Analytical integrity
When you reach a classification based on evidence, DEFEND IT. If the user questions your verdict:
- Restate the specific evidence supporting your classification
- Ask what counter-evidence they have that you may have missed
- Only change your classification if NEW evidence is presented
- Never change a verdict just because the user disagrees — agreement without evidence is worse than being wrong with reasoning
A SOC analyst who flips their verdict without new evidence is unreliable. The user may be testing your conviction or playing devil's advocate.
Classification decision framework
Classify based on the SPECIFIC activity the alert detected, not the overall host state:
- Alert fires on Event X → Is Event X itself malicious/suspicious?
- YES → True Positive
- NO → False Positive (even if other malicious activity exists on the host)
Example: Alert fires on a legitimate scheduled task creation. During investigation you discover a DIFFERENT malicious task on the same host.
- The alert = False Positive (it detected a legitimate task)
- The malware = separate finding requiring its own alert/escalation
- Note both findings in the report, but classify the alert based on what IT detected
This is NOT "the alert was useless" — the alert LED to discovering the malware. But classification is about the specific detected activity.
Automated scenarios
Scenario A: New investigation
User gives alert + workspace ID. Follow these steps in exact order:
Step 1: Get workspace context
```bash
curl -s "$API/workspaces/{workspaceId}" -H "Authorization: Bearer $KEY"
```

Note the `siemProvider` (splunk/elastic/sentinel), `mode`, `contextInput`, and `dataSource`.

Step 2: ALWAYS submit alert to the queue first
Every investigation MUST go through the queue — even if the user pasted the alert directly in the CLI. This ensures the Agent Dashboard on the frontend tracks all investigations in real-time.
```bash
curl -s -X POST "$API/workspaces/{workspaceId}/queue" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"alertId": "{alertId}", "alertTitle": "{alertTitle}", "alertSeverity": "{severity}", "alertData": "{full alert text}"}'
```

Save the returned `id` as `QID` (queue item ID).

Step 3: Claim the alert
```bash
curl -s -X PATCH "$API/queue/{QID}/claim" -H "Authorization: Bearer $KEY"
```

The frontend Agent Dashboard now shows this alert as "Processing".
Step 4: Check for cached context from prior investigations
Check if the workspace already has a cached schema from a prior queue item:
```bash
curl -s "$API/workspaces/{workspaceId}/queue?status=completed" -H "Authorization: Bearer $KEY"
```

If a completed alert exists in the same workspace, its context (including schema) can be reused. Otherwise, proceed to schema discovery.
Step 5: MANDATORY schema discovery
This step is NON-NEGOTIABLE. You MUST do this before ANY investigation query. Post progress to the queue so the dashboard shows what you're doing:
Ask the user directly based on the SIEM provider from Step 1:
Splunk:
Please run this query in Splunk and paste the full results:

```
index=* NOT index=_* earliest=-30d | head 10000 | fieldsummary maxvals=10 | sort -count | head 60
```

This will show me what indexes, sourcetypes, and fields exist so I can write accurate queries.
Note: `earliest=-30d` limits to the last 30 days — good for production SIEMs to avoid scanning too much data. For TryHackMe labs or historical investigations where events may be older, the autonomous mode uses `earliest=0` (All time) instead.

Elastic:
Please go to Kibana Discover, select the relevant index pattern, and paste 5-10 sample events as JSON. I need the actual field names to write correct ES|QL queries.
Sentinel:
Please run this in Azure Monitor Logs and paste the results:

```
search * | summarize count() by $table | sort by count_ desc | take 20
```

Then paste 3-5 sample events from the most relevant table.
After the user provides schema results:
- Parse carefully — extract index names, sourcetypes, field names, event counts
- Save immediately to the queue item context:
```bash
curl -s -X PATCH "$API/queue/{QID}/context" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"schema": {"provider": "splunk", "indexes": [...], "sourcetypes": [...], "fields": [...], "rawSchemaOutput": "..."}, "investigationPhase": "schema_complete"}'
```

- Post progress so the dashboard shows schema discovery is done:
```bash
curl -s -X PATCH "$API/queue/{QID}/progress" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"step": "schema_discovery", "status": "complete", "title": "Schema discovery complete", "detail": "Found index=main, sourcetype=_json, 595 events"}'
```

- ALL subsequent queries MUST use names from the schema. Never guess or use defaults.
Step 6: Investigation loop
NOW you can formulate queries — using ONLY field names, indexes, and sourcetypes from the schema.
For each query:
- Verify the fields exist in the schema
- Use the correct index and sourcetype from the schema
HITL mode (default): Ask the user to run each query:
Please run this {SPL/KQL/ESQL} query and paste the results:

```
{query using schema-verified field names}
```

Purpose: {why this query matters}
Autonomous mode: Run each query yourself via Chrome — type the query in the SIEM search bar, execute it, and read the results directly.
Analyze results. Apply the classification framework after 1-3 initial queries.
Step 7: Save IOCs and MITRE techniques to the queue item
```bash
curl -s -X PATCH "$API/queue/{QID}/iocs" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"iocs": [{"value": "...", "type": "ip", "verdict": "malicious", "context": "C2 server"}], "append": true}'

curl -s -X PATCH "$API/queue/{QID}/mitre" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"techniques": [{"techniqueId": "T1053.005", "name": "Scheduled Task", "tactic": "Persistence"}], "append": true}'
```

Step 8: Save report to the queue item (use heredoc + Node.js for Windows paths — see "Posting multi-line content" above)
Write the 9-section report (see `references/report-format.md`), then:

```bash
curl -s -X PATCH "$API/queue/{QID}/report" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d @"$PAYLOAD_PATH"
```

Where the payload JSON is `{"report": "# Investigation Report..."}`.

Step 9: Mark complete with verdict
```bash
curl -s -X PATCH "$API/queue/{QID}/complete" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"verdict": "True Positive", "verdictConfidence": 92, "escalationRequired": true, "classificationRationale": "...", "queriesExecuted": 5, "agentSource": "claude-code"}'
```

Valid verdicts: `True Positive`, `False Positive`, `Suspicious`, `Requires Further Investigation`, `Unknown`

Step 10: Check queue for more alerts
```bash
curl -s "$API/workspaces/{workspaceId}/queue/next" -H "Authorization: Bearer $KEY"
```

If `empty: true` → "All alerts processed."
If an alert exists → go to Step 3 (claim it).
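Step 10's branch on `empty: true` can be sketched offline with canned responses. In the real flow, `next` holds the body returned by the `curl` above; the exact response shape (`empty` flag, `id` field) shown here is an assumption, so adjust to what the API actually returns.

```shell
# Canned /queue/next responses; parsed with node, which this workflow
# already requires. First response is empty, second carries a queue item.
for next in '{"empty": true}' '{"id": "q_42", "alertTitle": "Scheduled Task"}'; do
  if [ "$(node -e 'console.log(JSON.parse(process.argv[1]).empty === true)' "$next")" = "true" ]; then
    echo "All alerts processed."
  else
    QID="$(node -e 'console.log(JSON.parse(process.argv[1]).id)' "$next")"
    echo "Claiming $QID (go to Step 3)"
  fi
done
```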
Scenario B: Resume investigation
User references a conversation ID:
```bash
curl -s "$API/conversations/{CONV_ID}/context" -H "Authorization: Bearer $KEY"
```

Read saved context (schema, queries, findings). Resume from where you left off. No need to re-read messages or redo schema discovery.
Scenario C: General question
User asks a question (not a full investigation):
- Read workspace context for relevant info
- Answer directly
- Save Q&A in conversation context
Scenario D: Extra context
User provides info beyond what's in the workspace. Save it alongside investigation state in context.
Scenario E: Related alert (same host/incident)
If the new alert is clearly part of an already-investigated incident (same host, same timeframe, same attack chain):
- DO NOT create a new conversation — append to the existing one
- Skip schema discovery (already cached in context)
- Reference prior findings: "This was already identified during Alert {X} investigation"
- Post verdict and report as additional messages in the same conversation
- Only create a new conversation if the alert is on a different host or a genuinely separate incident
Alert queue workflow (queue-centric — all data goes to the queue)
All investigation data goes directly to the alert queue item — not to conversations. The Agent Dashboard on the frontend auto-updates in real-time as you work.
Starting an investigation session
- Check the queue for pending alerts:
```bash
curl -s "$API/workspaces/{wsId}/queue/next" -H "Authorization: Bearer $KEY"
```

- Claim the alert:
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/claim" -H "Authorization: Bearer $KEY"
```

Dashboard shows "Processing" instantly. The response includes the `siemProvider` from the workspace.

- Post progress as you work (each step appears live on the dashboard):
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/progress" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"step": "schema_discovery", "status": "running", "title": "Running schema discovery query"}'
```

When a step completes, post again with `"status": "complete"` and optionally `"detail": "Found 595 events, index=main, sourcetype=_json"`.

- Investigate (schema discovery → queries → analysis → classification). Post progress for each major step.
- Save IOCs as you find them:
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/iocs" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"iocs": [{"value": "103.131.189.2", "type": "ip", "verdict": "malicious", "context": "C2 server"}], "append": true}'
```

- Save MITRE techniques:
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/mitre" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"techniques": [{"techniqueId": "T1053.005", "name": "Scheduled Task", "tactic": "Persistence", "evidence": "..."}], "append": true}'
```

- Save investigation report (use heredoc + Node.js file method for Windows paths):
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/report" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d @"$PAYLOAD_PATH"
```

Where the payload JSON is `{"report": "# Investigation Report..."}`.

- Save agent context (schema, investigation state for resume):
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/context" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"schema": {...}, "queriesRun": [...], "investigationPhase": "completed"}'
```

- Mark complete with verdict:
```bash
curl -s -X PATCH "$API/queue/{queueItemId}/complete" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"verdict": "True Positive", "verdictConfidence": 92, "escalationRequired": true, "classificationRationale": "...", "queriesExecuted": 5, "agentSource": "claude-code"}'
```

- Check for more:
```bash
curl -s "$API/workspaces/{wsId}/queue/next" -H "Authorization: Bearer $KEY"
```

If `empty: true` → "All alerts processed."
If an alert exists → go to step 2.
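Because the workflow above posts progress repeatedly, a tiny helper can keep the JSON construction in one place. This is a hypothetical sketch (`post_progress` is not a documented helper): it prints the request so it runs offline; swap the `printf` for the commented `curl` in real use.

```shell
# Hypothetical helper to reduce repetition when posting progress updates.
post_progress() {  # usage: post_progress STEP STATUS TITLE
  local body
  body="$(node -e '
    const [step, status, title] = process.argv.slice(1);
    process.stdout.write(JSON.stringify({step, status, title}));
  ' "$1" "$2" "$3")"
  printf 'PATCH %s %s\n' "$API/queue/$QID/progress" "$body"
  # curl -s -X PATCH "$API/queue/$QID/progress" \
  #   -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" -d "$body"
}

API="https://astute-cormorant-480.convex.site/api/v1"; QID="q_example"
post_progress schema_discovery running "Running schema discovery query"
```

Building the body with `JSON.stringify` also sidesteps quoting problems when titles contain quotes or backslashes.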
Users can submit alerts anytime
Via frontend Agent Dashboard or API:
```bash
curl -s -X POST "$API/workspaces/{wsId}/queue" \
  -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
  -d '{"alertId": "1024", "alertTitle": "Scheduled Task", "alertSeverity": "medium", "alertData": "..."}'
```
Linked / related alerts
When a new alert is on the same host/attack chain:
- Reuse cached schema from context
- Check prior findings before running new queries
- Cross-reference IOCs and timelines from prior alerts
- Schema discovery only needs to happen ONCE per workspace
Recommended query sequence (process-based alerts)
For most alerts, follow this order:
- Process tree: All process creation events on the host (Sysmon EventCode 1) — full timeline
- Network: All outbound connections from the host (Sysmon EventCode 3) — C2 detection
- File activity: File creates/deletes (Sysmon EventCode 11) — staging, drops
- Registry: Registry modifications (Sysmon EventCode 13) — persistence
- DNS: DNS queries (Sysmon EventCode 22) — domain IOCs
Queries 1-2 are usually sufficient for classification. Queries 3-5 are for enrichment.
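The first two queries of the sequence can be sketched as SPL strings. These are illustrative only: the index (`main`), host, and Sysmon field names (`ParentImage`, `Image`, `CommandLine`, `DestinationIp`) are assumptions and MUST be replaced with schema-verified names, per the schema-discovery rule above.

```shell
# Illustrative SPL for queries 1-2. Every name here is an assumption;
# verify against your schema before running.
HOST="WIN-TARGET01"
Q1="index=main EventCode=1 host=$HOST | sort _time | table _time, ParentImage, Image, CommandLine"
Q2="index=main EventCode=3 host=$HOST | stats count by Image, DestinationIp, DestinationPort"
printf '%s\n' "$Q1" "$Q2"
```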
Schema analysis tip
Schema discovery results themselves may reveal IOCs. The `fieldsummary` output shows top values per field — unusual process names, suspicious paths, or unexpected hosts in the top values are worth noting immediately.

Severity upgrade criteria
- Low → Medium: Suspicious activity confirmed but no active exploitation
- Medium → High: Active exploitation confirmed (code execution, credential access)
- Medium/High → Critical: Active C2 communication, data exfiltration, or lateral movement
- Always note the upgrade: "Severity: Medium (upgraded to Critical based on...)"
Windows path escaping in context/IOC saves
Context and IOC payloads often contain Windows paths (`C:\ProgramData\Media\svchost.exe`). Use Node.js object literals with `String.fromCharCode(92)` for backslashes:

```bash
CTX_PATH="$(cygpath -w "$TEMP/ctx_payload.json")"
node -e "
var bs = String.fromCharCode(92);
var payload = {
  schema: {provider: 'splunk'},
  iocs: {files: ['C:' + bs + 'ProgramData' + bs + 'Media' + bs + 'svchost.exe']}
};
require('fs').writeFileSync(process.argv[1], JSON.stringify(payload));
" "$CTX_PATH"
curl -s -X PATCH "$API/queue/$QID/context" -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" -d @"$CTX_PATH"
```

Rule: ANY payload with Windows paths must use Node.js object literals. Never JSON string literals or heredocs with backslashes.
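An optional sanity check before POSTing: confirm the generated payload parses back as JSON and the path round-trips with single backslashes. This sketch builds the payload in memory rather than writing a file, purely for verification.

```shell
# Build the payload with String.fromCharCode(92), then parse it back to
# confirm the file path survives intact.
PAYLOAD="$(node -e "
var bs = String.fromCharCode(92);
var payload = { iocs: { files: ['C:' + bs + 'ProgramData' + bs + 'Media' + bs + 'svchost.exe'] } };
process.stdout.write(JSON.stringify(payload));
")"
node -e "console.log(JSON.parse(process.argv[1]).iocs.files[0])" "$PAYLOAD"
```

If the second command prints the path with single backslashes, the JSON on the wire is correct.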
Asking the user for information (HITL mode — default)
In the default human-in-the-loop mode, ask the user DIRECTLY in the conversation:
- "Please run this query in your {Splunk/Elastic/Sentinel}: {query}"
- "Please check this IOC in VirusTotal/ThreatFox: {ioc}"
- "Is this server authorized to make outbound connections to external IPs?"
Guidelines:
- Ask ONE query at a time (user runs manually)
- Always explain the PURPOSE of each query
- If user provides partial results, ask for clarification
- If user can't run a query, adapt your approach
- Save context after each major step (enables resume)
Note: If the user requested autonomous mode, skip asking — use Chrome to run queries directly (see "Autonomous Mode" section below).
Autonomous Mode (Chrome Integration)
This mode is OPTIONAL and OPT-IN ONLY. Only activate when the user EXPLICITLY requests automation. If the user does not mention automation, Chrome, autonomous, or browser — use the default HITL mode above and DO NOT mention autonomous mode.
When to activate
Activate autonomous mode ONLY when the user's message contains phrases like:
- "do this autonomously" / "automate this" / "fully automated"
- "use my browser" / "use Chrome"
- "run the queries yourself" / "you do it"
- "here's the Splunk/Kibana/Sentinel URL, go ahead"
- "no human in the loop" / "don't ask me to run queries"
If none of these phrases appear, stay in HITL mode silently. Do not suggest or mention autonomous mode.
Prerequisites
Before using autonomous mode, verify:
- Chrome is connected — the user must have launched Claude Code with `claude --chrome` or typed `/chrome`. If Chrome tools are not available, tell the user: "Autonomous mode requires Chrome integration. Please run `/chrome` to connect your browser, then try again. Make sure you're logged into the target websites first."
- User is logged in — the AI uses the user's existing Chrome sessions. It cannot log in, handle MFA, or solve CAPTCHAs. If a login page appears, pause and ask the user to log in manually.
How to use Chrome tools
Use the browser tools provided by the `claude-in-chrome` MCP to interact with websites:
- Navigate: Open a URL in a new tab or navigate the current tab
- Read: Read the page content, tables, form values
- Click: Click buttons, links, menu items
- Type: Type text into search boxes, form fields
- Screenshot: Take a screenshot to verify what you see
- Multiple tabs: Open different sites in different tabs (e.g., Splunk in one, VirusTotal in another)
Autonomous investigation flow
Follow the same investigation steps as HITL mode, but instead of asking the user to run queries, run them yourself via Chrome.
Reading results: Use `get_page_text` instead of screenshots for extracting complete data (hashes, encoded commands, long field values). Screenshots are useful for visual verification but lose critical details like full SHA256 hashes and base64 strings. For Splunk, click into the Events tab and use `get_page_text` to read full event details.

Schema discovery (Splunk):
Use URL-based navigation (most reliable — avoids CodeMirror editor interaction issues):
- Navigate directly to: `{splunk_url}/en-US/app/search/search?earliest=0&latest=&q=search%20index%3D*%20NOT%20index%3D_*%20%7C%20head%2010000%20%7C%20fieldsummary%20maxvals%3D10%20%7C%20sort%20-count%20%7C%20head%2060&display.page.search.tab=statistics`
- Wait for results to load
- Use `get_page_text` to read the results table
- Save schema to SOC Compass context via API

Note: `earliest=0&latest=` sets the time range to "All time" — essential for historical data (TryHackMe labs, past incidents). The default "Last 24 hours" will return nothing for historical events.

Schema discovery (Kibana/Elastic):
- Navigate to the Kibana URL → Discover
- Select the relevant index pattern
- Set time range to cover the investigation period
- Use `get_page_text` to read 5-10 sample events
- Save schema to context
Schema discovery (Sentinel):
- Navigate to the Azure Portal Log Analytics workspace
- Run: `search * | summarize count() by $table | sort by count_ desc | take 20`
- Use `get_page_text` to read results, then query sample events from the relevant table
- Save schema to context
Running investigation queries (Splunk — URL method, recommended):
Navigate directly with the query in the URL instead of typing in the search bar:
`{splunk_url}/en-US/app/search/search?earliest=0&latest=&q=search%20{url_encoded_query}&display.page.search.tab=events`

Steps:
- URL-encode your SPL query
- Navigate to the URL above with the encoded query
- Wait for results to load
- Use `get_page_text` to read the full results (Events tab for raw events, Statistics tab for table output)
- Analyze and formulate next query
- Repeat
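The URL-encode step above can be scripted. A sketch using node's `encodeURIComponent` — the host, index, and field names below are placeholders, not schema-verified values:

```shell
# Build a Splunk search URL from a raw SPL query.
# splunk.example.local and the index/field names are placeholders only.
SPLUNK_URL="https://splunk.example.local:8000"
SPL='search index=<from_schema> EventCode=4688 | head 20'
ENCODED=$(node -e 'console.log(encodeURIComponent(process.argv[1]))' "$SPL")
echo "$SPLUNK_URL/en-US/app/search/search?earliest=0&latest=&q=$ENCODED&display.page.search.tab=events"
```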
Why URL-based is better than typing in the search bar:
- Splunk's CodeMirror editor often fails with `form_input` — text appends instead of replacing
- Ctrl+A sometimes selects the whole page instead of just the query
- URL-based execution is 100% reliable and also sets the time range correctly
Running investigation queries (Kibana/Sentinel):
- Navigate to the query interface
- Clear and type the new query
- Execute and use `get_page_text` to read results
- Analyze and repeat
IOC lookups via Chrome:
- Open a new tab
- Navigate to VirusTotal (https://www.virustotal.com), ThreatFox, or other threat intel site
- Search for the hash/IP/domain
- Use `get_page_text` to read the results and detection ratios
- Include findings in the investigation
Handling errors:
- If a login page appears: pause and ask the user to log in manually, then continue
- If a CAPTCHA appears: pause and ask the user to solve it, then continue
- If the page doesn't load or times out: try refreshing, then ask the user for help
- If results are still loading: wait and check again (SIEM queries can take time)
- If CodeMirror/search bar interaction fails: fall back to URL-based query execution
Important: Still use the SOC Compass queue API
Even in autonomous mode, you MUST still:
- Submit to queue + claim (Steps 2-3) so the dashboard tracks the investigation
- Post progress steps via `PATCH /queue/:id/progress` as you work
- Save IOCs via `PATCH /queue/:id/iocs`
- Save MITRE via `PATCH /queue/:id/mitre`
- Save report via `PATCH /queue/:id/report`
- Mark complete via `PATCH /queue/:id/complete` with verdict
Chrome is used to GATHER evidence. The queue API is used to PERSIST results and update the dashboard.
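As a sketch of persisting one progress step: the path comes from the list above, but the body field names (`step`, `status`) are assumptions — verify them against the endpoint reference before use.

```shell
# Hypothetical sketch: persist one progress step to the queue item.
# The JSON keys "step"/"status" are assumed, not confirmed by the API docs.
ALERT_ID="<queue-item-id>"    # placeholder
PROGRESS='{"step":"Schema discovery complete","status":"done"}'
echo "$PROGRESS"
# Run once $API, $KEY, and a real ALERT_ID are set:
# curl -s -X PATCH "$API/queue/$ALERT_ID/progress" \
#   -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
#   -d "$PROGRESS"
```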
Decoding encoded commands
When you find PowerShell `-EncodedCommand` or other Base64 payloads, decode immediately:

```bash
echo '<base64_string>' | base64 -d | iconv -f UTF-16LE -t UTF-8
```

Always decode and present the decoded content to the user. Encoded commands are critical evidence.
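A quick sanity check that the pipeline is correct: encode a harmless command the way PowerShell's `-EncodedCommand` expects (UTF-16LE, then Base64), and decode it back with the same one-liner:

```shell
# Round-trip sketch: encode "whoami" as UTF-16LE + Base64, then decode it.
ENC=$(printf 'whoami' | iconv -f UTF-8 -t UTF-16LE | base64 | tr -d '\n')
echo "$ENC"                                            # → dwBoAG8AYQBtAGkA
echo "$ENC" | base64 -d | iconv -f UTF-16LE -t UTF-8   # → whoami
```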
Investigation modes
Auto-detected from the workspace `mode` field:
Alert Triage (`ultimate_trigger`, default):
Dual-hypothesis analysis — evaluate both benign and malicious explanations. Apply classification framework after 1-3 queries. See `references/alert-triage-methodology.md`.
SOC Investigation (`soc_investigation_trigger`):
Broader scope, SIEM optional, evidence-first approach.
VM Forensics (`vm_forensics_trigger`):
OSCAR-DFIR framework. Ask user to run ONE command at a time on the VM. See `references/vm-forensics-methodology.md`.
Sigma Rules (`sigma_rule_trigger`):
Detection rule engineering. Ask for log samples, write Sigma rules. No SIEM queries needed. See `references/sigma-rule-methodology.md`.
If the question doesn't match any mode, answer directly using workspace context.
SIEM query rules (ONLY use after schema discovery)
Splunk SPL:
- Use index and sourcetype FROM THE SCHEMA — never guess
- Always use relative time: `earliest=-60m` or `earliest=-24h`
- End queries with `| head 20`
- NEVER use absolute timestamps
- Field names MUST match the schema exactly (case-sensitive)
Elastic ESQL:
- Use index pattern FROM THE SCHEMA
- Use `==` for equality (double equals)
- Quote keyword values: `"4624"` not `4624`
- Time: `WHERE @timestamp >= NOW() - 1 hour`
- End with `| LIMIT 20`
- Field names from schema (e.g., `event.code`, not `EventCode`)
Sentinel KQL:
- Use table names FROM THE SCHEMA
- Use `==` for equality, `has` for word match, `contains` for substring
- Time: `| where TimeGenerated > ago(24h)`
- End with `| take 20`
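These conventions can be pre-flighted mechanically before a query is run. A rough bash sketch for the SPL case — the index/sourcetype placeholders below are illustrative only and must come from schema discovery:

```shell
# Rough pre-flight check for a drafted SPL query: relative time window
# and an explicit result limit. <from_schema> values are placeholders.
Q='earliest=-24h index=<from_schema> sourcetype=<from_schema> EventCode=4624 | head 20'
case "$Q" in
  *earliest=-*) echo "time window: relative OK" ;;
  *)            echo "time window: FIX (use earliest=-60m or earliest=-24h)" ;;
esac
case "$Q" in
  *'| head '*)  echo "result limit: OK" ;;
  *)            echo "result limit: FIX (end with | head 20)" ;;
esac
```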
Full guide: `references/siem-query-guides.md`
Endpoint reference
| Method | Path | Description |
|---|---|---|
| | `/health` | Health check (no auth) |
| | | User info |
| | | Credit balance |
| | | List workspaces |
| | | Workspace details + context |
| | | Create workspace |
| | | Update workspace |
| | | Archive workspace |
| | | List conversations |
| | | Create conversation |
| | | Conversation details |
| | | Message history (max 100) |
| | | Post message (user/assistant) |
| | | Edit message content |
| | | Delete message |
| | | Get agent context |
| | | Save agent context (overwrite) |
| | | Merge-update context |
| | | Read verdicts |
| | | Write verdict (upserts by eventId) |
| | | Processing status |
| | | Add alert to queue |
| | | List queue (?status=pending/completed/all) |
| | | Get next pending alert |
| | | Mark alert as processing |
| PATCH | `/queue/:id/complete` | Mark completed (verdict, escalation, duration, queries) |
| | | Mark alert as failed |
| PATCH | `/queue/:id/progress` | Add/update investigation progress step |
| | | Get all progress steps |
| PATCH | `/queue/:id/report` | Save investigation report |
| PATCH | `/queue/:id/iocs` | Add/update IOCs (append: true to add without replacing) |
| PATCH | `/queue/:id/mitre` | Add/update MITRE techniques (append: true) |
| | | Save agent context (schema, state) |
| | | Get full investigation detail |
| | | Remove from queue |
All endpoints require `Authorization: Bearer soc_sk_<key>` except `/health`.
Queue-centric flow: All investigation data (report, IOCs, MITRE, progress, context) goes to the queue item. The Agent Dashboard reads everything from the queue. Conversations are optional/legacy.
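As a sketch of closing out a queue item: the `/queue/:id/complete` path appears in the queue API section, and the body keys below mirror the table's description (verdict, escalation, duration, queries), but the exact key names are assumptions — check the endpoint reference before relying on them.

```shell
# Hypothetical sketch: mark a queue item completed.
# All JSON key names below are assumptions, not confirmed by the API docs.
ALERT_ID="<queue-item-id>"    # placeholder
BODY=$(node -e 'console.log(JSON.stringify({
  verdict: "false_positive",  // or "true_positive"
  escalation: false,
  durationSeconds: 420,       // assumption: key name and units unverified
  queriesRun: 3               // assumption
}))')
echo "$BODY"
# Run once $API, $KEY, and a real ALERT_ID are set:
# curl -s -X PATCH "$API/queue/$ALERT_ID/complete" \
#   -H "Authorization: Bearer $KEY" -H "Content-Type: application/json" \
#   -d "$BODY"
```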
Error codes
| Code | Status | Meaning |
|---|---|---|
| | 400 | Invalid input (check JSON syntax) |
| | 401 | Invalid/expired API key |
| | 404 | Resource not found |
| | 429 | Too many requests (60/min standard) |
| | 500 | Server error |
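Since the 60/min limit means 429s are expected during long investigations, a small wrapper around the `curl` pattern from earlier can retry with backoff. A sketch (the backoff values are arbitrary choices, and `$API`/`$KEY` are assumed to be set as shown at the top of this document):

```shell
# Hypothetical helper: retry GETs that hit the rate limit (HTTP 429).
api_get() {
  local path="$1" code attempt
  for attempt in 1 2 3; do
    code=$(curl -s -o /tmp/soc_resp.json -w '%{http_code}' \
      -H "Authorization: Bearer $KEY" "$API$path")
    if [ "$code" = "429" ]; then
      sleep $((attempt * 2))   # back off 2s, then 4s, before retrying
      continue
    fi
    cat /tmp/soc_resp.json
    return 0
  done
  echo "still rate-limited after 3 attempts" >&2
  return 1
}
# api_get /health
```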
Critical rules
- SCHEMA FIRST — NO EXCEPTIONS — discover the SIEM schema before ANY investigation query. Never guess index names, sourcetypes, or field names.
- Use schema-verified names ONLY — every index, sourcetype, and field must come from schema discovery.
- Save schema to context immediately — so you never need to ask again for this workspace.
- DEFEND your classifications — only change a verdict when NEW evidence is presented, not because the user disagrees. Restate your evidence and ask for counter-evidence.
- Classify the SPECIFIC activity — an alert that fires on legitimate activity is FP even if unrelated malicious activity exists on the same host. Report both, classify separately.
- ALWAYS submit to queue first — even when the user pastes an alert directly in the CLI. This ensures the Agent Dashboard tracks every investigation.
- Temporal investigation is MANDATORY — always check what happened AFTER the alert event.
- Classify EARLY — after 1-3 initial queries, apply the classification framework.
- Save context after each major step — enables resume if the session is interrupted.
- Save the report to the queue item via `PATCH /queue/:id/report` so it appears in the Agent Dashboard.
- Use Node.js for JSON serialization on Windows — never inline multi-line content in `curl -d`.
- Never fabricate query results — only use data the user has provided.
- TP does not equal confirmed malware — True Positive means the alert correctly identified suspicious activity requiring response.
- Autonomous mode is OPT-IN ONLY — never activate autonomous mode or mention Chrome unless the user explicitly requests automation. Default is always HITL mode.