analyze-logs
Analyze application logs
Read and analyze structured wide-event logs from the local `.evlog/logs/` directory to debug errors, investigate performance issues, and understand application behavior.

When to Use
- User asks to debug an error, investigate a bug, or understand why something failed
- User asks about request patterns, slow endpoints, or error rates
- User asks "what happened" or "what's going on" with their application
- User asks to analyze logs, check recent errors, or review application behavior
- User mentions a specific error message or status code they're seeing
Finding the logs
Logs are written by evlog's file system drain as `.jsonl` files, organized by date.

Format detection: The drain supports two modes:

- NDJSON (default, `pretty: false`): One compact JSON object per line. Parse line-by-line.
- Pretty (`pretty: true`): Multi-line indented JSON per event. Parse by reading the entire file and splitting on top-level objects (e.g. `JSON.parse('[' + content.replace(/\}\n\{/g, '},{') + ']')`) or use a streaming JSON parser.

Always check the first few bytes of the file to detect the format: if the second character is a `"`, it's NDJSON; if it's a newline (followed by indentation), it's pretty-printed.

Search order — check these locations relative to the project root:

- `.evlog/logs/` (default)
- Any `.evlog/logs/` inside app directories (monorepos: `apps/*/.evlog/logs/`)

Use glob to find log files:

```
.evlog/logs/*.jsonl
*/.evlog/logs/*.jsonl
apps/*/.evlog/logs/*.jsonl
```

Files are named by date: `2026-03-14.jsonl`. Start with the most recent file.
If no logs are found
The file system drain may not be enabled. Guide the user to set it up:
```typescript
import { createFsDrain } from 'evlog/fs'

// Nuxt / Nitro: server/plugins/evlog-drain.ts
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', createFsDrain())
})

// Hono / Express / Elysia: pass in middleware options
app.use(evlog({ drain: createFsDrain() }))

// Fastify: pass in plugin options
await app.register(evlog, { drain: createFsDrain() })

// NestJS: pass in module options
EvlogModule.forRoot({ drain: createFsDrain() })

// Standalone: pass to initLogger
initLogger({ drain: createFsDrain() })
```

After setup, the user needs to trigger some requests to generate logs, then re-analyze.
Log format
Each line is a self-contained JSON object (wide event). Key fields:
| Field | Type | Description |
|---|---|---|
| `timestamp` | string | ISO 8601 timestamp |
| `level` | string | Log level (e.g. `error`) |
| `service` | string | Service name |
| `method` | string | HTTP method |
| `path` | string | Request path |
| `status` | number | HTTP response status code |
| `duration` | string | Request duration as a string with units (e.g. `"706ms"`) |
| `requestId` | string | Unique request identifier |
| `error` | object | Error details: `message`, stack trace, structured `data` |
| `error.data.why` | string | Human-readable explanation of what went wrong |
| `error.data.fix` | string | Suggested fix for the error |
| `source` | string | `"client"` for events from browser-side logging |
| | | Parsed browser/OS/device info |

All other fields are application-specific context added via `log.set()` (e.g. `user`, `cart`, `payment`).
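As a reading aid, the shape above can be summarized as a TypeScript type (a sketch inferred from this document; evlog may type events differently, field optionality is an assumption, and the sample values are invented):

```typescript
// Sketch of a wide event, inferred from the field table above.
interface WideEvent {
  timestamp: string    // ISO 8601
  level?: string       // e.g. "error"
  service?: string
  method?: string      // HTTP method
  path?: string        // request path
  status?: number      // HTTP response status code
  duration?: string    // string with units, e.g. "706ms"
  requestId?: string
  source?: 'client'    // present on browser-originated events
  error?: {
    message?: string
    stack?: string
    data?: { why?: string; fix?: string }
  }
  [key: string]: unknown // application-specific context added via log.set()
}

// Invented sample event for illustration only.
const sample: WideEvent = {
  timestamp: '2026-03-14T09:30:00.000Z',
  level: 'error',
  path: '/api/checkout',
  status: 500,
  duration: '706ms',
  error: {
    message: 'Payment failed',
    data: { why: 'Card declined by provider', fix: 'Retry with another card' },
  },
}
```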
How to analyze
Step 1: Read the most recent log file
Read the latest `.jsonl` file. Each line is one JSON event. Parse each line independently.

Step 2: Identify the relevant events
Filter based on the user's question:

- Errors: look for `"level":"error"` or `status >= 400`
- Specific endpoint: match on `path`
- Slow requests: parse `duration` (e.g. `"706ms"`) and filter high values
- Specific user/action: match on application-specific fields
- Client-side issues: filter by `"source":"client"`
- Time range: compare `timestamp` values
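Assuming the events have been parsed into objects, the filters above can be sketched as follows (illustrative only; `parseMs` and `filterEvents` are hypothetical helpers, and durations are assumed to be in milliseconds):

```typescript
type LogEvent = Record<string, any>

// Parse a duration string like "706ms" into a number for comparison.
function parseMs(duration: string | undefined): number {
  return duration ? parseFloat(duration) : 0
}

// An event counts as an error if its level says so or its status is 4xx/5xx.
function isError(e: LogEvent): boolean {
  return e.level === 'error' || (typeof e.status === 'number' && e.status >= 400)
}

// Combine the common filters from the list above.
function filterEvents(events: LogEvent[], opts: {
  errorsOnly?: boolean
  path?: string
  slowerThanMs?: number
  clientOnly?: boolean
}): LogEvent[] {
  return events.filter((e) =>
    (!opts.errorsOnly || isError(e)) &&
    (!opts.path || e.path === opts.path) &&
    (opts.slowerThanMs === undefined || parseMs(e.duration) > opts.slowerThanMs) &&
    (!opts.clientOnly || e.source === 'client'),
  )
}
```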
Step 3: Analyze and explain
For each relevant event:
- What happened: summarize the `path`, `method`, `status`, `level`
- Why it failed (errors): read `error.message`, `error.data.why`, and the stack trace
- How to fix: check `error.data.fix` for suggested remediation
- Context: examine application-specific fields for business context (user info, payment details, etc.)
- Patterns: look for recurring errors, degrading performance, or correlated failures
Analysis patterns
Find all errors

Filter: level === "error"
Group by: error.message or path
Look for: recurring patterns, common failure modes

Find slow requests

Filter: parse duration string, compare > threshold (e.g. 1000ms)
Sort by: duration descending
Look for: specific endpoints, time-of-day patterns

Trace a specific request

Filter: requestId === "the-request-id"
Result: single wide event with all context for that request

Error rate by endpoint

Group events by: path
Count: total events vs error events per path
Look for: endpoints with high error ratios

Client vs server errors

Split by: source === "client" vs no source field
Compare: error patterns between client and server
Look for: client errors that don't have corresponding server errors (network issues)
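For instance, the error-rate pattern might be implemented as follows (a sketch; `errorRateByEndpoint` is a hypothetical helper, and the error test mirrors the Step 2 criteria):

```typescript
type LogEvent = Record<string, any>

// Group events by path and compute per-endpoint totals, error counts, and ratios.
function errorRateByEndpoint(events: LogEvent[]): Map<string, { total: number; errors: number; rate: number }> {
  const stats = new Map<string, { total: number; errors: number; rate: number }>()
  for (const e of events) {
    const path = typeof e.path === 'string' ? e.path : '(no path)'
    const s = stats.get(path) ?? { total: 0, errors: 0, rate: 0 }
    s.total++
    if (e.level === 'error' || (typeof e.status === 'number' && e.status >= 400)) s.errors++
    s.rate = s.errors / s.total
    stats.set(path, s)
  }
  return stats
}
```

Sorting the resulting entries by `rate` descending surfaces the endpoints worth investigating first.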
Important notes
- Each line is a complete, self-contained event. Unlike traditional logs, you don't need to correlate multiple lines — one line has all the context for one request.
- The `error.data.why` and `error.data.fix` fields are evlog-specific structured error fields. When present, they provide the most actionable information.
- Duration values are strings with units (e.g. `"706ms"`). Parse the numeric part for comparisons.
- Events with `"source":"client"` originated from browser-side logging and were sent to the server via the transport endpoint.
- Log files are `.gitignore`'d automatically — they exist only on the local machine or server where the app runs.