analyze-logs

Analyze application logs

Read and analyze structured wide-event logs from the local `.evlog/logs/` directory to debug errors, investigate performance issues, and understand application behavior.

When to Use

  • User asks to debug an error, investigate a bug, or understand why something failed
  • User asks about request patterns, slow endpoints, or error rates
  • User asks "what happened" or "what's going on" with their application
  • User asks to analyze logs, check recent errors, or review application behavior
  • User mentions a specific error message or status code they're seeing

Finding the logs

Logs are written by evlog's file system drain as `.jsonl` files, organized by date.

Format detection: The drain supports two modes:
  • NDJSON (default, `pretty: false`): One compact JSON object per line. Parse line-by-line.
  • Pretty (`pretty: true`): Multi-line indented JSON per event. Parse by reading the entire file and splitting on top-level objects (e.g. `JSON.parse('[' + content.replace(/\}\n\{/g, '},{') + ']')`), or use a streaming JSON parser.

Always check the first few bytes of the file to detect the format: if the second character is a `"` or a bare newline, it's NDJSON; if it's a space, or a newline followed by spaces, it's pretty-printed.

Search order — check these locations relative to the project root:
  1. `.evlog/logs/` (default)
  2. Any `.evlog/logs/` inside app directories (monorepos: `apps/*/.evlog/logs/`)

Use glob to find log files:

```
.evlog/logs/*.jsonl
*/.evlog/logs/*.jsonl
apps/*/.evlog/logs/*.jsonl
```

Files are named by date: `2026-03-14.jsonl`. Start with the most recent file.

If no logs are found

The file system drain may not be enabled. Guide the user to set it up:

```typescript
import { createFsDrain } from 'evlog/fs'

// Nuxt / Nitro: server/plugins/evlog-drain.ts
export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('evlog:drain', createFsDrain())
})

// Hono / Express / Elysia: pass in middleware options
app.use(evlog({ drain: createFsDrain() }))

// Fastify: pass in plugin options
await app.register(evlog, { drain: createFsDrain() })

// NestJS: pass in module options
EvlogModule.forRoot({ drain: createFsDrain() })

// Standalone: pass to initLogger
initLogger({ drain: createFsDrain() })
```
After setup, the user needs to trigger some requests to generate logs, then re-analyze.

Log format

Each line is a self-contained JSON object (wide event). Key fields:

| Field | Type | Description |
| --- | --- | --- |
| `timestamp` | string | ISO 8601 timestamp |
| `level` | string | `info`, `warn`, `error`, `debug` |
| `service` | string | Service name |
| `environment` | string | `development`, `production`, etc. |
| `method` | string | HTTP method (`GET`, `POST`, etc.) |
| `path` | string | Request path (`/api/checkout`) |
| `status` | number | HTTP response status code |
| `duration` | string | Request duration (`"234ms"`) |
| `requestId` | string | Unique request identifier |
| `error` | object | Error details: `name`, `message`, `stack`, `statusCode`, `data` |
| `error.data.why` | string | Human-readable explanation of what went wrong |
| `error.data.fix` | string | Suggested fix for the error |
| `source` | string | `client` for browser logs, absent for server logs |
| `userAgent` | object | Parsed browser/OS/device info |

All other fields are application-specific context added via `log.set()` (e.g. `user`, `cart`, `payment`).
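Putting those fields together, a single event might look like this (shown pretty-printed for readability; every value below is illustrative, not taken from real logs):

```json
{
  "timestamp": "2026-03-14T09:12:45.120Z",
  "level": "error",
  "service": "checkout",
  "environment": "production",
  "method": "POST",
  "path": "/api/checkout",
  "status": 502,
  "duration": "706ms",
  "requestId": "req_abc123",
  "error": {
    "name": "PaymentGatewayError",
    "message": "Gateway timed out",
    "statusCode": 502,
    "data": {
      "why": "The payment provider did not respond within the timeout",
      "fix": "Retry the charge or increase the gateway timeout"
    }
  },
  "user": { "id": "u_42" }
}
```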

How to analyze

Step 1: Read the most recent log file

Read the latest `.jsonl` file. Each line is one JSON event. Parse each line independently.

Step 2: Identify the relevant events

Filter based on the user's question:
  • Errors: look for `"level":"error"` or `status >= 400`
  • Specific endpoint: match on `path`
  • Slow requests: parse `duration` (e.g. `"706ms"`) and filter high values
  • Specific user/action: match on application-specific fields
  • Client-side issues: filter by `"source":"client"`
  • Time range: compare `timestamp` values
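The error, client, and time-range filters can be sketched over parsed events. The `filterEvents` helper and the `LogEvent` shape are illustrative, not an evlog API; ISO 8601 timestamps in the same format compare correctly as strings.

```typescript
// Illustrative shape matching the key fields table above.
interface LogEvent {
  timestamp: string
  level: string
  status?: number
  source?: string
  [key: string]: unknown
}

// Filter parsed events: errors (by level or status) and client-side
// events, optionally restricted to events at or after `since`.
function filterEvents(events: LogEvent[], since?: string) {
  const inRange = since ? events.filter((e) => e.timestamp >= since) : events
  return {
    errors: inRange.filter((e) => e.level === 'error' || (e.status ?? 0) >= 400),
    client: inRange.filter((e) => e.source === 'client'),
  }
}
```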

Step 3: Analyze and explain

For each relevant event:
  1. What happened: summarize the `path`, `method`, `status`, `level`
  2. Why it failed (errors): read `error.message`, `error.data.why`, and the stack trace
  3. How to fix: check `error.data.fix` for suggested remediation
  4. Context: examine application-specific fields for business context (user info, payment details, etc.)
  5. Patterns: look for recurring errors, degrading performance, or correlated failures

Analysis patterns

Find all errors

Filter: `level === "error"`
Group by: `error.message` or `path`
Look for: recurring patterns, common failure modes

Find slow requests

Filter: parse the `duration` string, keep values above a threshold (e.g. 1000ms)
Sort by: `duration` descending
Look for: specific endpoints, time-of-day patterns
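As a sketch, this filter-and-sort can be written as follows (the `slowRequests` helper is illustrative, and the 1000ms default threshold matches the example above):

```typescript
// Illustrative event shape: only the fields this pattern needs.
interface LogEvent { path?: string; duration?: string }

// Parse "706ms" -> 706; events without a duration yield NaN,
// which fails every comparison and is therefore excluded.
const durationMs = (e: LogEvent): number => parseFloat(e.duration ?? '')

// Return requests over the threshold, slowest first.
function slowRequests(events: LogEvent[], thresholdMs = 1000): LogEvent[] {
  return events
    .filter((e) => durationMs(e) > thresholdMs)
    .sort((a, b) => durationMs(b) - durationMs(a))
}
```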

Trace a specific request

Filter: `requestId === "the-request-id"`
Result: single wide event with all context for that request

Error rate by endpoint

Group events by: `path`
Count: total events vs error events per path
Look for: endpoints with high error ratios
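That grouping can be sketched like this (the `errorRateByEndpoint` helper and event shape are illustrative; "error" here means `level === "error"` or `status >= 400`, as in the error filter above):

```typescript
// Illustrative event shape: only the fields this pattern needs.
interface LogEvent { path?: string; level?: string; status?: number }

interface EndpointStats { total: number; errors: number; rate: number }

// Tally total vs error events per path and derive an error ratio.
function errorRateByEndpoint(events: LogEvent[]): Map<string, EndpointStats> {
  const stats = new Map<string, EndpointStats>()
  for (const e of events) {
    const path = e.path ?? '(unknown)'
    const s = stats.get(path) ?? { total: 0, errors: 0, rate: 0 }
    s.total++
    if (e.level === 'error' || (e.status ?? 0) >= 400) s.errors++
    s.rate = s.errors / s.total
    stats.set(path, s)
  }
  return stats
}
```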

Client vs server errors

Split by: `source === "client"` vs no `source` field
Compare: error patterns between client and server
Look for: client errors that don't have corresponding server errors (network issues)

Important notes

  • Each line is a complete, self-contained event. Unlike traditional logs, you don't need to correlate multiple lines — one line has all the context for one request.
  • The `error.data.why` and `error.data.fix` fields are evlog-specific structured error fields. When present, they provide the most actionable information.
  • Duration values are strings with units (e.g. `"706ms"`). Parse the numeric part for comparisons.
  • Events with `"source":"client"` originated from browser-side logging and were sent to the server via the transport endpoint.
  • Log files are `.gitignore`'d automatically — they exist only on the local machine or server where the app runs.