zero-script-qa
# Zero Script QA Expert Knowledge
## Overview
Zero Script QA is a methodology that verifies features through structured logs and real-time monitoring without writing test scripts.
Traditional: Write test code → Execute → Check results → Maintain
Zero Script: Build log infrastructure → Manual UX test → AI log analysis → Auto issue detection

## Core Principles
### 1. Log Everything
- All API calls (including 200 OK)
- All errors
- All important business events
- Entire flow trackable via Request ID
### 2. Structured JSON Logs
- Parseable JSON format
- Consistent fields (timestamp, level, request_id, message, data)
- Different log levels per environment
### 3. Real-time Monitoring
- Docker log streaming
- Claude Code analyzes in real-time
- Immediate issue detection and documentation
## Logging Architecture
### JSON Log Format Standard
```json
{
  "timestamp": "2026-01-08T10:30:00.000Z",
  "level": "INFO",
  "service": "api",
  "request_id": "req_abc123",
  "message": "API Request completed",
  "data": {
    "method": "POST",
    "path": "/api/users",
    "status": 200,
    "duration_ms": 45
  }
}
```

### Required Log Fields
| Field | Type | Description |
|---|---|---|
| timestamp | ISO 8601 | Time of occurrence |
| level | string | DEBUG, INFO, WARNING, ERROR |
| service | string | Service name (api, web, worker, etc.) |
| request_id | string | Request tracking ID |
| message | string | Log message |
| data | object | Additional data (optional) |
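As a sketch of how the required fields could be enforced during QA, the helper below flags log lines that violate the standard. The field and level names come from the tables in this document; the function name and strictness rules are assumptions, not part of any library.

```python
# Sketch: validate that a JSON log line carries the required fields.
import json

REQUIRED_FIELDS = {"timestamp", "level", "service", "request_id", "message"}
VALID_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR"}

def validate_log_line(raw: str) -> list[str]:
    """Return a list of problems found in one JSON log line (empty = OK)."""
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return ["not valid JSON"]
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("level") not in VALID_LEVELS:
        problems.append(f"unknown level: {record.get('level')!r}")
    return problems
```

A line that passes returns an empty list, so such a check slots easily into a log-monitoring loop.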
### Log Level Policy
| Environment | Minimum Level | Purpose |
|---|---|---|
| Local | DEBUG | Development and QA |
| Staging | DEBUG | QA and integration testing |
| Production | INFO | Operations monitoring |
## Request ID Propagation
### Concept
```
Client → API Gateway → Backend → Database
   ↓          ↓            ↓          ↓
req_abc    req_abc      req_abc    req_abc
```

The entire flow is trackable with the same Request ID across all layers.
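The same idea on the Python side might look like the sketch below: reuse an incoming `X-Request-ID` when present, otherwise mint one, and forward it on downstream calls. The helper names are assumptions for illustration.

```python
# Sketch: extract-or-generate a request ID and prepare it for propagation.
import uuid

def ensure_request_id(headers: dict) -> str:
    """Reuse the caller's request ID when present; otherwise generate one."""
    return headers.get("X-Request-ID") or f"req_{uuid.uuid4().hex[:8]}"

def outgoing_headers(request_id: str) -> dict:
    """Headers to attach when calling a downstream service."""
    return {"X-Request-ID": request_id}
```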
### Implementation Patterns
#### 1. Request ID Generation (Entry Point)
```typescript
// middleware.ts
import { v4 as uuidv4 } from 'uuid';

export function generateRequestId(): string {
  return `req_${uuidv4().slice(0, 8)}`;
}

// Propagate via header
headers['X-Request-ID'] = requestId;
```

#### 2. Request ID Extraction and Propagation
```typescript
// API client
const requestId = headers['X-Request-ID'] || generateRequestId();

// Include in all logs
logger.info('Processing request', { request_id: requestId });

// Include the header when calling downstream services
await fetch(url, {
  headers: { 'X-Request-ID': requestId }
});
```

## Backend Logging (FastAPI)
### Logging Middleware
```python
# middleware/logging.py
import json
import logging
import time
import uuid

from fastapi import Request

logger = logging.getLogger("api")


class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_record = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "api",
            "request_id": getattr(record, 'request_id', 'N/A'),
            "message": record.getMessage(),
        }
        if hasattr(record, 'data'):
            log_record["data"] = record.data
        return json.dumps(log_record)


class LoggingMiddleware:
    async def __call__(self, request: Request, call_next):
        request_id = request.headers.get('X-Request-ID', f'req_{uuid.uuid4().hex[:8]}')
        request.state.request_id = request_id
        start_time = time.time()

        # Request logging
        logger.info(
            "Request started",
            extra={
                'request_id': request_id,
                'data': {
                    'method': request.method,
                    'path': request.url.path,
                    'query': str(request.query_params),
                }
            }
        )

        response = await call_next(request)
        duration = (time.time() - start_time) * 1000

        # Response logging (including 200 OK!)
        logger.info(
            "Request completed",
            extra={
                'request_id': request_id,
                'data': {
                    'status': response.status_code,
                    'duration_ms': round(duration, 2),
                }
            }
        )

        response.headers['X-Request-ID'] = request_id
        return response
```

### Business Logic Logging
```python
# services/user_service.py
def create_user(data: dict, request_id: str):
    logger.info("Creating user", extra={
        'request_id': request_id,
        'data': {'email': data['email']}
    })

    # Business logic
    user = User(**data)
    db.add(user)
    db.commit()

    logger.info("User created", extra={
        'request_id': request_id,
        'data': {'user_id': user.id}
    })
    return user
```

---

## Frontend Logging (Next.js)
### Logger Module
```typescript
// lib/logger.ts
type LogLevel = 'DEBUG' | 'INFO' | 'WARNING' | 'ERROR';

interface LogData {
  request_id?: string;
  [key: string]: any;
}

const LOG_LEVELS: Record<LogLevel, number> = {
  DEBUG: 0,
  INFO: 1,
  WARNING: 2,
  ERROR: 3,
};

const MIN_LEVEL = process.env.NODE_ENV === 'production' ? 'INFO' : 'DEBUG';

function log(level: LogLevel, message: string, data?: LogData) {
  if (LOG_LEVELS[level] < LOG_LEVELS[MIN_LEVEL]) return;
  const logEntry = {
    timestamp: new Date().toISOString(),
    level,
    service: 'web',
    request_id: data?.request_id || 'N/A',
    message,
    data: data ? { ...data, request_id: undefined } : undefined,
  };
  console.log(JSON.stringify(logEntry));
}

export const logger = {
  debug: (msg: string, data?: LogData) => log('DEBUG', msg, data),
  info: (msg: string, data?: LogData) => log('INFO', msg, data),
  warning: (msg: string, data?: LogData) => log('WARNING', msg, data),
  error: (msg: string, data?: LogData) => log('ERROR', msg, data),
};
```

### API Client Integration
```typescript
// lib/api-client.ts
import { logger } from './logger';
import { v4 as uuidv4 } from 'uuid';

export async function apiClient<T>(
  endpoint: string,
  options: RequestInit = {}
): Promise<T> {
  const requestId = `req_${uuidv4().slice(0, 8)}`;
  const startTime = Date.now();

  logger.info('API Request started', {
    request_id: requestId,
    method: options.method || 'GET',
    endpoint,
  });

  try {
    const response = await fetch(`/api${endpoint}`, {
      ...options,
      headers: {
        'Content-Type': 'application/json',
        'X-Request-ID': requestId,
        ...options.headers,
      },
    });
    const duration = Date.now() - startTime;
    const data = await response.json();

    // Log 200 OK too!
    logger.info('API Request completed', {
      request_id: requestId,
      status: response.status,
      duration_ms: duration,
    });

    if (!response.ok) {
      logger.error('API Request failed', {
        request_id: requestId,
        status: response.status,
        error: data.error,
      });
      throw new ApiError(data.error);
    }
    return data;
  } catch (error) {
    logger.error('API Request error', {
      request_id: requestId,
      error: error instanceof Error ? error.message : 'Unknown error',
    });
    throw error;
  }
}
```

## Nginx JSON Logging
### nginx.conf Configuration
```nginx
http {
  log_format json_combined escape=json '{'
    '"timestamp":"$time_iso8601",'
    '"level":"INFO",'
    '"service":"nginx",'
    '"request_id":"$http_x_request_id",'
    '"message":"HTTP Request",'
    '"data":{'
      '"remote_addr":"$remote_addr",'
      '"method":"$request_method",'
      '"uri":"$request_uri",'
      '"status":$status,'
      '"body_bytes_sent":$body_bytes_sent,'
      '"request_time":$request_time,'
      '"upstream_response_time":"$upstream_response_time",'
      '"http_referer":"$http_referer",'
      '"http_user_agent":"$http_user_agent"'
    '}'
  '}';

  access_log /var/log/nginx/access.log json_combined;
}
```

## Docker-Based QA Workflow
### docker-compose.yml Configuration
```yaml
version: '3.8'
services:
  api:
    build: ./backend
    environment:
      - LOG_LEVEL=DEBUG
      - LOG_FORMAT=json
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
  web:
    build: ./frontend
    environment:
      - NODE_ENV=development
    depends_on:
      - api
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "80:80"
    depends_on:
      - api
      - web
```

### Real-time Log Monitoring
```bash
# Stream all service logs
docker compose logs -f

# Specific service only
docker compose logs -f api

# Filter errors only
docker compose logs -f | grep '"level":"ERROR"'

# Track a specific Request ID
docker compose logs -f | grep 'req_abc123'
```
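The same filters can be applied programmatically. This is a sketch (helper names assumed) of parsing a `docker compose logs` stream whose payloads follow the JSON log standard above; it assumes the usual `container-name | {json}` line shape that `docker compose logs` emits.

```python
# Sketch: filter a docker compose log stream for errors on one request ID.
import json

def parse_stream(lines):
    """Yield (container_prefix, record) for every parseable JSON log line."""
    for line in lines:
        # docker compose prefixes each line with "container | "; split that off.
        prefix, _, payload = line.partition("|")
        payload = payload.strip() if payload else line
        try:
            yield prefix.strip(), json.loads(payload)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners, etc.)

def errors_for_request(lines, request_id):
    """All ERROR records belonging to one request ID."""
    return [rec for _, rec in parse_stream(lines)
            if rec.get("level") == "ERROR" and rec.get("request_id") == request_id]
```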
---

## QA Automation Workflow
### 1. Start Environment
```bash
# Start development environment
docker compose up -d

# Start log monitoring (Claude Code watches this stream)
docker compose logs -f
```

### 2. Manual UX Testing
The user tests actual features in the browser:

1. Sign-up attempt
2. Login attempt
3. Use of core features
4. Edge-case tests

### 3. Claude Code Log Analysis
Claude Code, in real time:

1. Monitors the log stream
2. Detects error patterns
3. Detects abnormal response times
4. Tracks the entire flow via Request ID
5. Auto-documents issues

### 4. Issue Documentation
````markdown
# QA Issue Report

## Issues Found

### ISSUE-001: Insufficient error handling on login failure
- Request ID: req_abc123
- Severity: Medium
- Reproduction path: Login → Wrong password
- Log:
  ```json
  {"level":"ERROR","message":"Login failed","data":{"error":"Invalid credentials"}}
  ```
- Problem: Error message not user-friendly
- Recommended fix: Add error-code-to-message mapping
````

## Issue Detection Patterns
### 1. Error Detection
```json
{"level":"ERROR","message":"..."}
```
→ Report immediately
### 2. Slow Response Detection
```json
{"data":{"duration_ms":3000}}
```
→ Warning when exceeding 1000 ms
### 3. Consecutive Failure Detection
3+ consecutive failures on the same endpoint
→ Report a potential system issue
### 4. Abnormal Status Codes
```json
{"data":{"status":500}}
```
→ Report 5xx errors immediately
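The four patterns above can be sketched as a single classifier. The thresholds (1000 ms, 3 consecutive failures) come from this document; the function, tag, and field names are otherwise illustrative.

```python
# Sketch: tag one log record with the detection patterns it triggers.
SLOW_MS = 1000
FAIL_STREAK = 3

def classify(record: dict, streaks: dict) -> list:
    """Return issue tags for one record; `streaks` maps endpoint path to the
    current consecutive-failure count and is mutated as records arrive."""
    issues = []
    data = record.get("data", {})
    status = data.get("status")
    path = data.get("path", "?")
    if record.get("level") == "ERROR":
        issues.append("error")
    if isinstance(status, int) and status >= 500:
        issues.append("5xx")
    if data.get("duration_ms", 0) > SLOW_MS:
        issues.append("slow")
    # Consecutive-failure tracking per endpoint
    if isinstance(status, int) and status >= 400:
        streaks[path] = streaks.get(path, 0) + 1
        if streaks[path] >= FAIL_STREAK:
            issues.append("consecutive-failures")
    else:
        streaks[path] = 0
    return issues
```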
## Phase Integration
| Phase | Zero Script QA Integration |
|---|---|
| Phase 4 (API) | API response logging verification |
| Phase 6 (UI) | Frontend logging verification |
| Phase 7 (Security) | Security event logging verification |
| Phase 8 (Review) | Log quality review |
| Phase 9 (Deployment) | Production log level configuration |
## Iterative Test Cycle Pattern
Based on bkamp.ai notification feature development:
### Example: 8-Cycle Test Process
| Cycle | Pass Rate | Bug Found | Fix Applied |
|---|---|---|---|
| 1st | 30% | DB schema mismatch | Schema migration |
| 2nd | 45% | NULL handling missing | Add null checks |
| 3rd | 55% | Routing error | Fix deeplinks |
| 4th | 65% | Type mismatch | Fix enum types |
| 5th | 70% | Calculation error | Fix count logic |
| 6th | 75% | Event missing | Add event triggers |
| 7th | 82% | Cache sync issue | Fix cache invalidation |
| 8th | 89% | Stable | Final polish |
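The stop condition behind a table like this one can be sketched as follows; the >85% bar comes from this workflow, while the helper names are assumptions.

```python
# Sketch: compute the cycle pass rate and decide whether to keep iterating.
def pass_rate(passed: int, total: int) -> float:
    """Pass rate as a percentage; 0 when no tests ran."""
    return 0.0 if total == 0 else passed / total * 100

def should_continue(passed: int, total: int, target: float = 85.0) -> bool:
    """Keep cycling while the pass rate has not exceeded the target."""
    return pass_rate(passed, total) <= target
```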
### Cycle Workflow
```
┌─────────────────────────────────────────────────────────────┐
│                    Iterative Test Cycle                     │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  Cycle N:                                                   │
│  1. Run test script (E2E or manual)                         │
│  2. Claude monitors logs in real-time                       │
│  3. Record pass/fail results                                │
│  4. Claude identifies root cause of failures                │
│  5. Fix code immediately (hot reload)                       │
│  6. Document: Cycle N → Bug → Fix                           │
│                                                             │
│  Repeat until acceptable pass rate (>85%)                   │
│                                                             │
└─────────────────────────────────────────────────────────────┘
```

### E2E Test Script Template
```bash
#!/bin/bash
# E2E Test Script Template

API_URL="http://localhost:8000"
TOKEN="your-test-token"

PASS_COUNT=0
FAIL_COUNT=0
SKIP_COUNT=0

GREEN='\033[0;32m'
RED='\033[0;31m'
YELLOW='\033[0;33m'
NC='\033[0m'

test_feature_action() {
  echo -n "Testing: Feature action... "
  response=$(curl -s -X POST "$API_URL/api/v1/feature/action" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"param": "value"}')
  if [[ "$response" == *"expected_result"* ]]; then
    echo -e "${GREEN}✅ PASS${NC}"
    ((PASS_COUNT++))
  else
    echo -e "${RED}❌ FAIL${NC}"
    echo "Response: $response"
    ((FAIL_COUNT++))
  fi
}

# Run all tests
test_feature_action
# ... more tests

# Summary
echo ""
echo "═══════════════════════════════════════"
echo "Test Results:"
echo -e "  ${GREEN}✅ PASS: $PASS_COUNT${NC}"
echo -e "  ${RED}❌ FAIL: $FAIL_COUNT${NC}"
echo -e "  ${YELLOW}⏭️ SKIP: $SKIP_COUNT${NC}"
echo "═══════════════════════════════════════"
```

### Test Cycle Documentation Template
```markdown
# Feature Test Results - Cycle N

## Summary
- Date: YYYY-MM-DD
- Feature: {feature name}
- Pass Rate: N%
- Tests: X passed / Y total

## Results
| Test Case | Status | Notes |
|---|---|---|
| Test 1 | ✅ | |
| Test 2 | ❌ | {error description} |
| Test 3 | ⏭️ | {skip reason} |

## Bugs Found

### BUG-001: {Title}
- Root Cause: {description}
- Fix: {what was changed}
- Files: path/to/file.py:123

## Next Cycle Plan
- {what to test next}
```

---

## Checklist
### Logging Infrastructure
- JSON log format applied
- Request ID generation and propagation
- Log level settings per environment
- Docker logging configuration
### Backend Logging
- Logging middleware implemented
- All API calls logged (including 200 OK)
- Business logic logging
- Detailed error logging
### Frontend Logging
- Logger module implemented
- API client integration
- Error boundary logging
### QA Workflow
- Docker Compose configured
- Real-time monitoring ready
- Issue documentation template ready
## Auto-Apply Rules
### When Building Logging Infrastructure
When implementing API/Backend:
- Suggest logging middleware creation
- Suggest JSON-format logger setup
- Add Request ID generation/propagation logic

When implementing Frontend:
- Suggest Logger module creation
- Suggest logging integration with the API client
- Suggest including the Request ID header
### When Performing QA
On test request:
- Guide the user to run `docker compose logs -f`
- Request manual UX testing from the user
- Monitor logs in real time
- Document issues immediately when detected
- Provide fix suggestions
### Issue Detection Thresholds
| Severity | Condition | Action |
|---|---|---|
| Critical | `"level":"ERROR"` log entries | Immediate report |
| Critical | 5xx status codes | Immediate report |
| Critical | 3+ consecutive failures | Immediate report |
| Warning | Response time over 1000 ms | Warning report |
| Info | Missing log fields | Note for improvement |
| Info | Request ID not propagated | Note for improvement |
## Required Logging Locations
### Backend (FastAPI/Express)
✅ Request start (method, path, params)
✅ Request complete (status, duration_ms)
✅ Major business logic steps
✅ Detailed info on errors
✅ Before/after external API calls
✅ DB queries (in development)

### Frontend (Next.js/React)
✅ API call start
✅ API response received (status, duration)
✅ Detailed info on errors
✅ Important user actions