# Go Concurrency for Web Applications

## Quick Reference
| Topic | Reference |
|---|---|
| Worker Pools & errgroup | references/worker-pools.md |
| Rate Limiting | references/rate-limiting.md |
| Race Detection & Fixes | references/race-detection.md |
## Core Rules
- Goroutines are cheap but not free — each goroutine consumes ~2-8 KB of stack. Unbounded spawning under load leads to OOM.
- Always have a shutdown path — every goroutine you start must have a way to exit. Use `context.Context`, channel closing, or `sync.WaitGroup`.
- Prefer channels for communication — use channels to coordinate work between goroutines and signal completion.
- Use mutexes for state protection — when goroutines share mutable state, protect it with `sync.Mutex`, `sync.RWMutex`, or `sync/atomic`.
- Never spawn raw goroutines in HTTP handlers — use worker pools, `errgroup`, or other bounded concurrency primitives.
## Worker Pool Pattern
Use worker pools for background tasks dispatched from HTTP handlers. This bounds concurrency and provides graceful shutdown.
```go
// Worker pool for background tasks (e.g., sending emails)
type WorkerPool struct {
	jobs   chan Job
	wg     sync.WaitGroup
	logger *slog.Logger
}

type Job struct {
	ID      string
	Execute func(ctx context.Context) error
}

func NewWorkerPool(numWorkers int, queueSize int, logger *slog.Logger) *WorkerPool {
	wp := &WorkerPool{
		jobs:   make(chan Job, queueSize),
		logger: logger,
	}
	for i := 0; i < numWorkers; i++ {
		wp.wg.Add(1)
		go wp.worker(i)
	}
	return wp
}

func (wp *WorkerPool) worker(id int) {
	defer wp.wg.Done()
	for job := range wp.jobs {
		wp.logger.Info("processing job", "worker", id, "job_id", job.ID)
		if err := job.Execute(context.Background()); err != nil {
			wp.logger.Error("job failed", "worker", id, "job_id", job.ID, "err", err)
		}
	}
}

// Submit blocks when the queue is full, which provides backpressure.
func (wp *WorkerPool) Submit(job Job) {
	wp.jobs <- job
}

// Shutdown stops accepting jobs and waits for in-flight work to finish.
func (wp *WorkerPool) Shutdown() {
	close(wp.jobs)
	wp.wg.Wait()
}
```
### Usage in HTTP Handler
```go
func (s *Server) handleCreateUser(w http.ResponseWriter, r *http.Request) {
	user, err := s.userService.Create(r.Context(), decodeUser(r))
	if err != nil {
		handleError(w, r, err)
		return
	}

	// Dispatch background task — never spawn raw goroutines in handlers
	s.workers.Submit(Job{
		ID: "welcome-email-" + user.ID,
		Execute: func(ctx context.Context) error {
			return s.emailService.SendWelcome(ctx, user)
		},
	})

	writeJSON(w, http.StatusCreated, user)
}
```

See references/worker-pools.md for sizing guidance, backpressure, error handling, retry patterns, and `errgroup` as a simpler alternative.
## Rate Limiting
Use `golang.org/x/time/rate` for token bucket rate limiting. Apply as middleware for global limits or per-IP/per-user limits.

Key points:

- Global rate limiting protects overall service capacity
- Per-IP rate limiting prevents individual clients from monopolizing resources
- Always return `429 Too Many Requests` with a `Retry-After` header

See references/rate-limiting.md for middleware implementation, per-IP limiting, stale limiter cleanup, and API key-based limiting.
## Race Detection
Run the race detector in development and CI:

```bash
go test -race ./...
go build -race -o myserver ./cmd/server
```

The race detector catches concurrent reads and writes to shared memory. It does not catch logical races (e.g., TOCTOU bugs) or deadlocks.

See references/race-detection.md for common web handler races, fixing strategies, and CI integration.
## Handler Safety
Every incoming HTTP request runs in its own goroutine. Any shared mutable state on the server struct is a potential data race.
```go
// BAD — shared state without protection
type Server struct {
	requestCount int // data race!
}

func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
	s.requestCount++ // concurrent writes = race condition
}

// GOOD — use atomic or mutex
type Server struct {
	requestCount atomic.Int64
}

func (s *Server) handleRequest(w http.ResponseWriter, r *http.Request) {
	s.requestCount.Add(1)
}

// GOOD — use mutex for complex state
type Server struct {
	mu    sync.RWMutex
	cache map[string]*CachedItem
}

func (s *Server) handleGetCached(w http.ResponseWriter, r *http.Request) {
	s.mu.RLock()
	item, ok := s.cache[r.PathValue("key")]
	s.mu.RUnlock()
	// ...
}
```
### Rules for Handler Safety
- Request-scoped data is safe — `r.Context()`, request body, and URL params are isolated per request.
- Server struct fields are shared — any field on `*Server` accessed by handlers needs synchronization.
- Database connections are safe — `*sql.DB` manages its own connection pool with internal locking.
- Maps are not safe — use `sync.Map` or protect with a mutex.
- Slices are not safe — concurrent append or read/write requires a mutex.
## Anti-Patterns
### Unbounded goroutine spawning
```go
// BAD — no limit on concurrent goroutines
func (s *Server) handleWebhook(w http.ResponseWriter, r *http.Request) {
	go func() {
		// What if 10,000 requests arrive at once?
		// Also: r.Context() is canceled as soon as the handler returns.
		s.processWebhook(r.Context(), decodeWebhook(r))
	}()
	w.WriteHeader(http.StatusAccepted)
}

// GOOD — use a worker pool
func (s *Server) handleWebhook(w http.ResponseWriter, r *http.Request) {
	webhook := decodeWebhook(r)
	s.workers.Submit(Job{
		ID: "webhook-" + webhook.ID,
		Execute: func(ctx context.Context) error {
			return s.processWebhook(ctx, webhook)
		},
	})
	w.WriteHeader(http.StatusAccepted)
}
```
### Forgetting to propagate context
```go
// BAD — loses cancellation signal
func (s *Server) handleSearch(w http.ResponseWriter, r *http.Request) {
	results, err := s.search(context.Background(), r.URL.Query().Get("q"))
	// ...
}

// GOOD — use request context
func (s *Server) handleSearch(w http.ResponseWriter, r *http.Request) {
	results, err := s.search(r.Context(), r.URL.Query().Get("q"))
	// ...
}
```
### Goroutine leak from missing channel receiver
```go
// BAD — goroutine blocks forever if nobody reads the channel
func fetchWithTimeout(ctx context.Context, url string) (*http.Response, error) {
	ch := make(chan *http.Response)
	go func() {
		resp, _ := http.Get(url)
		ch <- resp // if ctx already expired, nobody reads — stuck here forever
	}()
	select {
	case resp := <-ch:
		return resp, nil
	case <-ctx.Done():
		return nil, ctx.Err() // goroutine leaked!
	}
}

// GOOD — use buffered channel so goroutine can exit
func fetchWithTimeout(ctx context.Context, url string) (*http.Response, error) {
	ch := make(chan *http.Response, 1) // buffered — goroutine can always send
	go func() {
		resp, _ := http.Get(url)
		ch <- resp
	}()
	select {
	case resp := <-ch:
		return resp, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}
```
### Using `time.Sleep` for coordination
```go
// BAD — sleeping to wait for goroutines
go doWork()
time.Sleep(5 * time.Second) // hoping it finishes

// GOOD — use sync primitives
var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	doWork()
}()
wg.Wait()
```