# SQL Database Assistant - POWERFUL Tier Skill

## Overview
The operational companion to database design. While database-designer focuses on schema architecture and database-schema-designer handles ERD modeling, this skill covers the day-to-day: writing queries, optimizing performance, generating migrations, and bridging the gap between application code and database engines.
## Core Capabilities
- **Natural Language to SQL** — translate requirements into correct, performant queries
- **Schema Exploration** — introspect live databases across PostgreSQL, MySQL, SQLite, SQL Server
- **Query Optimization** — EXPLAIN analysis, index recommendations, N+1 detection, rewrite patterns
- **Migration Generation** — up/down scripts, zero-downtime strategies, rollback plans
- **ORM Integration** — Prisma, Drizzle, TypeORM, SQLAlchemy patterns and escape hatches
- **Multi-Database Support** — dialect-aware SQL with compatibility guidance
## Tools
| Script | Purpose |
|---|---|
| `scripts/query_optimizer.py` | Static analysis of SQL queries for performance issues |
| `scripts/migration_generator.py` | Generate migration file templates from change descriptions |
| `scripts/schema_explorer.py` | Generate schema documentation from introspection queries |
## Natural Language to SQL

### Translation Patterns

When converting requirements to SQL, follow this sequence:
1. **Identify entities** — map nouns to tables
2. **Identify relationships** — map verbs to JOINs or subqueries
3. **Identify filters** — map adjectives/conditions to WHERE clauses
4. **Identify aggregations** — map "total", "average", "count" to GROUP BY
5. **Identify ordering** — map "top", "latest", "highest" to ORDER BY + LIMIT
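As a minimal sketch of the five-step mapping (all table and column names here are invented for illustration), the requirement "total order amount per active customer, highest first" translates as:

```python
import sqlite3

# Entities: customers, orders; relationship: JOIN on customer_id;
# filter: status = 'active'; aggregation: SUM + GROUP BY; ordering: ORDER BY ... DESC
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, status TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Ada', 'active'), (2, 'Bob', 'inactive');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 15.0), (3, 2, 99.0);
""")
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    WHERE c.status = 'active'
    GROUP BY c.id, c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Ada', 25.0)]
```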
### Common Query Templates

**Top-N per group (window function)**

```sql
SELECT * FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rn
  FROM employees
) ranked WHERE rn <= 3;
```

**Running totals**

```sql
SELECT date, amount,
  SUM(amount) OVER (ORDER BY date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
FROM transactions;
```

**Gap detection**

```sql
SELECT curr.id, curr.seq_num, prev.seq_num AS prev_seq
FROM records curr
LEFT JOIN records prev ON prev.seq_num = curr.seq_num - 1
WHERE prev.id IS NULL AND curr.seq_num > 1;
```

**UPSERT (PostgreSQL)**

```sql
INSERT INTO settings (key, value, updated_at)
VALUES ('theme', 'dark', NOW())
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value, updated_at = EXCLUDED.updated_at;
```

**UPSERT (MySQL)**

```sql
INSERT INTO settings (key_name, value, updated_at)
VALUES ('theme', 'dark', NOW())
ON DUPLICATE KEY UPDATE value = VALUES(value), updated_at = VALUES(updated_at);
```

See references/query_patterns.md for JOINs, CTEs, window functions, JSON operations, and more.
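The top-N-per-group template can be exercised directly against an in-memory SQLite database (3.25+ for window functions); the data below is made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, department_id INTEGER, salary INTEGER);
    INSERT INTO employees (department_id, salary) VALUES
      (1, 100), (1, 90), (1, 80), (1, 70), (2, 60), (2, 50);
""")
# Rank within each department, keep the top 3 per department
top = conn.execute("""
    SELECT department_id, salary FROM (
      SELECT *, ROW_NUMBER() OVER (PARTITION BY department_id ORDER BY salary DESC) AS rn
      FROM employees
    ) ranked WHERE rn <= 3
    ORDER BY department_id, salary DESC
""").fetchall()
print(top)  # [(1, 100), (1, 90), (1, 80), (2, 60), (2, 50)]
```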
## Schema Exploration

### Introspection Queries

**PostgreSQL — list tables and columns**

```sql
SELECT table_name, column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position;
```

**PostgreSQL — foreign keys**

```sql
SELECT tc.table_name, kcu.column_name,
  ccu.table_name AS foreign_table, ccu.column_name AS foreign_column
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu ON tc.constraint_name = kcu.constraint_name
JOIN information_schema.constraint_column_usage ccu ON tc.constraint_name = ccu.constraint_name
WHERE tc.constraint_type = 'FOREIGN KEY';
```

**MySQL — table sizes**

```sql
SELECT table_name, table_rows,
  ROUND(data_length / 1024 / 1024, 2) AS data_mb,
  ROUND(index_length / 1024 / 1024, 2) AS index_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
ORDER BY data_length DESC;
```

**SQLite — schema dump**

```sql
SELECT name, sql FROM sqlite_master WHERE type = 'table' ORDER BY name;
```

**SQL Server — columns with types**

```sql
SELECT t.name AS table_name, c.name AS column_name,
  ty.name AS data_type, c.max_length, c.is_nullable
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
JOIN sys.types ty ON c.user_type_id = ty.user_type_id
ORDER BY t.name, c.column_id;
```

### Generating Documentation from Schema
Use `scripts/schema_explorer.py` to produce Markdown or JSON documentation:

```bash
python scripts/schema_explorer.py --dialect postgres --tables all --format md
python scripts/schema_explorer.py --dialect mysql --tables users,orders --format json --json
```

## Query Optimization

### EXPLAIN Analysis Workflow
1. Run EXPLAIN ANALYZE (PostgreSQL) or EXPLAIN FORMAT=JSON (MySQL)
2. **Identify the costliest node** — Seq Scan on large tables, Nested Loop with high row estimates
3. **Check for missing indexes** — sequential scans on filtered columns
4. **Look for estimation errors** — planned vs actual rows divergence signals stale statistics
5. **Evaluate JOIN order** — ensure the smallest result set drives the join
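SQLite's `EXPLAIN QUERY PLAN` is a lightweight stand-in for step 1 of the workflow above; the schema is invented, and the exact plan text varies by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

# Before indexing: the plan reports a full table scan on the filtered column
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchall()

conn.execute("CREATE INDEX idx_orders_status ON orders (status)")

# After indexing: the plan switches to an index search
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchall()

print(before[-1][-1])  # e.g. "SCAN orders"
print(after[-1][-1])   # e.g. "SEARCH orders USING INDEX idx_orders_status (status=?)"
```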
### Index Recommendation Checklist

- Columns in WHERE clauses with high selectivity
- Columns in JOIN conditions (foreign keys)
- Columns in ORDER BY when combined with LIMIT
- Composite indexes matching multi-column WHERE predicates (most selective column first)
- Partial indexes for queries with constant filters (e.g., `WHERE status = 'active'`)
- Covering indexes to avoid table lookups for read-heavy queries
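A short sketch of the partial-index item, using SQLite (which also supports partial indexes); the table and index names are hypothetical, and whether the planner picks the partial index varies by version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("active",), ("active",), ("done",)])

# Partial index: only rows matching the constant filter are stored in the index
conn.execute("CREATE INDEX idx_orders_active ON orders (id) WHERE status = 'active'")

active_ids = [r[0] for r in conn.execute(
    "SELECT id FROM orders WHERE status = 'active' ORDER BY id")]
print(active_ids)  # [1, 2]

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM orders WHERE status = 'active'").fetchall()
print(plan[-1][-1])
```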
### Query Rewriting Patterns

| Anti-Pattern | Rewrite |
|---|---|
| Correlated subquery in SELECT | LEFT JOIN with aggregation |
| `NOT IN` against a subquery that can return NULL | `NOT EXISTS` |
| `UNION` when deduplication is not needed | `UNION ALL` |
| Leading-wildcard `LIKE '%term%'` search | Full-text search index (GIN/FULLTEXT) |
| `ORDER BY RANDOM()` on large tables | Application-side random sampling |
### N+1 Detection

Symptoms:

- Application loop that executes one query per parent row
- ORM lazy-loading related entities inside a loop
- Query log shows hundreds of identical SELECT patterns with different IDs

Fixes:

- Use eager loading (`include` in Prisma, `joinedload` in SQLAlchemy)
- Batch queries with `WHERE id IN (...)`
- Use DataLoader pattern for GraphQL resolvers
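The `WHERE id IN (...)` batching fix can be sketched as follows (schema invented; one query replaces N per-user queries):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1), (2), (3);
    INSERT INTO posts (user_id, title) VALUES (1, 'a'), (1, 'b'), (3, 'c');
""")

user_ids = [r[0] for r in conn.execute("SELECT id FROM users")]

# N+1 version would loop and run one SELECT per user.
# Batched version: a single round trip with WHERE user_id IN (...)
placeholders = ",".join("?" * len(user_ids))
posts = conn.execute(
    f"SELECT user_id, title FROM posts WHERE user_id IN ({placeholders}) ORDER BY id",
    user_ids,
).fetchall()

# Group in application code
by_user = {}
for uid, title in posts:
    by_user.setdefault(uid, []).append(title)
print(by_user)  # {1: ['a', 'b'], 3: ['c']}
```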
### Static Analysis Tool

```bash
python scripts/query_optimizer.py --query "SELECT * FROM orders WHERE status = 'pending'" --dialect postgres
python scripts/query_optimizer.py --query queries.sql --dialect mysql --json
```

See references/optimization_guide.md for EXPLAIN plan reading, index types, and connection pooling.
## Migration Generation

### Zero-Downtime Migration Patterns
**Adding a column (safe)**

```sql
-- Up
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
-- Down
ALTER TABLE users DROP COLUMN phone;
```

**Renaming a column (expand-contract)**

```sql
-- Step 1: Add new column
ALTER TABLE users ADD COLUMN full_name VARCHAR(255);
-- Step 2: Backfill
UPDATE users SET full_name = name;
-- Step 3: Deploy app reading both columns
-- Step 4: Deploy app writing only new column
-- Step 5: Drop old column
ALTER TABLE users DROP COLUMN name;
```

**Adding a NOT NULL column (safe sequence)**

```sql
-- Step 1: Add nullable
ALTER TABLE orders ADD COLUMN region VARCHAR(50);
-- Step 2: Backfill with default
UPDATE orders SET region = 'unknown' WHERE region IS NULL;
-- Step 3: Add constraint
ALTER TABLE orders ALTER COLUMN region SET NOT NULL;
ALTER TABLE orders ALTER COLUMN region SET DEFAULT 'unknown';
```

**Index creation (non-blocking, PostgreSQL)**

```sql
CREATE INDEX CONCURRENTLY idx_orders_status ON orders (status);
```

### Data Backfill Strategies
- **Batch updates** — process in chunks of 1000-10000 rows to avoid lock contention
- **Background jobs** — run backfills asynchronously with progress tracking
- **Dual-write** — write to old and new columns during transition period
- **Validation queries** — verify row counts and data integrity after each batch
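A minimal sketch of batched backfilling, shown with SQLite and a deliberately tiny batch size; real jobs would use the 1000-10000 row chunks mentioned above:

```python
import sqlite3

BATCH = 2  # tiny for the demo; production jobs use 1000-10000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, full_name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("a",), ("b",), ("c",), ("d",), ("e",)])
conn.commit()

# Backfill in chunks, committing between batches to keep lock windows short
while True:
    cur = conn.execute(
        "UPDATE users SET full_name = name "
        "WHERE id IN (SELECT id FROM users WHERE full_name IS NULL LIMIT ?)",
        (BATCH,),
    )
    conn.commit()
    if cur.rowcount == 0:
        break

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE full_name IS NULL").fetchone()[0]
print(remaining)  # 0
```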
### Rollback Strategies

Every migration must have a reversible down script. For irreversible changes:

- **Backup before execution** — `pg_dump` the affected tables
- **Feature flags** — application can switch between old/new schema reads
- **Shadow tables** — keep a copy of the original table during migration window
### Migration Generator Tool

```bash
python scripts/migration_generator.py --change "add email_verified boolean to users" --dialect postgres --format sql
python scripts/migration_generator.py --change "rename column name to full_name in customers" --dialect mysql --format alembic --json
```

## Multi-Database Support
### Dialect Differences

| Feature | PostgreSQL | MySQL | SQLite | SQL Server |
|---|---|---|---|---|
| UPSERT | `ON CONFLICT` | `ON DUPLICATE KEY` | `ON CONFLICT` (3.24+) | `MERGE` |
| Boolean | Native | `TINYINT(1)` | `INTEGER` (0/1) | `BIT` |
| Auto-increment | `SERIAL` / `IDENTITY` | `AUTO_INCREMENT` | `AUTOINCREMENT` | `IDENTITY` |
| JSON | `JSON` / `JSONB` | `JSON` | Text (ext) | `NVARCHAR` + JSON functions |
| Array | Native | Not supported | Not supported | Not supported |
| CTE (recursive) | Full support | 8.0+ | 3.8.3+ | Full support |
| Window functions | Full support | 8.0+ | 3.25.0+ | Full support |
| Full-text search | `tsvector` + GIN | `FULLTEXT` index | FTS5 extension | Full-text catalog |
| LIMIT/OFFSET | `LIMIT ... OFFSET` | `LIMIT ... OFFSET` | `LIMIT ... OFFSET` | `OFFSET ... FETCH` |
### Compatibility Tips

- **Always use parameterized queries** — prevents SQL injection across all dialects
- **Avoid dialect-specific functions in shared code** — wrap in adapter layer
- **Test migrations on target engine** — `information_schema` varies between engines
- **Use ISO date format** — `'YYYY-MM-DD'` works everywhere
- **Quote identifiers** — use double quotes (SQL standard) or backticks (MySQL)
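The parameterized-query tip in action (SQLite `?` placeholder syntax shown; other drivers use `%s` or `$1`, but the principle is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('Ada')")

malicious = "' OR '1'='1"

# Unsafe string concatenation would splice the payload into the SQL text.
# With a placeholder, the driver sends the value separately; it can never
# change the shape of the query.
rows = conn.execute("SELECT id FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # [] — the payload matched nothing
```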
## ORM Patterns

### Prisma
**Schema definition**

```prisma
model User {
  id        Int      @id @default(autoincrement())
  email     String   @unique
  name      String?
  posts     Post[]
  createdAt DateTime @default(now())
}

model Post {
  id       Int    @id @default(autoincrement())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
```

Migrations: `npx prisma migrate dev --name add_user_email`

Query API: `prisma.user.findMany({ where: { email: { contains: '@' } }, include: { posts: true } })`

Raw SQL escape hatch: `` prisma.$queryRaw`SELECT * FROM users WHERE id = ${userId}` ``

### Drizzle
**Schema-first definition**

```typescript
export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  email: varchar('email', { length: 255 }).notNull().unique(),
  name: text('name'),
  createdAt: timestamp('created_at').defaultNow(),
});
```

Query builder: `db.select().from(users).where(eq(users.email, email))`

Migrations: `npx drizzle-kit generate:pg` then `npx drizzle-kit push:pg`

### TypeORM
**Entity decorators**

```typescript
@Entity()
export class User {
  @PrimaryGeneratedColumn()
  id: number;

  @Column({ unique: true })
  email: string;

  @OneToMany(() => Post, post => post.author)
  posts: Post[];
}
```

Repository pattern: `userRepo.find({ where: { email }, relations: ['posts'] })`

Migrations: `npx typeorm migration:generate -n AddUserEmail`

### SQLAlchemy
**Declarative models**

```python
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)
    name = Column(String(255))
    posts = relationship('Post', back_populates='author')
```

Session management: always use the context manager `with Session() as session:`

Alembic migrations: `alembic revision --autogenerate -m "add user email"`

See references/orm_patterns.md for side-by-side comparisons and migration workflows per ORM.
## Data Integrity

### Constraint Strategy
- **Primary keys** — every table must have one; prefer surrogate keys (serial/UUID)
- **Foreign keys** — enforce referential integrity; define ON DELETE behavior explicitly
- **UNIQUE constraints** — for business-level uniqueness (email, slug, API key)
- **CHECK constraints** — validate ranges, enums, and business rules at the DB level
- **NOT NULL** — default to NOT NULL; make nullable only when genuinely optional
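A small sketch of CHECK and NOT NULL constraints doing validation at the DB level (schema invented, SQLite syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE invoices (
      id     INTEGER PRIMARY KEY,
      amount NUMERIC NOT NULL CHECK (amount >= 0),
      status TEXT NOT NULL CHECK (status IN ('draft', 'sent', 'paid'))
    )
""")
conn.execute("INSERT INTO invoices (amount, status) VALUES (10, 'draft')")  # passes

rejected = False
try:
    conn.execute("INSERT INTO invoices (amount, status) VALUES (-5, 'draft')")
except sqlite3.IntegrityError:
    rejected = True  # CHECK constraint failed: amount >= 0
print(rejected)  # True
```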
### Transaction Isolation Levels
| Level | Dirty Read | Non-Repeatable Read | Phantom Read | Use Case |
|---|---|---|---|---|
| READ UNCOMMITTED | Yes | Yes | Yes | Never recommended |
| READ COMMITTED | No | Yes | Yes | Default for PostgreSQL, general OLTP |
| REPEATABLE READ | No | No | Yes (InnoDB: No) | Financial calculations |
| SERIALIZABLE | No | No | No | Critical consistency (billing, inventory) |
### Deadlock Prevention

- **Consistent lock ordering** — always acquire locks in the same table/row order
- **Short transactions** — minimize time between first lock and commit
- **Advisory locks** — use `pg_advisory_lock()` for application-level coordination
- **Retry logic** — catch deadlock errors and retry with exponential backoff
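The retry-with-backoff item can be sketched driver-agnostically; here `RuntimeError` stands in for a real deadlock exception such as psycopg's `errors.DeadlockDetected`, and the flaky transaction is simulated:

```python
import random
import time

def with_deadlock_retry(fn, retries=5, base_delay=0.01):
    """Run fn(); on a (simulated) deadlock error, retry with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # real code would catch the driver's deadlock error
            if attempt == retries - 1:
                raise
            # Exponential backoff with a little jitter to avoid lockstep retries
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random() * 0.1))

calls = {"n": 0}

def flaky_txn():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("deadlock detected")  # simulated deadlock
    return "committed"

result = with_deadlock_retry(flaky_txn)
print(result, calls["n"])  # committed 3
```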
## Backup & Restore

### PostgreSQL

```bash
# Full backup
pg_dump -Fc --no-owner dbname > backup.dump

# Restore
pg_restore -d dbname --clean --no-owner backup.dump
```

Point-in-time recovery: configure WAL archiving + restore_command

### MySQL

```bash
# Full backup
mysqldump --single-transaction --routines --triggers dbname > backup.sql

# Restore
mysql dbname < backup.sql
```

Binary log for PITR: `mysqlbinlog --start-datetime="2025-01-01 00:00:00" binlog.000001`

### SQLite

```bash
# Backup (safe with concurrent reads)
sqlite3 dbname ".backup backup.db"
```

### Backup Best Practices
- **Automate** — cron or systemd timer, never manual-only
- **Test restores** — untested backups are not backups
- **Offsite copies** — S3, GCS, or separate region
- **Retention policy** — daily for 7 days, weekly for 4 weeks, monthly for 12 months
- **Monitor backup size and duration** — sudden changes signal issues
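The SQLite CLI `.backup` command has a programmatic counterpart in Python's `sqlite3.Connection.backup`, which likewise copies safely alongside concurrent readers:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x INTEGER)")
src.execute("INSERT INTO t VALUES (1), (2)")
src.commit()

dst = sqlite3.connect(":memory:")  # a file path in real use, e.g. "backup.db"
src.backup(dst)  # online backup: copies pages without blocking readers

count = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 2
```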
## Anti-Patterns

| Anti-Pattern | Problem | Fix |
|---|---|---|
| `SELECT *` | Transfers unnecessary data, breaks on schema changes | Explicit column list |
| Missing indexes on FK columns | Slow JOINs and cascading deletes | Add indexes on all foreign keys |
| N+1 queries | 1 + N round trips to database | Eager loading or batch queries |
| Implicit type coercion | Indexes skipped when predicate types mismatch | Match types in predicates |
| No connection pooling | Exhausts connections under load | PgBouncer, ProxySQL, or ORM pool |
| Unbounded queries | No LIMIT risks returning millions of rows | Always paginate |
| Storing money as FLOAT | Rounding errors | Use `DECIMAL`/`NUMERIC` |
| God tables | One table with 50+ columns | Normalize or use vertical partitioning |
| Soft deletes everywhere | Complicates every query with `deleted_at IS NULL` | Archive tables or event sourcing |
| Raw string concatenation | SQL injection | Parameterized queries always |
## Cross-References

| Skill | Relationship |
|---|---|
| database-designer | Schema architecture, normalization analysis, ERD generation |
| database-schema-designer | Visual ERD modeling, relationship mapping |
| migration-architect | Complex multi-step migration orchestration |
| api-design-reviewer | Ensuring API endpoints align with query patterns |
| observability-platform | Query performance monitoring, slow query alerts |