loom-sql-optimization
SQL Optimization
Overview
This skill focuses on analyzing and optimizing SQL queries for improved performance. It covers query analysis, index optimization, execution plan interpretation, query rewriting strategies, PostgreSQL-specific optimizations, and common anti-patterns. Use this skill for slow queries, N+1 problems, join optimization, index design, and database performance tuning.
Instructions
1. Analyze Query Performance
- Identify slow queries from logs
- Run EXPLAIN/EXPLAIN ANALYZE
- Measure query execution time
- Check resource utilization
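The steps above can be rehearsed locally. The sketch below uses SQLite's EXPLAIN QUERY PLAN as a rough stand-in for PostgreSQL's EXPLAIN (output formats differ); the orders table and its contents are hypothetical:

```python
import sqlite3
import time

# Build a throwaway table with a skewed status distribution
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("pending" if i % 10 == 0 else "shipped", float(i)) for i in range(10_000)],
)

query = "SELECT id, total FROM orders WHERE status = 'pending'"

# Inspect the plan: with no index on status, SQLite reports a full table SCAN
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan[0][3])  # e.g. "SCAN orders"

# Measure execution time
start = time.perf_counter()
rows = conn.execute(query).fetchall()
print(f"{len(rows)} rows in {(time.perf_counter() - start) * 1000:.2f} ms")
```

The same loop (read the plan, then time the query) is how you would triage a slow query flagged in the logs before touching indexes.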
2. Understand Execution Plans
- Identify scan types (Sequential Scan, Index Scan, Bitmap Scan)
- Check join algorithms (Nested Loop, Hash Join, Merge Join)
- Analyze index usage and selectivity
- Find bottleneck operations (sorts, filters, aggregations)
- Understand cost estimates vs actual rows
- Check buffer usage and I/O patterns
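To watch a scan type change in a plan, the same SQLite stand-in works: the query below flips from a sequential scan to an index search (here a covering-index search) once an index exists. Table and index names are illustrative, not from the text above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO users (email, region) VALUES (?, ?)",
    [(f"u{i}@example.com", "US" if i % 2 else "EU") for i in range(5_000)],
)

def plan_for(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

q = "SELECT id FROM users WHERE email = 'u42@example.com'"

before = plan_for(q)  # full table scan: "SCAN users"
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan_for(q)   # index search: "SEARCH users USING ... idx_users_email"

print(before)
print(after)
```

PostgreSQL's EXPLAIN ANALYZE adds the pieces SQLite lacks: cost estimates vs. actual rows, join algorithms, and (with BUFFERS) I/O counts.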
3. Apply Optimizations
- Design appropriate indexes (B-tree, Hash, GiST, GIN)
- Rewrite inefficient queries (subqueries to JOINs, CTEs)
- Optimize join order and algorithms
- Use window functions for complex aggregations
- Leverage partial indexes and covering indexes
- Consider denormalization for read-heavy workloads
- Update table statistics (ANALYZE)
- Tune PostgreSQL configuration parameters
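Two of these techniques can be sketched together in SQLite (which shares PostgreSQL's partial-index WHERE syntax): a partial index over pending orders, and an IN-subquery rewritten as a JOIN, with a check that both forms return identical rows. All table names and data here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT);
""")
conn.executemany("INSERT INTO customers (id, region) VALUES (?, ?)",
                 [(i, "US" if i % 3 == 0 else "EU") for i in range(1, 101)])
conn.executemany("INSERT INTO orders (customer_id, status) VALUES (?, ?)",
                 [(1 + i % 100, "pending" if i % 5 == 0 else "done") for i in range(1_000)])

# Partial index: only pending rows are indexed (same WHERE-clause form as PostgreSQL)
conn.execute("CREATE INDEX idx_orders_pending ON orders(customer_id) WHERE status = 'pending'")

# IN-subquery form
subq = conn.execute("""
    SELECT id FROM orders
    WHERE status = 'pending'
      AND customer_id IN (SELECT id FROM customers WHERE region = 'US')
    ORDER BY id""").fetchall()

# Equivalent JOIN form -- must return exactly the same rows
join = conn.execute("""
    SELECT o.id FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'pending' AND c.region = 'US'
    ORDER BY o.id""").fetchall()

print(len(subq), subq == join)
```

The equality check matters: a rewrite that changes the result set is not an optimization.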
4. Validate Improvements
- Compare before/after metrics
- Test with production-like data
- Verify correctness
- Monitor after deployment
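Validation can be scripted: time the same query before and after adding an index, and confirm the answer did not change. Absolute timings depend on the machine, so treat the printed numbers as indicative only; the schema and index name below are hypothetical:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events (kind, payload) VALUES (?, ?)",
    [("click" if i % 50 == 0 else "view", "x" * 20) for i in range(50_000)],
)

query = "SELECT count(*) FROM events WHERE kind = 'click'"

def run(n=10):
    # Average several runs to smooth out noise; return (result, avg ms per run)
    start = time.perf_counter()
    for _ in range(n):
        result = conn.execute(query).fetchone()[0]
    return result, (time.perf_counter() - start) / n * 1000

before_count, before_ms = run()
conn.execute("CREATE INDEX idx_events_kind ON events(kind)")
after_count, after_ms = run()

print(before_count == after_count)  # correctness: same answer either way
print(f"before={before_ms:.3f} ms  after={after_ms:.3f} ms")
```

The same harness, pointed at a production-like dataset, gives you the before/after metrics to keep alongside the deployment.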
Best Practices
- Index Strategically: Index columns in WHERE, JOIN, ORDER BY
- Avoid SELECT *: Select only needed columns
- Use EXPLAIN ANALYZE: Always analyze execution plans with actual timing
- Limit Results: Use pagination for large datasets
- Avoid N+1: Use JOINs or batch queries
- Prefer EXISTS over IN: For subqueries with large result sets
- Update Statistics: Run ANALYZE after bulk operations
- Use CTEs for Readability: But watch for optimization fences (CTEs are fences before PostgreSQL 12, or when marked MATERIALIZED)
- Avoid Functions on Indexed Columns: Wrapping an indexed column in a function prevents index usage unless a matching expression index exists
- Monitor Continuously: Track query performance over time
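The function-on-indexed-column pitfall is easy to demonstrate. In the SQLite sketch below (modern SQLite supports expression indexes, as PostgreSQL does), wrapping the indexed column in lower() leaves the query on a full scan until a matching expression index is added; all names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"User{i}@Example.com",) for i in range(2_000)])
conn.execute("CREATE INDEX idx_users_email ON users(email)")

def plan(sql):
    # Join all EXPLAIN QUERY PLAN detail strings into one line
    return " | ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

q = "SELECT id FROM users WHERE lower(email) = 'user7@example.com'"

wrapped = plan(q)  # still a full scan: the plain index cannot be searched via lower(email)
conn.execute("CREATE INDEX idx_users_email_lower ON users(lower(email))")
fixed = plan(q)    # the expression index matches the wrapped form exactly

print(wrapped)
print(fixed)
```

The expression must match the index definition exactly for the planner to use it, in both engines.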
PostgreSQL-Specific Optimizations
Execution Plan Operators
Scan Types:
- Sequential Scan: Full table scan (slow for large tables)
- Index Scan: Uses the index plus heap lookups (good for selective predicates that match few rows)
- Index Only Scan: Uses covering index (fastest)
- Bitmap Index Scan: Multiple index scans combined (good for OR conditions)
Join Algorithms:
- Nested Loop: Best for small tables or index lookups
- Hash Join: Best for medium-sized tables with equality joins
- Merge Join: Best for large pre-sorted tables
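As a conceptual illustration (pure Python, not PostgreSQL's executor), the first two join strategies can be sketched side by side; both return the same rows, but the hash join visits each input once instead of pairing every row with every other:

```python
# Toy inputs: (order_id, customer) joined to (customer, region)
orders = [(1, "alice"), (2, "bob"), (3, "alice"), (4, "carol")]
customers = [("alice", "US"), ("bob", "EU")]

def nested_loop_join(left, right):
    # O(len(left) * len(right)): fine for small inputs or indexed inner lookups
    return [(oid, cust, region)
            for oid, cust in left
            for name, region in right
            if cust == name]

def hash_join(left, right):
    # O(len(left) + len(right)): build a hash table on the smaller input, probe with the other
    table = {name: region for name, region in right}
    return [(oid, cust, table[cust]) for oid, cust in left if cust in table]

print(sorted(nested_loop_join(orders, customers)))
print(sorted(hash_join(orders, customers)))
```

A merge join is the third option: with both inputs sorted on the join key, a single synchronized pass suffices, which is why it wins on large pre-sorted inputs.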
Statistics and Maintenance
```sql
-- Update table statistics for better query plans
ANALYZE table_name;

-- Check statistics freshness (pg_stat_user_tables exposes the table as relname)
SELECT schemaname, relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE schemaname = 'public'
ORDER BY last_analyze NULLS FIRST;

-- Find bloated tables
SELECT schemaname, relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS size,
       n_dead_tup, n_live_tup,
       round(n_dead_tup * 100.0 / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;

-- Vacuum bloated tables
VACUUM ANALYZE table_name;
```

Configuration Tuning
```sql
-- Key parameters to check
SHOW shared_buffers;        -- Common starting point: 25% of RAM
SHOW effective_cache_size;  -- Common starting point: 50-75% of RAM
SHOW work_mem;              -- Per-operation (sort/hash) memory
SHOW random_page_cost;      -- Lower for SSDs (1.1-2.0)
```

Common Anti-Patterns
1. SELECT * in Application Code
```sql
-- BAD: Fetches unnecessary columns
SELECT * FROM users WHERE id = 1;

-- GOOD: Fetch only needed columns
SELECT id, email, name FROM users WHERE id = 1;
```

2. Implicit Type Conversion
```sql
-- BAD: Mismatched types force implicit casts; when the cast lands on the column
-- (e.g. a text column compared to a numeric literal), the index is skipped
SELECT * FROM users WHERE id = '123';

-- GOOD: Match the column type
SELECT * FROM users WHERE id = 123;
```

3. OR Conditions Without Indexes
```sql
-- BAD: May not use indexes efficiently
SELECT * FROM orders WHERE status = 'pending' OR status = 'processing';

-- GOOD: Use IN (or create a partial index)
SELECT * FROM orders WHERE status IN ('pending', 'processing');
```

4. Correlated Subqueries
```sql
-- BAD: Executes the subquery once per product row
SELECT p.name,
       (SELECT COUNT(*) FROM order_items WHERE product_id = p.id) AS order_count
FROM products p;

-- GOOD: Use JOIN with aggregation
SELECT p.name, COUNT(oi.id) AS order_count
FROM products p
LEFT JOIN order_items oi ON oi.product_id = p.id
GROUP BY p.id, p.name;
```

5. Missing WHERE Clauses
```sql
-- BAD: Updates (and rewrites) every row in the table
UPDATE products SET updated_at = NOW();

-- GOOD: Update only what changed
UPDATE products SET updated_at = NOW()
WHERE id IN (SELECT product_id FROM price_changes);
```

Advanced Patterns
CTEs (Common Table Expressions)
```sql
-- CTEs for readability and reusability
WITH recent_orders AS (
    SELECT customer_id, COUNT(*) AS order_count, SUM(total) AS total_spent
    FROM orders
    WHERE created_at > NOW() - INTERVAL '30 days'
    GROUP BY customer_id
),
high_value_customers AS (
    SELECT customer_id
    FROM recent_orders
    WHERE total_spent > 1000
)
SELECT c.name, c.email, ro.order_count, ro.total_spent
FROM customers c
INNER JOIN high_value_customers hvc ON c.id = hvc.customer_id
INNER JOIN recent_orders ro ON c.id = ro.customer_id;

-- Recursive CTEs for hierarchical data
WITH RECURSIVE category_tree AS (
    SELECT id, name, parent_id, 1 AS level
    FROM categories
    WHERE parent_id IS NULL
    UNION ALL
    SELECT c.id, c.name, c.parent_id, ct.level + 1
    FROM categories c
    INNER JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT * FROM category_tree ORDER BY level, name;
```

Window Functions
```sql
-- Ranking and row numbers
SELECT
    product_id,
    category_id,
    price,
    ROW_NUMBER() OVER (PARTITION BY category_id ORDER BY price DESC) AS price_rank,
    RANK() OVER (ORDER BY price DESC) AS overall_rank,
    DENSE_RANK() OVER (PARTITION BY category_id ORDER BY price DESC) AS dense_rank
FROM products;

-- Running totals and moving averages
SELECT
    date,
    revenue,
    SUM(revenue) OVER (ORDER BY date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total,
    AVG(revenue) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS moving_avg_7day
FROM daily_sales
ORDER BY date;

-- Lead/Lag for time-series analysis
SELECT
    customer_id,
    order_date,
    total,
    LAG(order_date) OVER (PARTITION BY customer_id ORDER BY order_date) AS prev_order_date,
    LEAD(total) OVER (PARTITION BY customer_id ORDER BY order_date) AS next_order_total,
    total - LAG(total) OVER (PARTITION BY customer_id ORDER BY order_date) AS total_diff
FROM orders;
```

Examples
Example 1: Query Optimization with EXPLAIN
```sql
-- Original slow query
SELECT o.*, c.name, c.email
FROM orders o, customers c
WHERE o.customer_id = c.id
  AND o.status = 'pending'
  AND o.created_at > '2024-01-01'
ORDER BY o.created_at DESC;

-- Step 1: Analyze with EXPLAIN ANALYZE
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.*, c.name, c.email
FROM orders o, customers c
WHERE o.customer_id = c.id
  AND o.status = 'pending'
  AND o.created_at > '2024-01-01'
ORDER BY o.created_at DESC;

-- Output analysis:
--   Seq Scan on orders (cost=0.00..15420.00 rows=50000)
--     Filter: (status = 'pending' AND created_at > '2024-01-01')
--     Rows Removed by Filter: 450000
-- Problem: Sequential scan on large table!

-- Step 2: Create composite (partial) index
CREATE INDEX idx_orders_status_created
ON orders(status, created_at DESC)
WHERE status IN ('pending', 'processing');

-- Step 3: Rewrite with explicit JOIN
SELECT o.id, o.total, o.created_at, c.name, c.email
FROM orders o
INNER JOIN customers c ON o.customer_id = c.id
WHERE o.status = 'pending'
  AND o.created_at > '2024-01-01'
ORDER BY o.created_at DESC
LIMIT 100;

-- After optimization:
--   Index Scan using idx_orders_status_created (cost=0.42..125.50 rows=100)
-- 99% reduction in query time!
```

Example 2: N+1 Query Problem
```sql
-- Problem: N+1 queries
-- Application code:
--   orders = SELECT * FROM orders WHERE user_id = 1
--   for order in orders:
--       items = SELECT * FROM order_items WHERE order_id = order.id

-- Solution: Single query with JOIN
SELECT
    o.id AS order_id,
    o.total,
    o.created_at,
    oi.product_id,
    oi.quantity,
    oi.unit_price,
    p.name AS product_name
FROM orders o
LEFT JOIN order_items oi ON o.id = oi.order_id
LEFT JOIN products p ON oi.product_id = p.id
WHERE o.user_id = 1
ORDER BY o.created_at DESC, oi.id;

-- Alternative: Batch query
SELECT * FROM orders WHERE user_id = 1;
-- Get order IDs: [1, 2, 3, 4, 5]
SELECT * FROM order_items WHERE order_id IN (1, 2, 3, 4, 5);
```

Example 3: Index Design Strategies
```sql
-- Single column index for equality checks
CREATE INDEX idx_users_email ON users(email);

-- Composite index for multiple conditions
-- Order columns: equality first, then range, then sort
CREATE INDEX idx_orders_user_status_date
ON orders(user_id, status, created_at DESC);

-- Partial index for filtered queries
CREATE INDEX idx_orders_pending
ON orders(created_at DESC)
WHERE status = 'pending';

-- Covering index to avoid table lookups (PostgreSQL 11+)
CREATE INDEX idx_orders_summary
ON orders(user_id, status)
INCLUDE (total, created_at);

-- Expression index for computed conditions
CREATE INDEX idx_users_email_lower ON users(LOWER(email));

-- Check existing indexes
SELECT
    indexname,
    indexdef,
    pg_size_pretty(pg_relation_size(indexname::regclass)) AS size
FROM pg_indexes
WHERE tablename = 'orders';

-- Find unused indexes (pg_stat_user_indexes uses relname/indexrelname)
SELECT
    schemaname, relname, indexrelname,
    idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Example 4: Query Rewriting Patterns
```sql
-- Pattern 1: Replace subquery with JOIN
-- Before
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE region = 'US');
-- After
SELECT o.* FROM orders o
INNER JOIN customers c ON o.customer_id = c.id
WHERE c.region = 'US';

-- Pattern 2: Use EXISTS instead of IN for large subqueries
-- Before
SELECT * FROM products
WHERE id IN (SELECT product_id FROM order_items);
-- After
SELECT * FROM products p
WHERE EXISTS (
    SELECT 1 FROM order_items oi WHERE oi.product_id = p.id
);

-- Pattern 3: Avoid functions on indexed columns
-- Before (can't use index)
SELECT * FROM users WHERE EXTRACT(YEAR FROM created_at) = 2024;
-- After (uses index)
SELECT * FROM users
WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01';

-- Pattern 4: Optimize pagination
-- Before (slow for large offsets)
SELECT * FROM products ORDER BY id LIMIT 20 OFFSET 10000;
-- After (keyset pagination: continue from the last id seen)
SELECT * FROM products
WHERE id > 10000
ORDER BY id
LIMIT 20;

-- Pattern 5: Batch operations
-- Before (row-by-row)
UPDATE products SET price = price * 1.1 WHERE id = 1;
UPDATE products SET price = price * 1.1 WHERE id = 2;
-- ... repeated 1000 times
-- After (single batch)
UPDATE products SET price = price * 1.1
WHERE id = ANY(ARRAY[1, 2, 3, ..., 1000]);
-- Or use CTE for complex batches
WITH price_updates AS (
    SELECT id, new_price FROM temp_price_updates
)
UPDATE products p
SET price = pu.new_price
FROM price_updates pu
WHERE p.id = pu.id;
```