java-architect

Compare original and translation side by side

Java Architect Specialist

Java架构师专家

Purpose

用途

Provides expert Java architecture guidance specializing in Java 21, Spring Boot 3, and the Jakarta EE ecosystem. Designs enterprise-grade applications with modern Java features (virtual threads, pattern matching), microservices architecture, and comprehensive enterprise integration patterns for scalable, maintainable systems.
提供专注于Java 21、Spring Boot 3和Jakarta EE生态系统的资深Java架构专业知识。利用现代Java特性(虚拟线程、模式匹配)、微服务架构以及全面的企业集成模式来设计可扩展、可维护的企业级应用。

When to Use

适用场景

  • Building enterprise applications with Spring Boot 3 (microservices, REST APIs)
  • Implementing Java 21 features (virtual threads, pattern matching, records, sealed classes)
  • Designing microservices architecture with Spring Cloud (service discovery, circuit breakers)
  • Developing Jakarta EE applications (CDI, JPA, JAX-RS)
  • Creating reactive applications with Spring WebFlux
  • Building event-driven systems (Kafka, RabbitMQ)
  • Optimizing JVM performance (GC tuning, profiling)
  • 使用Spring Boot 3构建企业应用(微服务、REST API)
  • 实现Java 21特性(虚拟线程、模式匹配、records、密封类)
  • 使用Spring Cloud设计微服务架构(服务发现、断路器)
  • 开发Jakarta EE应用(CDI、JPA、JAX-RS)
  • 使用Spring WebFlux创建响应式应用
  • 构建事件驱动系统(Kafka、RabbitMQ)
  • 优化JVM性能(GC调优、性能分析)

Core Capabilities

核心能力

Enterprise Architecture

企业架构

  • Designing microservices and monolith architectures
  • Implementing domain-driven design patterns (aggregates, bounded contexts)
  • Configuring Spring Cloud ecosystem (Eureka, Config, Gateway)
  • Building API-first architectures with OpenAPI/Swagger
  • 设计微服务和单体架构
  • 实现领域驱动设计模式(聚合、限界上下文)
  • 配置Spring Cloud生态系统(Eureka、Config、Gateway)
  • 基于OpenAPI/Swagger构建API优先架构
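The aggregate bullet above can be sketched in plain Java. This is a minimal illustration of an aggregate root guarding its invariants; the `OrderAggregate` name and rules are hypothetical, not taken from any specific codebase:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative DDD-style aggregate: only the root mutates state, and it
// enforces its invariants (positive quantities, no edits after completion).
public class OrderAggregate {

    public record Item(String productId, int quantity) {
        public Item {
            if (quantity <= 0) throw new IllegalArgumentException("quantity must be positive");
        }
    }

    private final List<Item> items = new ArrayList<>();
    private boolean completed = false;

    public void addItem(String productId, int quantity) {
        if (completed) throw new IllegalStateException("order already completed");
        items.add(new Item(productId, quantity));
    }

    public void complete() {
        if (items.isEmpty()) throw new IllegalStateException("cannot complete empty order");
        completed = true;
    }

    // Defensive copy: callers cannot bypass the invariants above
    public List<Item> items() { return List.copyOf(items); }

    public static void main(String[] args) {
        var order = new OrderAggregate();
        order.addItem("SKU-1", 2);
        order.complete();
        System.out.println("items: " + order.items().size());
    }
}
```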

Modern Java Development

现代Java开发

  • Implementing Java 21 virtual threads for high concurrency
  • Using pattern matching and sealed classes for type safety
  • Building records and data classes for immutable models
  • Applying functional programming patterns with streams
  • 实现Java 21虚拟线程以支持高并发
  • 使用模式匹配和密封类保障类型安全
  • 构建records和数据类以实现不可变模型
  • 应用函数式编程模式与流API
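The sealed-class and pattern-matching bullets above need nothing beyond the JDK; a minimal, self-contained Java 21 sketch (the `Shape` hierarchy is illustrative):

```java
// Sealed hierarchy: the compiler knows every possible subtype
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

public class PatternMatchingDemo {

    // Exhaustive switch over a sealed type: no default branch needed,
    // and adding a new Shape forces this switch to be updated
    public static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square q -> q.side() * q.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3.0)));
    }
}
```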

Spring Ecosystem

Spring生态系统

  • Spring Boot application configuration and deployment
  • Spring Data JPA for database access and optimization
  • Spring Security for authentication and authorization
  • Spring WebFlux for reactive, non-blocking applications
  • Spring Boot应用配置与部署
  • 使用Spring Data JPA进行数据库访问与优化
  • 使用Spring Security实现认证与授权
  • 使用Spring WebFlux构建响应式、非阻塞应用

Performance Optimization

性能优化

  • JVM tuning and garbage collection configuration
  • Memory profiling and leak detection
  • Connection pooling and database optimization
  • Application startup optimization with GraalVM


  • JVM调优与垃圾回收配置
  • 内存分析与内存泄漏检测
  • 连接池与数据库优化
  • 使用GraalVM优化应用启动速度
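The GC-tuning and profiling bullets typically start from a flag set like this one; these are illustrative starting points rather than prescriptions, and `app.jar` is a placeholder:

```shell
# Low-pause collector plus unified GC logging for later analysis (Java 21)
java -XX:+UseZGC \
     -Xms512m -Xmx2g \
     -Xlog:gc*:file=gc.log:time,uptime \
     -jar app.jar
```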


2. Decision Framework

2. 决策框架

Spring Framework Selection Decision Tree

Spring框架选择决策树

Application Requirements
├─ Need reactive, non-blocking I/O?
│  └─ Spring WebFlux ✓
│     - Netty/Reactor runtime
│     - Backpressure support
│     - High concurrency (100K+ connections)
├─ Traditional servlet-based web app?
│  └─ Spring MVC ✓
│     - Tomcat/Jetty runtime
│     - Familiar blocking model
│     - Easier debugging
├─ Microservices with service discovery?
│  └─ Spring Cloud ✓
│     - Eureka/Consul for discovery
│     - Config server
│     - API gateway (Spring Cloud Gateway)
├─ Batch processing?
│  └─ Spring Batch ✓
│     - Chunk-oriented processing
│     - Job scheduling
│     - Transaction management
└─ Need minimal footprint?
   └─ Spring Boot with GraalVM Native Image ✓
      - AOT compilation
      - Fast startup (<100ms)
      - Low memory (<50MB)

JPA vs JDBC Decision Matrix

JPA vs JDBC决策矩阵

| Factor | Use JPA/Hibernate | Use JDBC (Spring JdbcTemplate) |
|---|---|---|
| Complexity | Complex domain models with relationships | Simple queries, reporting |
| Performance | OLTP with caching (2nd-level cache) | OLAP, bulk operations |
| Type safety | Criteria API, type-safe queries | Plain SQL with RowMapper |
| Maintenance | Schema evolution with migrations | Direct SQL control |
| Learning curve | Steeper (lazy loading, cascades) | Simpler, explicit |
| N+1 queries | Risk (needs @EntityGraph, fetch joins) | Explicit control |
Example decision: E-commerce order system with relationships → JPA (Order → OrderItems → Products)
Example decision: Analytics dashboard with aggregations → JDBC (complex SQL, performance-critical)
| 因素 | 使用JPA/Hibernate | 使用JDBC(Spring JdbcTemplate) |
|---|---|---|
| 复杂度 | 具有关联关系的复杂领域模型 | 简单查询、报表生成 |
| 性能 | 带缓存的OLTP(二级缓存) | OLAP、批量操作 |
| 类型安全 | Criteria API、类型安全查询 | 原生SQL搭配RowMapper |
| 可维护性 | 基于迁移的架构演进 | 直接控制SQL |
| 学习曲线 | 较陡(懒加载、级联操作) | 更简单、更直观 |
| N+1查询问题 | 存在风险(需使用@EntityGraph、抓取连接) | 可显式控制 |
决策示例:具有关联关系的电商订单系统 → JPA(Order → OrderItems → Products)
决策示例:带聚合操作的分析仪表盘 → JDBC(复杂SQL、性能敏感场景)

Virtual Threads (Project Loom) Decision Path

虚拟线程(Project Loom)决策路径

Concurrency Requirements
├─ High thread count (>1000 threads)?
│  └─ Virtual Threads ✓
│     - Millions of threads possible
│     - No thread pool tuning
│     - Blocking code becomes cheap
├─ I/O-bound operations (DB, HTTP)?
│  └─ Virtual Threads ✓
│     - JDBC calls don't block platform threads
│     - HTTP client calls scale better
├─ CPU-bound operations?
│  └─ Platform Threads (ForkJoinPool) ✓
│     - Virtual threads don't help
│     - Use parallel streams
└─ Need compatibility with existing code?
   └─ Virtual Threads ✓
      - Drop-in replacement for Thread
      - No code changes required
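The decision path above can be exercised with nothing but the JDK. A minimal Java 21 sketch; the task count and sleep duration are arbitrary stand-ins for blocking I/O:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {

    // Runs n blocking tasks, one virtual thread each; returns how many completed
    public static int runBlockingTasks(int n) {
        var completed = new AtomicInteger();
        // Java 21: one cheap virtual thread per task, no pool sizing to tune
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(10)); // stand-in for a blocking DB/HTTP call
                    completed.incrementAndGet();
                    return null;
                });
            }
        } // try-with-resources close() waits for all submitted tasks
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000) + " tasks completed");
    }
}
```

Because the threads are virtual, blocking in `Thread.sleep` (or JDBC, or an HTTP client) parks the carrier thread cheaply, so thousands of concurrent blocking tasks finish in roughly the time of one.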

Red Flags → Escalate to Oracle

危险信号 → 升级至Oracle支持

| Observation | Why Escalate | Example |
|---|---|---|
| JPA N+1 queries causing 1000+ DB calls | Complex lazy loading issue | "Single page load triggers 500 SELECT queries" |
| Circular dependency in Spring beans | Architectural design problem | "BeanCurrentlyInCreationException during startup" |
| Memory leak despite GC tuning | Complex object retention | "Heap grows to max despite Full GC, heap dump shows mysterious retention" |
| Distributed transaction spanning multiple microservices | SAGA pattern or compensating transactions | "Need ACID across Order, Payment, Inventory services" |
| Reactive stream backpressure overload | Complex reactive pipeline | "Flux overproducing, downstream can't keep up" |


| 现象 | 升级原因 | 示例 |
|---|---|---|
| JPA N+1查询导致1000+次数据库调用 | 复杂懒加载问题 | "单页面加载触发500次SELECT查询" |
| Spring Bean存在循环依赖 | 架构设计问题 | "启动时出现BeanCurrentlyInCreationException" |
| 尽管进行了GC调优仍存在内存泄漏 | 复杂对象引用问题 | "堆内存增长至最大值,即使Full GC后也无法释放,堆转储显示不明引用" |
| 跨多个微服务的分布式事务 | 需要SAGA模式或补偿事务 | "需要在订单、支付、库存服务间实现ACID事务" |
| 响应式流背压过载 | 复杂响应式管道问题 | "Flux生产速度过快,下游无法跟上" |


Workflow 2: Event-Driven Microservice with Kafka

工作流2:基于Kafka的事件驱动微服务

Scenario: Implement event sourcing for order service
Step 1: Configure Spring Kafka
java
// Configuration/KafkaConfig.java
@Configuration
@EnableKafka
public class KafkaConfig {
    
    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;
    
    @Bean
    public ProducerFactory<String, DomainEvent> producerFactory() {
        Map<String, Object> config = Map.of(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers,
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class,
            ProducerConfig.ACKS_CONFIG, "all",
            ProducerConfig.RETRIES_CONFIG, 3,
            ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true
        );
        
        return new DefaultKafkaProducerFactory<>(config);
    }
    
    @Bean
    public KafkaTemplate<String, DomainEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
    
    @Bean
    public ConsumerFactory<String, DomainEvent> consumerFactory() {
        Map<String, Object> config = Map.of(
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers,
            ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
            ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class,
            ConsumerConfig.GROUP_ID_CONFIG, "order-service",
            ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest",
            ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false,
            JsonDeserializer.TRUSTED_PACKAGES, "com.example.order.domain.events"
        );
        
        return new DefaultKafkaConsumerFactory<>(config);
    }
    
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, DomainEvent> kafkaListenerContainerFactory() {
        // Referenced by name from @KafkaListener below; RECORD ack mode pairs
        // with ENABLE_AUTO_COMMIT_CONFIG = false in the consumer config
        var factory = new ConcurrentKafkaListenerContainerFactory<String, DomainEvent>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
        return factory;
    }
}
Step 2: Define domain events
java
// Domain/Events/DomainEvent.java
public sealed interface DomainEvent permits 
    OrderCreated, OrderItemAdded, OrderProcessingStarted, OrderCompleted, OrderCancelled {
    
    UUID aggregateId();
    LocalDateTime occurredAt();
    long version();
}

public record OrderCreated(
    UUID aggregateId,
    UUID customerId,
    LocalDateTime occurredAt,
    long version
) implements DomainEvent {}

public record OrderItemAdded(
    UUID aggregateId,
    UUID productId,
    int quantity,
    BigDecimal unitPrice,
    LocalDateTime occurredAt,
    long version
) implements DomainEvent {}

public record OrderCompleted(
    UUID aggregateId,
    BigDecimal totalAmount,
    LocalDateTime occurredAt,
    long version
) implements DomainEvent {}

// Also permitted by the sealed interface; declared so the hierarchy compiles
public record OrderProcessingStarted(
    UUID aggregateId,
    LocalDateTime occurredAt,
    long version
) implements DomainEvent {}

public record OrderCancelled(
    UUID aggregateId,
    LocalDateTime occurredAt,
    long version
) implements DomainEvent {}
Step 3: Event publisher
java
// Infrastructure/EventPublisher.java
@Component
public class DomainEventPublisher {
    
    private static final Logger log = LoggerFactory.getLogger(DomainEventPublisher.class);
    
    private final KafkaTemplate<String, DomainEvent> kafkaTemplate;
    private static final String TOPIC = "order-events";
    
    public DomainEventPublisher(KafkaTemplate<String, DomainEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }
    
    @Async
    public CompletableFuture<Void> publish(DomainEvent event) {
        return kafkaTemplate.send(TOPIC, event.aggregateId().toString(), event)
            .thenAccept(result -> {
                var metadata = result.getRecordMetadata();
                log.info("Published event: {} to partition {} offset {}",
                    event.getClass().getSimpleName(),
                    metadata.partition(),
                    metadata.offset());
            })
            .exceptionally(ex -> {
                log.error("Failed to publish event: {}", event, ex);
                return null;
            });
    }
}
Step 4: Event consumer
java
// Infrastructure/OrderEventConsumer.java
@Component
public class OrderEventConsumer {
    
    private static final Logger log = LoggerFactory.getLogger(OrderEventConsumer.class);
    
    private final OrderProjectionService projectionService;
    
    public OrderEventConsumer(OrderProjectionService projectionService) {
        this.projectionService = projectionService;
    }
    
    @KafkaListener(
        topics = "order-events",
        groupId = "order-read-model",
        containerFactory = "kafkaListenerContainerFactory"
    )
    public void handleEvent(
        @Payload DomainEvent event,
        @Header(KafkaHeaders.RECEIVED_PARTITION) int partition,
        @Header(KafkaHeaders.OFFSET) long offset
    ) {
        log.info("Received event: {} from partition {} offset {}", 
            event.getClass().getSimpleName(), partition, offset);
        
        switch (event) {
            case OrderCreated e -> projectionService.handleOrderCreated(e);
            case OrderItemAdded e -> projectionService.handleOrderItemAdded(e);
            case OrderCompleted e -> projectionService.handleOrderCompleted(e);
            case OrderCancelled e -> projectionService.handleOrderCancelled(e);
            default -> log.warn("Unknown event type: {}", event);
        }
    }
}
Expected outcome:
  • Event-driven architecture with Kafka
  • Type-safe event handling (sealed interfaces, pattern matching)
  • Async event publishing with CompletableFuture
  • Idempotent event processing


场景:为订单服务实现事件溯源
步骤1:配置Spring Kafka
步骤2:定义领域事件
步骤3:事件发布器
步骤4:事件消费者
预期结果:
  • 基于Kafka的事件驱动架构
  • 类型安全的事件处理(密封接口、模式匹配)
  • 基于CompletableFuture的异步事件发布
  • 幂等事件处理
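The publisher's `thenAccept`/`exceptionally` composition can be tried without a broker. This sketch replaces the Kafka send with a pre-completed future; the event names and result strings are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPublishDemo {

    // Mirrors the publish() pipeline: compose on the send future and
    // log-and-continue on failure instead of propagating the exception
    public static CompletableFuture<String> publish(String event, boolean fail) {
        CompletableFuture<String> send = fail
            ? CompletableFuture.failedFuture(new RuntimeException("broker down"))
            : CompletableFuture.completedFuture("partition 0, offset 42");
        return send
            .thenApply(metadata -> "published " + event + " -> " + metadata)
            .exceptionally(ex -> "failed " + event + ": " + ex.getMessage());
    }

    public static void main(String[] args) {
        System.out.println(publish("OrderCreated", false).join());
        System.out.println(publish("OrderCancelled", true).join());
    }
}
```

Note that `exceptionally` turns a failed stage into a recovered value, which is why the caller's `join()` never throws here; the real publisher makes the same trade-off by returning `null` after logging.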


4. Patterns & Templates

4. 模式与模板

Pattern 1: Repository Pattern with Specifications

模式1:结合Specification的仓库模式

Use case: Type-safe dynamic queries
java
// Specification for dynamic filtering
public class OrderSpecifications {
    
    public static Specification<Order> hasCustomerId(CustomerId customerId) {
        return (root, query, cb) -> 
            cb.equal(root.get("customerId"), customerId);
    }
    
    public static Specification<Order> hasStatus(OrderStatus status) {
        return (root, query, cb) -> 
            cb.equal(root.get("status"), status);
    }
    
    public static Specification<Order> createdBetween(LocalDateTime start, LocalDateTime end) {
        return (root, query, cb) -> 
            cb.between(root.<LocalDateTime>get("createdAt"), start, end);
    }
    
    public static Specification<Order> totalGreaterThan(BigDecimal amount) {
        return (root, query, cb) -> 
            cb.greaterThan(root.<BigDecimal>get("totalAmount"), amount);
    }
}

// Usage: Combine specifications
Specification<Order> spec = Specification
    .where(hasCustomerId(customerId))
    .and(hasStatus(new OrderStatus.Pending()))
    .and(createdBetween(startDate, endDate));

List<Order> orders = orderRepository.findAll(spec);


适用场景:类型安全的动态查询


Pattern 3: CQRS with Separate Read/Write Models

模式3:读写模型分离的CQRS

Use case: Optimize reads independently from writes
java
// Write model (domain entity)
@Entity
public class Order {
    // Rich behavior, complex relationships
    public void addItem(Product product, int quantity) { ... }
    public void complete() { ... }
}

// Read model (denormalized projection)
@Entity
@Table(name = "order_summary")
@Immutable
public class OrderSummary {
    
    @Id
    private UUID orderId;
    private UUID customerId;
    private String customerName;
    private int itemCount;
    private BigDecimal totalAmount;
    private String status;
    private LocalDateTime createdAt;
    
    // Getters only (no setters, immutable)
}

// Read repository (optimized queries)
public interface OrderSummaryRepository extends JpaRepository<OrderSummary, UUID> {
    
    @Query("""
        SELECT os FROM OrderSummary os
        WHERE os.customerId = :customerId
        ORDER BY os.createdAt DESC
        """)
    List<OrderSummary> findByCustomerId(@Param("customerId") UUID customerId);
}


适用场景:独立优化读操作与写操作


❌ Anti-Pattern: LazyInitializationException

❌ 反模式:LazyInitializationException

What it looks like:
java
@Service
@Transactional
public class OrderService {
    
    public Order findById(OrderId id) {
        return orderRepository.findById(id).orElseThrow();
    }
}

@RestController
public class OrderController {
    
    @GetMapping("/orders/{id}")
    public OrderDto getOrder(@PathVariable UUID id) {
        Order order = orderService.findById(new OrderId(id));
        
        // Transaction already closed!
        var items = order.getItems(); // LazyInitializationException!
        
        return new OrderDto(order, items);
    }
}
Why it fails:
  • Lazy loading outside transaction: Hibernate proxy can't load data
  • N+1 queries: Even if transaction open, lazy loads trigger multiple queries
Correct approach:
java
// Option 1: Eager fetch with @EntityGraph
@Repository
public interface OrderRepository extends JpaRepository<Order, OrderId> {
    
    @EntityGraph(attributePaths = {"items", "items.product"})
    Optional<Order> findById(OrderId id);
}

// Option 2: DTO projection (no lazy loading)
@Query("""
    SELECT new com.example.dto.OrderDto(
        o.id, o.customerId, o.totalAmount,
        COUNT(i.id), o.status, o.createdAt
    )
    FROM Order o
    LEFT JOIN o.items i
    WHERE o.id = :id
    GROUP BY o.id, o.customerId, o.totalAmount, o.status, o.createdAt
    """)
Optional<OrderDto> findOrderDtoById(@Param("id") OrderId id);

// Option 3: Disable Open Session in View (OSIV is not recommended for APIs)
// so lazy-loading issues surface early. In application.properties:
//   spring.jpa.open-in-view=false


表现形式:
失败原因:
  • 事务外的懒加载: Hibernate代理无法加载数据
  • N+1查询问题: 即使事务开启,懒加载也会触发多次查询
正确解决方式:


6. Integration Patterns

6. 集成模式

backend-developer:

后端开发人员:

  • Handoff: Backend-developer defines business logic → java-architect implements with Spring Boot patterns
  • Collaboration: REST API design, database schema, authentication/authorization
  • Tools: Spring Boot, Spring Security, Spring Data JPA, Jackson
  • Example: Backend defines order workflow → java-architect implements with DDD aggregates and domain events
  • 交接: 后端开发人员定义业务逻辑 → Java架构师采用Spring Boot模式实现
  • 协作: REST API设计、数据库架构、认证/授权
  • 工具: Spring Boot、Spring Security、Spring Data JPA、Jackson
  • 示例: 后端定义订单工作流 → Java架构师采用DDD聚合与领域事件实现

database-optimizer:

数据库优化师:

  • Handoff: Java-architect identifies slow JPA queries → database-optimizer creates indexes
  • Collaboration: Query optimization, connection pooling, transaction tuning
  • Tools: Hibernate statistics, JPA Criteria API, native queries
  • Example: N+1 query problem → database-optimizer adds composite index on foreign keys
  • 交接: Java架构师识别慢JPA查询 → 数据库优化师创建索引
  • 协作: 查询优化、连接池配置、事务调优
  • 工具: Hibernate统计、JPA Criteria API、原生查询
  • 示例: N+1查询问题 → 数据库优化师为外键添加复合索引

devops-engineer:

DevOps工程师:

  • Handoff: Java-architect builds Spring Boot app → devops-engineer containerizes with Docker
  • Collaboration: Health checks, metrics (Actuator), graceful shutdown
  • Tools: Spring Boot Actuator, Micrometer, Docker multi-stage builds
  • Example: Java-architect exposes /actuator/health → devops-engineer configures Kubernetes liveness probe
  • 交接: Java架构师构建Spring Boot应用 → DevOps工程师使用Docker容器化
  • 协作: 健康检查、指标(Actuator)、优雅停机
  • 工具: Spring Boot Actuator、Micrometer、Docker多阶段构建
  • 示例: Java架构师暴露/actuator/health → DevOps工程师配置Kubernetes存活探针

kubernetes-specialist:

Kubernetes专家:

  • Handoff: Java-architect builds microservice → kubernetes-specialist deploys to K8s
  • Collaboration: Readiness probes, resource limits, rolling updates
  • Tools: Spring Cloud Kubernetes, ConfigMaps, Secrets
  • Example: Java-architect uses @ConfigurationProperties → kubernetes-specialist provides ConfigMap
  • 交接: Java架构师构建微服务 → Kubernetes专家部署至K8s
  • 协作: 就绪探针、资源限制、滚动更新
  • 工具: Spring Cloud Kubernetes、ConfigMaps、Secrets
  • 示例: Java架构师使用@ConfigurationProperties → Kubernetes专家提供ConfigMap

graphql-architect:

GraphQL架构师:

  • Handoff: Java-architect provides domain model → graphql-architect exposes as GraphQL API
  • Collaboration: Schema design, N+1 prevention with DataLoader
  • Tools: Spring GraphQL, GraphQL Java, DataLoader
  • Example: Order aggregate → GraphQL type with resolvers and subscriptions

  • 交接: Java架构师提供领域模型 → GraphQL架构师暴露为GraphQL API
  • 协作: Schema设计、使用DataLoader避免N+1问题
  • 工具: Spring GraphQL、GraphQL Java、DataLoader
  • 示例: 订单聚合 → 带解析器与订阅的GraphQL类型