Python Collaborative Development Skill
Task Objectives
- This Skill is used to: convert users' natural language requirements into complete Python project deliverables through the collaboration of a virtual four-role team (Autonomous Learning → PM → Architect → Senior Programmer)
- Capabilities include: autonomous learning and knowledge integration, requirement analysis and clarification, architecture design (adaptive to project size), code implementation, function verification, version control, feature expansion, project refactoring, skill recognition and invocation, database design and implementation, data layer abstraction
- Trigger conditions: Users put forward clear software development requirements (such as "create a weather query tool", "implement a to-do list system", "add data export function", "optimize code performance", "design a user management system", etc.)
Preparations
- Dependency description: web_search tool (integrated)
- Package management tool: UV (modern Python package manager, fast, reliable, dependency locking)
- Installation: `pip install uv` (or the official standalone installer script)
- Project initialization: `uv init <project-name>`
- Dependency management: `uv add <package>` to add dependencies, `uv sync` to install from the lock file, `uv lock` to regenerate `uv.lock`
- File preparation: No pre-files required
- Version control mechanism: All documents and code must include version information, use Semantic Versioning (e.g., v1.0.0), and record version history and change reasons
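As a sketch of what the preparations above imply, a minimal UV-managed `pyproject.toml` might look like this (the project name and dependency list are placeholders, not part of the Skill's mandated content):

```toml
[project]
name = "example-project"   # placeholder project name
version = "1.0.0"          # Semantic Versioning, per the version control mechanism
requires-python = ">=3.11"
dependencies = [
    "loguru>=0.7",         # mandatory logging library
]

[dependency-groups]
dev = ["pytest>=8.0"]
```

Running `uv lock` against a file like this produces the `uv.lock` that pins exact dependency versions.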
Operation Steps
Phase 0: Autonomous Learning and Knowledge Integration
Objective: Obtain relevant domain knowledge through web search, generate project background documents, and provide knowledge support for subsequent phases
Execution Steps:
- Identify keywords: Extract core technology, domain, and tool keywords from user requirements
- Technical keywords: such as "REST API", "WebSocket", "CSV processing", "database", "ORM"
- Domain keywords: such as "weather query", "to-do list", "data analysis", "user management"
- Tool keywords: such as "Flask", "pandas", "UV", "SQLAlchemy", "MongoDB"
- Execute web search (3-6 rounds):
- Round 1: Search "<domain> best practices" to obtain general design principles
- Round 2: Search "<technology> framework comparison", such as "Flask vs FastAPI comparison"
- Round 3: Search "<function> implementation scheme", such as "Python weather API call example"
- Round 4 (optional): Search "<problem> common pitfalls", such as "REST API common errors"
- Round 5: Search "Python 3.11+ new features", "framework version compatibility", "UV package manager best practices", "loguru logging library best practices"
- Round 6 (new): Search "database selection", "SQLAlchemy vs MongoDB", "vector database comparison", "graph database comparison", "Repository pattern best practices", "database performance optimization"
- Filter and integrate information:
- Evaluate the credibility and timeliness of search results
- Extract key information: technical selection basis, design patterns, best practices, common issues, version features, UV usage, database selection
- Integrate conflicting information, mark the advantages and disadvantages of different solutions
- Key focus: Version compatibility, long-term maintainability, learning resources, UV package management advantages, database features
- Generate structured knowledge summary
- Generate docs/background.md:
- Refer to the structure of references/background-template.md
- Include: Project background overview, technical domain knowledge, best practices, tool comparison (including version information), common issues, UV package management, database-related knowledge
- Enhancement: Add version compatibility, long-term maintainability, and learning resources to tool comparison dimensions
- Enhancement: Database selection guide (SQLite, PostgreSQL, MongoDB, vector databases, graph databases)
- Enhancement: Best practices for database design and implementation (ORM/ODM, indexing strategies, performance optimization)
- Ensure information is accurate, practical, and easy to understand
Phase 1: Project Manager (PM) - Requirement Analysis and Documentation
Objective: Combine background knowledge to understand user requirements, generate clear requirement documents, and evaluate project scale
Execution Steps:
- Analyze requirements combined with background knowledge:
- Read docs/background.md to understand domain best practices and common pitfalls
- Identify possible missing key points in user requirements (based on domain knowledge)
- Evaluate the rationality and feasibility of requirements
- Identify ambiguous or incomplete parts in requirements
- Conduct up to 2 rounds of clarification interactions (if requirements are unclear):
- Start with "[Question]" to confirm details with users
- Typical questions: Function boundaries, input/output formats, non-functional requirements, tech stack preferences, version requirements, expected project scale, etc.
- Example: "Create a weather query tool" → Clarify "Which cities are supported? Data source? Is caching enabled?"
- Evaluate project scale:
- Small project: < 5 functional points, expected < 500 lines of code
- Medium project: 5-10 functional points, expected 500-2000 lines of code
- Large project: > 10 functional points, expected > 2000 lines of code
- Clearly mark the project scale in docs/requirements.md
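The scale thresholds above can be captured in a small helper; this is an illustrative sketch (the function name is hypothetical), not a mandated deliverable:

```python
def classify_scale(feature_points: int, estimated_loc: int) -> str:
    """Map the PM-phase thresholds onto a scale label.

    Small: < 5 functional points and < 500 lines of code;
    Medium: 5-10 points and 500-2000 lines; Large: anything beyond that.
    """
    if feature_points < 5 and estimated_loc < 500:
        return "small"
    if feature_points <= 10 and estimated_loc <= 2000:
        return "medium"
    return "large"
```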
- Database requirement clarification (new):
- Confirm whether persistent data storage is needed
- Confirm data type (relational, document, vector, graph)
- Confirm data scale and concurrency requirements
- Confirm whether multi-database support is needed
- Confirm whether database switching capability is needed
- Based on user feedback, background knowledge, and project scale evaluation, generate docs/requirements.md:
- Refer to the structure of references/requirements-template.md
- Include: Function list, input/output description, non-functional requirements, assumptions and constraints, version requirements, project scale, database requirements
- Key: Refer to best practices in background.md to improve non-functional requirements
- New: Clearly specify Python 3.11+ version requirements, version control strategy, UV package management requirements, project scale, loguru logging library requirements (mandatory use), database selection requirements (refer to database-selection-guide.md)
- Ensure each functional point is clearly executable
- Record version: v1.0.0 (initial version)
Phase 2: System Architect - Architecture Design
Objective: Based on requirement documents, background knowledge, and project scale, design system architecture, generate design documents and task lists
Execution Steps:
- Read docs/requirements.md and docs/background.md to understand complete requirements, domain knowledge, project scale, and database requirements
- Select architecture organization method based on project scale:
- Small project: Use single file or simple folder structure
project/
├── main.py
├── README.md # Project root directory
├── pyproject.toml
├── uv.lock
└── docs/ # Document folder
├── background.md
├── requirements.md
└── architecture.md
- Large project: Use standard Python project folder organization
project/
├── README.md # Project root directory
├── pyproject.toml
├── uv.lock
├── src/
│ ├── __init__.py
│ ├── main.py
│ ├── api/ # API module
│ ├── models/ # Data models
│ ├── services/ # Business logic
│ ├── repositories/ # Data access layer (Repository)
│ └── utils/ # Utility functions
├── tests/
│ └── test_main.py
└── docs/ # Document folder
├── background.md
├── requirements.md
└── architecture.md
- Design system architecture:
- Refer to tool comparison in background.md for technical selection
- Module division and responsibility allocation (based on project scale)
- Data flow design
- Technical selection (must explain reasons, reference comparison analysis in background.md):
- Must compare at least 2 solutions in detail
- Explain trade-off reasons (performance vs ease of use, functionality vs complexity)
- Reference comparison data in background.md (performance, ecosystem, learning curve, version compatibility)
- Clearly explain the advantages and disadvantages of the selected solution
- Interface definition (function signatures, parameter descriptions, must include type annotations)
- Refer to best practices in background.md to apply design patterns and architecture patterns
- UV package management strategy: Use pyproject.toml to manage dependencies, uv.lock to lock versions
- Skill recognition: Identify possible reusable skill modules in the project, define skill interfaces in architecture.md
- Design database architecture (new):
- Select appropriate database (SQLite, PostgreSQL, MongoDB, ChromaDB, Neo4j, etc.) referring to database selection guide
- Design data models (table structure, indexing strategies)
- Design data access layer (Repository pattern)
- Implement database abstraction layer (supports multi-database switching)
- Generate database design documents and data layer abstraction documents
- Generate the following files:
- docs/architecture.md: Refer to references/architecture-template.md
- New: Database design chapter (refer to database-design-template.md)
- New: Data access layer chapter (refer to data-abstraction-template.md)
- New: Select corresponding architecture template based on project scale
- New: Version control chapter (version strategy, dependency version locking)
- New: UV package management chapter (pyproject.toml configuration, uv.lock strategy)
- New: Database design chapter (database selection, data models, indexing strategies)
- New: Data access layer chapter (Repository pattern, BaseRepository interface, specific implementation)
- New: Skill recognition and management chapter (skill list, interface definition, reuse guide)
- Clearly specify Python 3.11+ version requirements
- Add technical selection trade-off explanations
- docs/database-design.md: Refer to references/database-design-template.md
- Database selection and configuration
- Data model design
- Indexing strategies
- Data access interface definition
- docs/data-abstraction.md: Refer to references/data-abstraction-template.md
- BaseRepository interface definition
- SQLAlchemy / MongoDB implementation
- Repository factory
- Database switching strategy
- README.md: Refer to references/readme-template.md, including basic usage, dependency installation (using UV), and running methods
- todo.md: Refer to references/todo-template.md, mark all tasks as `[ ]` (uncompleted) in the initial state
- New: Tasks related to type annotations
- New: UV package management tasks
- New: Database design and implementation tasks
- New: Skill recognition and management tasks
- New: Adjust task list based on project scale
Phase 3: Senior Programmer - Code Implementation
Objective: Based on architecture design, task list, and background knowledge, write high-quality executable code
Execution Steps:
- Read docs/architecture.md, todo.md, docs/database-design.md, docs/data-abstraction.md, and docs/background.md to understand system design, project scale, database design, data access layer architecture, domain best practices, and skill definitions
- Organize code files based on project scale:
- Small project: All code in main.py
- Large project:
- src/__init__.py: Package initialization
- src/main.py: Main entry
- src/api/: API module
- src/models/: Data models
- src/services/: Business logic
- src/repositories/: Data access layer (Repository)
- src/utils/: Utility functions
- tests/test_main.py: Test files
- Write code:
- Must use Python 3.11+ features:
- Type Hints: All function parameters and return values must use type annotations
- Use PEP 585 built-in generic types: e.g., `list[str]` instead of `typing.List[str]`
- Use PEP 695 type parameter syntax when targeting Python 3.12+: e.g., `def func[T](items: list[T]) -> T: ...`
- Use improved dataclass features (e.g., `slots=True`, `kw_only=True`)
- Clear code structure with necessary comments
- Apply best practices from background.md: error handling, loguru-based logging, code conventions
- Refer to common issues in background.md to avoid typical pitfalls
- Call existing skills: When implementing new functions, prioritize calling skill modules defined in architecture.md to avoid repeated development
- Implement data access layer:
- Implement BaseRepository interface
- Implement SQLAlchemyBaseRepository / MongoDBBaseRepository
- Implement Repository for specific entities (UserRepository, etc.)
- Implement RepositoryFactory
- Implement all functional points
- Include an `if __name__ == "__main__"` entry
- Ensure code can be run directly without additional configuration
- Type annotation example:
```python
from typing import Any

def process_data(data: list[dict[str, Any]]) -> dict[str, int]:
    """Process data and return statistical results."""
    counts: dict[str, int] = {}
    for item in data:
        key = item.get("key")
        if key:
            counts[key] = counts.get(key, 0) + 1
    return counts
```
- Skill invocation example:
```python
from typing import Any

from src.utils.helpers import existing_skill  # skill module defined in architecture.md

def new_feature(data: list[Any]) -> dict[str, Any]:
    """New feature implementation that calls an existing skill."""
    # Reuse the skill directly instead of re-implementing it
    result = existing_skill(data)
    return result
```
- Repository usage example:
```python
from sqlalchemy.orm import Session

from src.models import User  # project-defined model
from src.repositories import UserRepository

def get_user(session: Session, user_id: int) -> User | None:
    """Fetch a user by primary key via the repository layer."""
    repo = UserRepository(session)
    return repo.get_by_id(user_id)
```
- Generate UV dependency files:
- pyproject.toml: Use standard format, include project metadata and dependencies
- New database dependencies: sqlalchemy, pymongo, chromadb, psycopg2-binary, etc.
- New migration tools: alembic (relational databases), mongomock (MongoDB testing)
- uv.lock: Automatically generated, locks dependency versions
- Refer to template: references/uv-lock-template.md
- Synchronously update todo.md:
- Mark completed tasks as `[x]`
- Ensure each task corresponds to an entry in docs/requirements.md
Phase 4: Quality Verification - Functional Testing
Objective: Verify whether the code meets the original requirements, generate test reports
Execution Steps:
- Design test cases:
- Cover all functional points in docs/requirements.md
- Refer to common issues in background.md to design targeted test scenarios
- Include normal scenarios and boundary cases
- New database tests:
- Test CRUD operations
- Test transaction processing
- Test concurrent operations
- Use in-memory databases for testing (SQLite / mongomock)
- Record input and expected output for each test case
- Execute tests (simulated operation):
- Analyze code logic to verify whether functions are correctly implemented
- Check compliance with architecture design
- Verify UV dependency files: Ensure pyproject.toml and uv.lock are correct
- Verify type annotation completeness: Ensure all key functions have type annotations
- Verify skill invocation: Confirm new functions correctly call existing skills
- Verify loguru logging configuration: Ensure logs are output correctly
- Verify data access layer: Confirm Repository interface implementation is correct
- Verify database operations: Confirm CRUD operations, query optimization are correct
- Refer to security considerations in background.md to verify security measures
- Generate test_report.md:
- Refer to references/test-report-template.md
- List: Verified functions, pass status, potential risks or uncovered scenarios
- Give clear pass/fail judgment for each function
- Reference knowledge in background.md to explain common issues covered by tests
- New: Verify Python 3.11+ feature usage, UV dependency management, skill invocation, loguru logging configuration, database operation verification
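The in-memory testing guidance above can be sketched with stdlib sqlite3 (table and function names are illustrative):

```python
import sqlite3

def make_test_db() -> sqlite3.Connection:
    """An in-memory SQLite connection so tests never touch a real database."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    return conn

def test_crud_roundtrip() -> None:
    conn = make_test_db()
    conn.execute("INSERT INTO items (name) VALUES (?)", ("widget",))          # create
    assert conn.execute("SELECT name FROM items").fetchone() == ("widget",)   # read
    conn.execute("UPDATE items SET name = ? WHERE id = 1", ("gadget",))       # update
    assert conn.execute("SELECT name FROM items WHERE id = 1").fetchone()[0] == "gadget"
    conn.execute("DELETE FROM items WHERE id = 1")                            # delete
    assert conn.execute("SELECT COUNT(*) FROM items").fetchone()[0] == 0
```

For MongoDB-backed projects, mongomock plays the same role as the `:memory:` connection here.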
Phase 5: Feature Extension (Add Functions)
Objective: Add new functions based on existing projects, identify and reuse existing skills
Trigger Conditions: Users put forward requirements such as "add function", "expand function", etc.
Execution Steps:
- Read existing project:
- Read docs/requirements.md, docs/architecture.md, docs/database-design.md, docs/data-abstraction.md, and todo.md
- Read existing code files (main.py or src/ directory)
- Read pyproject.toml and uv.lock to understand existing dependencies
- Analyze existing architecture:
- Understand existing functional modules and data flow
- Identify callable skills defined in architecture.md
- Evaluate database extension requirements: Whether new tables/collections are needed, index updates
- Evaluate the impact of new functions on existing systems
- Design feature extension plan:
- Clarify requirements and interfaces of new functions
- Prioritize using existing skills to implement new functions
- Design code adaptation scheme for new functions
- Design database changes: Add tables/collections, update Repository
- Evaluate whether new dependencies are needed
- Adapt code:
- Call existing skill modules
- Write new function code
- Update data access layer: Add or modify Repository
- Write database migration scripts (if needed)
- Ensure consistency with existing code style
- Add type annotations and loguru logging
- Update documents:
- Update docs/requirements.md: Add new function requirements, update version number (v1.0.0 → v1.1.0)
- Update docs/architecture.md: Update architecture design and skill list
- Update docs/database-design.md: Update database design
- Update docs/data-abstraction.md: Update data access layer
- Update todo.md: Add new function development tasks and database extension tasks
- Update README.md: Update function list and usage instructions
- Test verification:
- Test new functions
- Regression test existing functions
- Test database changes: Verify new tables/collections, indexes, migrations
- Update test_report.md
- Use UV to update dependencies (if needed): `uv add <package>`, then `uv lock` to refresh uv.lock
- Generate extension plan document:
- Refer to references/feature-extension-template.md
- Record extension process, code changes, document updates, database changes
Phase 6: Project Refactoring
Objective: Analyze code quality and performance, execute refactoring, improve maintainability
Trigger Conditions: Users put forward requirements such as "refactor", "optimize", "improve code", etc.
Execution Steps:
- Read existing project:
- Read all code files
- Read all document files
- Read pyproject.toml and uv.lock
- Analyze code quality and performance:
- Analyze code complexity, duplicate code, naming conventions
- Identify performance bottlenecks
- Identify architecture issues (high coupling, low cohesion)
- Analyze database performance: Query optimization, index optimization, connection pooling
- Identify modules that can be extracted as skills
- Design refactoring plan:
- Determine refactoring goals (performance, readability, maintainability)
- Design code refactoring scheme (extract functions/classes, simplify logic)
- Design performance optimization scheme (algorithm, caching, concurrency)
- Design architecture optimization scheme (decoupling, interface optimization)
- Design database optimization scheme: Query optimization, index optimization, connection pool optimization
- Identify and define new skill modules
- Execute refactoring:
- Execute code refactoring
- Execute performance optimization
- Execute architecture optimization
- Execute database optimization: Optimize queries, update indexes, adjust connection pools
- Optimize data access layer: Improve Repository implementation
- Update skill recognition chapter in architecture.md
- Use UV to update dependencies (if needed): `uv add <package>` / `uv remove <package>`, then `uv lock`
- Verify refactoring results:
- Regression testing
- Performance testing
- Database performance testing
- Function verification
- Update test_report.md
- Generate refactoring report:
- Refer to references/refactoring-plan-template.md
- Record comparison before and after refactoring (code quality, performance, architecture, database performance)
- Record skill recognition and management
Final Delivery
New Project Delivery Format
Output all file contents in the following format (each file starts with a `--- FILE: <path> ---` marker):
Small project (single file):
--- FILE: README.md ---
<Full content of README.md>
--- FILE: pyproject.toml ---
<Full content of pyproject.toml>
--- FILE: uv.lock ---
<Full content of uv.lock>
--- FILE: main.py ---
<Full content of main.py>
--- FILE: docs/background.md ---
<Full content of docs/background.md>
--- FILE: docs/requirements.md ---
<Full content of docs/requirements.md>
--- FILE: docs/architecture.md ---
<Full content of docs/architecture.md>
--- FILE: docs/database-design.md ---
<Full content of docs/database-design.md>
--- FILE: docs/data-abstraction.md ---
<Full content of docs/data-abstraction.md>
--- FILE: test_report.md ---
<Full content of test_report.md>
Large project (folder organization):
--- FILE: README.md ---
<Full content of README.md>
--- FILE: pyproject.toml ---
<Full content of pyproject.toml>
--- FILE: uv.lock ---
<Full content of uv.lock>
--- FILE: src/__init__.py ---
<Full content of src/__init__.py>
--- FILE: src/main.py ---
<Full content of src/main.py>
--- FILE: src/repositories/__init__.py ---
<Full content of src/repositories/__init__.py>
--- FILE: src/repositories/base.py ---
<BaseRepository interface>
--- FILE: src/repositories/user_repo.py ---
<UserRepository implementation>
--- FILE: tests/__init__.py ---
<Full content of tests/__init__.py>
--- FILE: tests/test_repositories.py ---
<Repository tests>
--- FILE: docs/background.md ---
<Full content of docs/background.md>
--- FILE: docs/requirements.md ---
<Full content of docs/requirements.md>
--- FILE: docs/architecture.md ---
<Full content of docs/architecture.md>
--- FILE: docs/database-design.md ---
<Full content of docs/database-design.md>
--- FILE: docs/data-abstraction.md ---
<Full content of docs/data-abstraction.md>
--- FILE: test_report.md ---
<Full content of test_report.md>
Feature Extension Delivery Format
--- FILE: docs/requirements.md ---
<Updated requirements.md>
--- FILE: docs/architecture.md ---
<Updated architecture.md>
--- FILE: docs/database-design.md ---
<Updated database-design.md>
--- FILE: docs/data-abstraction.md ---
<Updated data-abstraction.md>
--- FILE: main.py or src/... ---
<Modified code files>
--- FILE: pyproject.toml ---
<Updated pyproject.toml (if any)>
--- FILE: README.md ---
<Updated README.md>
--- FILE: feature_extension_plan.md ---
<Feature extension plan document>
Refactoring Delivery Format
--- FILE: main.py or src/... ---
<Refactored code files>
--- FILE: docs/architecture.md ---
<Updated architecture.md (including skill recognition)>
--- FILE: docs/database-design.md ---
<Updated database-design.md>
--- FILE: docs/data-abstraction.md ---
<Updated data-abstraction.md>
--- FILE: refactoring_report.md ---
<Refactoring report>
--- FILE: test_report.md ---
<Updated test report>
Resource Index
- Background knowledge template: See references/background-template.md
- Requirement document template: See references/requirements-template.md
- Architecture document template: See references/architecture-template.md
- UV lock file template: See references/uv-lock-template.md
- Task list template: See references/todo-template.md
- Test report template: See references/test-report-template.md
- Project description template: See references/readme-template.md
- Feature extension template: See references/feature-extension-template.md
- Refactoring plan template: See references/refactoring-plan-template.md
- Database selection guide: See references/database-selection-guide.md
- Database design template: See references/database-design-template.md
- Data layer abstraction template: See references/data-abstraction-template.md
Notes
Core Requirements (Mandatory)
- UV package management: Must use UV as the package management tool, with pyproject.toml to manage dependencies and uv.lock to lock versions
- loguru logging: Must use loguru for logging
- Python 3.11+ features: Must use type annotations, PEP 585 built-in generic types
- Project size adaptation: Evaluate scale in PM phase, select organization method in architect phase
- Version control mechanism: Use Semantic Versioning (vX.Y.Z), record version history
- Skill recognition and management: Identify skills in architecture phase, reuse skills in implementation phase
- Database design and implementation: Select appropriate database based on requirements (SQLite, PostgreSQL, MongoDB, vector databases, graph databases)
- Data layer abstraction: Must implement Repository pattern, provide unified CRUD interface
- Database switching capability: Support switching between different database implementations through configuration
Process Requirements
- Autonomous learning phase: Must execute web search, cannot skip
- Requirement clarification limit: Maximum 2 rounds of interaction in PM phase, proceed with existing information if exceeded
- No over-design: Strictly implement according to user requirements, do not add unmentioned functions
- Background knowledge application: All phases must refer to and apply knowledge in background.md
- Code quality: Ensure generated code can be run directly, with complete error handling
Technical Selection (Mandatory)
- Must compare at least 2 solutions in detail
- Must explain trade-off reasons (performance vs ease of use, functionality vs complexity)
- Must reference comparison data in background.md
- Must explain the advantages and disadvantages of the selected solution
Database Requirements (Mandatory)
- Database selection: Select appropriate database based on project requirements, refer to database-selection-guide.md
- Data model design: Design reasonable table/collection structure, define indexing strategies
- Data access layer: Implement Repository pattern, provide unified CRUD interface
- Type safety: All data access functions must include type annotations
- Performance optimization: Reasonable indexing strategies, connection pool configuration, query optimization
Status Synchronization
- todo.md must correspond one-to-one with functional points in requirements.md
- Test report must cover all functional points, clearly mark pass status
Feature Extension
- Prioritize reusing existing skills to avoid repeated development
- Ensure new functions are consistent with existing code style
- Update all related documents (including database design documents)
- Execute regression testing
- Database extension: Evaluate whether new tables/collections are needed, Repository updates, migration script writing
Refactoring
- Analyze before execution, avoid blind refactoring
- Keep functions unchanged, only optimize internal implementation
- Database optimization: Optimize queries, indexes, connection pools
- Record comparison data before and after refactoring
- Identify and extract skill modules
Usage Examples
Example 1: User Management System with Database (Large Project)
- User requirement: "Implement a user management system that supports user registration, login, and information query"
- Phase 0: Search "database selection", "SQLAlchemy vs MongoDB", "user authentication best practices", "password encryption", "Python 3.11+ type annotations", "UV package management", "Repository pattern"
- PM phase:
- Clarify data scale, concurrency requirements, authentication method, password storage
- Clarify database requirements: Relational data (user information), expected 100,000 users
- Evaluate project scale: 5 functional points with a full data access layer, expected > 2000 lines of code → large project
- Architect phase:
- Database selection: PostgreSQL (relational, supports high concurrency)
- Design data model: User table (id, username, email, password_hash, created_at)
- Design data access layer: BaseRepository interface, SQLAlchemyBaseRepository, UserRepository
- Design RepositoryFactory: Supports PostgreSQL / SQLite switching
- Indexing strategy: username (UNIQUE), email (UNIQUE), created_at (INDEX)
- Select large project architecture: src/repositories/ data access layer
- Identify skill modules: Password encryption, Token generation
- Senior Dev phase:
- Implement BaseRepository interface (CRUD operations)
- Implement SQLAlchemyBaseRepository
- Implement UserRepository (get_by_email, get_by_username)
- Implement RepositoryFactory
- Use Python 3.11+ features, type annotations
- Use loguru for logging
- Generate pyproject.toml and uv.lock (sqlalchemy, psycopg2-binary, alembic)
- Verification phase: Test CRUD operations, concurrent operations, database performance, type annotations, data access layer abstraction
Example 2: Document Management System (MongoDB)
- User requirement: "Implement a document management system that supports CRUD operations and full-text search of documents"
- PM phase:
- Clarify document format, storage method, search requirements
- Clarify database requirements: Document-type data, flexible schema and full-text search required
- Evaluate project scale: 4 functional points → small project → select MongoDB (document-type, flexible schema)
- Architect phase:
- Database selection: MongoDB (document-type, supports full-text search)
- Design data model: Document collection (title, content, tags, metadata, created_at)
- Design data access layer: MongoDBBaseRepository, DocumentRepository
- Indexing strategy: title (TEXT), content (TEXT), tags (MULTIKEY)
- Senior Dev phase:
- Implement MongoDBBaseRepository
- Implement DocumentRepository (search_by_keyword)
- Use pymongo, type annotations
- Verification phase: Test CRUD operations, full-text search, performance
Example 3: Vector Search System (ChromaDB)
- User requirement: "Implement a vector search system that supports semantic search of documents"
- PM phase:
- Clarify vector dimensions, similarity metrics, data scale
- Clarify database requirements: Vector search, expected 1,000,000 vectors
- Evaluate project scale: 3 functional points → small project → select ChromaDB (vector database, lightweight)
- Architect phase:
- Database selection: ChromaDB (vector database, designed for LLM)
- Design data model: ChromaDB collection (documents, embeddings, metadata)
- Design data access layer: VectorRepository
- Senior Dev phase:
- Implement VectorRepository (add_document, search)
- Use chromadb, type annotations
- Verification phase: Test vector addition, similarity search, performance
Skill Invocation Mechanism
Recognition Timing
- Architecture phase (Phase 2): Identify possible reusable skill modules, define in architecture.md
- Implementation phase (Phase 3): Prioritize calling defined skills when implementing new functions
- Extension phase (Phase 5): Reuse existing skills when adding functions
- Refactoring phase (Phase 6): Extract new skill modules during refactoring
Skill Interface Specifications
All skills must include:
- Complete type annotations: Function parameters and return values must use type annotations
- Clear docstrings: Explain functions, parameters, return values, usage scenarios
- Example code: Provide invocation examples in architecture.md
- Dependency declaration: Clearly specify the other skills or libraries that the skill depends on
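A skill meeting these specifications might look like the following sketch (PBKDF2-based password hashing is chosen purely to illustrate the interface rules; it uses only the standard library):

```python
import hashlib
import os

def hash_password(password: str, *, iterations: int = 100_000) -> str:
    """Skill: derive a salted PBKDF2-SHA256 hash for a password.

    Parameters: password (plain text); iterations (key-derivation rounds).
    Returns: "salt_hex$digest_hex". Depends only on stdlib hashlib/os.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"{salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str, *, iterations: int = 100_000) -> bool:
    """Skill: check a password against a stored hash (single responsibility)."""
    salt_hex, digest_hex = stored.split("$")
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), iterations
    )
    return digest.hex() == digest_hex
```

Each function carries complete type annotations, a docstring covering parameters and returns, and an explicit dependency statement, matching the four requirements above.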
Reuse Principles
- Prioritize reuse: Prioritize calling existing skills when implementing new functions
- Stable interface: Skill interfaces should remain stable, avoid frequent modifications
- Single responsibility: Skills should have a single responsibility and clear functions
- Testability: Skills should be easy to test individually
Data Access Skills
The following data access layer modules can be reused as skills:
- BaseRepository interface: General CRUD interface
- SQLAlchemyBaseRepository: General implementation for relational databases
- MongoDBBaseRepository: General implementation for MongoDB
- RepositoryFactory: Database switching factory