tdd-pytest

TDD-Pytest Skill

Activate this skill when the user needs help with:
  • Writing tests using TDD methodology (Red-Green-Refactor)
  • Auditing existing pytest test files for quality
  • Running tests with coverage
  • Generating test reports to `TESTING_REPORT.local.md`
  • Setting up pytest configuration in `pyproject.toml`

TDD Workflow

Red-Green-Refactor Cycle

  1. RED - Write a failing test first
    • Test should fail for the right reason (not import errors)
    • Test should be minimal and focused
    • Show the failing test output
  2. GREEN - Write minimal code to pass
    • Only implement what's needed to pass the test
    • No premature optimization
    • Show the passing test output
  3. REFACTOR - Improve code while keeping tests green
    • Clean up duplication
    • Improve naming
    • Extract functions/classes if needed
    • Run tests after each change
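
The cycle can be sketched end to end. Here `slugify` is a hypothetical helper invented for illustration, not part of this skill:

```python
# RED: start from a stub so the test fails on the assertion itself
# (the "right reason"), not on an import error.
def slugify(text):
    return text  # deliberately wrong

def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimal change that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# REFACTOR: same result for this test, but also collapses repeated
# whitespace; rerun the test after the change to confirm it stays green.
def slugify(text):
    return "-".join(text.lower().split())
```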

Test Organization

File Structure

```text
project/
  src/
    module.py
  tests/
    conftest.py          # Shared fixtures
    test_module.py       # Tests for module.py
  pyproject.toml         # Pytest configuration
```

Naming Conventions

  • Test files: `test_*.py` or `*_test.py`
  • Test functions: `test_*`
  • Test classes: `Test*`
  • Fixtures: Descriptive names (`mock_database`, `sample_user`)

Pytest Best Practices

Fixtures

```python
import pytest

@pytest.fixture
def sample_config():
    return {"key": "value"}

# `mocker` is provided by the pytest-mock plugin.
@pytest.fixture
def mock_client(mocker):
    return mocker.MagicMock()
```

Parametrization

```python
@pytest.mark.parametrize("text,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_uppercase(text, expected):
    assert text.upper() == expected
```

Async Tests

```python
import pytest

# Requires the pytest-asyncio plugin; async_operation and expected
# stand in for your own code under test.
@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()
    assert result == expected
```

Exception Testing

```python
def test_raises_value_error():
    with pytest.raises(ValueError, match="invalid input"):
        process_input(None)
```
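
For the test above to pass, `process_input` must raise a `ValueError` whose message matches the regex `"invalid input"`. A minimal sketch; only the name and message come from the example, the body is an assumption:

```python
def process_input(value):
    # `match=` in pytest.raises is a regex searched against str(exc),
    # so the message only needs to contain "invalid input".
    if value is None:
        raise ValueError("invalid input: value must not be None")
    return str(value).strip()
```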

Running Tests

With uv

```bash
uv run pytest                              # Run all tests
uv run pytest tests/test_module.py         # Run specific file
uv run pytest -k "test_name"               # Run by name pattern
uv run pytest -v --tb=short                # Verbose with short traceback
uv run pytest --cov=src --cov-report=term  # With coverage
```

Common Flags

  • `-v` / `--verbose` - Detailed output
  • `-x` / `--exitfirst` - Stop on first failure
  • `--tb=short` - Short tracebacks
  • `--tb=no` - No tracebacks
  • `-k EXPR` - Run tests matching expression
  • `-m MARKER` - Run tests with marker
  • `--cov=PATH` - Coverage for path
  • `--cov-report=term-missing` - Show missing lines

pyproject.toml Configuration

Minimal Setup

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```

Full Configuration

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80
show_missing = true
```
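
The `markers` declared in this configuration are applied as decorators and selected with `-m`. A sketch, with placeholder test bodies:

```python
import pytest

@pytest.mark.slow
def test_full_dataset_scan():
    ...

@pytest.mark.integration
def test_end_to_end_flow():
    ...

# Select at the command line:
#   uv run pytest -m slow            # only slow tests
#   uv run pytest -m "not slow"      # everything except slow tests
```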

Report Generation

The `TESTING_REPORT.local.md` file should contain:
  1. Test execution summary (passed/failed/skipped)
  2. Coverage metrics by module
  3. Audit findings by severity
  4. Recommendations with file:line references
  5. Evidence (command outputs)
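
A skeleton for the report might look like this (section order follows the list above; all values are placeholders):

```markdown
# Testing Report

## 1. Execution Summary
| Passed | Failed | Skipped |
|--------|--------|---------|
| ...    | ...    | ...     |

## 2. Coverage by Module
| Module        | Coverage |
|---------------|----------|
| src/module.py | ...      |

## 3. Audit Findings
- **High**: ...
- **Medium**: ...
- **Low**: ...

## 4. Recommendations
- src/module.py:LINE - ...

## 5. Evidence
Command outputs (e.g. `uv run pytest --cov=src --cov-report=term`) pasted verbatim.
```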

Integration with Conversation

When the user asks to write tests:
  1. Check conversation history for context about what to test
  2. Identify the code/feature being discussed
  3. If unclear, ask clarifying questions:
    • "What specific behavior should I test?"
    • "Should I include edge cases for X?"
    • "Do you want unit tests, integration tests, or both?"
  4. Follow TDD: Write failing test first, then implement

Commands Available

  • `/tdd-pytest:init` - Initialize pytest configuration
  • `/tdd-pytest:test [path]` - Write tests using TDD (context-aware)
  • `/tdd-pytest:test-all` - Run all tests
  • `/tdd-pytest:report` - Generate/update TESTING_REPORT.local.md