# TDD-Pytest Skill
Activate this skill when the user needs help with:

- Writing tests using TDD methodology (Red-Green-Refactor)
- Auditing existing pytest test files for quality
- Running tests with coverage
- Generating test reports to `TESTING_REPORT.local.md`
- Setting up pytest configuration in `pyproject.toml`
## TDD Workflow

### Red-Green-Refactor Cycle
1. **RED** - Write a failing test first
   - Test should fail for the right reason (not import errors)
   - Test should be minimal and focused
   - Show the failing test output
2. **GREEN** - Write minimal code to pass
   - Only implement what's needed to pass the test
   - No premature optimization
   - Show the passing test output
3. **REFACTOR** - Improve code while keeping tests green
   - Clean up duplication
   - Improve naming
   - Extract functions/classes if needed
   - Run tests after each change
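The cycle can be sketched end to end with a minimal example; `slugify` is a hypothetical function invented purely for illustration:

```python
# RED: this test is written first. Before slugify existed it failed
# with a NameError, the "right reason" to fail at this stage.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# GREEN: the minimal implementation, just enough to make the test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")


# REFACTOR: rename, deduplicate, or extract helpers here,
# re-running the test after every change to keep it green.
```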
## Test Organization

### File Structure
```text
project/
  src/
    module.py
  tests/
    conftest.py       # Shared fixtures
    test_module.py    # Tests for module.py
  pyproject.toml      # Pytest configuration
```

### Naming Conventions
- Test files: `test_*.py` or `*_test.py`
- Test functions: `test_*`
- Test classes: `Test*`
- Fixtures: Descriptive names (`sample_user`, `mock_database`)
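A minimal test file that follows all four conventions at once; `sample_user` and `TestUserFlags` are hypothetical names for illustration:

```python
# tests/test_user.py (matches the test_*.py file pattern)
import pytest


@pytest.fixture
def sample_user():
    # Fixture with a descriptive name, per the convention above
    return {"name": "Ada", "active": True}


class TestUserFlags:  # collected via the Test* class pattern
    def test_active_flag(self, sample_user):  # test_* function pattern
        assert sample_user["active"] is True
```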
## Pytest Best Practices

### Fixtures
```python
import pytest


@pytest.fixture
def sample_config():
    return {"key": "value"}


@pytest.fixture
def mock_client(mocker):
    # `mocker` is provided by the pytest-mock plugin
    return mocker.MagicMock()
```

### Parametrization
```python
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("", ""),
])
def test_uppercase(input, expected):
    assert input.upper() == expected
```

### Async Tests
```python
import pytest


@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()
    assert result == expected
```

### Exception Testing
```python
def test_raises_value_error():
    with pytest.raises(ValueError, match="invalid input"):
        process_input(None)
```

## Running Tests

### With uv
```bash
uv run pytest                               # Run all tests
uv run pytest tests/test_module.py          # Run specific file
uv run pytest -k "test_name"                # Run by name pattern
uv run pytest -v --tb=short                 # Verbose with short traceback
uv run pytest --cov=src --cov-report=term   # With coverage
```

### Common Flags
- `-v` / `--verbose` - Detailed output
- `-x` / `--exitfirst` - Stop on first failure
- `--tb=short` - Short tracebacks
- `--tb=no` - No tracebacks
- `-k EXPR` - Run tests matching expression
- `-m MARKER` - Run tests with marker
- `--cov=PATH` - Coverage for path
- `--cov-report=term-missing` - Show missing lines
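These flags compose; for example, a quick iteration loop that stops on the first failure, filters by name pattern and marker, and reports uncovered lines (the `config` pattern and `slow` marker are illustrative):

```shell
uv run pytest -x -k "config" -m "not slow" --tb=short --cov=src --cov-report=term-missing
```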
## pyproject.toml Configuration

### Minimal Setup
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests"]
```

### Full Configuration
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
asyncio_default_fixture_loop_scope = "function"
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: marks integration tests",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["src"]
branch = true
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
    "raise NotImplementedError",
]
fail_under = 80
show_missing = true
```
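The custom `markers` declared above are applied to tests as decorators and selected with `-m`; `test_full_resync` here is a hypothetical slow test:

```python
import pytest


@pytest.mark.slow
def test_full_resync():
    # Long-running test; deselect in quick runs with:
    #   uv run pytest -m "not slow"
    assert sum(range(1000)) == 499500
```

Adding `--strict-markers` to `addopts` makes a typo in a marker name fail collection instead of passing silently.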
## Report Generation
The `TESTING_REPORT.local.md` file should contain:

- Test execution summary (passed/failed/skipped)
- Coverage metrics by module
- Audit findings by severity
- Recommendations with file:line references
- Evidence (command outputs)
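One possible shape for that file, with placeholder numbers; the section headings are illustrative, not mandated by the skill:

```markdown
# Testing Report

## Summary
- 42 passed, 1 failed, 2 skipped

## Coverage by Module
| Module        | Coverage |
|---------------|----------|
| src/module.py | 87%      |

## Findings
- [HIGH] tests/test_module.py:12 - assertion never exercises the error path

## Recommendations
- Add a parametrized case for empty input (src/module.py:34)

## Evidence
- Output of `uv run pytest --cov=src --cov-report=term`
```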
## Integration with Conversation
When the user asks to write tests:

- Check conversation history for context about what to test
- Identify the code/feature being discussed
- If unclear, ask clarifying questions:
  - "What specific behavior should I test?"
  - "Should I include edge cases for X?"
  - "Do you want unit tests, integration tests, or both?"
- Follow TDD: write the failing test first, then implement
## Commands Available

- `/tdd-pytest:init` - Initialize pytest configuration
- `/tdd-pytest:test [path]` - Write tests using TDD (context-aware)
- `/tdd-pytest:test-all` - Run all tests
- `/tdd-pytest:report` - Generate/update TESTING_REPORT.local.md