Generate exhaustive integration functions with comprehensive test suites for all third-party APIs and external services. The skill automatically creates function wrappers, individual test files, an integrated test runner, and a detailed report of API behavior, response signatures, latency, and failure modes.
Install:

npx skill4agent add harshitsinghbhandari/domain-expansion

Skill: `itemized-functions`

Generated files: `function_*.py`, `test_*.py`, `run_all_tests.py`, `integrations.debug.log`, `ITEMIZED_FUNCTIONS_REPORT.md`

`integration_tests/.env.dev`:

# Ollama
OLLAMA_API_URL=http://localhost:11434
OLLAMA_MODEL=neural-chat
# GitHub Linguist
GITHUB_LINGUIST_PATH=/path/to/github-linguist
# PostgreSQL
DB_HOST=localhost
DB_PORT=5432
DB_NAME=testdb
DB_USER=testuser
DB_PASSWORD=
# [Other integrations...]

`integration_tests/function_[service].py` (debug log: `integrations.debug.log`):

import os
import logging
from typing import Any, Dict, List
import requests
from datetime import datetime
logger = logging.getLogger(__name__)
def call_ollama_chat(prompt: str, model: str = None, temperature: float = 0.7, timeout: int = 30) -> Dict[str, Any]:
    """
    Call Ollama API for chat completion.

    Args:
        prompt: The user prompt
        model: Model name (uses OLLAMA_MODEL env var if not provided)
        temperature: Sampling temperature (0.0-1.0)
        timeout: Request timeout in seconds

    Returns:
        Dict with keys: response, model, created_at, latency_ms

    Raises:
        ValueError: If credentials/config missing
        requests.Timeout: If request exceeds timeout
        requests.RequestException: For API errors
    """
    try:
        start_time = datetime.now()
        api_url = os.getenv("OLLAMA_API_URL", "http://localhost:11434")
        model = model or os.getenv("OLLAMA_MODEL")
        if not model:
            raise ValueError("OLLAMA_MODEL not set in environment")
        response = requests.post(
            f"{api_url}/api/chat",
            json={
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": temperature,
                "stream": False
            },
            timeout=timeout
        )
        response.raise_for_status()
        latency_ms = (datetime.now() - start_time).total_seconds() * 1000
        result = response.json()
        result["latency_ms"] = latency_ms
        logger.debug(f"Ollama chat call successful. Latency: {latency_ms}ms")
        return result
    except requests.Timeout:
        logger.error(f"Ollama API timeout after {timeout}s")
        raise
    except requests.RequestException as e:
        logger.error(f"Ollama API error: {str(e)}")
        raise
    except Exception as e:
        logger.error(f"Unexpected error calling Ollama: {str(e)}")
        raise

`integration_tests/test_[service].py` (debug log: `integrations.debug.log`):

import pytest
import os
from unittest.mock import patch, MagicMock
import requests
from function_ollama import call_ollama_chat
@pytest.fixture
def setup_env(monkeypatch):
    """Setup environment variables for testing."""
    monkeypatch.setenv("OLLAMA_API_URL", "http://localhost:11434")
    monkeypatch.setenv("OLLAMA_MODEL", "neural-chat")

class TestOllamaChatSuccess:
    """Test successful Ollama chat calls."""

    def test_basic_chat(self, setup_env):
        """Test basic chat completion."""
        response = call_ollama_chat("What is 2+2?")
        assert "response" in response
        assert response["model"] == "neural-chat"
        assert "latency_ms" in response
        assert response["latency_ms"] > 0

    def test_chat_with_temperature(self, setup_env):
        """Test chat with different temperature values."""
        for temp in [0.0, 0.5, 1.0]:
            response = call_ollama_chat("Tell a story", temperature=temp)
            assert "response" in response
            assert response["latency_ms"] > 0

    def test_long_prompt(self, setup_env):
        """Test with very long prompt."""
        long_prompt = "What is the meaning of life? " * 100
        response = call_ollama_chat(long_prompt)
        assert "response" in response

class TestOllamaChatFailures:
    """Test failure modes."""

    def test_auth_failure(self, setup_env, monkeypatch):
        """Test behavior when API authentication fails."""
        monkeypatch.setenv("OLLAMA_API_URL", "http://invalid-url:11434")
        with pytest.raises(requests.RequestException):
            call_ollama_chat("test")

    def test_timeout(self, setup_env, monkeypatch):
        """Test timeout handling."""
        with patch('requests.post') as mock_post:
            mock_post.side_effect = requests.Timeout()
            with pytest.raises(requests.Timeout):
                call_ollama_chat("test", timeout=1)

    def test_missing_credentials(self, monkeypatch):
        """Test when required env vars are missing."""
        monkeypatch.delenv("OLLAMA_MODEL", raising=False)
        with pytest.raises(ValueError):
            call_ollama_chat("test")

`integration_tests/heavy_test_[service].py`:

"""
Heavy API test suite for [service].
Reasoning:
- [Service] requires extensive testing due to [specific reason]:
- Streaming responses with large payloads
- Multiple chained API calls (25+ total)
- Data processing >50MB
- Complex state management across calls
- Critical performance path in architecture
These tests are separated from standard tests to avoid:
- Excessive API quota usage during CI/CD
- Extended test execution time
- Unnecessary load on rate-limited endpoints
"""integration_tests/run_all_tests.pyITEMIZED_FUNCTIONS_REPORT.mdTest Results Summary:
- Total: 42 tests
- Passed: 38
- Failed: 2
- Skipped: 2
Service Latency Metrics:
- Ollama Chat: avg 145ms, min 89ms, max 287ms (10 calls)
- GitHub Linguist: avg 234ms, min 156ms, max 412ms (8 calls)
- PostgreSQL: avg 12ms, min 8ms, max 31ms (10 calls)
[Detailed results for each service...]

`integration_tests/integrations.debug.log`:

[2024-01-15 14:32:15.342] DEBUG [ollama] Calling /api/chat with model=neural-chat
[2024-01-15 14:32:15.521] DEBUG [ollama] Response received: 145ms latency, 1250 chars
[2024-01-15 14:32:16.012] ERROR [github-linguist] FAILED_TO_TEST - Connection refused (auth_required, network_error, timeout, api_error, etc.)

`ITEMIZED_FUNCTIONS_REPORT.md`:

# Itemized Functions Report
**Generated:** [timestamp]
**Architecture Analyzed:** [list of architecture files]
**Total Integrations Tested:** [count]
**Test Success Rate:** [X%]
## Executive Summary
- [count] integrations identified and tested
- [X] tests passed, [Y] failed, [Z] skipped
- Key findings and blockers (if any)
## Integration Details
### [Service Name] (e.g., Ollama)
**Purpose (from architecture):** [extracted from architecture]
**Function Signature:**
\`\`\`python
def call_ollama_chat(prompt: str, model: str = None, temperature: float = 0.7, timeout: int = 30) -> Dict[str, Any]
\`\`\`
**Test Coverage:** [count tests, all passed/mixed/failed]
**Latency:** avg X ms, min Y ms, max Z ms (10 calls)
**Sample API Response (sanitized):**
\`\`\`json
{
"response": "2 + 2 = 4",
"model": "neural-chat",
"created_at": "2024-01-15T14:32:15Z",
"latency_ms": 145
}
\`\`\`
**Failure Modes Tested:**
- ✓ Timeout (handled correctly, raises Timeout exception)
- ✓ Auth failure (handled correctly, raises RequestException)
- ✓ Malformed response (handled correctly, raises JSONDecodeError)
- ✓ Service unavailable (raises ConnectionError)
**Key Learnings:**
- [Finding 1]: [detail]
- [Finding 2]: [detail]
- [Gotcha/quirk if discovered]: [detail]
**Heavy Tests:** None
(or if applicable: `heavy_test_ollama.py` — [reason])
---
### [Next Service...]
[Same structure as above]
---
## Failed Tests & Blockers
### [Service Name] - FAILED_TO_TEST
**Reason:** [auth_required, network_error, timeout, api_error, service_down, etc.]
**Error Message:** [exact error]
**Suggestion:** [how to resolve, e.g., "Set OLLAMA_API_URL in .env.dev and ensure Ollama service is running"]
---
## Cross-Service Insights
[Any patterns, dependencies, or interactions discovered across integrations]
---
## Recommendations
- [Any critical issues or setup requirements]
- [Performance or scaling considerations]
- [Dependencies between services]
---
## Test Execution Log
[Link to or excerpt from integrations.debug.log]

Generated directory layout:

integration_tests/
├── .env.dev # Template credentials file
├── integrations.debug.log # Debug log from test execution
├── ITEMIZED_FUNCTIONS_REPORT.md # Final summary report
├── run_all_tests.py # Master test runner
├── function_ollama.py # Function wrapper
├── test_ollama.py # Standard tests
├── heavy_test_ollama.py # (if needed) Heavy tests
├── function_github_linguist.py # Another wrapper
├── test_github_linguist.py # Standard tests
├── function_postgres.py # Another wrapper
├── test_postgres.py # Standard tests
└── [More function/test pairs...]

To run: fill in credentials in `integration_tests/.env.dev`, then execute:

python run_all_tests.py

The summary is written to `ITEMIZED_FUNCTIONS_REPORT.md`; debug output goes to `integrations.debug.log`.
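The page references `run_all_tests.py` but never shows its contents. A minimal sketch of what such a master runner could look like (hypothetical, not the skill's actual output; it assumes pytest is installed and follows the `test_*.py` naming shown above, deliberately skipping `heavy_test_*.py` files):

```python
import glob
import subprocess
import sys

def discover_test_files(pattern: str = "test_*.py") -> list:
    """Find standard test files; heavy_test_*.py is intentionally excluded."""
    return sorted(glob.glob(pattern))

def run_suite(files: list) -> dict:
    """Run pytest on each file and record pass/fail from its exit code."""
    results = {}
    for path in files:
        # pytest exit code 0 means all tests passed; anything else is a failure.
        proc = subprocess.run([sys.executable, "-m", "pytest", path, "-q"])
        results[path] = "passed" if proc.returncode == 0 else "failed"
    return results

def format_summary(results: dict) -> str:
    """Render a plain-text tally matching the report's summary style."""
    passed = sum(1 for v in results.values() if v == "passed")
    lines = [
        "Test Results Summary:",
        f"- Total: {len(results)} test files",
        f"- Passed: {passed}",
        f"- Failed: {len(results) - passed}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(format_summary(run_suite(discover_test_files())))
```

A real runner would also collect latency metrics from the wrappers and write the markdown report; this sketch only covers discovery, execution, and the summary tally.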