# Intelligent Code Debugging Assistant

## Mission Objectives

- This Skill helps you understand how code runs in a simpler, more intuitive way than traditional breakpoint debugging
- Capabilities include:
  - Showing the code execution order and which functions are called
  - Identifying slow-running sections of code (which functions take the most time)
  - Helping you locate error causes (where exceptions are thrown and how they propagate)
  - Providing specific code modification suggestions
- Trigger conditions: when the user says phrases like "Observe code execution", "Code quality", "Optimize code", "Code runs slowly", "Don't know where the error is", "This logic is too complex to understand", "Want to see how code executes", etc.
## Operation Steps

### 1. Environment Preparation and Data Acquisition

#### Step 1.1: Debugging Environment Confirmation

- The agent asks the developer about the status of the debugging environment
- If the debugging environment terminal is not open, provide startup guidance:
  - Commands or steps to open the application terminal
  - Specific configurations to start the debugging tool (possibly named dev-observability or similar)
- Wait for the developer to confirm that the debugging environment is ready
#### Step 1.2: Query Data Storage Locations

- Before obtaining logs or metric information, you must call the skill-manager Skill to query the data storage location information of the debugging tool (such as dev-observability)
- Query content:
  - Storage path of the log file (observability.log)
  - Storage path of metric data (metrics.prom) (if available)
  - Storage path of application status data (app_status.json) (if available)
  - Storage path of project data (project_data.json) (if available)
  - Storage path of test data (test_metrics.json) (if available)
- Obtain the actual file paths from the query results
#### Step 1.3: Acquire Observation Data

- The agent reads the following data files from the queried storage paths:
  - Log files (from the queried path)
  - Metric data (optional, from the queried path)
  - Application status data (optional, from the queried path)
  - Project data (optional, from the queried path)
  - Test data (optional, from the queried path)
### 2. Data Parsing and Analysis

#### Step 2.1: Parse Log Data

Call `scripts/parse_logs.py` for processing (using the actual path queried from skill-manager):

```bash
python3 scripts/parse_logs.py --log-file <log path queried from skill-manager> --output ./parsed_logs.json
```

- Extract execution path information
- Identify function calls and execution times
- Locate exception throw positions
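As an illustration of what this parsing step produces, here is a minimal sketch that assumes a hypothetical JSON-lines log format. The field names (`timestamp`, `level`, `function`, `duration_ms`, `exception`) are assumptions for illustration only; the actual format is defined by the log format reference and handled by `scripts/parse_logs.py`.

```python
import json

def parse_log_lines(lines):
    """Extract function calls, durations, and exceptions from JSON-lines log text.

    The field names used here (timestamp, level, function, duration_ms,
    exception) are illustrative assumptions, not the Skill's real schema.
    """
    calls, errors = [], []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines rather than abort the whole parse
        if "function" in entry:
            calls.append({
                "function": entry["function"],
                "timestamp": entry.get("timestamp"),
                "duration_ms": entry.get("duration_ms"),
            })
        if entry.get("exception"):
            errors.append({"function": entry.get("function"),
                           "exception": entry["exception"]})
    return calls, errors

sample = [
    '{"timestamp": "2024-01-01T00:00:00", "level": "INFO", "function": "load_data", "duration_ms": 120}',
    '{"timestamp": "2024-01-01T00:00:01", "level": "ERROR", "function": "transform", "exception": "ValueError"}',
]
calls, errors = parse_log_lines(sample)
print(len(calls), len(errors))  # prints: 2 1
```

Skipping malformed lines instead of raising keeps a partially corrupted log usable for analysis; genuinely broken files are reported via Step 5.2 instead.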
#### Step 2.2: Parse Metric Data (if available)

Call `scripts/parse_prometheus.py` for processing (using the actual path queried from skill-manager):

```bash
python3 scripts/parse_prometheus.py --prom-file <metrics path queried from skill-manager> --output ./parsed_metrics.json
```
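For illustration, this is roughly what parsing the standard Prometheus text exposition format involves. The sketch handles only `# TYPE` comments and simple `name{labels} value` samples; histograms, summaries, timestamps, and label values containing spaces are out of scope, and the real work is done by `scripts/parse_prometheus.py`.

```python
def parse_prometheus_text(text):
    """Parse a minimal subset of the Prometheus text exposition format.

    Returns a dict of metric name -> declared type, plus a list of
    (metric name, value) samples. Illustrative sketch only.
    """
    types, samples = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("# TYPE "):
            # "# TYPE <name> <type>" declares the metric's type
            _, _, name, mtype = line.split(None, 3)
            types[name] = mtype
            continue
        if line.startswith("#"):
            continue  # HELP and other comments
        # the value is the last space-separated token; labels stay in the name part
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]
        samples.append((name, float(value)))
    return types, samples

text = """# TYPE http_request_duration_seconds gauge
http_request_duration_seconds{path="/api"} 0.25
http_request_duration_seconds{path="/home"} 0.03
"""
types, samples = parse_prometheus_text(text)
```

A missing `# TYPE` line is exactly the kind of format defect that Step 5.2 requires recording (see the `medium`-level example in Step 5.3).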
#### Step 2.3: Analyze Multi-dimensional Data (if available)

Call the corresponding analysis scripts (using the actual paths queried from skill-manager):

```bash
python3 scripts/analyze_app_status.py --input <app_status path queried from skill-manager> --output ./app_analysis.json
python3 scripts/analyze_project_data.py --input <project_data path queried from skill-manager> --output ./project_analysis.json
python3 scripts/analyze_test_metrics.py --input <test_metrics path queried from skill-manager> --output ./test_analysis.json
```
### 3. Full-process Trace Report Generation

#### Step 3.1: Generate Trace Report

Call `scripts/generate_trace_report.py` to generate the full-process visual trace:

```bash
python3 scripts/generate_trace_report.py \
  --logs ./parsed_logs.json \
  --metrics ./parsed_metrics.json \
  --app-status ./app_analysis.json \
  --project-data ./project_analysis.json \
  --test-metrics ./test_analysis.json \
  --output ./trace_report.md
```
#### Step 3.2: Report Content Structure

- Execution path visualization: function call chain and timeline
- Performance metric analysis: time consumption distribution, bottleneck identification
- Exception tracing: exception stack, trigger path, root cause analysis
- Cross-dimensional correlation: project/application/test status correlation analysis
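The execution-path section can be pictured as a small Markdown renderer over the parsed call records. The real report is produced by `scripts/generate_trace_report.py`; the record keys used here (`function`, `timestamp`, `duration_ms`) are illustrative assumptions about the parsed-log schema.

```python
def render_execution_section(calls):
    """Render the execution-path section of a trace report as a Markdown table.

    `calls` is a list of dicts; the keys are illustrative assumptions,
    not the Skill's guaranteed schema.
    """
    # order by timestamp so the table reads as a timeline
    rows = sorted(calls, key=lambda c: c.get("timestamp") or "")
    lines = [
        "## Execution Path",
        "",
        "| # | Function | Timestamp | Duration (ms) |",
        "|---|----------|-----------|---------------|",
    ]
    for i, call in enumerate(rows, 1):
        lines.append(
            f"| {i} | {call['function']} | {call.get('timestamp', '-')} "
            f"| {call.get('duration_ms', '-')} |"
        )
    return "\n".join(lines)

calls = [
    {"function": "transform", "timestamp": "2024-01-01T00:00:01", "duration_ms": 80},
    {"function": "load_data", "timestamp": "2024-01-01T00:00:00", "duration_ms": 120},
]
print(render_execution_section(calls))
```

Emitting plain Markdown keeps the report diffable and version-controllable, which is why the Notes section mandates Markdown output.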
### 4. Problem Diagnosis and Solution Generation

#### Step 4.1: Agent Analyzes Trace Report

- Identify performance bottlenecks: high-latency functions, frequently called hot paths
- Locate exception root causes: exception propagation paths, precondition analysis
- Evaluate code quality: complexity, duplicate code, potential risks
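Bottleneck identification of this kind amounts to aggregating durations per function and ranking them. A minimal sketch, assuming call records with `function` and `duration_ms` keys (illustrative assumptions about the parsed-log schema; the agent's actual analysis works on the generated trace report):

```python
from collections import defaultdict

def top_bottlenecks(calls, n=3):
    """Return the n functions with the largest total duration.

    Each result is (function name, total ms, mean ms per call). The
    `function` and `duration_ms` keys are illustrative assumptions.
    """
    totals = defaultdict(lambda: {"total_ms": 0.0, "count": 0})
    for call in calls:
        if call.get("duration_ms") is None:
            continue  # ignore records without timing information
        agg = totals[call["function"]]
        agg["total_ms"] += call["duration_ms"]
        agg["count"] += 1
    ranked = sorted(totals.items(), key=lambda kv: kv[1]["total_ms"], reverse=True)
    return [(name, agg["total_ms"], agg["total_ms"] / agg["count"])
            for name, agg in ranked[:n]]

calls = [{"function": "load", "duration_ms": 120},
         {"function": "load", "duration_ms": 80},
         {"function": "render", "duration_ms": 30}]
print(top_bottlenecks(calls))
```

Ranking by total rather than mean duration surfaces frequently called hot paths as well as individually slow functions, matching the two bottleneck types listed above.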
#### Step 4.2: Generate Solutions

- For performance issues: optimization suggestions, caching strategies, concurrent processing solutions
- For exception issues: enhanced exception handling, boundary condition checks, defensive programming
- For architecture issues: module decoupling, dependency optimization, design pattern application

#### Step 4.3: Code Fix Suggestions

- Provide specific code modification examples
- Explain the reasons for each modification and its expected effect
- Give test verification suggestions
### 5. Running Log Recording

#### Step 5.1: Data Integrity Check

- Check whether all data required by this Skill can be obtained from the debugging tool (such as dev-observability):
  - Log files: whether they exist and are readable
  - Metric data: whether it exists (optional)
  - Application status data: whether it exists (optional)
  - Project data: whether it exists (optional)
  - Test data: whether it exists (optional)
- Evaluate data quality:
  - Whether logs contain the necessary information (time, level, message, etc.)
  - Whether the metric data format is correct
  - Whether the JSON data structure is complete and valid
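The checks above can be sketched as a small helper that reports problems per file. The helper is illustrative, not one of the Skill's scripts; file names follow the Step 1.2 list, and actual paths come from the skill-manager query.

```python
import json
import os

def check_file(path, required):
    """Return a list of problem strings for one data file (empty list = OK).

    Covers existence, readability, and (for .json files) basic validity;
    deeper quality checks are left to the parsing scripts.
    """
    problems = []
    if not os.path.isfile(path):
        if required:
            problems.append(f"missing required file: {path}")
        return problems  # a missing optional file is not a problem
    if not os.access(path, os.R_OK):
        problems.append(f"file not readable: {path}")
        return problems
    if path.endswith(".json"):
        try:
            with open(path, encoding="utf-8") as f:
                json.load(f)
        except (json.JSONDecodeError, UnicodeDecodeError):
            problems.append(f"invalid JSON: {path}")
    return problems
```

Any non-empty result feeds directly into the mandatory problem recording of Step 5.2.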
#### Step 5.2: Problem Identification and Recording (Mandatory)

- This Skill relies on output files from other Skills (such as dev-observability) for code analysis
- When any of the following situations is found, you must call the skill-manager Skill to record the problem:
  - Dependent file does not exist: e.g., the observability.log file from dev-observability does not exist
  - Dependent file is unreadable: the file exists but cannot be read (permission issues, file corruption, etc.)
  - Dependent file format does not meet requirements: the file exists but its format does not meet the parsing requirements of this Skill
  - Dependent file content is incomplete: the file exists but lacks required fields or data
  - Dependent file quality does not meet requirements: the file content is insufficient to support effective code analysis
  - Data reading failed: an exception or error occurs when attempting to read a dependent file
  - Data parsing failed: the file is read successfully but errors occur during parsing
  - Any other dependency issue that affects the normal execution of this Skill
- Importance of problem recording:
  - The execution of this Skill depends on the output of other Skills
  - If dependent files do not exist or do not meet requirements, this Skill cannot perform normal analysis
  - Recording these problems helps improve the data output of the dependent Skills
  - It provides a problem-tracking and optimization basis for Skill collaboration
#### Step 5.3: Problem Recording Format (Mandatory)

When calling skill-manager, you must record the problem strictly in the following JSON format:

```json
{
  "level": "critical / high / medium / low",
  "message": "[Problem Phenomenon] [Problem Cause] [Problem Impact]"
}
```

Format description:

- `level` (required): severity of the problem
  - `critical`: severe problem that completely blocks Skill execution (e.g., a critical dependent file is missing)
  - `high`: high-priority problem that severely affects Skill functionality (e.g., a main dependent file has a format error)
  - `medium`: medium-priority problem that affects partial functionality (e.g., an optional dependent file is missing)
  - `low`: low-priority problem with minor impact (e.g., partially incomplete data)
- `message` (required): problem description, which must include three parts
  - [Problem Phenomenon]: specific description of what problem occurred
  - [Problem Cause]: analysis of the cause of the problem
  - [Problem Impact]: explanation of the impact on the execution of this Skill
Format examples:

```json
{
  "level": "critical",
  "message": "[No observability.log file] [The medium solution (Web interface) of the dev-observability Skill does not generate log files] [Cannot perform execution path tracing; this Skill cannot work normally]"
}
```

```json
{
  "level": "medium",
  "message": "[metrics.prom file format is incorrect] [Missing necessary TYPE comment lines] [Cannot parse Prometheus metrics; performance analysis functionality is limited]"
}
```

```json
{
  "level": "high",
  "message": "[observability.log file lacks timestamp field] [Log output configuration is incomplete] [Cannot trace code execution path in chronological order; trace report is incomplete]"
}
```
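A record in this format can be checked mechanically before it is submitted. A minimal validator sketch (illustrative only, not one of the Skill's scripts):

```python
def validate_problem_record(record):
    """Check a problem record against the Step 5.3 format.

    Returns a list of format violations (empty list = record is valid).
    """
    violations = []
    if record.get("level") not in {"critical", "high", "medium", "low"}:
        violations.append("level must be one of critical/high/medium/low")
    message = record.get("message", "")
    # The message must contain the three bracketed parts:
    # [Problem Phenomenon] [Problem Cause] [Problem Impact]
    if message.count("[") < 3 or message.count("]") < 3:
        violations.append("message must contain three bracketed parts")
    return violations

record = {"level": "critical",
          "message": "[No observability.log file] [Log file not generated] [Cannot trace execution]"}
print(validate_problem_record(record))  # prints: []
```

Counting brackets is a deliberately loose check; the agent still judges whether each bracketed part is substantive.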
#### Step 5.4: Value Proposition

- Record data output problems of dependent Skills (such as dev-observability)
- Help improve the data quality and output format of dependent Skills
- Provide a problem-tracking and continuous-optimization mechanism for Skill collaboration
- Ensure that this Skill can perform code analysis tasks normally
#### Step 5.5: Skill Optimization Point Identification and Suggestions (Handled by Agent)

When analyzing identified problems, the agent needs to determine whether they involve Skill optimization points.

Judgment criteria:

- Skill configuration issues: problems related to Skill configuration information (e.g., SKILL.md lacks required fields, the version field is missing, etc.)
- Script issues: problems related to Skill script output (e.g., incorrect log format, missing data fields, etc.)
- Documentation issues: problems related to Skill documentation (e.g., unclear descriptions, incomplete steps, etc.)
- Integration issues: problems related to collaboration between Skills (e.g., incompatible interfaces, inconsistent data formats, etc.)

Examples of optimization point identification:
Example 1: Log Format Issue → Skill Optimization Point

- Problem description: observability.log file lacks timestamp field
- Optimization point judgment: related to the log output format of the dev-observability Skill
- Optimization type: script output optimization
- Recommended action: call the skill-evolution-driver Skill

Example 2: Configuration Missing Issue → Skill Optimization Point

- Problem description: SKILL.md of dev-observability lacks the version field
- Optimization point judgment: related to Skill configuration information
- Optimization type: format improvement
- Recommended action: call the skill-evolution-driver Skill

Example 3: Data Quality Issue → Non-Skill Optimization Point

- Problem description: test coverage data is incomplete
- Optimization point judgment: a user data issue, not related to the Skill itself
- Optimization type: data improvement
- Recommended action: remind the user to supplement the data
Agent response process:

1. Analyze the list of identified problems
   - Traverse all problems recorded in Step 5.2
   - Determine whether each problem involves a Skill optimization point
2. Identify Skill optimization points
   - If a problem involves Skill configuration, script output, documentation or collaboration, mark it as a Skill optimization point
   - Record the involved Skill name and optimization type
3. Ask the user. If there are Skill optimization points, the agent should ask the user:

   > During this code analysis, the following Skill optimization points were identified:
   > 1. Skill: dev-observability
   >    - Optimization type: script output optimization
   >    - Problem: observability.log file lacks timestamp field
   >    - Impact: cannot trace code execution path in chronological order
   >
   > Do you need to call the skill-evolution-driver Skill to handle these optimization points? (y/n)

4. Handle the user's selection
   - If the user selects y (Yes):
     - Call the skill-evolution-driver Skill
     - Pass the list of optimization points (Skill name, optimization type, problem description)
     - Wait for skill-evolution-driver to perform the optimization
   - If the user selects n (No):
     - Skip optimization point processing
     - Continue with subsequent steps (such as generating the trace report)
     - Suggest that the user can manually call skill-evolution-driver later
Notes:

- Optimization point identification is an analytical judgment by the agent, not simple keyword matching
- Combine problem context and Skill knowledge to make the judgment
- If unsure whether something is a Skill optimization point, consult the user
- Optimization point identification does not affect the core functionality of this Skill (code analysis)
## Resource Index

### Required Scripts

- `scripts/parse_logs.py`: parse structured logs; extract execution paths, function calls and exception information
- `scripts/parse_prometheus.py`: parse Prometheus metric data; extract performance indicators
- `scripts/analyze_app_status.py`: analyze application module status and completion rate
- `scripts/analyze_project_data.py`: analyze project iteration progress and task status
- `scripts/analyze_test_metrics.py`: analyze test tracking points and exception situations
- `scripts/generate_trace_report.py`: integrate multi-dimensional data; generate full-process visual trace reports
### Domain References

- Log format specifications and parsing rules (read before parsing logs)
- `references/prometheus_format.md`: Prometheus metric format specifications (read before parsing metrics)
- `references/json_data_format.md`: JSON data format specifications (read before analyzing JSON data)
- `references/trace_analysis_guide.md`: trace analysis guidelines and methodologies (read before generating reports)
### Output Assets

- `assets/trace_templates/execution_trace.md`: execution trace report template
- `assets/trace_templates/performance_metrics.md`: performance metric report template
- `assets/trace_templates/error_analysis.md`: exception analysis report template
## Notes

- Important: before obtaining logs or metric information, you must call the skill-manager Skill to query the data storage location information of the debugging tool (such as dev-observability)
- Very important (Mandatory): during execution, if dependent output files are found to be missing or not meeting requirements, you must call the skill-manager Skill to record the problem in the format specified in Step 5.3
- Mandatory rule: when dependent files are missing, unreadable, have format errors, incomplete content or insufficient quality, you must record the problem; it cannot be skipped
- Read reference documents only when necessary, to keep the context concise
- Prioritize calling scripts for technical data processing (log parsing, metric extraction, report generation)
- Problem analysis and solution generation are completed by the agent, making full use of its reasoning capabilities
- Trace reports use Markdown format for easy visualization and version control
- Support progressive analysis: use only log data, or integrate multi-dimensional data to deepen the analysis
- The paths of all data files must be obtained by querying the debugging tool
- Problem records must strictly follow the JSON format in Step 5.3, including the level and message fields
## Usage Examples

### Example 1: Basic Code Tracing

User scenario: "I want to see how this function executes and why it's so slow"

Execution method: skill-manager query + script + agent analysis + running log recording

Key steps:

```bash
# 1. Call skill-manager to query the log storage path of the debugging tool
#    (completed by the agent calling skill-manager)
# 2. Parse logs (using the queried actual path)
python3 scripts/parse_logs.py --log-file <queried log path> --output ./parsed_logs.json
# 3. Generate trace report
python3 scripts/generate_trace_report.py --logs ./parsed_logs.json --output ./trace_report.md
# 4. Agent analyzes the report and generates solutions
# 5. Data integrity check and problem recording:
#    if problems are found, skill-manager must be called to record them
#    in the format specified in Step 5.3
```
### Example 2: Comprehensive Code Analysis

User scenario: "Help me comprehensively analyze the running status of the code to see if there are any performance issues or errors"

Execution method: skill-manager query + full scripts + agent in-depth analysis + running log recording

Key steps:

```bash
# 1. Call skill-manager to query the storage paths of all data files of the debugging tool
#    (completed by the agent calling skill-manager)
# 2. Parse all data sources (using the queried actual paths)
python3 scripts/parse_logs.py --log-file <queried log path> --output ./parsed_logs.json
python3 scripts/parse_prometheus.py --prom-file <queried metrics path> --output ./parsed_metrics.json
python3 scripts/analyze_app_status.py --input <queried app_status path> --output ./app_analysis.json
python3 scripts/analyze_project_data.py --input <queried project_data path> --output ./project_analysis.json
python3 scripts/analyze_test_metrics.py --input <queried test_metrics path> --output ./test_analysis.json
# 3. Generate comprehensive analysis report
python3 scripts/generate_trace_report.py \
  --logs ./parsed_logs.json \
  --metrics ./parsed_metrics.json \
  --app-status ./app_analysis.json \
  --project-data ./project_analysis.json \
  --test-metrics ./test_analysis.json \
  --output ./trace_report.md
# 4. Agent performs cross-dimensional correlation analysis and generates comprehensive solutions
# 5. Data integrity check and problem recording:
#    check whether all data is complete; if anything is missing or has a format
#    issue, skill-manager must be called to record it in the format of Step 5.3
```
### Example 3: Performance Problem Troubleshooting

User scenario: "The code runs too slowly, help me find where the bottleneck is"

Execution method: skill-manager query + script metric extraction + agent bottleneck analysis + running log recording

Key points:

- Call skill-manager to query the log and metrics storage paths of the debugging tool
- Parse logs to extract function call execution times (using the queried actual path)
- Analyze metric data to identify high-latency operations (using the queried actual path)
- Agent generates performance optimization suggestions (caching, concurrency, algorithm optimization)
- Check data integrity; if data is missing or the format is incorrect, call skill-manager to record the problem
### Example 4: Error Location

User scenario: "The code throws an error halfway through execution and I don't know where the problem is"

Execution method: agent analysis + skill-manager recording (mandatory)

Key points:

- Detected that the log file is missing
- Must call skill-manager to record the problem in the format specified in Step 5.3:

```json
{
  "level": "critical",
  "message": "[No observability.log file] [The medium solution (Web interface) of the dev-observability Skill does not generate log files] [Cannot perform execution path tracing; this Skill cannot work normally]"
}
```

Problem recording explanation:

- `level`: "critical", because the log file is a key dependency of this Skill; its absence completely blocks Skill execution
- `message`: includes the three parts [Problem Phenomenon] [Problem Cause] [Problem Impact], meeting the format requirements of Step 5.3