reporting-pipelines


Reporting Pipelines


Overview


Your reporting pattern is consistent across repos: run a CLI or script that emits structured data, then export CSV/JSON/markdown reports with timestamped filenames into `reports/` or `tests/results/`.

GitFlow Analytics Pattern


Basic run:

```bash
gitflow-analytics -c config.yaml --weeks 8 --output ./reports
```

Explicit analyze + CSV:

```bash
gitflow-analytics analyze -c config.yaml --weeks 12 --output ./reports --generate-csv
```

Outputs include CSV and markdown narrative reports with date suffixes.

EDGAR CSV Export Pattern


`edgar/scripts/create_csv_reports.py` reads a JSON results file and emits:

  • executive_compensation_<timestamp>.csv
  • top_25_executives_<timestamp>.csv
  • company_summary_<timestamp>.csv

The script uses pandas for sorting and percentile calculations.
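A minimal sketch of that flow, assuming pandas is available. The column name `total_compensation` and the function name are illustrative assumptions, not the actual script's API:

```python
# Hypothetical sketch of the create_csv_reports.py flow: load a JSON results
# file, sort with pandas, add a percentile-rank column, and write the
# timestamped CSVs. Column names here are illustrative assumptions.
import json
from datetime import datetime

import pandas as pd


def write_reports(results_path, out_dir="reports"):
    with open(results_path) as f:
        records = json.load(f)
    df = pd.DataFrame(records)
    # Sort highest-paid first and compute a percentile rank per row
    df = df.sort_values("total_compensation", ascending=False)
    df["pct_rank"] = df["total_compensation"].rank(pct=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    df.to_csv(f"{out_dir}/executive_compensation_{stamp}.csv", index=False)
    df.head(25).to_csv(f"{out_dir}/top_25_executives_{stamp}.csv", index=False)
```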

Standard Pipeline Steps


  1. Collect base data (CLI or JSON artifacts)
  2. Normalize into rows/records
  3. Export CSV/JSON/markdown with timestamp suffixes
  4. Summarize key metrics on stdout
  5. Store outputs in `reports/` or `tests/results/`
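The five steps can be sketched end to end in a single function. Names and the input schema (`name`/`value` records) are illustrative, not from any actual repo script:

```python
# Illustrative end-to-end pipeline: collect -> normalize -> export ->
# summarize -> store under reports/. Input schema is a hypothetical example.
import csv
import json
from datetime import datetime
from pathlib import Path


def run_pipeline(raw_path, out_dir="reports"):
    # 1. Collect base data from a JSON artifact
    records = json.loads(Path(raw_path).read_text())
    # 2. Normalize into flat rows
    rows = [{"name": r["name"], "value": float(r["value"])} for r in records]
    # 3. Export CSV with a timestamp suffix
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    path = out / f"comprehensive_export_{stamp}.csv"
    with path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "value"])
        writer.writeheader()
        writer.writerows(rows)
    # 4. Summarize key metrics on stdout
    print(f"{len(rows)} rows, total={sum(r['value'] for r in rows):.2f} -> {path}")
    # 5. Output is stored under reports/ (or tests/results/ via out_dir)
    return path
```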

Naming Conventions


  • Use `YYYYMMDD` or `YYYYMMDD_HHMMSS` suffixes
  • Keep one output directory per repo (`reports/` or `tests/results/`)
  • Prefer explicit prefixes (e.g., `narrative_report_`, `comprehensive_export_`)
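These conventions fit in a small helper; the function name is a hypothetical example, not an existing utility:

```python
# Hypothetical helper enforcing the naming conventions above: an explicit
# prefix plus a YYYYMMDD or YYYYMMDD_HHMMSS suffix, in one output directory.
from datetime import datetime
from pathlib import Path


def report_path(prefix, ext="csv", out_dir="reports", with_time=True):
    fmt = "%Y%m%d_%H%M%S" if with_time else "%Y%m%d"
    stamp = datetime.now().strftime(fmt)
    return Path(out_dir) / f"{prefix}{stamp}.{ext}"
```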

Troubleshooting


  • Missing output: ensure output directory exists and is writable.
  • Large CSVs: filter or aggregate before export; keep summary CSVs for quick review.
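One way to guard against the missing-output failure mode is to create the directory up front and fail fast if it is not writable. This is an illustrative check, not an existing repo utility:

```python
# Create the output directory if needed and verify it is writable before
# running the (potentially long) export step.
import os
from pathlib import Path


def ensure_writable(out_dir="reports"):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    if not os.access(out, os.W_OK):
        raise PermissionError(f"output directory {out} is not writable")
    return out
```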

Related Skills

相关技能

  • universal/data/sec-edgar-pipeline
  • toolchains/universal/infrastructure/github-actions