tooluniverse-epidemiological-analysis
Epidemiological Data Analysis
Complete workflow for observational epidemiology — from research question to publication-ready report. Write and run Python code for every step. Never describe what you "would do" — do it.
Step 1: Formulate the Research Question (PECO Framework)
Define Population, Exposure, Comparator, Outcome before touching data.
- Population: Who? (e.g., adults aged 20-79, cancer patients stage III+, ICU admissions)
- Exposure: What factor? (e.g., nutrient intake, drug treatment, gene mutation, environmental pollutant)
- Comparator: Vs. what? (e.g., lowest tertile, unexposed, wild-type, placebo)
- Outcome: What health event? (e.g., disease incidence, survival time, biomarker level, mortality)
Study design check: Does the question require temporality?
- Cross-sectional: prevalence, associations at one time point
- Longitudinal/cohort: incidence, causal inference, temporal relationships
- Case-control: rare outcomes, odds ratios (nested within cohort)
- Clinical trial: intervention effects with randomized controls
If the question implies causation ("does X cause Y?") but only cross-sectional data is available, state the limitation explicitly and proceed with association language.
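Before moving on, the PECO elements can be pinned down in code so later steps can reference them; a minimal sketch with illustrative values:

```python
peco = {
    "population": "adults aged 20-79 with complete exposure and outcome data",
    "exposure": "dietary magnesium intake (mg/day), highest vs lowest tertile",
    "comparator": "lowest tertile of intake",
    "outcome": "prevalent type 2 diabetes (binary)",
    "design": "cross-sectional",  # drives the language used: association, not causation
}
assert all(peco.values()), "every PECO element must be specified before analysis"
print("\n".join(f"{k}: {v}" for k, v in peco.items()))
```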
Step 2: Find and Evaluate Data
Use ToolUniverse to discover datasets and find what prior studies used:
```python
# Search for relevant datasets — use find_tools to discover what's available
find_tools("dataset search")
find_tools("your domain keywords")  # e.g., "cancer genomics", "clinical trial", "survey health"

# Search literature for study precedents — papers cite their data sources
execute_tool("PubMed_search_articles", {"query": "[exposure] [outcome] [study design]", "max_results": 5})
execute_tool("EuropePMC_search_articles", {"query": "[exposure] [outcome] cohort", "limit": 5})
```
**Evaluate dataset fitness**: Does it have the exposure variable? The outcome? Key confounders (age, sex, plus domain-specific)? Adequate sample size?
**Power analysis** (run before committing to a dataset):
```python
from scipy.stats import norm
import numpy as np

def sample_size_logistic(p0, OR, alpha=0.05, power=0.80):
    """Minimum N for logistic regression detecting OR at given power."""
    p1 = (p0 * OR) / (1 - p0 + p0 * OR)
    z_a, z_b = norm.ppf(1 - alpha/2), norm.ppf(power)
    n = ((z_a + z_b)**2 * (1/(p0*(1-p0)) + 1/(p1*(1-p1)))) / (np.log(OR))**2
    return int(np.ceil(n))

print(f"Need N={sample_size_logistic(0.10, 1.5)} for OR=1.5 with 10% baseline prevalence")
```
Step 3: Download and Prepare Data
Download data programmatically. Adapt the loading code to your data source's format.
```python
import pandas as pd
import requests
import io

# Generic download helper — adapt URL and format to your source
def download_and_parse(url, fmt="csv"):
    r = requests.get(url, timeout=120)
    r.raise_for_status()  # fail fast on HTTP errors
    content = io.BytesIO(r.content)
    if fmt == "xpt":
        return pd.read_sas(content, format="xport")
    elif fmt == "csv":
        return pd.read_csv(content)
    elif fmt == "tsv":
        return pd.read_csv(content, sep="\t")
    elif fmt == "stata":
        return pd.read_stata(content)
    elif fmt == "json":
        return pd.read_json(content)
    else:
        return pd.read_csv(content)  # default fallback

# Load and merge multiple files on shared ID column
df1 = download_and_parse(url1, fmt="xpt")
df2 = download_and_parse(url2, fmt="xpt")
df = df1.merge(df2, on="id_col", how="inner")

# Filter population (inclusion/exclusion criteria)
df = df[(df['age'] >= 20) & (df['age'] < 80)]

# Handle missing data
missing_pct = df.isnull().mean() * 100
print("Missing % per variable:\n", missing_pct[missing_pct > 0].sort_values(ascending=False))

# Decision: complete case if <5% missing; multiple imputation if 5-20%; drop variable if >20%

# Variable coding (adapt to your data)
df['age_group'] = pd.cut(df['age'], bins=[20,40,60,80], labels=['20-39','40-59','60-79'])
df['outcome_binary'] = (df['outcome_continuous'] >= threshold).astype(int)
```
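The missing-data decision rule above can be sketched as a small helper; thresholds follow the comment in the block, and the demo columns are illustrative:

```python
import numpy as np
import pandas as pd

def missingness_plan(df, max_impute=0.20, max_complete_case=0.05):
    """Classify each variable by its missing fraction into a handling strategy."""
    plan = {}
    for col, frac in df.isnull().mean().items():
        if frac < max_complete_case:
            plan[col] = "complete-case"
        elif frac <= max_impute:
            plan[col] = "multiple-imputation"
        else:
            plan[col] = "drop"
    return plan

demo = pd.DataFrame({
    "age": range(10),                          # 0% missing
    "bmi": [np.nan] + list(range(9)),          # 10% missing
    "biomarker": [np.nan] * 3 + list(range(7)),  # 30% missing
})
print(missingness_plan(demo))
```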
**Survey weights**: Some surveys (NHANES, BRFSS, MEPS) require sampling weights for valid inference. Check the survey documentation. For weighted regression, use `statsmodels.stats.weightstats` or linearmodels.
**REST API data**: For sources like GDC (TCGA), ClinicalTrials.gov, or OpenTargets, paginate through the API:
```python
all_records = []
offset = 0
while True:
    resp = requests.get(f"{api_url}?offset={offset}&limit=500", timeout=30)
    batch = resp.json().get("data", [])
    if not batch:
        break
    all_records.extend(batch)
    offset += len(batch)

df = pd.DataFrame(all_records)
```
Step 4: Descriptive Statistics (Table 1)
```python
# Table 1: mean +/- SD for continuous, N(%) for categorical, by exposure group
continuous_vars = ['age', 'bmi']  # adapt to your variables
for var in continuous_vars:
    print(df.groupby('exposure_group')[var].agg(['mean', 'std', 'count']))

categorical_vars = ['sex', 'race']  # adapt to your variables
for var in categorical_vars:
    print(pd.crosstab(df['exposure_group'], df[var], normalize='index') * 100)
```
Check distributions: `df[var].skew()`, `scipy.stats.shapiro()`, histograms for outliers.
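Between-group p-values are often added to Table 1; a sketch assuming a binary exposure group, with illustrative column names and simulated data:

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "exposure_group": rng.integers(0, 2, 300),
    "age": rng.normal(50, 10, 300),
    "sex": rng.choice(["F", "M"], 300),
})

# Continuous: Welch's two-sample t-test between exposure groups
g0 = df.loc[df["exposure_group"] == 0, "age"]
g1 = df.loc[df["exposure_group"] == 1, "age"]
t, p_cont = stats.ttest_ind(g0, g1, equal_var=False)

# Categorical: chi-square test on the contingency table
chi2, p_cat, dof, _ = stats.chi2_contingency(pd.crosstab(df["exposure_group"], df["sex"]))
print(f"age: p={p_cont:.3f}; sex: p={p_cat:.3f}")
```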
Step 5: Regression Analysis
Sequential adjustment strategy (build evidence for confounding):
```python
import statsmodels.formula.api as smf
import numpy as np

# Model 1: Unadjusted
m1 = smf.logit('outcome ~ exposure', data=df).fit(disp=0)

# Model 2: + demographics
m2 = smf.logit('outcome ~ exposure + age + sex + race', data=df).fit(disp=0)

# Model 3: + clinical factors
m3 = smf.logit('outcome ~ exposure + age + sex + race + bmi + smoking + alcohol', data=df).fit(disp=0)

# Report ORs with 95% CI
for name, model in [('Unadjusted', m1), ('Demographics', m2), ('Fully adjusted', m3)]:
    or_val = np.exp(model.params['exposure'])
    ci = np.exp(model.conf_int().loc['exposure'])
    print(f"{name}: OR={or_val:.2f} (95% CI: {ci[0]:.2f}-{ci[1]:.2f}), p={model.pvalues['exposure']:.4f}")
```
**Model selection by outcome type**:
- Continuous outcome: `smf.ols()`
- Binary outcome: `smf.logit()`
- Ordered categories: `OrderedModel` from statsmodels
- Time-to-event: `CoxPHFitter` from lifelines
- Count data: `smf.poisson()` or `smf.negativebinomial()`
**Assumption checks**:
```python
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Multicollinearity (VIF > 5 is concerning, > 10 is severe)
X = df[['age', 'bmi', 'exposure']].dropna()
for i, col in enumerate(X.columns):
    print(f"VIF {col}: {variance_inflation_factor(X.values, i):.1f}")
```
Step 6: Sensitivity Analyses
```python
# Stratified analysis (effect modification)
for stratum_var in ['sex', 'age_group', 'race']:
    print(f"\n--- Stratified by {stratum_var} ---")
    for level, sub in df.groupby(stratum_var):
        if len(sub) < 50:
            continue
        try:
            m = smf.logit('outcome ~ exposure + age + bmi', data=sub).fit(disp=0)
            or_val = np.exp(m.params['exposure'])
            ci = np.exp(m.conf_int().loc['exposure'])
            print(f"  {level}: OR={or_val:.2f} ({ci[0]:.2f}-{ci[1]:.2f}), p={m.pvalues['exposure']:.3f}, N={len(sub)}")
        except Exception:
            print(f"  {level}: model failed (N={len(sub)})")

# Exclude outliers (+/- 3 SD) and re-run
from scipy import stats
df_no_outliers = df[np.abs(stats.zscore(df['exposure'], nan_policy='omit')) < 3]
m_robust = smf.logit('outcome ~ exposure + age + sex + bmi', data=df_no_outliers).fit(disp=0)

# Confounder-adjusted exposure (residual method, e.g., energy-adjusted nutrient intake)
# Use when exposure correlates strongly with a confounder (total calories, body size, etc.)
adj_model = smf.ols('exposure ~ confounder', data=df).fit()
df['exposure_adj'] = adj_model.resid

# Multiple comparisons note
n_tests = 5 # number of exposure-outcome pairs tested
bonferroni_threshold = 0.05 / n_tests
```
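Bonferroni is conservative; `statsmodels` also offers false-discovery-rate control. A sketch with illustrative p-values:

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.034, 0.21, 0.48]  # illustrative p-values from 5 exposure-outcome tests
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")  # Benjamini-Hochberg
for p, pa, r in zip(pvals, p_adj, reject):
    print(f"p={p:.3f} -> adjusted={pa:.3f} ({'significant' if r else 'not significant'})")
```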
Step 7: Biological Interpretation (ToolUniverse Advantage)
This is where ToolUniverse adds value beyond any statistics package. After finding a statistical association, investigate the biological plausibility. Use `find_tools` to discover the right tools for your domain.
```python
# 1. Literature evidence — search for mechanism connecting exposure to outcome
execute_tool("PubMed_search_articles", {"query": "[exposure] [outcome] mechanism", "max_results": 5})
execute_tool("EuropePMC_search_articles", {"query": "[exposure] [outcome] mouse model in vivo", "limit": 5})
execute_tool("PubMed_search_articles", {"query": "[exposure] [outcome] mechanism", "max_results": 5})
execute_tool("EuropePMC_search_articles", {"query": "[exposure] [outcome] mouse model in vivo", "limit": 5})
2. Pathway/molecular context — discover tools for your exposure type
2. Pathway/molecular context — discover tools for your exposure type
find_tools("[exposure type] pathway") # e.g., "nutrient pathway", "drug target", "chemical toxicology"
find_tools("[outcome type] gene disease") # e.g., "diabetes gene", "cancer survival", "cardiac risk"
find_tools("[exposure type] pathway") # e.g., "nutrient pathway", "drug target", "chemical toxicology"
find_tools("[outcome type] gene disease") # e.g., "diabetes gene", "cancer survival", "cardiac risk"
3. Gene-disease evidence (if a gene/variant is involved)
3. Gene-disease evidence (if a gene/variant is involved)
find_tools("gene disease association")
find_tools("variant functional annotation")
find_tools("gene disease association")
find_tools("variant functional annotation")
4. Drug/chemical mechanisms (if a drug or chemical is the exposure)
4. Drug/chemical mechanisms (if a drug or chemical is the exposure)
find_tools("drug mechanism target")
find_tools("chemical gene interaction")
```
**The pattern**: Exposure X → (what molecular pathway?) → (what biological process?) → Outcome Y. Use ToolUniverse tools to fill in the middle steps. This converts a statistical association into a biologically plausible hypothesis.
Step 8: Visualization
Key plots to produce (use matplotlib):
- Forest plot: stratified ORs with 95% CI, vertical reference line at OR=1, log scale x-axis
- Dose-response curve: exposure quartile medians on x-axis vs outcome prevalence/mean on y-axis
- DAG: directed acyclic graph showing assumed causal structure (exposure, confounders, outcome)
- Scatter + regression line: for continuous outcomes, with `sns.regplot()`
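The forest plot can be sketched directly in matplotlib; the ORs and CIs below are illustrative placeholders:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, render to file
import matplotlib.pyplot as plt
import numpy as np

labels = ["Overall", "Male", "Female", "Age 20-39", "Age 40-59", "Age 60-79"]
ors = np.array([1.45, 1.60, 1.32, 1.20, 1.50, 1.70])
lo = np.array([1.10, 1.05, 0.95, 0.80, 1.05, 1.15])
hi = np.array([1.91, 2.44, 1.83, 1.80, 2.14, 2.51])

y = np.arange(len(labels))[::-1]  # first stratum at the top
fig, ax = plt.subplots(figsize=(6, 4))
ax.errorbar(ors, y, xerr=[ors - lo, hi - ors], fmt="s", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")   # reference line at OR=1
ax.set_xscale("log")                            # log-scale x-axis
ax.set_yticks(y)
ax.set_yticklabels(labels)
ax.set_xlabel("Odds ratio (95% CI)")
fig.tight_layout()
fig.savefig("forest_plot.png", dpi=150)
```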
Step 9: Report Structure
Write the final report in this order:
- Background: What is known, what gap this analysis addresses (cite PubMed searches from Step 7)
- Methods: PECO, data source, inclusion/exclusion, variable definitions, statistical approach, sensitivity analyses
- Results: Table 1, unadjusted and adjusted ORs/HRs, stratified results, dose-response, sensitivity checks
- Discussion: Compare to prior literature (PubMed), biological plausibility (ToolUniverse pathway/mechanism findings), clinical significance of effect size
- Limitations: Study design constraints, unmeasured confounding, selection bias, measurement error, generalizability
Key limitations to always state:
- Cross-sectional design cannot establish temporality (if applicable)
- Residual confounding from unmeasured variables
- Self-reported exposure data may have recall bias
- Survey weights may not fully account for non-response bias
Completeness Checklist
Before finalizing any epidemiological analysis:
- PECO defined and documented
- Study design matches the research question
- Sample size adequate (power analysis done)
- Missing data reported and handled
- Table 1 produced
- Unadjusted AND adjusted models reported
- Confounders justified (not just statistically selected)
- Assumptions checked (VIF, linearity, model fit)
- At least one sensitivity analysis performed
- Biological plausibility investigated via ToolUniverse
- Effect size interpreted in clinical context (not just p-value)
- Limitations explicitly stated