ln-645-open-source-replacer
Paths: File paths (shared/, references/) are relative to the skills repo root. If not found at CWD, locate this SKILL.md's directory and go up one level (../ln-*) for the repo root.
Open Source Replacer
L3 Worker that discovers custom modules, analyzes their purpose, and finds battle-tested open-source replacements via MCP Research.
Purpose & Scope
- Discover significant custom modules (>=100 LOC, utility/integration type)
- Analyze PURPOSE of each module by reading code (goal-based, not pattern-based)
- Search open-source alternatives via WebSearch, Context7, Ref
- Evaluate alternatives: stars, maintenance, license, CVE status, API compatibility
- Score replacement confidence (HIGH/MEDIUM/LOW)
- Generate migration plan for viable replacements
- Output: file-based report to docs/project/.audit/
Out of Scope (owned by ln-625-dependencies-auditor):
- Pattern-based detection of known reinvented wheels (custom sorting, hand-rolled validation)
- Package vulnerability scanning (CVE/CVSS for existing dependencies)
Out of Scope (owned by ln-511-code-quality-checker):
- Story-level optimality checks via OPT- prefix (ln-511 cross-references ln-645 reports)
Input (from ln-640)
- codebase_root: string # Project root
- tech_stack: object # Language, framework, package manager, existing dependencies
- output_dir: string # e.g., "docs/project/.audit"
Domain-aware (optional, from coordinator)
- domain_mode: "global" | "domain-aware" # Default: "global"
- current_domain: string # e.g., "users", "billing" (only if domain-aware)
- scan_path: string # e.g., "src/users/" (only if domain-aware)
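Putting the two input groups together, a concrete payload might look like the following Python dict. All values are hypothetical examples, not from the source; only the field names come from the spec above.

```python
# Hypothetical ln-645 input payload; field names follow the spec above,
# the concrete values are illustrative only.
worker_input = {
    "codebase_root": "/repo",
    "tech_stack": {
        "language": "typescript",
        "framework": "express",
        "package_manager": "npm",
        "existing_dependencies": ["zod", "axios"],
    },
    "output_dir": "docs/project/.audit",
    # Optional domain-aware fields:
    "domain_mode": "domain-aware",
    "current_domain": "users",
    "scan_path": "src/users/",
}
```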
Workflow
Phase 1: Discovery + Classification
scan_root = scan_path IF domain_mode == "domain-aware" ELSE codebase_root
Step 1: Find significant custom files
candidates = []
FOR EACH file IN Glob("**/*.{ts,js,py,rb,go,java,cs}", root=scan_root):
IF file in node_modules/ OR vendor/ OR .venv/ OR dist/ OR build/ OR test/ OR tests/:
SKIP
line_count = wc -l {file}
IF line_count >= 100:
candidates.append(file)
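Step 1 above can be sketched in Python. The skip list and the 100-LOC threshold come from the spec; `find_candidates` itself is an illustrative helper, not a prescribed implementation.

```python
import os

# Directories and extensions from the Phase 1 spec above.
SKIP_DIRS = {"node_modules", "vendor", ".venv", "dist", "build", "test", "tests"}
EXTS = (".ts", ".js", ".py", ".rb", ".go", ".java", ".cs")

def find_candidates(scan_root, min_loc=100):
    """Return paths of source files with at least min_loc lines."""
    candidates = []
    for dirpath, dirnames, filenames in os.walk(scan_root):
        # Prune skipped directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            if not name.endswith(EXTS):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                line_count = sum(1 for _ in f)
            if line_count >= min_loc:
                candidates.append(path)
    return candidates
```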
Step 2: Filter to utility/library-like modules
utility_paths = ["utils/", "lib/", "helpers/", "common/", "shared/", "pkg/", "internal/"]
name_patterns = ["parser", "formatter", "validator", "converter", "encoder",
"decoder", "serializer", "logger", "cache", "queue", "scheduler",
"mailer", "http", "client", "wrapper", "adapter", "connector",
"transformer", "mapper", "builder", "factory", "handler"]
modules = []
FOR EACH file IN candidates:
is_utility_path = any(p in file.lower() for p in utility_paths)
is_utility_name = any(p in basename(file).lower() for p in name_patterns)
export_count = count_exports(file) # Grep for export/module.exports/def/class
IF is_utility_path OR is_utility_name OR export_count > 5:
modules.append(file)
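The Step 2 filter can be sketched as a small helper. The path and name lists mirror the spec above; `count_exports` is a rough grep-style heuristic over the file's source text, and the function names are ours.

```python
import os
import re

# Lists from the Step 2 spec above.
UTILITY_PATHS = ["utils/", "lib/", "helpers/", "common/", "shared/", "pkg/", "internal/"]
NAME_PATTERNS = ["parser", "formatter", "validator", "converter", "encoder",
                 "decoder", "serializer", "logger", "cache", "queue", "scheduler",
                 "mailer", "http", "client", "wrapper", "adapter", "connector",
                 "transformer", "mapper", "builder", "factory", "handler"]
# Rough equivalent of "Grep for export/module.exports/def/class".
EXPORT_RE = re.compile(r"^\s*(export\b|module\.exports|def |class |func )", re.M)

def count_exports(source: str) -> int:
    return len(EXPORT_RE.findall(source))

def is_utility_module(path: str, source: str) -> bool:
    lower = path.lower()
    name = os.path.basename(lower)
    is_utility_path = any(p in lower for p in UTILITY_PATHS)
    is_utility_name = any(p in name for p in NAME_PATTERNS)
    return is_utility_path or is_utility_name or count_exports(source) > 5
```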
Step 3: Pre-classification gate
FOR EACH module IN modules:
Read first 30 lines to classify
header = Read(module, limit=30)
classify as:
- "utility": generic reusable logic (validation, parsing, formatting, HTTP, caching)
- "integration": wrappers around external services (email, payments, storage)
- "domain-specific": business logic unique to project (scoring, routing, pricing rules)
IF classification == "domain-specific":
no_replacement_found.append({module, reason: "Domain-specific business logic"})
REMOVE from modules
Cap: analyze max 15 utility/integration modules per invocation
modules = modules[:15]
Phase 2: Goal Extraction
FOR EACH module IN modules:
# Read code (first 200 lines + exports summary)
code = Read(module, limit=200)
exports = Grep("export|module\.exports|def |class |func ", module)
# Extract goal: what problem does this module solve?
goal = {
domain: "email validation" | "HTTP retry" | "CSV parsing" | ...,
inputs: [types],
outputs: [types],
key_operations: ["validates email format", "checks MX records", ...],
complexity_indicators: ["regex", "network calls", "state machine", "crypto", ...],
summary: "Custom email validator with MX record checking and disposable domain filtering"
}
Phase 3: Alternative Search (MCP Research)
FOR EACH module WHERE module.goal extracted:
# Strategy 1: WebSearch (primary)
WebSearch("{goal.domain} {tech_stack.language} library package 2026")
WebSearch("{goal.summary} open source alternative {tech_stack.language}")
# Strategy 2: Context7 (for known ecosystems)
IF tech_stack.package_manager == "npm":
WebSearch("{goal.domain} npm package weekly downloads")
IF tech_stack.package_manager == "pip":
WebSearch("{goal.domain} python library pypi")
# Strategy 3: Ref (documentation search)
ref_search_documentation("{goal.domain} {tech_stack.language} recommended library")
# Strategy 4: Ecosystem alignment — check if existing project dependencies
# already cover this goal (e.g., project uses Zod → check zod plugins first)
FOR EACH dep IN tech_stack.existing_dependencies:
IF dep.ecosystem overlaps goal.domain:
WebSearch("{dep.name} {goal.domain} plugin extension")
# Collect candidates (max 5 per module)
alternatives = top_5_by_relevance(search_results)
Phase 4: Evaluation
MANDATORY: Security Gate and License Classification run for EVERY candidate before confidence assignment.
FOR EACH module, FOR EACH alternative:
# 4a. Basic info
info = {
name: "zod" | "email-validator" | ...,
version: "latest stable",
weekly_downloads: N,
github_stars: N,
last_commit: "YYYY-MM-DD",
}
# 4b. Security Gate (mandatory)
WebSearch("{alternative.name} CVE vulnerability security advisory")
IF unpatched HIGH/CRITICAL CVE found:
security_status = "VULNERABLE"
→ Cap confidence at LOW, add warning to Findings
ELIF patched CVE (older version):
security_status = "PATCHED_CVE"
→ Note in report, no confidence cap
ELSE:
security_status = "CLEAN"
# 4c. License Classification
license = detect_license(alternative)
IF license IN ["MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC", "Unlicense"]:
license_class = "PERMISSIVE"
ELIF license IN ["GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-2.1", "LGPL-3.0"]:
IF project_license is copyleft AND compatible:
license_class = "COPYLEFT_COMPATIBLE"
ELSE:
license_class = "COPYLEFT_INCOMPATIBLE"
ELSE:
license_class = "UNKNOWN"
# 4d. Ecosystem Alignment
ecosystem_match = alternative.name IN tech_stack.existing_dependencies
OR alternative.ecosystem == tech_stack.framework
# Prefer: zod plugin over standalone if project uses zod
# 4e. Feature & API Evaluation
api_surface_match = HIGH | MEDIUM | LOW
feature_coverage = percentage # what % of custom module features covered
migration_effort = S | M | L # S=<4h, M=4-16h, L=>16h
# 4f. Confidence Assignment
# HIGH: >10k stars, active (commit <6mo), >90% coverage,
# PERMISSIVE license, CLEAN security, ecosystem_match preferred
# MEDIUM: >1k stars, maintained (commit <1yr), >70% coverage,
# PERMISSIVE license, no unpatched CRITICAL CVEs
# LOW: <1k stars OR unmaintained OR <70% coverage
#      OR COPYLEFT_INCOMPATIBLE OR VULNERABLE
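The 4f rules above can be sketched as a small helper. The thresholds come from the spec's comments; the function shape and parameter names are illustrative, and the VULNERABLE / COPYLEFT_INCOMPATIBLE caps implement the hard gates from 4b and 4c.

```python
# Illustrative mapping of Phase 4 signals to a confidence level.
def assign_confidence(stars, months_since_commit, coverage_pct,
                      license_class, security_status):
    # Hard caps from the Security Gate (4b) and License Classification (4c).
    if security_status == "VULNERABLE" or license_class == "COPYLEFT_INCOMPATIBLE":
        return "LOW"
    if (stars > 10_000 and months_since_commit < 6 and coverage_pct > 90
            and license_class == "PERMISSIVE" and security_status == "CLEAN"):
        return "HIGH"
    if (stars > 1_000 and months_since_commit < 12 and coverage_pct > 70
            and license_class == "PERMISSIVE"):
        return "MEDIUM"
    return "LOW"
```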
Phase 5: Write Report + Migration Plan
MANDATORY READ: Load shared/templates/audit_worker_report_template.md for the file format.
Build the report in memory, then write it to {output_dir}/645-open-source-replacer[-{domain}].md:
Open Source Replacement Audit Report
<!-- AUDIT-META
worker: ln-645
category: Open Source Replacement
domain: {domain_name|global}
scan_path: {scan_path|.}
score: {X.X}
total_issues: {N}
critical: 0
high: {N}
medium: {N}
low: {N}
status: complete
-->
Checks
| ID | Check | Status | Details |
|---|---|---|---|
| module_discovery | Module Discovery | passed/warning | Found N modules >= 100 LOC |
| classification | Pre-Classification | passed | N utility, M integration, K domain-specific (excluded) |
| goal_extraction | Goal Extraction | passed/warning | Extracted goals for N/M modules |
| alternative_search | Alternative Search | passed/warning | Found alternatives for N modules |
| security_gate | Security Gate | passed/warning | N candidates checked, M clean, K vulnerable |
| evaluation | Replacement Evaluation | passed/failed | N HIGH confidence, M MEDIUM |
| migration_plan | Migration Plan | passed/skipped | Generated for N replacements |
Findings
| Severity | Location | Issue | Principle | Recommendation | Effort |
|---|---|---|---|---|---|
| HIGH | src/utils/email-validator.ts (245 LOC) | Custom email validation with MX checking | Reuse / OSS Available | Replace with zod + zod-email (28k stars, MIT, 95% coverage) | M |
Migration Plan
| Priority | Module | Lines | Replacement | Confidence | Effort | Steps |
|---|---|---|---|---|---|---|
| 1 | src/utils/email-validator.ts | 245 | zod + zod-email | HIGH | M | 1. Install 2. Create schema 3. Replace calls 4. Remove module 5. Test |
Phase 6: Return Summary
Report written: docs/project/.audit/645-open-source-replacer[-{domain}].md
Score: X.X/10 | Issues: N (C:0 H:N M:N L:N)
Scoring
Uses the standard penalty formula from shared/references/audit_scoring.md:
penalty = (critical x 2.0) + (high x 1.0) + (medium x 0.5) + (low x 0.2)
score = max(0, 10 - penalty)
Severity mapping:
- HIGH: HIGH confidence replacement for module >200 LOC
- MEDIUM: MEDIUM confidence, or HIGH confidence for 100-200 LOC
- LOW: LOW confidence (partial coverage only)
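The penalty formula above can be checked with a one-function sketch (the function name is ours; the constants and the floor at zero come from the formula as quoted).

```python
# Standard penalty formula, as quoted above from audit_scoring.md.
def audit_score(critical, high, medium, low):
    penalty = critical * 2.0 + high * 1.0 + medium * 0.5 + low * 0.2
    return max(0, 10 - penalty)
```

For example, a report with 0 critical, 2 high, 3 medium, and 1 low finding carries a penalty of 3.7, giving a score of 6.3; heavy enough findings floor the score at 0.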
Critical Rules
- Goal-based, not pattern-based: Read code to understand PURPOSE before searching alternatives
- MCP Research mandatory: Always search via WebSearch/Context7/Ref, never assume packages exist
- Security gate mandatory: WebSearch for CVEs before recommending any package; never recommend packages with unpatched HIGH/CRITICAL CVEs
- License classification mandatory: Permissive (MIT/Apache/BSD) preferred; copyleft only if project-compatible
- Ecosystem alignment: Prefer packages from project's existing dependency tree (e.g., zod plugin over standalone if project uses zod)
- Pre-classification gate: Categorize modules before analysis; exclude domain-specific business logic
- No auto-fix: Report only, never install packages or modify code
- Effort realism: S = <4h, M = 4-16h, L = >16h (migration effort is larger than simple fixes)
- Cap analysis: Max 15 modules per invocation to stay within token budget
- Evidence always: Include file paths + line counts for every finding
Definition of Done
- Custom modules discovered (>= 100 LOC, utility/integration type)
- Pre-classification gate applied: domain-specific modules excluded with documented reason
- Goals extracted for each module (domain, inputs, outputs, operations)
- Open-source alternatives searched via MCP Research (WebSearch, Context7, Ref)
- Security gate passed: all candidates checked for CVEs via WebSearch
- License classified: Permissive/Copyleft/Unknown for each candidate
- Ecosystem alignment checked: existing project dependencies considered
- Confidence scored for each replacement (HIGH/MEDIUM/LOW)
- Migration plan generated for HIGH/MEDIUM confidence replacements
- Report written to {output_dir}/645-open-source-replacer[-{domain}].md
- Summary returned to coordinator
Reference Files
- Worker report template: shared/templates/audit_worker_report_template.md
- Scoring algorithm: shared/references/audit_scoring.md
Version: 1.0.0
Last Updated: 2026-02-26