chief-ai-officer-advisor


Chief AI Officer Advisor


Strategic AI leadership for startup CAIOs and founders without one. Four decisions, no AI hype:
  1. Should we use an API, fine-tune, or build our own? — model build-vs-buy with 3-year TCO
  2. Is this AI use case high-risk under regulation, and how do we govern it? — EU AI Act + NIST AI RMF + US state patchwork
  3. When do we switch from API to self-hosted, and at what cost? — token economics with breakeven analysis
  4. What AI role do we hire next? — stage-to-role map (AI engineer ≠ ML engineer ≠ research scientist)
This skill does not cover tactical AI/ML engineering. For RAG implementation, agent design, prompt engineering, eval infrastructure, model deployment, or cost optimization, see engineering/rag-architect/, engineering/agent-designer/, engineering/prompt-governance/, engineering/self-eval/, and engineering/llm-cost-optimizer/.

Keywords


CAIO, chief AI officer, AI strategy, model selection, foundation model, fine-tuning, RLHF, DPO, LoRA, QLoRA, build vs buy, AI build-vs-buy, model risk tier, EU AI Act, AI Act Article 6, Article 9, Article 10, Annex III, prohibited AI, high-risk AI, NIST AI RMF, AI risk management framework, NYC Local Law 144, Colorado SB 21-169, Illinois HB 53, model card, eval set, eval harness, hallucination rate, jailbreak risk, prompt injection, AI red team, AI safety, alignment, model lifecycle, model registry, API-to-self-hosted breakeven, GPU economics, A100, H100, inference cost, fine-tuning cost, AI team, AI engineer, ML engineer, research scientist, MLOps, AI platform

Quick Start



Decision A: API vs fine-tune vs build


```bash
python scripts/model_buildvsbuy_calculator.py                        # embedded customer-support sample
python scripts/model_buildvsbuy_calculator.py path/to/use_case.json
```
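The calculator reads a JSON use-case description. A hypothetical shape covering the dimensions this skill weighs (volume, latency, accuracy, team size, budget) — the field names are illustrative assumptions, not the script's actual schema:

```json
{
  "use_case": "customer-support triage",
  "tokens_per_month": 50000000,
  "latency_budget_ms": 1000,
  "accuracy_target": 0.95,
  "team_size": 4,
  "monthly_budget_usd": 20000
}
```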

Decision B: Risk classification under EU AI Act + US state laws


```bash
python scripts/ai_risk_classifier.py                        # embedded hiring-AI sample
python scripts/ai_risk_classifier.py path/to/use_case.json
```

Decision C: API vs self-hosted economics


```bash
python scripts/ai_cost_economics.py                        # embedded 5M tokens/day sample
python scripts/ai_cost_economics.py path/to/workload.json
```

Key Questions (ask these first)


  • What does this AI need to be good at, and how would you measure it? (If no eval set, no ship.)
  • What's the SLO on hallucination / error rate? (Without one, "AI quality" is a vibe.)
  • What happens when the model is wrong? (Fallback behavior, human-in-the-loop, blast radius.)
  • What's the risk tier under EU AI Act, and is conformity assessment required? (Determines product launch timeline.)
  • At what monthly token volume does self-hosting beat API? (Almost never below 100M tokens/month at frontier quality.)
  • Are we hiring an AI engineer or an ML research scientist? (Different jobs; founders confuse them.)

Core Responsibilities


1. Model Build-vs-Buy


The decision is not "use AI or not" — it's API vs fine-tune vs in-house for each use case. Each path has a different TCO curve, latency profile, and capability ceiling.
Default path: API (frontier model)
  • Use when: well-served by frontier (Claude, GPT, Gemini), QPS < 100, latency budget > 1s, cost < $50K/month
  • Why: frontier APIs are 10-100x more capable than what most teams can fine-tune in-house
  • Failure mode: API rate limits at scale, vendor lock-in, capability drift between model versions
Fine-tune a smaller model
  • Use when: domain-specific behavior the API can't be prompted into (medical coding, legal redlining), high volume reducing API cost, latency budget < 500ms, specific style/format consistency required
  • Approaches: full fine-tune (rare), LoRA/QLoRA (common), RLHF/DPO (when alignment matters)
  • Failure mode: fine-tuned model lags frontier capability within 6-12 months; ongoing retraining cost
Build from scratch / pre-train
  • Use when: almost never. You're a foundation-model company, OR you have a unique data corpus, $50M+ funding, and 18+ month patience.
  • Failure mode: by the time you ship, frontier models have caught up and your sunk cost is unrecoverable
Run model_buildvsbuy_calculator.py for a use-case-specific recommendation with 3-year TCO. See references/model_buildvsbuy_strategy.md for the full decision tree.
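The shape of the 3-year TCO comparison can be sketched as follows. This is a minimal illustration, not model_buildvsbuy_calculator.py itself; every constant (blended API price, training-run cost, serving price, staffing cost) is a placeholder assumption:

```python
# Toy 3-year TCO comparison: API path vs fine-tuned smaller model.
# All numbers below are illustrative assumptions.

def tco_api(tokens_per_month, blended_price_per_m=6.0, months=36):
    """API path: pure per-token spend at a blended input/output price."""
    return tokens_per_month / 1e6 * blended_price_per_m * months

def tco_finetune(tokens_per_month, months=36, train_runs_per_year=2,
                 cost_per_run=25_000.0, serve_price_per_m=1.5,
                 eng_cost_per_year=200_000.0):
    """Fine-tune path: recurring retraining + cheaper serving + people cost."""
    years = months / 12
    training = train_runs_per_year * cost_per_run * years
    serving = tokens_per_month / 1e6 * serve_price_per_m * months
    people = eng_cost_per_year * years
    return training + serving + people

def recommend(tokens_per_month):
    """Return the cheaper path and its rounded 3-year TCO."""
    api, ft = tco_api(tokens_per_month), tco_finetune(tokens_per_month)
    return ("api", round(api)) if api <= ft else ("fine-tune", round(ft))
```

In this toy model the fixed retraining and staffing costs dominate until volume reaches billions of tokens per month, which is consistent with the "default path: API" guidance above.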

2. AI Risk Classification & Governance


The 2026 question every founder is facing: does this AI use case trigger high-risk regulatory obligations?
EU AI Act (in force 2026) tiers:
| Tier | Examples | Obligations |
| --- | --- | --- |
| Prohibited | Social scoring, real-time biometric surveillance, manipulative AI | Cannot deploy in EU |
| High-risk | Employment screening, credit scoring, education access, critical infrastructure, law enforcement, biometric ID | Conformity assessment, registration, post-market monitoring, transparency, human oversight |
| Limited-risk | Chatbots, deepfakes, emotion recognition | Transparency: user must know they're interacting with AI |
| Minimal-risk | Recommendation systems, spam filters, most B2B SaaS internals | No specific obligations |
Run ai_risk_classifier.py to classify a use case and get the required-controls list.
US state patchwork (non-exhaustive):
  • NYC LL 144 — Automated Employment Decision Tools (AEDTs) require an annual bias audit + candidate notice
  • Colorado SB 21-169 (insurance algorithms) and the Colorado AI Act (SB 24-205) — AI in consequential consumer decisions (credit, insurance, employment, housing)
  • Illinois HB 53 — AI in video interviews/hiring
  • California SB 1001 — Bot disclosure
  • Texas CUBI — Biometric identifier capture
  • Federal NIST AI RMF — voluntary; increasingly referenced in contracts
Industry-specific overlays:
  • Healthcare: FDA AI/ML guidance (2023), MDR (EU) for medical-device AI, 510(k) pathway for AI/ML-enabled medical devices
  • Financial: NYDFS Reg 23, FTC Section 5, ECOA for credit decisions
  • Insurance: NAIC model bulletin, state insurance commissioner rules
See references/ai_risk_governance.md for the full regulatory landscape + governance program checklist.
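The tier lookup itself is simple enough to sketch. A minimal illustration keyed on a single use-case label (category names paraphrase the table above; ai_risk_classifier.py surely weighs more attributes than one label):

```python
# EU AI Act tiering as a rules lookup, paraphrased from the table above.

PROHIBITED = {"social_scoring", "realtime_biometric_surveillance", "manipulative_ai"}
HIGH_RISK = {"employment_screening", "credit_scoring", "education_access",
             "critical_infrastructure", "law_enforcement", "biometric_id"}
LIMITED_RISK = {"chatbot", "deepfake", "emotion_recognition"}

def eu_ai_act_tier(use_case):
    if use_case in PROHIBITED:
        return "prohibited"     # cannot deploy in EU
    if use_case in HIGH_RISK:
        return "high-risk"      # conformity assessment, registration, oversight
    if use_case in LIMITED_RISK:
        return "limited-risk"   # transparency: disclose AI interaction
    return "minimal-risk"       # no specific obligations
```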

3. AI Cost Economics


The breakeven question: at what monthly token volume does self-hosted inference beat API costs?
Key components:
  • API cost — variable, per-token. Frontier models 2026: Claude Sonnet 4.6 ~$3/$15 per M tokens (input/output), GPT-4o ~$2.50/$10, Gemini 2.5 ~$1.25/$5
  • Self-hosted cost — fixed (GPU commitment) + variable (electricity). H100 spot ~$2-5/hour, A100 spot ~$1-3/hour. Llama 3.1 70B / Qwen 2.5 72B: ~$0.50-2.00 per million output tokens at 70% utilization
  • Hidden costs of self-hosting — ops on-call, monitoring, model updates, scaling overhead, idle time penalty
  • Hidden costs of API — rate limits requiring multi-vendor failover, vendor lock-in, capability drift between versions, data residency
Typical breakeven (frontier-quality): 100M–500M tokens/month, depending on model size and acceptable quality tradeoff. Below this, API wins. Above this, run the calculator.
Run ai_cost_economics.py with workload characteristics for a breakeven point + sensitivity to GPU rates and model size. See references/ai_cost_economics.md for the full economics model and operational considerations.
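The breakeven arithmetic can be sketched as follows. The API prices echo the section above; the fleet size, ops overhead, and 80/20 input/output split are assumptions, and this is a simplification of what ai_cost_economics.py models:

```python
# Toy API-vs-self-hosted breakeven. Fleet size, ops overhead, and the
# input/output split are illustrative assumptions.

def monthly_api_cost(tokens_in, tokens_out, price_in=3.0, price_out=15.0):
    """USD per month at per-million-token API prices."""
    return tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out

def monthly_selfhosted_cost(gpu_hourly=2.5, gpus=2, ops_overhead=5_000.0):
    """Fixed GPU rental (24h x 30d) plus a flat ops overhead."""
    return gpu_hourly * gpus * 24 * 30 + ops_overhead

def breakeven_tokens_per_month(step=10_000_000):
    """Smallest monthly volume (in `step` increments, 80% input / 20% output)
    at which API spend reaches the self-hosted fixed cost."""
    fixed = monthly_selfhosted_cost()
    tokens = step
    while monthly_api_cost(tokens * 0.8, tokens * 0.2) < fixed:
        tokens += step
    return tokens
```

With these placeholder inputs the crossover lands in the billions of tokens per month; real breakevens depend heavily on how small a model and fleet the quality tolerance allows, which is why the section quotes a range rather than a single number.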

4. AI Team Org Evolution


The wrong question: "Should we hire an ML engineer or a research scientist?" The right question: "What's the next AI capability we need to ship, and what role unblocks that?"
Stage-to-role map:
| Stage | First AI hire | Then | Then |
| --- | --- | --- | --- |
| Pre-PMF | Founder + 1 ML-curious engineer playing with prompts | | |
| Series A | AI engineer (applied, full-stack; owns prompts/evals/deployment) | Second AI engineer for evals/quality | |
| Series B | AI/ML platform engineer (inference, evals, observability) | Third AI engineer for production reliability | Data scientist if model is core IP |
| Series C | Manager of AI | ML research scientist (only if model IS the product) | AI safety / red team (if customer-facing AI) |
| Late-stage | Head of AI → CAIO | Multiple research scientists, platform team, safety/red team | Federated AI leads per business unit |
Critical distinctions:
  • AI engineer ≠ ML engineer ≠ research scientist
    • AI engineer: full-stack + prompts + evals + deployment. Most startups need this, not the others.
    • ML engineer: production deployment, monitoring, retraining infrastructure. Hire after data engineer.
    • Research scientist: model invention, novel architectures. Only at Series C+ if model is core IP.
Centralize-vs-embed for AI: AI starts centralized (one team) and stays centralized longer than the data team does, because the surface area is smaller. Embed only when AI is being deployed in 4+ product surfaces.
See references/ai_team_org_evolution.md.
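The stage-to-role map can be carried around as data. A sketch in which the sequences paraphrase the table above; next_hire is a hypothetical helper, not part of this skill's scripts:

```python
# Stage-to-role hire sequences, paraphrased from the table above.
HIRE_SEQUENCE = {
    "pre-pmf": ["founder + 1 ML-curious engineer playing with prompts"],
    "series-a": ["AI engineer (applied, full-stack; owns prompts/evals/deployment)",
                 "second AI engineer for evals/quality"],
    "series-b": ["AI/ML platform engineer (inference, evals, observability)",
                 "third AI engineer for production reliability",
                 "data scientist (if model is core IP)"],
    "series-c": ["Manager of AI",
                 "ML research scientist (only if model IS the product)",
                 "AI safety / red team (if customer-facing AI)"],
    "late-stage": ["Head of AI -> CAIO",
                   "research scientists, platform team, safety/red team",
                   "federated AI leads per business unit"],
}

def next_hire(stage, hires_made):
    """Next role for a stage, or None once the sequence is exhausted."""
    sequence = HIRE_SEQUENCE[stage]
    return sequence[hires_made] if hires_made < len(sequence) else None
```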

Workflows


Workflow 1: Model Selection Decision (1 hour)


Goal: Decide whether a specific use case should use API, fine-tune, or build.

1. Define use_case.json (volume, latency, accuracy, team size, budget)


```bash
python scripts/model_buildvsbuy_calculator.py use_case.json
```

2. Review 3-year TCO + breakeven


3. Cross-check with cs-cfo-advisor on budget commitment


4. Cross-check with cs-cto-advisor on engineering capacity (esp. for fine-tune)


5. Log via /cs:decide; consider /cs:freeze 60 on multi-year vendor commitment



Workflow 2: AI Risk Classification (2-4 hours)


Goal: Classify a use case under EU AI Act + US state laws, identify required controls.

1. Define use_case.json (decisions affected, users, geography, sector)

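Step 1's JSON might look like the following; the field names are illustrative assumptions, not the classifier's actual schema:

```json
{
  "use_case": "resume screening",
  "decisions_affected": ["hiring"],
  "users": "job candidates",
  "geography": ["EU", "US-NY"],
  "sector": "employment"
}
```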

```bash
python scripts/ai_risk_classifier.py use_case.json
```

2. For HIGH-RISK: budget conformity assessment + registration


3. For LIMITED-RISK: implement transparency requirements


4. Cross-check with cs-general-counsel-advisor on contractual implications


5. Cross-check with cs-ciso-advisor on technical safeguards


6. Log via /cs:decide



Workflow 3: API-to-Self-Hosted Breakeven (1 day)


Goal: Decide when (and whether) to migrate from API to self-hosted inference.

1. Build workload.json (tokens/day, model size, latency, quality tolerance)

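Step 1's workload.json might look like this (field names are illustrative assumptions, not the script's actual schema):

```json
{
  "tokens_per_day": 5000000,
  "model_size": "70B",
  "latency_p95_ms": 800,
  "quality_tolerance": "small-model acceptable"
}
```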

```bash
python scripts/ai_cost_economics.py workload.json
```

2. Run sensitivity scenarios (low/mid/high GPU rates)


3. Estimate migration cost (engineering time + risk)


4. Cross-check with cs-cfo-advisor on capex commitment


5. Cross-check with cs-cto-advisor on platform readiness


6. Log via /cs:decide; pair with /cs:freeze if signing GPU commitment



Workflow 4: AI Team Roadmap (1 week)


Goal: Sequence next 18 months of AI hires aligned to capabilities to ship.
  1. List top 5 AI capabilities the product needs in 12 months
  2. Map each capability to the role that ships it (see ai_team_org_evolution.md)
  3. Sequence hires (one role at a time, ramp before next)
  4. Cross-check with cs-chro-advisor on comp + leveling
  5. Identify the centralize-vs-embed trigger

Output Standards


**Bottom Line:** [one sentence — decision and rationale]
**The Decision:** [one of: model selection | risk classification | economics | next hire]
**The Evidence:** [numbers from the tool, not adjectives]
**How to Act:** [3 concrete next steps]
**Your Decision:** [the call only the founder can make]

Adjacent Skills


  • ../chief-data-officer-advisor/ — Training data rights, data product strategy (chains directly to model decisions)
  • ../cto-advisor/ — Architecture capacity, scaling cliffs (esp. for self-hosted inference)
  • ../ciso-advisor/ — Threat modeling for AI (prompt injection, jailbreak, training data poisoning)
  • ../general-counsel-advisor/ — AI contracts (vendor liability, output ownership, training-data licensing)
  • ../cfo-advisor/ — Build-vs-buy TCO math, multi-year vendor commitments
  • ../chro-advisor/ — AI team hiring + comp
  • ../../../engineering/rag-architect/ — Tactical RAG implementation
  • ../../../engineering/agent-designer/ — Tactical agent architecture
  • ../../../engineering/prompt-governance/ — Tactical prompt management
  • ../../../engineering/self-eval/ — Tactical eval infrastructure
  • ../../../engineering/llm-cost-optimizer/ — Tactical inference cost optimization

References


  • model_buildvsbuy_strategy.md — Full decision tree + 3-year TCO components + when each path fails
  • ai_risk_governance.md — EU AI Act + NIST AI RMF + US state patchwork + industry overlays + governance program
  • ai_cost_economics.md — API pricing 2026 + GPU rental economics + utilization realities + migration cost
  • ai_team_org_evolution.md — Stage-to-role map + role definitions (AI engineer ≠ ML engineer ≠ scientist) + anti-patterns

Version: 1.0.0
Status: Production Ready
Disclaimer: AI regulation is evolving rapidly. This skill surfaces decisions and tradeoffs as of 2026 but cannot replace qualified AI counsel for binding compliance decisions, especially under EU AI Act conformity assessments.
