solo-deploy
/deploy
Deploy the project to its hosting platform. Reads the stack template YAML (`templates/stacks/{stack}.yaml`) for the exact deploy config (platform, CLI tools, infra tier, CI/CD, monitoring), detects installed CLI tools, sets up the database and environment, pushes code, and verifies the deployment is live.

References
- `templates/principles/dev-principles.md` — CI/CD, secrets, DNS, shared infra rules (solo-factory)
- `templates/stacks/*.yaml` — Stack templates with deploy, infra, ci_cd, monitoring fields (solo-factory)

Paths are relative to `solo-factory/`. If not found, try `1-methodology/` (solopreneur KB symlinks).
When to use
After `/build` has completed all tasks (build stage is complete). This is the deployment engine.

Pipeline: `/build` → `/deploy` → `/review`

MCP Tools (use if available)
- `session_search(query)` — find how similar projects were deployed before
- `project_code_search(query, project)` — find deployment patterns across projects
- `codegraph_query(query)` — check project dependencies and stack
If MCP tools are not available, fall back to Glob + Grep + Read.
Pre-flight Checks
1. Verify build is complete
- Check that `.solo/states/build` exists.
- If not found: "Build not complete. Run `/build` first."
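The gate above can be sketched as a small shell guard (a sketch only; the state-file path is the one this doc names):

```bash
# Gate: refuse to deploy until /build has written its state file
build_complete() {
  [ -f ".solo/states/build" ]
}

if build_complete; then
  echo "build state found; proceeding"
else
  echo "Build not complete. Run /build first."
fi
```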
2. Detect available CLI tools
Run in parallel — detect what's installed locally:
```bash
vercel --version 2>/dev/null && echo "VERCEL_CLI=yes" || echo "VERCEL_CLI=no"
wrangler --version 2>/dev/null && echo "WRANGLER_CLI=yes" || echo "WRANGLER_CLI=no"
npx supabase --version 2>/dev/null && echo "SUPABASE_CLI=yes" || echo "SUPABASE_CLI=no"
fly version 2>/dev/null && echo "FLY_CLI=yes" || echo "FLY_CLI=no"
sst version 2>/dev/null && echo "SST_CLI=yes" || echo "SST_CLI=no"
gh --version 2>/dev/null && echo "GH_CLI=yes" || echo "GH_CLI=no"
```
Record which tools are available. Use them directly when found — do NOT use `npx` if the CLI is already installed globally.
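To actually run these probes in parallel, one minimal approach (an illustrative sketch; it uses `command -v` instead of `--version`, which only checks that the binary is on PATH):

```bash
# Probe each CLI concurrently; print NAME=yes/no lines like the checks above
detect() {
  # $1 = binary to look for, $2 = label to report
  if command -v "$1" >/dev/null 2>&1; then echo "$2=yes"; else echo "$2=no"; fi
}

for spec in "vercel VERCEL_CLI" "wrangler WRANGLER_CLI" "fly FLY_CLI" "sst SST_CLI" "gh GH_CLI"; do
  set -- $spec        # split "binary LABEL" into $1 and $2
  detect "$1" "$2" &  # run each probe in the background
done
wait                  # collect all results before moving on
```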
3. Load project context (parallel reads)
- `CLAUDE.md` — stack name, architecture, deploy platform
- `docs/prd.md` — product requirements, deployment notes
- `docs/workflow.md` — CI/CD policy (if exists)
- `package.json` or `pyproject.toml` — dependencies, scripts
- `fly.toml`, `wrangler.toml`, `sst.config.ts` — platform configs (if exist)
- `docs/plan/*/plan.md` — active plan (look for deploy-related phases/tasks)
Plan-driven deploy: If the active plan contains deploy phases or tasks (e.g. "deploy Python backend to VPS", "run deploy.sh", "set up Docker on server"), treat those as primary deploy instructions. The plan knows the project-specific deploy targets that the generic stack YAML may not cover. Execute plan deploy tasks in addition to (or instead of) the standard platform deploy below.
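A quick way to surface plan deploy tasks (a sketch; the keyword list is a guess, extend it as needed):

```bash
# Grep a plan file for deploy-flavored tasks
plan_deploy_tasks() {
  # $1 = plan file path
  grep -niE 'deploy|docker|vps|server' "$1" 2>/dev/null
}

# Typical call: plan_deploy_tasks docs/plan/*/plan.md
```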
4. Read stack template YAML
Extract the stack name from `CLAUDE.md` (look for a `stack:` field or the tech stack section). Read the stack template to get the exact deploy configuration:
Search order (first found wins):
- `templates/stacks/{stack}.yaml` — relative to this skill's repo (solo-factory)
- `1-methodology/stacks/{stack}.yaml` — solopreneur KB symlink
- `.solo/stacks/{stack}.yaml` — user's local overrides (from `/init`)
Extract these fields from the YAML:
- `deploy` — target platform(s): `vercel`, `cloudflare_workers`, `cloudflare_pages`, `fly.io`, `docker`, `app_store`, `play_store`, `local`
- `deploy_cli` — CLI tools and their use cases (e.g. `vercel (local preview, env vars, promote)`)
- `infra` — infrastructure tool and tier (e.g. `sst (sst.config.ts) — Tier 1`)
- `ci_cd` — CI/CD system (e.g. `github_actions`)
- `monitoring` — monitoring/analytics (e.g. `posthog`)
- `database` / `orm` — database and ORM if any (affects the migration step)
- `storage` — storage services if any (R2, D1, KV, etc.)
- `notes` — stack-specific deployment notes
Use the YAML values as the source of truth for all deploy decisions below. The YAML overrides the fallback tier matrix.
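If no YAML tooling is on hand, the top-level scalar fields can be pulled with a sed one-liner (a sketch that assumes simple `key: value` lines; a real YAML parser such as `yq` is safer for nested values):

```bash
# Read a top-level "key: value" scalar from the stack YAML
yaml_field() {
  # $1 = key, $2 = YAML file
  sed -n "s/^$1:[[:space:]]*//p" "$2" | head -1
}

# e.g. PLATFORM=$(yaml_field deploy templates/stacks/{stack}.yaml)
```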
5. Detect platform (fallback if no YAML)
If stack YAML was not found, use this fallback matrix:
| Stack | Platform | Tier |
|---|---|---|
|  | Vercel + Supabase | Tier 1 |
|  | Cloudflare Workers (wrangler) | Tier 1 |
|  | Cloudflare Pages (wrangler) | Tier 1 |
|  | Fly.io (quick) or Pulumi + Hetzner (production) | Tier 2/4 |
|  | skip (CLI tool, no hosting needed) | — |
|  | skip (App Store is manual) | — |
|  | skip (Play Store is manual) | — |
If `$ARGUMENTS` specifies a platform, use that instead of auto-detection or YAML.

Auto-deploy platforms (from the YAML `deploy` field or fallback):
- `vercel` / `cloudflare_pages` — auto-deploy on push. Pushing to GitHub is sufficient if the project is already linked. Only run a manual deploy for initial setup.
- `cloudflare_workers` — `wrangler deploy` needed (no git-based auto-deploy for Workers).
- `fly.io` — `fly deploy` needed.
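The platform-to-command mapping above, as a dispatch sketch (platform names are the YAML values listed in step 4):

```bash
# Map a resolved deploy platform to the command that ships it
deploy_cmd() {
  case "$1" in
    vercel|cloudflare_pages)    echo "git push origin main" ;;  # auto-deploy on push
    cloudflare_workers)         echo "wrangler deploy" ;;
    fly.io)                     echo "fly deploy" ;;
    local|app_store|play_store) echo "skip" ;;
    *)                          echo "unknown" ;;
  esac
}
```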
Deployment Steps
Step 1. Git — Clean State + Push
```bash
git status
git log --oneline -5
```
If dirty, commit the remaining changes:
```bash
git add -A
git commit -m "chore: pre-deploy cleanup"
```
Ensure a remote exists and push:
```bash
git remote -v
git push origin main
```
If there is no remote, create a GitHub repo:
```bash
gh repo create {project-name} --private --source=. --push
```
For platforms with auto-deploy (Vercel, CF Pages): pushing to main triggers deployment automatically. Skip manual deploy commands if the project is already linked.
Step 2. Database Setup
Supabase (if a `supabase/` dir or Supabase deps are detected):

If the supabase CLI is available:
```bash
supabase db push   # apply migrations
supabase gen types --lang=typescript --local > db/types.ts   # optional: regenerate types
```
If no CLI: guide the user to the Supabase dashboard for the migration.

**Drizzle ORM** (if `drizzle.config.ts` exists):
```bash
npx drizzle-kit push       # push schema to database
npx drizzle-kit generate   # generate migration files (if needed)
```
D1 (Cloudflare) (if `wrangler.toml` has D1 bindings):
```bash
wrangler d1 migrations apply {db-name}
```
If the database is not configured yet, list what's needed and continue — don't block on it.
Step 3. Environment Variables
Read `.env.example` or `.env.local.example` to identify the required variables.
Generate platform-specific instructions:

**Vercel:**

If the vercel CLI is available and the project is linked:
```bash
vercel env ls   # show current env vars
```
Guide the user:
```bash
echo "Set env vars: vercel env add VARIABLE_NAME"
echo "Or via dashboard: https://vercel.com/[team]/[project]/settings/environment-variables"
```
**Cloudflare:**
```bash
wrangler secret put VARIABLE_NAME   # interactive prompt for value
```
Or use the `[vars]` section in `wrangler.toml` for non-secret values.

**Fly.io:**
```bash
fly secrets set VARIABLE_NAME=value
fly secrets list
```
Do NOT create or modify `.env` files with real secrets. List what's needed and let the user set the values.
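Listing the required variable names from the example env file can be sketched as:

```bash
# Print the variable names declared in a .env.example-style file
required_vars() {
  # $1 = path to the example env file
  grep -E '^[A-Za-z_][A-Za-z0-9_]*=' "$1" | cut -d= -f1
}

# e.g. required_vars .env.example
```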
Step 4. Platform Deploy
Vercel (if not auto-deploying):
```bash
vercel link     # first time: link to project
vercel          # deploy preview
vercel --prod   # deploy production (after verifying preview)
```
Cloudflare Workers/Pages:
```bash
wrangler deploy               # Workers
wrangler pages deploy ./out   # Pages (check build output dir)
```
Fly.io:
```bash
fly launch   # first time — creates app, sets region
fly deploy   # subsequent deploys
```
SST (if `sst.config.ts` exists):
```bash
sst deploy --stage prod   # production
sst deploy --stage dev    # staging
```
Step 5. Verify Deployment
After deployment, verify it actually works:

1. HTTP status check
```bash
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://{deployment-url})
```
2. Check for runtime errors in the page body
```bash
BODY=$(curl -s https://{deployment-url} | head -200)
```
3. Check Vercel deployment logs for errors
```bash
vercel logs --output=short 2>&1 | tail -30
```
**If the HTTP status is not 200, or the page contains error messages:**
1. Check `vercel env ls` — are all required env vars set on the platform?
2. If env vars are missing: add them with `vercel env add NAME production <<< "value"`
3. If env vars are set but wrong: `vercel env rm NAME production`, then re-add
4. After fixing env vars: redeploy with `vercel --prod --yes`
5. Re-check the HTTP status and page content

**Common runtime errors and fixes:**
- "Supabase URL/Key required" → add `NEXT_PUBLIC_SUPABASE_URL` + `NEXT_PUBLIC_SUPABASE_ANON_KEY` to Vercel
- "DATABASE_URL not set" → add `DATABASE_URL` to Vercel
- "STRIPE_SECRET_KEY missing" → add Stripe keys, or remove the Stripe code if not ready
- Blank page / hydration error → check build logs; may need a `vercel --prod` redeploy

**Do NOT output `<solo:done/>` until the live URL returns HTTP 200 and the page loads without errors.** If you cannot fix the issue, output `<solo:redo/>` to go back to build.
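Fresh deploys sometimes return 404/502 for a short window while DNS and the platform warm up, so polling is more reliable than a single check (a sketch, not one of this doc's required commands):

```bash
# Poll the deployment URL until it returns HTTP 200 or attempts run out
wait_for_200() {
  # $1 = url, $2 = max attempts (default 10)
  attempts="${2:-10}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    status=$(curl -s -o /dev/null -w "%{http_code}" "$1")
    [ "$status" = "200" ] && return 0
    i=$((i + 1))
    sleep 3
  done
  return 1
}
```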
Step 6. Post-Deploy Log Monitoring
After verifying HTTP 200, tail production logs to catch runtime errors that only appear under real conditions (missing env vars, DB connection issues, SSR crashes, API timeouts).
Read the `logs` field from the stack YAML to get platform-specific commands:

Vercel (Next.js):
```bash
vercel logs --output=short 2>&1 | tail -50
```
Look for: `Error`, `FUNCTION_INVOCATION_FAILED`, `EDGE_FUNCTION_INVOCATION_FAILED`, `504 GATEWAY_TIMEOUT`, unhandled rejections.

Cloudflare Workers:
```bash
wrangler tail --format=pretty 2>&1 | head -100
```
Look for: `Error`, uncaught exceptions, D1 query failures, R2 access errors.

Cloudflare Pages (Astro):
```bash
wrangler pages deployment tail --project-name={name} 2>&1 | head -100
```
Fly.io (Python API):
```bash
fly logs --app {name} 2>&1 | tail -50
fly status --app {name}
```
Look for: `ERROR`, `CRITICAL`, unhealthy instances, OOM kills, connection refused.

Supabase Edge Functions (if used):
```bash
supabase functions logs --scroll 2>&1 | tail -30
```
What to do with log errors:
- Env var missing → fix with the platform CLI (see Step 3), redeploy
- DB connection error → check the connection string, IP allowlist
- Runtime crash / unhandled error → output `<solo:redo/>` to go back to build with a fix
- No errors in 30 lines of logs → proceed to the report

If logs show zero traffic (fresh deploy), make a few test requests:
```bash
curl -s https://{deployment-url}/             # homepage
curl -s https://{deployment-url}/api/health   # API health (if it exists)
```
Then re-check the logs for any errors triggered by these requests.
Step 7. Post-Deploy Report
Deployment: {project-name}
Platform: {platform}
URL: {deployment-url}
Branch: main
Commit: {sha}
Done:
- [x] Code pushed to GitHub
- [x] Deployed to {platform}
- [x] Database migrations applied (or N/A)
Manual steps remaining:
- [ ] Set environment variables (listed above)
- [ ] Custom domain (optional)
- [ ] PostHog / analytics setup (optional)
Next: /review — final quality gate
Completion
Signal completion — output this exact tag ONCE and ONLY ONCE; the pipeline detects the first occurrence:

`<solo:done/>`

Do NOT repeat the signal tag anywhere else in the response. One occurrence only.
Error Handling
CLI not found
Cause: Platform CLI not installed.
Fix: Install the specific CLI: `npm i -g vercel`, `npm i -g wrangler`, `brew install flyctl`, `brew install supabase/tap/supabase`.

Deploy fails — build error
Cause: Build works locally but fails on platform (different Node version, missing env vars).
Fix: Check the platform build logs. Ensure `engines` in package.json matches the platform. Set the missing env vars.

Database connection fails
Cause: DATABASE_URL not set or network rules block connection.
Fix: Check connection string, platform's DB dashboard, IP allowlist.
Git push rejected
Cause: Remote has diverged.
Fix: `git pull --rebase origin main`, resolve conflicts, push again.

Verification Gate
Before reporting "deployment successful":
- Run `curl -s -o /dev/null -w "%{http_code}"` against the deployment URL.
- Verify HTTP 200 (not 404, 500, or a redirect loop).
- Check the actual page content matches expectations (not a blank page or error).
- Only then report the deployment as successful.
Never say "deployment should be live" — verify it IS live.
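The three checks above can be combined into one gate function (a sketch; the error-text patterns are illustrative, not exhaustive):

```bash
# Pass only when the URL returns 200 AND the body has no obvious error text
verify_live() {
  # $1 = deployment url
  status=$(curl -s -o /dev/null -w "%{http_code}" "$1")
  [ "$status" = "200" ] || return 1
  body=$(curl -s "$1" | head -200)
  ! printf '%s' "$body" | grep -qiE 'application error|internal server error'
}
```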
Critical Rules
- Use installed CLIs — detect `vercel`, `wrangler`, `supabase`, `fly`, `sst` before falling back to `npx`.
- Auto-deploy aware — if the platform auto-deploys on push, just push. Don't run manual deploy commands unnecessarily.
- NEVER commit secrets — no .env files with real values, no API keys in code.
- Preview before production — deploy a preview first, verify, then promote to prod.
- Check the build locally first — `pnpm build` / `uv build` (or equivalent) before deploying.
- Check production logs — always tail logs after deploy; catch runtime errors before declaring success.
- Report all URLs — deployment URL + platform dashboard links.
- Infrastructure in repo — prefer `sst.config.ts` or `fly.toml` over manual dashboard config (see infra-prd.md).
- Verify before claiming done — HTTP 200 from the live URL + clean logs, not just "deploy command succeeded".