/cheat-init — First Onboarding
Enable users to go from zero to running their first prediction in ≤ 5 minutes (for users without posting history) or ≤ 10 minutes (for users with existing posts who need to import history).
Overview
[User says "Initialize" for the first time]
↓
[Phase 0: Detect current state]
↓
[Phase 1: First screen copy - Applicability + Expectation management]
↓
[Phase 2: 6 questions (Q1-Q5 are all asked; Q2 determines whether to proceed with user-history import)]
↓
[Phase 2.5: Benchmark account - Highly recommended (must ask for cold-start, optional for users with existing posts)]
↓
[Phase 3: Create scaffolding (includes empty directories scripts/ + predictions/ + videos/ + samples/ + template files including benchmark.md)]
↓
[Phase 3.5: User-history import process (only if Q2=has posted history + user agrees)]
↓
[Phase 4: Test if hooks are effective]
↓
[Phase 5: Provide a list of "what to say next"]
Constants
- DEFAULT_RETRO_WINDOW_DAYS = 3
- INSTALL_HOOKS = ask — Ask by default; the user chooses either to install directly or to skip installation
- TREND_DEFAULT_SOURCES = ["manual-paste"]
Inputs
None. All information is collected from 6 conversational questions.
Workflow
Phase 0: Detect current state
- Read the user's current working directory (the user's content project, not cheat-on-content itself)
- Check if .cheat-state.json already exists:
- Exists → Prompt: "The project seems to have been initialized (state file exists). Reinitializing will overwrite existing configurations — confirm?" Proceed only after the user explicitly confirms
- Does not exist → Proceed to Phase 1
- Check whether core files such as rubric_notes.md / script_patterns.md exist — if they exist but the state file does not → this is a "semi-initialized" state; prompt the user: "Do you want to infer the state from existing files, or reset?"
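The three-way detection above can be sketched as follows (a minimal sketch; the exact "core files" checked — rubric_notes.md / script_patterns.md — are an assumption based on the Phase 3 scaffolding):

```python
from pathlib import Path

STATE_FILE = ".cheat-state.json"
# Assumed "core files" — the Phase 3 scaffolding; adjust to the real list
CORE_FILES = ["rubric_notes.md", "script_patterns.md"]

def detect_state(project_dir: str) -> str:
    """Return 'initialized', 'semi-initialized', or 'fresh'."""
    root = Path(project_dir)
    if (root / STATE_FILE).exists():
        return "initialized"       # ask before overwriting
    if any((root / f).exists() for f in CORE_FILES):
        return "semi-initialized"  # ask: infer from files, or reset?
    return "fresh"                 # proceed to Phase 1
```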
Phase 1: First screen explicit expectation setting (including applicability verification)
Output to the user (word-for-word, no softening):
🎯 Cheat on Content / Influencer Cheat — Initialization
Creating content is essentially about cheating — whoever sees through the rules first gets the traffic.
This tool helps you figure out **the rules of your account**.
In the next 5-10 minutes, I will ask you 5-6 questions to understand what you do, what you have, and how to use this tool.
Two things to clarify upfront:
1. **Early predictions will be inaccurate** — the accuracy of the first 5 pieces will be around ±50%; this is a mathematical fact.
The tool uses 🔴🟠🟡🟢🔵 to mark confidence levels, no hidden numbers —
You decide for yourself whether to trust the prediction this time.
2. **Importing benchmark accounts is highly recommended** — with 5-10 benchmark videos, the tool will immediately have an anchor.
Otherwise, the first batch of predictions is basically like astrology. I will ask again in Q5 later.
Are you ready to start?
If the user answers "Continue" or similar positive responses → Phase 2.
No longer reject continuation based on content_form — all forms are allowed; only mark the rubric_form_mismatch
field as true, and cheat-status will continuously prompt the user: "Your content form may require bump weight adjustment".
Phase 2: 6 questions (one question at a time, do not ask in batches)
Q1: Content form
"Which of the following does your content resemble more?
a) Opinion video (commentary / current affairs commentary / argumentation / topic discussion / personal opinion) — directly matches the built-in rubric
b) Long essay (WeChat Official Account / Substack / Medium) — can start with the opinion video rubric, adjust weights during bump
c) Short text / thread (X / Weibo / Jike) — same as above
d) Podcast / long-form video (YouTube long video / podcast) — same as above
e) Tutorial / tool teaching / Builder (teaching others how to use X tool / how to do Y project) — same as above
f) Other (gaming / food / makeup tutorial / news / drama) — workflow is universal, but rubric dimensions need adjustment
(The ER/SR/HP set may not be predictive for your form, you need to break down suitable dimensions yourself)
g) Mixed"
Q1 → enum mapping (must store enum values, not letters):
| User's answer | Value written to content_form |
|---|---|
| a | "opinion-video" |
| b | |
| c | |
| d | |
| e | |
| f | |
| g | |
- Select a → rubric_form_mismatch: false
- Select b/c/d/e/f/g → rubric_form_mismatch: true; cheat-status will continuously prompt "Your content form may require bump weight adjustment"
- There is no longer a "severe mismatch" category — all forms can run the workflow; some rubrics just require a more aggressive bump
Q1.5: Typical duration (only asked when Q1=a/d/f)
"What's the typical duration of your videos?
a) 30 seconds - 1 minute b) 1-3 minutes c) 3-5 minutes (recommended for getting started)
d) 5-10 minutes e) Over 10 minutes"
Record to typical_duration_seconds (30 / 90 / 240 / 450 / 900).
Q1.6: Publishing frequency
"How often do you plan to publish?
a) Daily b) Every other day c) Weekly d) Flexible / irregular (disable buffer monitoring)"
Record to target_publish_cadence_days (1 / 2 / 7 / null).
Q2: Have you posted videos on this channel before?
"a) No — I will help you brainstorm 5 candidates based on interests + hot topics + write 5 drafts
b) Yes — whether it's 1 or 100 videos, I will help you fetch historical data to make subsequent brainstorming more aligned with what you've done"
If a is selected → write calibration_samples: 0 to state, skip Phase 3.5, proceed directly to Phase 4.
If b is selected → proceed to Q2.1.
Q2.1: Platform + Fetch plan (only when Q2=b)
"Which platform is your content mainly on?
a) Douyin — install douyin-session adapter (Playwright + scan QR code to log in to Douyin Creator Center)
b) YouTube — install youtube-data-api adapter (requires API key)
c) Bilibili — bilibili-stat adapter
d) Other / multiple platforms — use manual paste mode"
If select a/b/c → ask Q2.2; if select d → skip to Q2.3 manual.
Q2.2: Adapter installation timing (only when Q2.1=a/b/c)
"Install the adapter now for automatic fetching, or tell me manually first?
- Install now — guide you to install Playwright + scan QR code → fetch the latest N pieces of data
- Install later — use manual mode first, mark state as 'pending_adapter_setup',
cheat-status will continuously prompt to install"
If select "Install now" → proceed with adapter installation guidance (see each adapter README) → verify fetch is available → Q2.3.
If select "Later" → skip to Q2.3 manual.
Q2.3: Fetch scope / historical scale
If adapter is installed and verified to be available:
"How many of your latest videos can I fetch as the basis?
(Recommended 10-25 videos; the more samples, the more accurate the baseline. Up to the actual number of videos on your account)"
→ User provides number N, fetch N videos in Phase 3.5
If in manual mode:
"Approximately how many videos have you posted? Just give a range (e.g., '5-10 videos' / '20+ videos') —
this is only used as the estimated value of calibration_samples; it doesn't need to be accurate."
→ User provides an estimated value, skip fetching in Phase 3.5, write the estimated value to calibration_samples
Q3: Data collection method
"How to retrieve data for T+3 day retrospective?
a) Manual paste — backup option. You must paste the top 20+ comments (with like counts), not just view counts.
Comments are the real signal — meme explosions like 'she's different' can only be seen in the comments;
view counts will never tell you what content truly resonates with the audience.
b) [Recommended default] Adapter automatic fetch — get all comments + data.
If you haven't installed the adapter yet, that's okay — mark state as 'pending_adapter_setup'
and install it before the first publish (cheat-status will continuously remind you to install).
Installation guide is in adapters/perf-data/<platform>/README.md."
| User's answer | Value written to data_collection |
|---|---|
| a | "manual" |
| b (default) | "adapter" |
Default recommendation is b — unless the user explicitly says "a, I want manual".
Q4: Candidate topics
"Do you have a list of candidate topics now? (e.g., maintained in external markdown / Notion)
a) No (default) — I will help you brainstorm later, or use /cheat-trends to fetch topics daily
b) Yes, markdown list
c) Yes, Notion / other"
| User's answer | Value written to pool_status |
|---|---|
| a (default) | "none" |
| b | "markdown" |
| c | "notion" |
Q5: Install a few hooks (installed by default — no decision needed from you)
"Q5: I will install a few hooks for you; reply 'yes' or press Enter to install:
- Prediction lock — After we complete a prediction together, the file is locked. Neither you nor I can modify the prediction section.
Retrospectives can only be appended to the lower half of the same file, without polluting the judgment in the upper half.
(Without this lock, it's almost inevitable that you or I will want to 'fix the prediction after seeing the data' later — we all make this mistake)
- SessionStart automatic report — Displays buffer / pending retrospectives / top candidates at the top of each new session
- Silent usage log — Asynchronously records usage frequency; non-blocking, for future diagnosis
All three are installed together. You can choose not to install (reply 'no'), but you will lose the prediction lock and the calibration value will decrease.
Reply yes / no."
| User's answer | Value written to hooks_installed |
|---|---|
| yes / enter / default | true (JSON bool, not a string) |
| no | false |
Default is yes — unless the user explicitly says no.
Phase 2.5: Benchmark account (Ask all users, highly recommended for cold-start)
The tool's most important early signal source is benchmark accounts — if you finish init with no data, the equal-weight v0 rubric is like astrology.
But if you can find an account you want to emulate and import 5-10 of its high / medium / low performance samples, the tool will have an anchor.
Ask:
🎯 Benchmark Account
Can you find a benchmark account? At least 3 videos from that account are needed.
- You **have no posting history at all** (Q2=a) → **Highly recommended** — the rubric has no anchor and relies entirely on benchmarks.
If you can't find one, the universal v0 equal-weight rubric is used, and the first 5 pieces will stay inaccurate for longer
- You **have posting history** (Q2=b) → **Optional** — you can also calibrate on your own history;
but importing at least 1 benchmark is recommended as a sanity check (to see whether your account truly deviates from the benchmark direction)
a) Find now → Immediately enter /cheat-learn-from (5-15 minutes, depending on how prepared your materials are)
b) Find later → Mark state as `benchmark_status: pending`, cheat-status will continuously remind you
c) Skip benchmarks → Mark state as `benchmark_status: none`, start with the universal v0
Reply a / b / c.
Actions:
- Select a → After scaffolding is created in Phase 3, automatically dispatch to /cheat-learn-from (don't make the user run it manually — it's already part of the init process). Return to init Phase 4 after completion
- Select b → Mark state as benchmark_status: pending
- Select c → Mark state as benchmark_status: none
Record to benchmark_name / benchmark_sample_count (if a is selected, cheat-learn-from writes them).
Phase 3: Create scaffolding (explain each item)
Create in order and explain the purpose of each item:
- .cheat-state.json
"Creating .cheat-state.json — the place where all sub-skills share context.
All answers collected during this init will be written here."
Write (all placeholders must be replaced with concrete enum values via the Q mapping tables above; never store letters directly):
```json
{
  "schema_version": "1.1",
  "skill_version": "1.0.0",
  "rubric_version": "v0",
  "content_form": "<Check Q1 mapping table, write enum string like \"opinion-video\">",
  "typical_duration_seconds": <Derived from Q1.5: 30/90/240/450/900>,
  "target_publish_cadence_days": <Derived from Q1.6: 1/2/7/null>,
  "rubric_form_mismatch": <Q1=a→false; others→true>,
  "benchmark_status": "<Derived from Phase 2.5 answer: a→\"imported\"/b→\"pending\"/c→\"none\">",
  "benchmark_name": <String name if imported, otherwise null>,
  "benchmark_sample_count": <Number if imported, otherwise 0>,
  "baseline_plays": null,
  "calibration_samples": <Q2=a→0; Q2=b→Estimated value from Q2.3 or import count>,
  "data_collection": "<Check Q3 mapping table, write \"manual\" or \"adapter\">",
  "pool_status": "<Check Q4 mapping table, write \"none\"/\"markdown\"/\"notion\">",
  "data_layer": "markdown",
  "hooks_installed": <Check Q5 mapping table, write bool true/false>,
  "enabled_trend_sources": ["manual-paste"],
  "enabled_perf_adapters": <Q2.1=a→[\"douyin-session\"]; b→[\"youtube-data-api\"]; c→[\"bilibili-stat\"]; others→[]>,
  "last_bump_at": null,
  "last_bump_self_audited": false,
  "last_published_at": null,
  "last_published_file": null,
  "last_retro_at": null,
  "last_trends_run_at": null,
  "last_trends_added_count": 0,
  "consecutive_directional_errors": [],
  "pending_retros": [],
  "shoots": [],
  "in_progress_session": null,
  "initialized_at": "<Local ISO 8601 with time zone, e.g., \"2026-05-05T20:11:13+08:00\", **do not use UTC Z suffix**>"
}
```
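The "never store letters" rule can be enforced with a small sanity check over the written file. This is an illustrative sketch, not part of the spec — the field names come from the schema above, but the specific checks chosen are assumptions:

```python
import json

ALLOWED_DATA_COLLECTION = {"manual", "adapter"}
ALLOWED_POOL_STATUS = {"none", "markdown", "notion"}

def validate_state(raw: str) -> list[str]:
    """Return a list of problems; empty means the state file passes basic checks."""
    state = json.loads(raw)
    problems = []
    # A single letter means the raw Q-answer was stored instead of the enum value
    if len(state.get("content_form", "")) <= 1:
        problems.append("content_form looks like a raw letter, not an enum value")
    if state.get("data_collection") not in ALLOWED_DATA_COLLECTION:
        problems.append("data_collection must be 'manual' or 'adapter'")
    if state.get("pool_status") not in ALLOWED_POOL_STATUS:
        problems.append("pool_status must be 'none'/'markdown'/'notion'")
    if not isinstance(state.get("hooks_installed"), bool):
        problems.append("hooks_installed must be a JSON bool, not a string")
    if state.get("initialized_at", "").endswith("Z"):
        problems.append("initialized_at must use a local offset, not the UTC 'Z' suffix")
    return problems
```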
- "Creating rubric_notes.md — the real source of your scoring dimensions.
Uses v0 placeholder rubric — equal-weight 7 dimensions (each dimension is equally important).
Why call it v0: v0 is a placeholder before calibration. The true weights for your account must be inferred from your
data, not preset. After running 5 pieces of content with data, it will automatically propose
upgrading to \"Calibrated v1\" (your first truly calibrated rubric)."
- Copy cheat-on-content/starter-rubrics/<form>-zero.md (cold-start) or (still referable when there is existing data)
- "Creating script_patterns.md — the accumulation of your writing patterns (decoupled from rubric).
rubric_notes.md teaches Claude how to score;
script_patterns.md teaches Claude how to write."
- Copy cheat-on-content/templates/script_patterns.template.md
- Four directories: scripts/ + predictions/ + videos/ + samples/
"Creating four directories:
scripts/ — Drafts before filming (written by cheat-seed or you)
predictions/ — Immutable prediction logs (protected by hooks)
videos/ — Working directory after filming (cheat-shoot creates subdirectories)
samples/ — Benchmark account videos / transcripts (cheat-learn-from creates subdirectories)
The first three share the same <date>_<id>_<short> naming convention so they can be correlated with each other.
samples/ is grouped by benchmark account name: samples/<account-name>/<video-id>/."
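The directory step could look like this sketch (the `.gitkeep` placeholder is an assumed convention for keeping empty directories under version control — the spec does not name it):

```python
from pathlib import Path

DIRS = ["scripts", "predictions", "videos", "samples"]

def create_scaffolding(root: str) -> list[str]:
    """Create the four working directories and return their names."""
    created = []
    for name in DIRS:
        d = Path(root) / name
        d.mkdir(parents=True, exist_ok=True)
        # Assumed convention: keep empty directories trackable
        (d / ".gitkeep").touch()
        created.append(name + "/")
    return created
```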
4.5. benchmark.md (only when Phase 2.5 selects a/b)
"Copying the benchmark.md placeholder template (actual content filled by cheat-learn-from) —
this is the central reference for your benchmark account.
Early on, the tool's rubric / pattern / topic direction are largely derived from here;
later, when N≥10, its influence fades, but it remains for sanity checks."
- Copy cheat-on-content/templates/benchmark.template.md → benchmark.md
- Do not create it if Phase 2.5 selects c → benchmark.md does not exist; mark state as benchmark_status: none
- Copy corresponding files from templates/
- If Q5=yes → Install hooks:
- Read .claude/settings.json (create an empty one if it does not exist)
- Merge from hooks/prediction-immutability.json
- Merge from the SessionStart report hook config
- Merge the usage-log hooks (if enabled at the same time)
- Copy prediction-immutability.sh + session-start.sh + the usage-log hook script to .cheat-hooks/, chmod +x
- Use ${CLAUDE_PROJECT_DIR}/.cheat-hooks/ as the command path in settings.json
- (Pool option c — Notion) Only record pool_status: "notion" to the state file; handle it later when cheat-trends is called
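The settings.json merge can be sketched as follows — the `{"hooks": {event: [...]}}` shape is the assumed layout of the hook fragment files; verify it against the actual fragments before relying on this:

```python
import json
from pathlib import Path

def merge_hooks(settings_path: str, hook_fragment: dict) -> None:
    """Merge a hook fragment into settings.json without clobbering existing hooks."""
    p = Path(settings_path)
    settings = json.loads(p.read_text()) if p.exists() else {}
    hooks = settings.setdefault("hooks", {})
    # Append each event's entries rather than overwriting whole event lists
    for event, entries in hook_fragment.get("hooks", {}).items():
        hooks.setdefault(event, []).extend(entries)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(json.dumps(settings, indent=2))
```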
Phase 3.5: Import process (only if Q2=b and user agrees to fetch)
If Q2.2=Install now → Proceed with adapter installation + login (see adapters/perf-data/<platform>/README.md).
After successful fetching, for each posted video:
- Create the video folder: videos/<date>_<id>_<short>/
- <date> = actual video release date
- <id> = 12-digit hash, generated by sha256 over (title + platform ID)
- <short> = first 3-8 characters of the title
- Write report.md: fill in data fetched from the adapter (views / likes / comments / shares / top comments)
- Ask the user for the original script: "Do you still have the original script for video '{title}'?"
- Yes → User provides it → Save as script.md
- No → Mark it as missing (still create the video folder, but script.md is absent)
- Write the reconstructed prediction: predictions/<date>_<id>_<short>.md
- Header marked with **Reconstructed retrospective — NOT a blind prediction**
- 7-dimension scoring is reverse-derived from the script + actual retrospective data — clearly state that it is not for calibration
- Does not count towards calibration_samples (this is imported history, not accumulated calibration samples)
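The `<date>_<id>_<short>` naming rule can be sketched as below. The exact string fed to sha256 — title directly concatenated with the platform ID, no separator — is an assumption; the spec only says "title + platform ID":

```python
import hashlib
from datetime import date

def video_folder_name(release_date: date, title: str, platform_id: str) -> str:
    """Build the <date>_<id>_<short> folder name for an imported video."""
    # Assumed concatenation: title immediately followed by platform ID
    digest = hashlib.sha256(f"{title}{platform_id}".encode("utf-8")).hexdigest()
    vid = digest[:12]   # 12-digit hash per the spec
    short = title[:8]   # first 3-8 characters of the title
    return f"{release_date.isoformat()}_{vid}_{short}"
```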
After import is completed:
- Derive baseline_plays = median views of the fetched videos → write to the state file
- Derive the confidence level → used directly when writing predictions in subsequent cheat-predict runs
- Output summary: "Imported N pieces of history. The latest has Xk views, median Yk views; created N video folders + reconstructed predictions"
Phase 4: Test if hooks are effective (only if Q5=yes)
Run a fake Edit interception test:
- Create a temporary file predictions/_test_hook.md containing ## Prediction\n[test]\n## Retrospective\n
- Attempt to Edit the prediction section of this file
- The hook should exit 1 to block it → Report "✅ Immutability hook is effective"
- Delete the test file
- SessionStart hook verification: directly call bash .cheat-hooks/session-start.sh → it should output a report (even if empty)
If hooks are not effective → Do not pretend success, clearly tell the user: "Hook installation failed, possibly because the .claude/settings.json configuration did not take effect. It is recommended to check manually or restart Claude Code."
Phase 4.5: If Phase 2.5 selects a → Dispatch to /cheat-learn-from
If the user selected a (Find now) in Phase 2.5 → Automatically trigger /cheat-learn-from:
✅ Scaffolding + hooks installed.
Next, immediately enter /cheat-learn-from to help you import the benchmark account —
you selected "Find now" during init, so you don't need to start another session to run it.
[invoke /cheat-learn-from]
Return to init Phase 5 after cheat-learn-from is completed.
If Phase 2.5 selects b/c → Skip Phase 4.5, proceed directly to Phase 5.
Phase 5: Provide a list of "what to say next"
✅ Initialization completed (rubric: v0, calibration_samples: <N>, confidence: <emoji level>)
Next time you can directly say these:
📝 After writing a draft → "Score this script scripts/<...>.md"
🎬 Before preparing to publish → "Start prediction for scripts/<...>.md"
🎬 After filming → "Filmed scripts/<...>.md" → Create video folder + buffer +1
🚀 After publishing → "Published https://..."
📊 T+3 days → \"Retrospect videos/<...>/\"
📈 Anytime → \"Status\" (view full dashboard)
<If Q4=No candidate topics:>
🌱 Run /cheat-seed to find topics now?
- For users without posting history: Pure brainstorming (interests × hot topics)
- For users with posting history (imported): Brainstorming will provide recommendations based on what you've done before
Reply "yes, seed" to run immediately, or reply "no" to think on your own.
💡 Your confidence level is <current level> — it will automatically improve as you run more retrospectives.
Don't skip predictions because of low confidence — the discipline of predicting is itself the core of the tool;
the "value" of early predictions is data collection, not decision-making. After the 5th retrospective, the rubric is calibrated for the first time
and confidence enters 🟡 low; after the 10th, 🟢 medium.
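The confidence ladder hinted at here can be sketched as below. Only the 5 and 10 cutoffs come from the copy; the full 🔴🟠🟡🟢🔵 ladder's other thresholds are unspecified, so this sketch covers only the bands the text names:

```python
def confidence_level(calibration_samples: int) -> str:
    """Map calibration_samples to a confidence emoji.

    Only the 5 and 10 cutoffs come from the onboarding copy; the 🟠 and 🔵
    bands are omitted because their thresholds are not specified.
    """
    if calibration_samples < 5:
        return "🔴"   # cold start: predictions are data collection
    if calibration_samples < 10:
        return "🟡"   # low — first calibration (v1 rubric) done
    return "🟢"       # medium confidence onward
```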
Key Rules
- Do not pretend success: If any step fails → Clearly tell the user which step went wrong. Never write "✅ Initialization completed" if it's not actually completed
- Do not ask in batches: Ask the 6 questions one at a time
- Do not silently mkdir: Explain the purpose of each file when creating it
- Do not force SQLite: Provide all users with markdown, just mention "Will recommend upgrading when reaching 30 pieces"
- Unified state fields: Remove enum fields such as mode / prediction_complexity / bucket_scheme — Use only calibration_samples integer + confidence derivation
- Import failure does not block: If Q2=b but adapter installation fails / fetch fails → Gracefully degrade to "Mark calibration_samples as estimated value, do not import historical video folders"
Refusals
- "Skip Q1-Q5, just create all files for me" → Refuse. The answers to the questions directly affect default configurations (content_form, cadence, hooks)
- "I've already initialized elsewhere, sync the configuration from that project" → Be cautious. Prompt the user to manually cp the existing state / rubric files themselves; do not automatically sync across projects
- "Don't install hooks but keep the immutability promise" → Allow; mark state as hooks_installed: false, and cheat-status will continuously prompt "Your immutability is a gentleman's agreement"
Integration
- After completion, the main SKILL.md route unlocks all other sub-skills
- cheat-status reads the calibration_samples field in .cheat-state.json to determine which confidence level to display
- If Q2=b and import is done → historical reconstructed predictions are stored in predictions/ and videos/, but do not count towards calibration_samples (not true calibration samples)
- cheat-seed reads all historical reconstructed predictions in predictions/ → knows "what the user has done before" during brainstorming
State Field Write List
| Field | Write timing | Source |
|---|---|---|
| schema_version | Phase 3 | Hardcoded "1.1" |
| skill_version | Phase 3 | Hardcoded "1.0.0" |
| rubric_version | Phase 3 | "v0" |
| content_form | Phase 3 | Q1 → mapping table enum value (not a letter) |
| typical_duration_seconds | Phase 3 | Derived from Q1.5 |
| target_publish_cadence_days | Phase 3 | Derived from Q1.6 |
| rubric_form_mismatch | Phase 3 | Q1≠a → true |
| benchmark_status | Phase 3 / 2.5 | Derived from the Phase 2.5 answer |
| benchmark_name | Phase 3 / 2.5 | Provided by the user in Phase 2.5 |
| benchmark_sample_count | Phase 3 / 2.5 | Backfilled after cheat-learn-from import |
| baseline_plays | Phase 3.5 (if import successful) | Median of imported data; otherwise null |
| calibration_samples | Phase 3 / Phase 3.5 | Q2=a→0; Q2=b→estimate from Q2.3 or import count |
| data_collection | Phase 3 | Q3 → mapping table enum value |
| pool_status | Phase 3 | Q4 → mapping table enum value |
| enabled_perf_adapters | Phase 3 | Derived from Q2.1 (if Q2=a then []) |
| hooks_installed | Phase 3-4 | Q5 → bool (not string) |
| last_bump_at / last_published_at / last_published_file / last_retro_at / last_trends_run_at | Phase 3 | All null |
| pending_retros / shoots / consecutive_directional_errors | Phase 3 | [] |
| in_progress_session | Phase 3 | null |
| initialized_at | Phase 3 | now(), local ISO 8601 with time zone; do not use UTC |