interview-simulator

Interview Simulator

Platform architecture and coaching system for realistic mock interview practice. This skill serves two purposes: (1) it coaches candidates on how to structure effective practice sessions, and (2) it specifies the full-stack architecture for building an automated interview simulation platform with voice AI, collaborative whiteboard, gaze-tracking proctoring, and mobile companion.
The other 7 interview skills define WHAT to practice. This skill defines HOW to practice it -- with realistic conditions, adaptive difficulty, and measurable progress.

When to Use

Use for:
  • Designing or building a mock interview simulation platform
  • Configuring realistic practice sessions with voice, whiteboard, and proctoring
  • Implementing adaptive difficulty that targets weaknesses automatically
  • Building a scoring and debrief system that tracks progress across sessions
  • Setting up spaced repetition for concept review and story rehearsal
  • Establishing a daily/weekly practice protocol
  • Cost analysis and optimization for practice infrastructure
NOT for:
  • Practicing a specific round type in isolation (use the round-specific skill)
  • Building a prep timeline or study plan (use interview-loop-strategist)
  • Resume or career narrative work (use cv-creator or career-biographer)
  • Salary negotiation or offer evaluation
  • Conference talk preparation (different evaluation criteria)

System Architecture

mermaid
graph TB
    subgraph Client["Client Layer"]
        MOBILE["Mobile App<br/>React Native + Expo<br/>Flash cards, voice drills,<br/>progress dashboard"]
        DESKTOP["Desktop Web<br/>Next.js<br/>Full sessions, whiteboard,<br/>proctoring"]
    end

    subgraph Engines["Engine Layer"]
        VOICE["Voice Engine<br/>Hume AI EVI<br/>Emotion-sensitive<br/>interviewer voice"]
        BOARD["Whiteboard Engine<br/>tldraw + Claude Vision<br/>Diagram evaluation<br/>and scoring"]
        PROCTOR["Proctor Engine<br/>MediaPipe Face Mesh<br/>Gaze tracking,<br/>attention monitoring"]
    end

    subgraph Orchestrator["Session Orchestrator — Node.js"]
        ROUND["Round Selector<br/>Weakness-weighted<br/>random selection"]
        ADAPT["Adaptive Difficulty<br/>Performance-based<br/>question scaling"]
        DEBRIEF["Debrief Generator<br/>Transcript + emotion +<br/>proctor + whiteboard<br/>scored rubric"]
        SM2["SM-2 Scheduler<br/>Spaced repetition<br/>for concepts and stories"]
    end

    subgraph Data["Data Layer — Supabase"]
        SESSIONS[("sessions<br/>recordings, transcripts")]
        SCORES[("scores<br/>per-dimension breakdowns")]
        STORIES[("story_bank<br/>STAR-L entries")]
        CARDS[("flash_cards<br/>SM-2 intervals")]
    end

    MOBILE --> Orchestrator
    DESKTOP --> Orchestrator
    Orchestrator --> VOICE
    Orchestrator --> BOARD
    Orchestrator --> PROCTOR
    Orchestrator --> Data
    VOICE --> DEBRIEF
    BOARD --> DEBRIEF
    PROCTOR --> DEBRIEF

Component Selection Rationale

mermaid
flowchart TD
    V{Voice AI?}
    V -->|"Emotion detection needed"| HUME["Hume AI EVI<br/>Emotion callbacks,<br/>adaptive persona,<br/>WebSocket streaming"]
    V -->|"Voice only, no emotion"| ELEVEN["ElevenLabs<br/>Fallback: high-quality<br/>TTS, no affect reading"]
    V -->|"Cost-constrained"| OPENAI_RT["OpenAI Realtime API<br/>Cheaper per minute,<br/>no emotion detection"]

    W{Whiteboard?}
    W -->|"React ecosystem, extensible"| TLDRAW["tldraw<br/>MIT license, React native,<br/>rich API, snapshot export"]
    W -->|"Simpler, self-hosted"| EXCALI["Excalidraw<br/>Good but harder to<br/>integrate programmatic<br/>screenshot capture"]

    P{Proctoring?}
    P -->|"Privacy-first, free"| MEDIAPIPE["MediaPipe Face Mesh<br/>Browser-based, 468 landmarks,<br/>iris tracking, no cloud"]
    P -->|"Commercial accuracy"| COMMERCIAL["Commercial proctoring<br/>Expensive, privacy concerns,<br/>overkill for self-practice"]

    style HUME fill:#2d5016,stroke:#333,color:#fff
    style TLDRAW fill:#2d5016,stroke:#333,color:#fff
    style MEDIAPIPE fill:#2d5016,stroke:#333,color:#fff
Why Hume over OpenAI Realtime API: Hume's EVI provides emotion callbacks (nervousness, confidence, hesitation) that enable adaptive interviewer behavior. OpenAI's Realtime API is voice-only with no affect detection. For interview simulation, emotion awareness is the differentiator -- a real interviewer adjusts based on your emotional state.
Why tldraw over Excalidraw: tldraw is a React component with a rich programmatic API. You can call editor.getSnapshot() to capture the canvas state, export to image, and send to Claude Vision for evaluation. Excalidraw's API is more limited for programmatic interaction.
Why MediaPipe over commercial proctoring: This is self-practice, not exam proctoring. MediaPipe runs entirely in the browser (no cloud), processes 468 face landmarks including iris position for gaze estimation, and costs nothing. Commercial proctoring (ProctorU, ExamSoft) is designed for adversarial exam settings with privacy trade-offs that make no sense for personal practice.
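MediaPipe itself only emits landmark coordinates; deciding when a gaze counts as a deviation is up to the integrator. A minimal sketch of that logic, assuming you have already extracted per-frame iris-center and eye-corner x-coordinates from the Face Mesh output (the field names, the 0.35-0.65 band, and the 5-frame run threshold are illustrative assumptions, not MediaPipe values):

```typescript
// Per-frame horizontal eye measurements, in normalized image coordinates.
interface EyeFrame {
  irisX: number;        // iris center x
  outerCornerX: number; // outer eye corner x
  innerCornerX: number; // inner eye corner x
}

// Normalized horizontal gaze: ~0.5 means the iris is centered in the eye.
function gazeRatio(eye: EyeFrame): number {
  const span = eye.innerCornerX - eye.outerCornerX;
  return span === 0 ? 0.5 : (eye.irisX - eye.outerCornerX) / span;
}

// Count sustained deviations: only a run of `minRun` consecutive off-center
// frames produces a flag, so brief glances do not generate noise.
function countGazeFlags(
  frames: EyeFrame[],
  lo = 0.35,
  hi = 0.65,
  minRun = 5
): number {
  let flags = 0;
  let run = 0;
  for (const f of frames) {
    const r = gazeRatio(f);
    if (r < lo || r > hi) {
      run++;
      if (run === minRun) flags++; // one flag per sustained deviation
    } else {
      run = 0;
    }
  }
  return flags;
}
```

The run-length gate is the important design choice: raw per-frame thresholds would flag every saccade, while requiring a sustained run approximates "looked at the second monitor" rather than "blinked."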

Session Flow

mermaid
sequenceDiagram
    participant U as User
    participant O as Orchestrator
    participant V as Voice Engine
    participant W as Whiteboard
    participant P as Proctor
    participant D as Debrief

    U->>O: Start session
    O->>O: Select round type<br/>(weakness-weighted)
    O->>U: Confirm: ML Design, Difficulty 3/5,<br/>Persona: Collaborative
    U->>O: Accept / override

    O->>V: Initialize interviewer persona
    O->>P: Activate gaze tracking
    alt Design or Coding Round
        O->>W: Open whiteboard
    end

    loop During Session (30-45 min)
        V->>U: Ask question / follow-up
        U->>V: Respond (voice)
        V->>O: Emotion data (confidence, hesitation)
        O->>V: Adjust difficulty / tone
        P->>O: Gaze flags (second monitor, notes)
        alt Design Round
            W-->>O: Periodic screenshot (every 30s active)
            O-->>W: Evaluate diagram (Claude Vision)
        end
    end

    U->>O: End session
    O->>D: Compile transcript + emotion<br/>timeline + proctor flags +<br/>whiteboard evaluations
    D->>U: Scored debrief with<br/>strengths, weaknesses,<br/>specific improvement actions
    O->>O: Update weakness tracker,<br/>adjust next session focus
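The "Adjust difficulty / tone" step in the session loop can be as simple as a rolling-average rule over recent composite scores. A sketch with illustrative parameters (the 80/55 cutoffs and the 3-session window are assumptions, not prescribed values):

```typescript
// Difficulty runs 1 (warm-up) to 5 (adversarial). Scores are composite 0-100.
function nextDifficulty(
  current: number,
  recentScores: number[],
  window = 3
): number {
  const recent = recentScores.slice(-window);
  if (recent.length < window) return current; // not enough signal yet
  const avg = recent.reduce((a, b) => a + b, 0) / recent.length;
  if (avg >= 80) return Math.min(5, current + 1); // comfortable: push harder
  if (avg < 55) return Math.max(1, current - 1);  // struggling: ease off
  return current;                                 // productive zone: hold
}
```

Holding steady in the middle band matters as much as the adjustments: constantly oscillating difficulty makes session-to-session scores incomparable.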

Session Configuration Options

| Parameter | Options | Default |
| --- | --- | --- |
| Round type | Coding, ML Design, Behavioral, Tech Presentation, HM, Technical Deep Dive | Auto (weakness-weighted) |
| Difficulty | 1 (warm-up) to 5 (adversarial) | 3 |
| Interviewer persona | Friendly, Neutral, Adversarial, Socratic | Neutral |
| Proctor strictness | Off, Training (lenient), Simulation (strict) | Training |
| Session length | 15 / 30 / 45 / 60 min | 45 min |
| Whiteboard | On / Off | Auto (on for design rounds) |
| Recording | Audio only / Audio + Video / Off | Audio only |
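The Auto default for round type is weakness-weighted random selection: round types with lower recent scores get proportionally more probability mass. One plausible implementation (the weighting formula and the injectable rand parameter are illustrative assumptions):

```typescript
// Latest composite score (0-100) per round type.
type RoundScores = Record<string, number>;

function pickRound(
  scores: RoundScores,
  rand: () => number = Math.random
): string {
  const types = Object.keys(scores);
  // Invert scores so weaker rounds weigh more; +1 keeps perfect scores eligible.
  const weights = types.map((t) => 101 - scores[t]);
  const total = weights.reduce((a, b) => a + b, 0);
  // Standard roulette-wheel selection over the cumulative weights.
  let r = rand() * total;
  for (let i = 0; i < types.length; i++) {
    r -= weights[i];
    if (r < 0) return types[i];
  }
  return types[types.length - 1];
}
```

Keeping the selection random rather than always picking the weakest round avoids the opposite failure mode of comfort-zone looping: grinding one round type while the others decay.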

Daily Practice Protocol

Morning Mobile Session (10 minutes)

07:00  Open mobile app
07:00  3 flash cards — spaced repetition surfaces weakest concepts
       (ML concepts, system design patterns, Anthropic-specific topics)
07:05  1 behavioral story rehearsal — voice, 3 minutes max
       App plays the prompt, you respond aloud, app records duration
07:08  Quick self-check — rate confidence 1-5 on today's cards
07:10  Done — push notification schedules evening session
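The flash-card step is driven by the SM-2 scheduler from the architecture diagram. A minimal SM-2 sketch using the standard update formulas (field names are illustrative; the 1-5 confidence rating above would be mapped onto SM-2's 0-5 quality scale):

```typescript
interface CardState {
  reps: number;     // consecutive successful reviews
  interval: number; // days until next review
  ease: number;     // easiness factor, floored at 1.3
}

// quality: 0-5 self-rating of recall for this review.
function review(card: CardState, quality: number): CardState {
  // Lapse: restart the repetition sequence, keep the learned ease.
  if (quality < 3) return { reps: 0, interval: 1, ease: card.ease };
  // Standard SM-2 easiness update.
  const ease = Math.max(
    1.3,
    card.ease + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
  );
  const reps = card.reps + 1;
  // Fixed early intervals (1 day, 6 days), then geometric growth by ease.
  const interval =
    reps === 1 ? 1 : reps === 2 ? 6 : Math.round(card.interval * ease);
  return { reps, interval, ease };
}
```

Cards that keep earning high ratings drift to long "mature" intervals, which is exactly the spaced-repetition coverage metric tracked on the dashboard.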

Evening Desktop Session (30-60 minutes, 3-4x/week)

19:00  Open desktop app, orchestrator selects round type
19:02  Configure: confirm round, set proctor to Training mode
19:05  Session begins — voice AI drives conversation
       Whiteboard opens for design rounds
       Proctor tracks gaze, flags second monitor use
19:35  Session ends (30 min) or 19:50 (45 min)
19:35  Debrief displays: scored rubric, emotion timeline,
       proctor flags, whiteboard evaluation (if applicable)
19:45  Review debrief — spend 1/3 of practice time here
19:55  Update story bank with any new insights
20:00  Done — weakness tracker updated automatically

Weekend Loop Simulation (2 hours, 1x/week)

10:00  Full loop: 2-3 back-to-back rounds (different types)
       5-minute breaks between rounds (no phone, no notes)
       Proctor set to Simulation (strict) mode
11:30  Energy management practice — track cognitive fatigue
11:45  Cross-round story coherence review
       Did you tell the same project consistently across rounds?
12:00  Comprehensive weekly debrief — pattern analysis across sessions

Scoring and Progress Tracking

Per-Session Scoring Dimensions

| Dimension | Weight | Measurement Source |
| --- | --- | --- |
| Technical accuracy | 25% | Debrief AI evaluation of transcript |
| Communication clarity | 20% | Emotion data (hesitation rate, filler words) |
| Time management | 15% | Section timing vs target budget |
| Structured thinking | 15% | Whiteboard evaluation (design rounds) or verbal structure |
| Composure under pressure | 10% | Emotion timeline stability, recovery from stumbles |
| Question handling | 10% | Follow-up depth reached (levels 1-6 per values-behavioral) |
| Proctor compliance | 5% | Flag count (gaze deviations, note references) |

Progress Visualization

Track these metrics over time on the dashboard:
  • Composite score per session (0-100) with trend line
  • Dimension radar chart showing strengths and weaknesses
  • Streak tracker (consecutive days with at least one practice activity)
  • Weakness heat map showing which round types and dimensions lag
  • Story readiness gauge per story in bank (how many follow-up levels prepared)
  • Spaced repetition coverage (percentage of flash cards at "mature" interval)

Setup Guide

Prerequisites

| Component | What You Need | Where to Get It |
| --- | --- | --- |
| Hume AI API key | EVI access for voice + emotion | https://hume.ai — apply for developer access |
| Anthropic API key | Claude for debrief + whiteboard eval | https://console.anthropic.com |
| Supabase project | Database + auth + storage | https://supabase.com — free tier works initially |
| Node.js 20+ | Session orchestrator runtime | https://nodejs.org |
| React Native + Expo | Mobile companion app | npx create-expo-app |

First-Run Experience

1. Clone the simulator repo

git clone <your-simulator-repo>
cd interview-simulator

2. Install dependencies

npm install

3. Configure environment

cp .env.example .env.local

Edit .env.local with your API keys:

HUME_API_KEY=...
HUME_SECRET_KEY=...
ANTHROPIC_API_KEY=...
NEXT_PUBLIC_SUPABASE_URL=...
SUPABASE_SERVICE_KEY=...

4. Initialize database

npx supabase db push

5. Run first calibration session

npm run dev

Navigate to localhost:3000/calibrate

10-minute session to establish baseline scores

Calibration Session

The first session is a calibration round: 10 minutes, mixed questions across all round types, no proctoring, friendly persona. This establishes baseline scores for each dimension so the adaptive difficulty has a starting point. Without calibration, the system defaults to difficulty 3 for all dimensions.

Cost Analysis

| Component | Monthly Usage | Unit Cost | Monthly Total |
| --- | --- | --- | --- |
| Hume AI EVI | 20 evening sessions x 35 min + 30 morning drills x 3 min | ~$0.07/min | $60-80 |
| Claude (debrief) | 20 sessions x 1 debrief | ~$0.15/debrief | $3 |
| Claude Vision (whiteboard) | 10 design sessions x 5 evals | ~$0.03/eval | $1.50 |
| Supabase | Free tier (< 500MB, < 50K auth) | $0 free / $25 pro | $0-25 |
| MediaPipe | All sessions, runs locally | $0 | $0 |
| ElevenLabs (mobile fallback) | 30 morning voice drills x 3 min | ~$0.05/min | $4.50 |
| Total | | | $70-115/mo |
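The totals above are straight per-unit arithmetic. A sketch using the table's point-estimate unit costs (names are illustrative); note that the table's per-line bands include headroom, so the point estimate lands near the low end of the quoted range:

```typescript
interface CostLine {
  units: number;    // billable units for the month (minutes, calls, etc.)
  unitCost: number; // dollars per unit
}

function monthlyTotal(lines: CostLine[]): number {
  return lines.reduce((sum, l) => sum + l.units * l.unitCost, 0);
}

const month = monthlyTotal([
  { units: 20 * 35 + 30 * 3, unitCost: 0.07 }, // Hume EVI: 790 voice minutes
  { units: 20, unitCost: 0.15 },               // Claude debriefs
  { units: 10 * 5, unitCost: 0.03 },           // Claude Vision whiteboard evals
  { units: 1, unitCost: 0 },                   // Supabase free tier
  { units: 30 * 3, unitCost: 0.05 },           // ElevenLabs morning drills
]);
// month ≈ $64.30 at these point estimates
```

Voice minutes dominate the bill, which is why the optimization strategies below target session length and the mobile fallback first.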

Cost Optimization Strategies

  1. Session length caps: Hard-stop at configured time to prevent runaway voice costs
  2. Whiteboard eval batching: Evaluate every 30s during active drawing, every 2min during discussion (not continuously)
  3. Debrief caching: If same question type + similar transcript, reuse rubric structure with specific details swapped
  4. Mobile voice: Use ElevenLabs (cheaper) for morning drills where emotion detection is unnecessary
  5. Free tier Supabase: Sufficient for single-user practice; upgrade only for multi-user or heavy recording storage
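Strategy 2 reduces to a simple throttle: evaluate at most every 30 seconds while the candidate is actively drawing, every 2 minutes otherwise. A sketch, assuming the orchestrator already knows the drawing state (the function shape is an assumption):

```typescript
// Decide whether to trigger a Claude Vision evaluation of the whiteboard.
function shouldEvaluate(
  nowMs: number,
  lastEvalMs: number,
  activelyDrawing: boolean
): boolean {
  // 30 s cadence during active drawing, 120 s during discussion.
  const intervalMs = (activelyDrawing ? 30 : 120) * 1000;
  return nowMs - lastEvalMs >= intervalMs;
}
```

At ~$0.03 per evaluation, this throttle is what keeps a 45-minute design round at a handful of evaluations rather than dozens.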

Anti-Patterns

Practice Without Proctoring

Novice: Practices with notes open on a second monitor, browser tabs with answers visible, phone in hand for quick lookups. Builds false confidence from sessions where external resources masked knowledge gaps. In the real interview, stripped of supports, performance drops 30-40%.
Expert: Activates proctoring from the first session, even in Training (lenient) mode. Treats every practice as an approximation of real conditions. Clears desk, closes irrelevant tabs, puts phone face-down. Uses strict Simulation mode for weekend loop simulations. Understands that the discomfort of being watched IS the training.
Detection: Session history shows zero proctor flags across all sessions (impossibly clean), or proctor is consistently set to "Off." Compare self-reported confidence to actual debrief scores -- large gap indicates practice conditions are too easy.

Comfort Zone Looping

Novice: Manually selects the same round type repeatedly -- always behavioral (because stories are polished), always coding (because it feels productive), always the round they are already good at. Avoids design rounds because whiteboard evaluation is harsh. Avoids values rounds because deep follow-ups are uncomfortable.
Expert: Lets the orchestrator select rounds based on weakness analysis. Trusts the SM-2 algorithm to surface the uncomfortable topics at optimal intervals. When manually selecting, deliberately picks the lowest-scoring round type. Tracks round type distribution in the progress dashboard and rebalances if any type exceeds 40% of sessions.
Detection: Session history shows >50% of sessions are the same round type. Weakness heat map has persistent cold spots that never improve. Flash card review skips entire categories.

Feedback Ignored

Novice: Runs sessions back-to-back without reviewing debriefs. Treats mock interviews as reps to complete rather than learning opportunities. Session count is high but scores plateau. The debrief tab has a <50% read rate. Improvement actions from debriefs are never attempted.
Expert: Spends one-third of total practice time on debrief review. After each session, reads the full scored rubric, highlights one specific improvement action, and practices that action in the next session. Reviews weekly pattern analysis to identify cross-session trends. Keeps a "lessons learned" document updated after every debrief.
Detection: Debrief read rate below 50% (tracked via time-on-page). Same weaknesses flagged in debriefs 3+ sessions in a row without improvement. No improvement actions logged.

Integration with Round-Specific Skills

The simulator does not contain round-type content. It delegates to the 7 specialist skills for questions, rubrics, and evaluation criteria.
| Round Type | Content Skill | What Simulator Gets |
| --- | --- | --- |
| Coding | senior-coding-interview | Problem archetypes, follow-up ladders, senior signals checklist |
| ML System Design | ml-system-design-interview | 7-stage framework, canonical problems, whiteboard strategy |
| Behavioral / Values | values-behavioral-interview | Follow-up ladder depth, STAR-L format, negative framing patterns |
| Tech Presentation | tech-presentation-interview | Narrative arc, depth calibration, Q&A stress test questions |
| Hiring Manager | hiring-manager-deep-dive | Scope-of-impact evaluation, leadership signal rubric |
| Anthropic Technical | anthropic-technical-deep-dive | Topic areas, opinion evaluation criteria, safety depth |
| Full Loop | interview-loop-strategist | Round sequencing, energy management, story coherence matrix |

Reference Files

| File | Consult When |
| --- | --- |
| references/voice-engine-setup.md | Integrating Hume AI EVI, configuring interviewer personas, emotion-adaptive logic, WebSocket connection setup, ElevenLabs fallback |
| references/whiteboard-engine-setup.md | Setting up tldraw for diagram evaluation, Claude Vision scoring prompts, periodic screenshot strategy, cost per evaluation |
| references/proctor-engine-setup.md | MediaPipe Face Mesh setup, gaze vector calculation, suspicion thresholds, privacy configuration, flag integration with debrief |
| references/mobile-app-architecture.md | React Native + Expo stack, SM-2 spaced repetition implementation, push notifications, offline mode, data sync strategy |
| references/session-orchestration.md | Round selection algorithm, adaptive difficulty, performance tracking schema, SM-2 details, debrief generation prompts, weakness detection |