build-persona

You are building a reader persona for the user based on their Readwise Reader library. This persona file is used by other skills (triage, quiz, etc.) to personalize their experience.

Readwise Access


Check if Readwise MCP tools are available (e.g. `mcp__readwise__reader_list_documents`). If they are, use them throughout (and pass this context to the subagent). If not, use the equivalent `readwise` CLI commands instead (e.g. `readwise list`, `readwise read <id>`, `readwise search <query>`, `readwise highlights <query>`). The instructions below reference MCP tool names; translate to CLI equivalents as needed.

Welcome


Open with a brief introduction:

Build Persona · Readwise Reader
I'll analyze your reading history (saves, highlights, and tags) and build a `reader_persona.md` profile in the current directory. Other skills (triage, quiz) will use this to personalize their output to you.
I'll start with a quick pass (~1-2 min) and then you can decide if you want a deeper analysis.

Process


IMPORTANT: This skill involves fetching a lot of data. To keep the main conversation context clean, launch a Task subagent to do all the heavy lifting.

Phase 1: Quick Pass


The subagent should do a focused scan to build a solid initial persona fast:
  1. Gather data. Run ALL of these in parallel (one batch of tool calls):
    • 4 highlight searches: `mcp__readwise__readwise_search_highlights` with 4 broad queries (e.g. "ideas strategy product", "learning technology culture", "writing craft creativity", "business leadership growth") with `limit=50` each. These are semantic/vector searches, so broad multi-word queries work well. Highlights are cheap and high-signal, so cast a wide net.
    • 4 document lists: `mcp__readwise__reader_list_documents` from each non-feed location: `location="new"`, `location="later"`, `location="shortlist"`, and `location="archive"` with `limit=100` each. If the combined results are very sparse (< 20 docs total), also try without a location filter or with `location="feed"` as a fallback. Only fetch metadata: `response_fields=["title", "author", "category", "tags", "site_name", "summary", "saved_at", "published_date"]`. Do NOT fetch full content.
    • Tags: `mcp__readwise__reader_list_tags` to understand their organizational system.
  2. Parse results efficiently. The JSON responses from document lists can be large (25k+ tokens). Do NOT try to read them with the Read tool; it will hit token limits and waste retries. Instead, use a single Bash call with a python3 script to extract and summarize all the data at once. The script should parse all result files together and output:
    • Document counts by category
    • Top 20 sites, authors, and tags
    • Save velocity by month
    • All docs saved in the last 3 weeks (title, category, author, date)
    • A representative sample of highlight texts with their source titles/authors
  3. Write the persona. Write `reader_persona.md` to the current working directory with these sections:
    • Identity & Role: who they appear to be (profession, role, industry)
    • Core Interests: top themes and topics, ranked by frequency and recency
    • Reading Personality: how they read (saves a lot but reads selectively? highlights heavily? prefers short or long-form?)
    • Current Obsessions: what they've been saving/reading most in the last 2-3 weeks
    • Goals & Aspirations: what they seem to be working toward, inferred from patterns
    • Taste & Sensibility: thinkers and styles they gravitate toward (contrarian? practical? philosophical? technical?)
    • Anti-interests: topics notably absent or avoided
    • Triage Guidance: specific instructions for how to pitch documents to this person (e.g. "lead with practical applicability", "connect to their interest in X", "the bar is high for AI content; flag it only when it's genuinely novel")
  4. Return a brief summary (3-5 sentences) of the persona AND the absolute path to the file.
Subagent speed rules:
  • Do NOT call `readwise_list_highlights`: it often errors and is redundant with search.
  • Do NOT try to Read large JSON tool-result files; parse them with python3 via Bash.
  • Combine all analysis into ONE python script, not multiple sequential scripts.
  • Maximize parallel tool calls. Every API fetch in step 1 should be a single parallel batch.
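The summarization script from step 2 can be sketched as a single function over the pooled document metadata. This is a minimal illustration, not the actual script: the field names (`category`, `site_name`, `tags`, `saved_at`) mirror the `response_fields` requested in step 1, and real API responses may shape tags differently.

```python
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

def summarize_documents(docs, now=None, recent_weeks=3):
    """Summarize Reader document metadata dicts into the outputs
    listed in step 2. Field names are assumptions based on the
    response_fields requested above."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(weeks=recent_weeks)

    by_category = Counter(d.get("category") or "unknown" for d in docs)
    top_sites = Counter(d.get("site_name") for d in docs if d.get("site_name")).most_common(20)
    top_authors = Counter(d.get("author") for d in docs if d.get("author")).most_common(20)
    top_tags = Counter(t for d in docs for t in (d.get("tags") or [])).most_common(20)

    velocity = Counter()  # saves per "YYYY-MM" month
    recent = []           # docs saved within the cutoff window
    for d in docs:
        saved = d.get("saved_at")
        if not saved:
            continue
        dt = datetime.fromisoformat(saved.replace("Z", "+00:00"))
        velocity[dt.strftime("%Y-%m")] += 1
        if dt >= cutoff:
            recent.append((d.get("title"), d.get("category"),
                           d.get("author"), dt.date().isoformat()))

    return {
        "counts_by_category": dict(by_category),
        "top_sites": top_sites,
        "top_authors": top_authors,
        "top_tags": top_tags,
        "velocity_by_month": dict(velocity),
        "recent_docs": recent,
    }
```

The subagent would load each saved JSON result file with `json.load`, pool the result arrays into one list, and print the returned summary.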

Phase 2: Deep Pass (optional)


After the quick-pass subagent returns, show the user the results and ask if they want a deeper analysis. If yes, launch a second subagent that:
  • Fetches 4-6 more highlight searches with different, more specific queries informed by what phase 1 found (e.g. if the persona shows interest in AI tooling, search "AI agents workflows automation"; if they read fiction, search "fiction narrative storytelling") with `limit=50` each
  • Paginates beyond the first 100 docs per location using `next_page_cursor` from phase 1 results, fetching the next 100-200 per location to build a much larger sample
  • Reads the existing `reader_persona.md` and enriches/rewrites it with the additional data: more nuanced sections, stronger evidence, sharper triage guidance
  • Returns a summary of what changed
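The pagination in the second bullet amounts to draining a cursor until it is exhausted or a sample cap is hit. A sketch, under stated assumptions: `list_page` is a hypothetical callable wrapping `mcp__readwise__reader_list_documents` (or the `readwise list` CLI), and the `page_cursor` parameter name is an assumption, not a confirmed API signature.

```python
def fetch_all_documents(list_page, location, page_limit=100, max_docs=300):
    """Follow next_page_cursor until exhausted or max_docs reached.

    list_page is a hypothetical fetcher: it takes location, limit, and
    page_cursor, and returns a dict with "results" (a list) and
    "next_page_cursor" (None/absent on the last page).
    """
    docs, cursor = [], None
    while len(docs) < max_docs:
        page = list_page(location=location, limit=page_limit, page_cursor=cursor)
        docs.extend(page.get("results", []))
        cursor = page.get("next_page_cursor")
        if not cursor:  # last page reached
            break
    return docs[:max_docs]
```

Passing the cursor returned by phase 1 as the starting `cursor` instead of `None` would skip the first 100 docs already fetched.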

After Each Subagent Returns


  1. Show the file link. Always tell the user: `reader_persona.md` was written to `{absolute_path}`. Display the full path so they can open it.
  2. Show a summary of the persona (use the subagent's returned summary).
  3. After phase 1: ask if they want the deep pass or if the quick version is good enough. Also ask if they want to adjust anything.
  4. After phase 2 (if run): show what changed and ask if they want to adjust anything.
  5. If adjustments are needed, edit the file directly based on their feedback.
  6. Confirm saved. Tell them the file is saved and which skills will now use it (triage, quiz, feed-catchup, etc.).