post-engagers


LinkedIn Post Engagers

Turn LinkedIn post engagement into a prospecting list. Extract commenters, reactors, and reposters from any LinkedIn post — then enrich and upload to Extruct for outreach.

Related Skills

post-engagers → email-search → email-generation → campaign-sending
This skill produces a people table. The next step (`email-search`) gets verified emails, then `email-generation` drafts personalized outreach.

Extruct API Operations

This skill delegates all Extruct API calls to the `extruct-api` skill. For all Extruct API operations, read and follow the instructions in `skills/extruct-api/SKILL.md`.

Table creation, row uploads, and data fetching are handled by the extruct-api skill. This skill focuses on scraping LinkedIn engagers and preparing the data; the extruct-api skill handles the API execution.

Inputs

| Input | Source | Required |
|-------|--------|----------|
| LinkedIn post URL(s) | User provides | yes |
| Engagement types to scrape | User choice: comments, reactions, reposts (default: all) | no |
| LinkedIn scraping provider | User choice (see provider list below) | yes |
| Existing people table ID | Extruct table to append to (or create new) | no |

LinkedIn Scraping Providers

This skill does not mandate a specific provider. Ask the user which LinkedIn scraping tool they want to use. Below are known options — the user may have others.
| Provider | Engagement types | Auth | Notes |
|----------|------------------|------|-------|
| Anysite MCP | Comments, reactions, reposts | MCP connection | Built into Claude Code via MCP. Tools: `get_linkedin_post_comments`, `get_linkedin_post_reactions`, `get_linkedin_post_reposts` |
| RapidAPI (LinkedIn scrapers) | Comments, reactions, reposts | `X-RapidAPI-Key` header | Multiple scrapers available (e.g. Fresh LinkedIn Profile Data, LinkedIn Bulk Data Scraper). Check endpoint docs per scraper |
| Apify | Comments, reactions, reposts | `APIFY_API_TOKEN` | Actors: `curious_coder/linkedin-post-commentors`, `supreme_coder/linkedin-post-likers`. Run via Apify API |
| Phantombuster | Comments, reactions | `PHANTOMBUSTER_API_KEY` | Phantoms: "LinkedIn Post Commenters", "LinkedIn Post Likers" |
| Custom / self-hosted | Varies | Varies | User may have their own scraping setup |
If the user doesn't know where to start:
  • Anysite MCP is the simplest if they have it connected — no extra credentials needed
  • Apify is a good general choice with pay-per-use pricing
  • RapidAPI has multiple scrapers with free tiers
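
As one concrete illustration, an Apify actor run can be triggered over its REST API. The sketch below only builds the request URL; the actor ID comes from the table above, and the endpoint path follows Apify's run-sync-and-get-dataset-items pattern. Verify against current Apify docs before relying on it.

```python
from urllib.parse import quote, urlencode

def apify_run_url(actor: str, token: str) -> str:
    """Build the Apify 'run actor synchronously, return dataset items' URL.

    In the URL path, Apify actor IDs use '~' in place of '/'
    (e.g. 'curious_coder~linkedin-post-commentors').
    """
    actor_path = quote(actor.replace("/", "~"), safe="~")
    query = urlencode({"token": token})
    return f"https://api.apify.com/v2/acts/{actor_path}/run-sync-get-dataset-items?{query}"
```

The returned URL is POSTed with the actor's input (e.g. the post URL) as a JSON body; each provider's input schema differs, so check the actor's own documentation.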

Workflow

Step 1: Collect post URLs and choose provider

  1. Get the LinkedIn post URL(s) from the user. Accept one or multiple.
  2. Ask which engagement types to scrape: comments, reactions, reposts, or all three.
  3. Ask which LinkedIn scraping provider they want to use (see table above).
  4. If the provider requires credentials, confirm they're available.
Extract the activity URN from each post URL. The numeric ID is typically after `activity-` or `ugcPost-` in the URL (e.g. `activity:7433261939285385217`).
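
The URN extraction can be sketched with a small regex. This handles the two URL shapes mentioned above; real LinkedIn URLs vary, so treat the pattern as a starting point:

```python
import re
from typing import Optional

def extract_activity_urn(post_url: str) -> Optional[str]:
    """Pull the numeric activity ID out of a LinkedIn post URL.

    Matches both 'activity-<id>' and 'ugcPost-<id>' shapes and returns
    the 'activity:<id>' form the scraping tools expect, or None.
    """
    match = re.search(r"(?:activity|ugcPost)-(\d+)", post_url)
    return f"activity:{match.group(1)}" if match else None
```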

Step 2: Scrape engagers

Use the chosen provider to fetch engagement data. The approach varies by provider:
If using Anysite MCP:
  • Comments: `mcp__claude_ai_Anysite__get_linkedin_post_comments` with `urn: "activity:{id}"`, `count: 1500`
  • Reactions: `mcp__claude_ai_Anysite__get_linkedin_post_reactions` with `urn: "activity:{id}"`, `count: 1500`
  • Reposts: `mcp__claude_ai_Anysite__get_linkedin_post_reposts` with `urn: "activity:{id}"`, `count: 1500`
If using another provider:
  • Read or fetch the provider's API documentation
  • Identify the endpoint, input format, and response structure
  • Implement the scraping calls accordingly
For each engager, extract (field names vary by provider):
```python
{
    "full_name": "...",
    "linkedin_url": "...",        # profile URL
    "headline": "...",            # job title / headline
    "engagement_type": "...",     # comment / reaction / repost
    "post_url": "...",            # which post they engaged with
}
```
If scraping multiple posts, tag each engager with the `post_url` they engaged with.
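
Since field names vary by provider, a thin normalization layer keeps the downstream steps provider-agnostic. A minimal sketch; the candidate raw key names are assumptions, so adjust the lookups to whatever your provider actually returns:

```python
def normalize_engager(raw: dict, engagement_type: str, post_url: str) -> dict:
    """Map a raw provider record onto the common engager shape.

    The key-name candidates below are hypothetical examples, not any
    provider's documented schema; extend them as needed.
    """
    def first(*keys):
        # Return the first non-empty value among the candidate keys.
        return next((raw[k] for k in keys if raw.get(k)), "")

    return {
        "full_name": first("full_name", "name", "author_name"),
        "linkedin_url": first("linkedin_url", "profile_url", "author_profile"),
        "headline": first("headline", "job_title", "occupation"),
        "engagement_type": engagement_type,
        "post_url": post_url,
    }
```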

Step 3: Deduplicate and classify

  1. Deduplicate by `linkedin_url` across all posts and engagement types. If someone both commented and reacted, keep both engagement types as a comma-separated value.
  2. Apply the segment classifier to job titles (first match wins):
| Priority | Pattern | Segment |
|----------|---------|---------|
| 1 | `founder\|co-founder\|ceo\|owner` | Founders / CEOs |
| 2 | `cto\|vp.*eng\|head of eng\|director.*eng` | Engineering Leadership |
| 3 | `cmo\|vp.*market\|head of market\|director.*market` | Marketing Leadership |
| 4 | `cro\|vp.*sales\|head of sales\|director.*sales\|head of revenue` | Sales Leadership |
| 5 | `director\|vp\|vice president\|head of\|chief` | Directors / VPs / Heads |
| 6 | `revops\|revenue ops\|sales ops\|growth ops\|gtm ops\|gtm eng` | RevOps / Growth Ops |
| 7 | `product manag\|head of product\|product lead` | Product |
| 8 | `data scien\|machine learn\|ml eng\|ai eng\|data eng` | Data / ML |
| 9 | `account exec\|sdr\|bdr\|sales dev\|business dev\|sales rep` | Sales ICs |
| 10 | `market\|content\|brand\|growth\|demand gen\|copywrite` | Marketing / Content |
| 11 | `sales\|commercial\|partnerships\|revenue` | Sales (General) |
| 12 | `ai\|automat\|gpt\|llm\|agent\|no.?code` | AI / Automation Builders |
| 13 | `consult\|freelanc\|advisor\|coach\|mentor\|agenc` | Consultants / Agencies |
| 14 | `engineer\|develop\|software\|fullstack\|backend\|frontend` | Engineering / Product / Data |
| (no match) | | Other |
  3. Present a segment breakdown to the user before proceeding:

```
Engager Summary:
- Total unique engagers: N
- Comments: N | Reactions: N | Reposts: N

Segment Breakdown:
  Founders / CEOs:         N (X%)
  Sales Leadership:        N (X%)
  Marketing Leadership:    N (X%)
  ...
  Other:                   N (X%)
```

  4. Ask the user: "Want to filter to specific segments before uploading? (e.g. only Founders + Leadership)"
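
The dedup-and-classify logic above can be sketched as follows. The pattern list mirrors the classifier table (abbreviated to a few rows here, in the same priority order); matching is case-insensitive against the headline, and first match wins:

```python
import re

# Priority-ordered (pattern, segment) pairs from the classifier table,
# abbreviated for illustration; fill in the remaining rows in order.
SEGMENTS = [
    (r"founder|co-founder|ceo|owner", "Founders / CEOs"),
    (r"cto|vp.*eng|head of eng|director.*eng", "Engineering Leadership"),
    (r"cmo|vp.*market|head of market|director.*market", "Marketing Leadership"),
    (r"director|vp|vice president|head of|chief", "Directors / VPs / Heads"),
]

def classify(headline: str) -> str:
    """Return the first matching segment; 'Other' if nothing matches."""
    for pattern, segment in SEGMENTS:
        if re.search(pattern, headline.lower()):
            return segment
    return "Other"

def dedupe(engagers: list) -> list:
    """Collapse duplicates by linkedin_url, merging engagement types."""
    by_url = {}
    for e in engagers:
        existing = by_url.get(e["linkedin_url"])
        if existing is None:
            by_url[e["linkedin_url"]] = dict(e)
        elif e["engagement_type"] not in existing["engagement_type"]:
            existing["engagement_type"] += f", {e['engagement_type']}"
    return list(by_url.values())
```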

Step 4: Upload to Extruct people table

Create a new Extruct generic table or append to an existing one. Delegate to the extruct-api skill.
If creating a new table:
```json
{
  "name": "{user-provided name or 'Post Engagers - {date}'}",
  "kind": "generic",
  "column_configs": [
    {"kind": "input", "name": "Full Name", "key": "full_name"},
    {"kind": "input", "name": "LinkedIn URL", "key": "linkedin_url"},
    {"kind": "input", "name": "Job Title", "key": "job_title"},
    {"kind": "input", "name": "Segment", "key": "segment"},
    {"kind": "input", "name": "Engagement Type", "key": "engagement_type"},
    {"kind": "input", "name": "Source Post", "key": "source_post"},
    {"kind": "input", "name": "Company", "key": "company"},
    {"kind": "input", "name": "Domain", "key": "domain"}
  ]
}
```
Upload rows in batches of 50 via the extruct-api skill.
If appending to an existing table:
  • Fetch existing rows to deduplicate against current `linkedin_url` values
  • Upload only new engagers
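
The batching can be sketched generically. Here `upload_batch` is a hypothetical callback standing in for whatever extruct-api call performs the actual row insert (it is not part of the Extruct API itself):

```python
from typing import Callable, List

def upload_in_batches(rows: List[dict],
                      upload_batch: Callable[[List[dict]], None],
                      batch_size: int = 50) -> int:
    """Send rows in fixed-size batches; return the number of batches sent."""
    batches = 0
    for start in range(0, len(rows), batch_size):
        upload_batch(rows[start:start + batch_size])
        batches += 1
    return batches
```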

Step 5: Review and next steps

Present upload summary:

```
Upload Complete:
- Engagers uploaded: N
- Table: {table_name}
- URL: https://app.extruct.ai/tables/{table_id}

Segment Breakdown (uploaded):
  Founders / CEOs:     N
  Sales Leadership:    N
  ...
```
Suggest next steps:
  • "Get emails" → run `email-search` on the people table to enrich with verified emails
  • "Enrich companies" → run `list-enrichment` to add company data (industry, size, funding)
  • "Draft outreach" → run `email-generation` after emails are found
  • "Monitor more posts" → re-run with additional post URLs and deduplicate against this table

Tips

  • Multiple posts = richer list. Scrape 3-5 recent posts from the same account to build a larger pool. Engagers across multiple posts are highly engaged — flag them.
  • Repeat engagers are warmer leads. If someone engaged on 2+ posts, note that in the data — they're more likely to respond to outreach.
  • Filter aggressively. Not all engagers are prospects. Use segment filtering to focus on decision makers and skip students, recruiters, etc.
  • Respect rate limits. LinkedIn scraping providers have varying rate limits. Don't hammer the API — space out requests if scraping many posts.
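
The rate-limit advice can be applied with a simple fixed delay between scraping calls. The default interval is an arbitrary illustration, not any provider's documented limit:

```python
import time

def throttled(calls, delay_seconds: float = 2.0):
    """Yield results of zero-argument callables, pausing between them.

    Use for per-post scraping calls so requests are spaced out instead
    of hitting the provider back-to-back.
    """
    for i, call in enumerate(calls):
        if i > 0:
            time.sleep(delay_seconds)  # wait before every call after the first
        yield call()
```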

Output

| Output | Format | Location |
|--------|--------|----------|
| People table | Extruct generic table | https://app.extruct.ai/tables/{table_id} |
| Engagers CSV | CSV backup | `claude-code-gtm/csv/input/{campaign}/post_engagers.csv` |