post-engagers
LinkedIn Post Engagers
Turn LinkedIn post engagement into a prospecting list. Extract commenters, reactors, and reposters from any LinkedIn post — then enrich and upload to Extruct for outreach.
Related Skills
post-engagers → email-search → email-generation → campaign-sending

This skill produces a people table. The next step (`email-search`) gets verified emails, then `email-generation` drafts personalized outreach.
Extruct API Operations
This skill delegates all Extruct API calls to the `extruct-api` skill.

For all Extruct API operations, read and follow the instructions in `skills/extruct-api/SKILL.md`.

Table creation, row uploads, and data fetching are handled by the extruct-api skill. This skill focuses on scraping LinkedIn engagers and preparing the data — the extruct-api skill handles the API execution.
Inputs
| Input | Source | Required |
|---|---|---|
| LinkedIn post URL(s) | User provides | yes |
| Engagement types to scrape | User choice: comments, reactions, reposts (default: all) | no |
| LinkedIn scraping provider | User choice (see provider list below) | yes |
| Existing people table ID | Extruct table to append to (or create new) | no |
LinkedIn Scraping Providers
This skill does not mandate a specific provider. Ask the user which LinkedIn scraping tool they want to use. Below are known options — the user may have others.
| Provider | Engagement types | Auth | Notes |
|---|---|---|---|
| Anysite MCP | Comments, reactions, reposts | MCP connection | Built into Claude Code via MCP. Tools: `get_linkedin_post_comments`, `get_linkedin_post_reactions`, `get_linkedin_post_reposts` |
| RapidAPI (LinkedIn scrapers) | Comments, reactions, reposts | RapidAPI key | Multiple scrapers available (e.g. Fresh LinkedIn Profile Data, LinkedIn Bulk Data Scraper). Check endpoint docs per scraper |
| Apify | Comments, reactions, reposts | Apify API token | LinkedIn post scraper actors; check each actor's docs for inputs |
| Phantombuster | Comments, reactions | Phantombuster API key | Phantoms: "LinkedIn Post Commenters", "LinkedIn Post Likers" |
| Custom / self-hosted | Varies | Varies | User may have their own scraping setup |
If the user doesn't know where to start:
- Anysite MCP is the simplest if they have it connected — no extra credentials needed
- Apify is a good general choice with pay-per-use pricing
- RapidAPI has multiple scrapers with free tiers
Workflow
Step 1: Collect post URLs and choose provider
- Get the LinkedIn post URL(s) from the user. Accept one or multiple.
- Ask which engagement types to scrape: comments, reactions, reposts, or all three.
- Ask which LinkedIn scraping provider they want to use (see table above).
- If the provider requires credentials, confirm they're available.
Extract the activity URN from each post URL. The numeric ID typically appears after `activity-` or `ugcPost-` in the URL (e.g. `activity:7433261939285385217`).
Step 2: Scrape engagers
Use the chosen provider to fetch engagement data. The approach varies by provider:
If using Anysite MCP:
- Comments: `mcp__claude_ai_Anysite__get_linkedin_post_comments` with `urn: "activity:{id}"`, `count: 1500`
- Reactions: `mcp__claude_ai_Anysite__get_linkedin_post_reactions` with `urn: "activity:{id}"`, `count: 1500`
- Reposts: `mcp__claude_ai_Anysite__get_linkedin_post_reposts` with `urn: "activity:{id}"`, `count: 1500`
If using another provider:
- Read or fetch the provider's API documentation
- Identify the endpoint, input format, and response structure
- Implement the scraping calls accordingly
For each engager, extract (field names vary by provider):

```python
{
    "full_name": "...",
    "linkedin_url": "...",      # profile URL
    "headline": "...",          # job title / headline
    "engagement_type": "...",   # comment / reaction / repost
    "post_url": "...",          # which post they engaged with
}
```

If scraping multiple posts, tag each engager with the `post_url` they engaged with.
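A hedged sketch of mapping a raw provider item into that shape; the raw field names here (`name`, `profile_url`, `headline`) are placeholder assumptions to adjust per provider:

```python
def normalize_engager(raw: dict, engagement_type: str, post_url: str) -> dict:
    """Map one raw provider item into the common engager shape.

    The raw keys used below are illustrative; check the chosen
    provider's response structure and rename accordingly.
    """
    return {
        "full_name": raw.get("name", ""),
        # Strip query-string tracking params so URLs dedupe cleanly later
        "linkedin_url": (raw.get("profile_url") or "").split("?")[0],
        "headline": raw.get("headline", ""),
        "engagement_type": engagement_type,
        "post_url": post_url,
    }
```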
Step 3: Deduplicate and classify
- Deduplicate by `linkedin_url` across all posts and engagement types. If someone both commented and reacted, keep both engagement types as a comma-separated value.
- Apply the segment classifier to job titles (first match wins):
| Priority | Pattern | Segment |
|---|---|---|
| 1 | | Founders / CEOs |
| 2 | | Engineering Leadership |
| 3 | | Marketing Leadership |
| 4 | | Sales Leadership |
| 5 | | Directors / VPs / Heads |
| 6 | | RevOps / Growth Ops |
| 7 | | Product |
| 8 | | Data / ML |
| 9 | | Sales ICs |
| 10 | | Marketing / Content |
| 11 | | Sales (General) |
| 12 | | AI / Automation Builders |
| 13 | | Consultants / Agencies |
| 14 | | Engineering / Product / Data |
| — | (no match) | Other |
- Present a segment breakdown to the user before proceeding:
```text
Engager Summary:
- Total unique engagers: N
- Comments: N | Reactions: N | Reposts: N

Segment Breakdown:
  Founders / CEOs: N (X%)
  Sales Leadership: N (X%)
  Marketing Leadership: N (X%)
  ...
  Other: N (X%)
```

- Ask the user: "Want to filter to specific segments before uploading? (e.g. only Founders + Leadership)"
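The dedupe-and-classify pass might look like this sketch; the segment patterns shown are illustrative stand-ins for the full classifier table above:

```python
import re

# Illustrative subset of the classifier table (priority order, first match wins)
SEGMENTS = [
    (re.compile(r"founder|co-founder|ceo", re.I), "Founders / CEOs"),
    (re.compile(r"cto|vp engineering|head of engineering", re.I), "Engineering Leadership"),
    (re.compile(r"cmo|vp marketing", re.I), "Marketing Leadership"),
]

def classify(headline: str) -> str:
    """Return the first matching segment, or 'Other' on no match."""
    for pattern, segment in SEGMENTS:
        if pattern.search(headline or ""):
            return segment
    return "Other"

def dedupe(engagers: list[dict]) -> list[dict]:
    """Collapse engagers by linkedin_url, merging engagement types."""
    by_url: dict[str, dict] = {}
    for e in engagers:
        key = e["linkedin_url"]
        if key in by_url:
            # Same person engaged again: keep both types, comma-separated
            types = set(by_url[key]["engagement_type"].split(", "))
            types.add(e["engagement_type"])
            by_url[key]["engagement_type"] = ", ".join(sorted(types))
        else:
            by_url[key] = {**e, "segment": classify(e.get("headline", ""))}
    return list(by_url.values())
```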
Step 4: Upload to Extruct people table
Create a new Extruct generic table or append to an existing one. Delegate to the extruct-api skill.
If creating a new table:
```json
{
  "name": "{user-provided name or 'Post Engagers - {date}'}",
  "kind": "generic",
  "column_configs": [
    {"kind": "input", "name": "Full Name", "key": "full_name"},
    {"kind": "input", "name": "LinkedIn URL", "key": "linkedin_url"},
    {"kind": "input", "name": "Job Title", "key": "job_title"},
    {"kind": "input", "name": "Segment", "key": "segment"},
    {"kind": "input", "name": "Engagement Type", "key": "engagement_type"},
    {"kind": "input", "name": "Source Post", "key": "source_post"},
    {"kind": "input", "name": "Company", "key": "company"},
    {"kind": "input", "name": "Domain", "key": "domain"}
  ]
}
```

Upload rows in batches of 50 via the extruct-api skill.

If appending to an existing table:
- Fetch existing rows to deduplicate against current `linkedin_url` values
- Upload only new engagers
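A minimal sketch of the 50-row batching; `upload_rows` is a hypothetical stand-in for the call the extruct-api skill performs:

```python
def chunked(rows: list, size: int = 50):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def upload_in_batches(rows: list, upload_rows) -> int:
    """Push rows in 50-row batches; returns the number of batches sent."""
    count = 0
    for batch in chunked(rows):
        upload_rows(batch)  # delegated to the extruct-api skill in practice
        count += 1
    return count
```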
Step 5: Review and next steps
Present upload summary:

```text
Upload Complete:
- Engagers uploaded: N
- Table: {table_name}
- URL: https://app.extruct.ai/tables/{table_id}

Segment Breakdown (uploaded):
  Founders / CEOs: N
  Sales Leadership: N
  ...
```

Suggest next steps:
- "Get emails" → run `email-search` on the people table to enrich with verified emails
- "Enrich companies" → run `list-enrichment` to add company data (industry, size, funding)
- "Draft outreach" → run `email-generation` after emails are found
- "Monitor more posts" → re-run with additional post URLs and deduplicate against this table
Tips
- Multiple posts = richer list. Scrape 3-5 recent posts from the same account to build a larger pool. Engagers across multiple posts are highly engaged — flag them.
- Repeat engagers are warmer leads. If someone engaged on 2+ posts, note that in the data — they're more likely to respond to outreach.
- Filter aggressively. Not all engagers are prospects. Use segment filtering to focus on decision makers and skip students, recruiters, etc.
- Respect rate limits. LinkedIn scraping providers have varying rate limits. Don't hammer the API — space out requests if scraping many posts.
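For example, a simple way to space out calls when scraping several posts (the 2-second default delay is an assumption; check your provider's documented limits):

```python
import time

def scrape_posts(post_urns: list, fetch, delay_s: float = 2.0) -> list:
    """Fetch engagement data for several posts with a pause between calls.

    `fetch` is whatever provider call you are using for one post.
    """
    results = []
    for i, urn in enumerate(post_urns):
        if i:
            time.sleep(delay_s)  # pause between posts to respect rate limits
        results.append(fetch(urn))
    return results
```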
Output
| Output | Format | Location |
|---|---|---|
| People table | Extruct generic table | https://app.extruct.ai/tables/{table_id} |
| Engagers CSV | CSV backup | |