# post-engagers

Extract people who engage (comment, react, repost) on any LinkedIn post, enrich their emails and company data, and upload to an Extruct people table for outreach. Supports multiple LinkedIn scraping providers (Anysite MCP, RapidAPI, Apify, Phantombuster, etc.). Triggers on: "post engagers", "linkedin engagers", "who commented on", "who liked", "who reacted", "linkedin post engagers", "scrape post", "extract engagers", "post commenters".

Source: extruct-ai/gtm-skills

Install: `npx skill4agent add extruct-ai/gtm-skills post-engagers`

## LinkedIn Post Engagers
Turn LinkedIn post engagement into a prospecting list. Extract commenters, reactors, and reposters from any LinkedIn post — then enrich and upload to Extruct for outreach.
## Related Skills

`post-engagers` → `email-search` → `email-generation` → `campaign-sending`

This skill produces a people table. The next step (`email-search`) gets verified emails, then `email-generation` drafts personalized outreach.

## Extruct API Operations

This skill delegates all Extruct API calls to the `extruct-api` skill. For all Extruct API operations, read and follow the instructions in `skills/extruct-api/SKILL.md`.

Table creation, row uploads, and data fetching are handled by the extruct-api skill. This skill focuses on scraping LinkedIn engagers and preparing the data — the extruct-api skill handles the API execution.
## Inputs
| Input | Source | Required |
|---|---|---|
| LinkedIn post URL(s) | User provides | yes |
| Engagement types to scrape | User choice: comments, reactions, reposts (default: all) | no |
| LinkedIn scraping provider | User choice (see provider list below) | yes |
| Existing people table ID | Extruct table to append to (or create new) | no |
## LinkedIn Scraping Providers
This skill does not mandate a specific provider. Ask the user which LinkedIn scraping tool they want to use. Below are known options — the user may have others.
| Provider | Engagement types | Auth | Notes |
|---|---|---|---|
| Anysite MCP | Comments, reactions, reposts | MCP connection | Built into Claude Code via MCP. Tools: `get_linkedin_post_comments`, `get_linkedin_post_reactions`, `get_linkedin_post_reposts` |
| RapidAPI (LinkedIn scrapers) | Comments, reactions, reposts | RapidAPI key | Multiple scrapers available (e.g. Fresh LinkedIn Profile Data, LinkedIn Bulk Data Scraper). Check endpoint docs per scraper |
| Apify | Comments, reactions, reposts | Apify API token | Actors: |
| Phantombuster | Comments, reactions | Phantombuster API key | Phantoms: "LinkedIn Post Commenters", "LinkedIn Post Likers" |
| Custom / self-hosted | Varies | Varies | User may have their own scraping setup |
If the user doesn't know where to start:
- Anysite MCP is the simplest if they have it connected — no extra credentials needed
- Apify is a good general choice with pay-per-use pricing
- RapidAPI has multiple scrapers with free tiers
## Workflow

### Step 1: Collect post URLs and choose provider
- Get the LinkedIn post URL(s) from the user. Accept one or multiple.
- Ask which engagement types to scrape: comments, reactions, reposts, or all three.
- Ask which LinkedIn scraping provider they want to use (see table above).
- If the provider requires credentials, confirm they're available.
Extract the activity URN from each post URL. The numeric ID is typically after `activity-` or `ugcPost-` in the URL (e.g. `activity:7433261939285385217`).

### Step 2: Scrape engagers
Use the chosen provider to fetch engagement data. The approach varies by provider:
If using Anysite MCP:

- Comments: `mcp__claude_ai_Anysite__get_linkedin_post_comments` with `urn: "activity:{id}"`, `count: 1500`
- Reactions: `mcp__claude_ai_Anysite__get_linkedin_post_reactions` with `urn: "activity:{id}"`, `count: 1500`
- Reposts: `mcp__claude_ai_Anysite__get_linkedin_post_reposts` with `urn: "activity:{id}"`, `count: 1500`
If using another provider:
- Read or fetch the provider's API documentation
- Identify the endpoint, input format, and response structure
- Implement the scraping calls accordingly
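Whatever the provider, a small normalizer keeps the downstream steps provider-agnostic. A sketch, assuming raw fields named `name`, `profile_url`, and `title` (these names are assumptions; check your provider's response schema):

```python
def normalize_engager(raw: dict, engagement_type: str, post_url: str) -> dict:
    """Map one raw provider record into the common engager shape used by later steps."""
    return {
        "full_name": raw.get("name", ""),
        "linkedin_url": (raw.get("profile_url") or "").rstrip("/"),
        "headline": raw.get("title", ""),
        "engagement_type": engagement_type,
        "post_url": post_url,
    }
```

Normalizing the profile URL (stripping trailing slashes) matters here because Step 3 deduplicates on exact `linkedin_url` matches.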
For each engager, extract (field names vary by provider):

```python
{
    "full_name": "...",
    "linkedin_url": "...",    # profile URL
    "headline": "...",        # job title / headline
    "engagement_type": "...", # comment / reaction / repost
    "post_url": "...",        # which post they engaged with
}
```

If scraping multiple posts, tag each engager with the `post_url` they engaged with.

### Step 3: Deduplicate and classify
- Deduplicate by `linkedin_url` across all posts and engagement types. If someone both commented and reacted, keep both engagement types as a comma-separated value.
- Apply the segment classifier to job titles (first match wins):
| Priority | Pattern | Segment |
|---|---|---|
| 1 | | Founders / CEOs |
| 2 | | Engineering Leadership |
| 3 | | Marketing Leadership |
| 4 | | Sales Leadership |
| 5 | | Directors / VPs / Heads |
| 6 | | RevOps / Growth Ops |
| 7 | | Product |
| 8 | | Data / ML |
| 9 | | Sales ICs |
| 10 | | Marketing / Content |
| 11 | | Sales (General) |
| 12 | | AI / Automation Builders |
| 13 | | Consultants / Agencies |
| 14 | | Engineering / Product / Data |
| — | (no match) | Other |
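The dedup-and-classify step above can be sketched as follows. Note the regex patterns here are placeholder assumptions for illustration only; the skill's actual pattern column is not reproduced in this sketch:

```python
import re

# Placeholder patterns only, in priority order; first match wins.
SEGMENTS = [
    (re.compile(r"founder|ceo", re.I), "Founders / CEOs"),
    (re.compile(r"cto|vp.*engineering", re.I), "Engineering Leadership"),
    (re.compile(r"cmo|vp.*marketing", re.I), "Marketing Leadership"),
]

def classify(headline: str) -> str:
    for pattern, segment in SEGMENTS:
        if pattern.search(headline or ""):
            return segment
    return "Other"

def deduplicate(engagers: list[dict]) -> list[dict]:
    # Keyed by linkedin_url; merges engagement types as a comma-separated value.
    seen: dict[str, dict] = {}
    for engager in engagers:
        key = engager["linkedin_url"]
        if key in seen:
            types = seen[key]["engagement_type"].split(", ")
            if engager["engagement_type"] not in types:
                seen[key]["engagement_type"] += ", " + engager["engagement_type"]
        else:
            seen[key] = dict(engager)
    return list(seen.values())
```
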
- Present a segment breakdown to the user before proceeding:

```
Engager Summary:
- Total unique engagers: N
- Comments: N | Reactions: N | Reposts: N

Segment Breakdown:
  Founders / CEOs: N (X%)
  Sales Leadership: N (X%)
  Marketing Leadership: N (X%)
  ...
  Other: N (X%)
```

- Ask the user: "Want to filter to specific segments before uploading? (e.g. only Founders + Leadership)"
### Step 4: Upload to Extruct people table
Create a new Extruct generic table or append to an existing one. Delegate to the extruct-api skill.
If creating a new table:
```json
{
  "name": "{user-provided name or 'Post Engagers - {date}'}",
  "kind": "generic",
  "column_configs": [
    {"kind": "input", "name": "Full Name", "key": "full_name"},
    {"kind": "input", "name": "LinkedIn URL", "key": "linkedin_url"},
    {"kind": "input", "name": "Job Title", "key": "job_title"},
    {"kind": "input", "name": "Segment", "key": "segment"},
    {"kind": "input", "name": "Engagement Type", "key": "engagement_type"},
    {"kind": "input", "name": "Source Post", "key": "source_post"},
    {"kind": "input", "name": "Company", "key": "company"},
    {"kind": "input", "name": "Domain", "key": "domain"}
  ]
}
```

Upload rows in batches of 50 via the extruct-api skill.
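The batching can be sketched like this; `upload_batch` is a hypothetical stand-in for the extruct-api skill's row-upload operation, which this skill delegates to:

```python
def chunked(rows: list[dict], size: int = 50):
    """Yield successive batches of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def upload_all(rows: list[dict], upload_batch) -> int:
    """Upload rows in batches; returns the number of batches sent."""
    batches = 0
    for batch in chunked(rows):
        upload_batch(batch)  # delegated to the extruct-api skill in practice
        batches += 1
    return batches
```
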
If appending to an existing table:

- Fetch existing rows to deduplicate against current `linkedin_url` values
- Upload only new engagers
### Step 5: Review and next steps
Present upload summary:
```
Upload Complete:
- Engagers uploaded: N
- Table: {table_name}
- URL: https://app.extruct.ai/tables/{table_id}

Segment Breakdown (uploaded):
  Founders / CEOs: N
  Sales Leadership: N
  ...
```

Suggest next steps:
- "Get emails" → run `email-search` on the people table to enrich with verified emails
- "Enrich companies" → run `list-enrichment` to add company data (industry, size, funding)
- "Draft outreach" → run `email-generation` after emails are found
- "Monitor more posts" → re-run with additional post URLs and deduplicate against this table
## Tips
- Multiple posts = richer list. Scrape 3-5 recent posts from the same account to build a larger pool. Engagers across multiple posts are highly engaged — flag them.
- Repeat engagers are warmer leads. If someone engaged on 2+ posts, note that in the data — they're more likely to respond to outreach.
- Filter aggressively. Not all engagers are prospects. Use segment filtering to focus on decision makers and skip students, recruiters, etc.
- Respect rate limits. LinkedIn scraping providers have varying rate limits. Don't hammer the API — space out requests if scraping many posts.
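Spacing out requests can be as simple as a minimal throttle (illustrative sketch, not part of the skill; the 2-second default is an assumption to tune per provider):

```python
import time

class Throttle:
    """Guarantees at least `min_interval` seconds between successive calls."""

    def __init__(self, min_interval: float = 2.0):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()
```

Call `wait()` immediately before each provider request; the first call returns instantly and later calls sleep only as long as needed.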
## Output
| Output | Format | Location |
|---|---|---|
| People table | Extruct generic table | |
| Engagers CSV | CSV backup | |