competitor-content-tracker


# Competitor Content Tracker

Monitor competitor content activity across three channels — blog, LinkedIn, Twitter/X — and produce a consolidated digest highlighting what's new, what's getting traction, and where you have a content gap.

## When to Use

  • "Track what [competitor] is publishing"
  • "Show me what my competitors posted this week"
  • "What topics are competitors winning on?"
  • "I want a weekly competitor content digest"

## Phase 0: Intake

### Competitors to Track

  1. List of competitor company names + blog URLs (e.g., https://clay.com/blog)
  2. LinkedIn profile URLs of competitor founders/CMOs to track (optional but high-value)
  3. Twitter/X handles of the competitors or their founders (optional)

### Scope

  1. How far back? (default: 7 days for weekly digest, 30 days for first run)
  2. Any topics/keywords you care most about? (used to surface relevant posts first)

### Output

  1. Format preference: full digest (everything) or highlights only (top 3-5 per competitor)?

Save config to `clients/<client-name>/configs/competitor-content-tracker.json`:

```json
{
  "competitors": [
    {
      "name": "Clay",
      "blog_url": "https://clay.com/blog",
      "linkedin_profiles": ["https://www.linkedin.com/in/kareem-amin/"],
      "twitter_handles": ["@clay_hq", "@kareemamin"]
    }
  ],
  "days_back": 7,
  "keywords": ["GTM", "outbound", "AI agents", "growth"],
  "output_mode": "highlights"
}
```
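As a sketch, the config can be loaded and sanity-checked before Phase 1 (the `load_tracker_config` helper is hypothetical, not part of the skill; field names and defaults follow the example above):

```python
import json

REQUIRED_COMPETITOR_FIELDS = {"name", "blog_url"}

def load_tracker_config(path):
    """Load the tracker config and verify the fields the later phases rely on."""
    with open(path) as f:
        cfg = json.load(f)
    for comp in cfg["competitors"]:
        missing = REQUIRED_COMPETITOR_FIELDS - comp.keys()
        if missing:
            raise ValueError(f"{comp.get('name', '?')}: missing {sorted(missing)}")
    # Defaults from Phase 0: weekly window, highlights-only output
    cfg.setdefault("days_back", 7)
    cfg.setdefault("output_mode", "highlights")
    return cfg
```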

## Phase 1: Scrape Blog Content

Run `blog-scraper` for each competitor blog URL:

```bash
python3 skills/blog-scraper/scripts/scrape_blogs.py \
  --urls "<competitor_blog_url>" \
  --days <days_back> \
  --keywords "<keywords>" \
  --output summary
```

Collect: post title, publish date, URL, excerpt.
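As a sketch, the invocation above can be driven from the saved config with a small Python loop (the `blog_scrape_cmd` helper is hypothetical, and joining keywords with commas is an assumption about the `--keywords` format):

```python
import subprocess

def blog_scrape_cmd(blog_url, days_back, keywords):
    """Build the blog-scraper invocation shown above for one competitor."""
    return [
        "python3", "skills/blog-scraper/scripts/scrape_blogs.py",
        "--urls", blog_url,
        "--days", str(days_back),
        "--keywords", ",".join(keywords),  # assumed comma-separated
        "--output", "summary",
    ]

# Example: run once per competitor in the loaded config
# for comp in cfg["competitors"]:
#     subprocess.run(
#         blog_scrape_cmd(comp["blog_url"], cfg["days_back"], cfg["keywords"]),
#         check=True,
#     )
```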

## Phase 2: Scrape LinkedIn Posts

Run `linkedin-profile-post-scraper` for each tracked founder/executive LinkedIn URL:

```bash
python3 skills/linkedin-profile-post-scraper/scripts/scrape_linkedin_posts.py \
  --profiles "<linkedin_url_1>,<linkedin_url_2>" \
  --days <days_back> \
  --max-posts 20 \
  --output summary
```

Collect: post text preview, date, reactions, comments, post URL.

## Phase 3: Scrape Twitter/X

Run `twitter-scraper` for each handle:

```bash
python3 skills/twitter-scraper/scripts/search_twitter.py \
  --query "from:<handle>" \
  --since <YYYY-MM-DD> \
  --until <YYYY-MM-DD> \
  --max-tweets 20 \
  --output summary
```

Collect: tweet text, date, likes, retweets, URL.
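The `--since`/`--until` window can be derived from the configured `days_back`; a minimal sketch (the `tweet_window` helper is hypothetical):

```python
from datetime import date, timedelta

def tweet_window(days_back, today=None):
    """Return (--since, --until) dates in YYYY-MM-DD for the query above."""
    today = today or date.today()
    since = today - timedelta(days=days_back)
    return since.isoformat(), today.isoformat()
```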

## Phase 4: Analyze & Synthesize

After collecting raw data, synthesize across all channels:

For each competitor, identify:


  • New blog posts — titles, dates, topics
  • Top LinkedIn post — by engagement (reactions + comments), topic, key message
  • Top tweet — by likes, topic
  • Recurring themes — what topics did they post about most this period?
  • Content format patterns — are they doing listicles, opinion pieces, case studies?

Cross-competitor analysis:


  • Shared trending topics — what are multiple competitors writing about?
  • Coverage gaps — topics they're covering that you're not
  • Topics you own — where you're publishing and they're not
  • Engagement benchmarks — average likes/reactions across competitors (context for your own performance)
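The ranking and gap steps above can be sketched in a few lines (the data shapes and helper names are hypothetical; the actual analysis may differ):

```python
def top_post(posts):
    """Pick the post with the highest engagement (reactions + comments),
    matching the 'Top LinkedIn post' criterion above."""
    return max(posts, key=lambda p: p.get("reactions", 0) + p.get("comments", 0))

def coverage_gaps(competitor_topics, your_topics):
    """Topics competitors cover that you don't, and topics only you own."""
    gaps = sorted(set(competitor_topics) - set(your_topics))
    owned = sorted(set(your_topics) - set(competitor_topics))
    return gaps, owned
```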

## Phase 5: Output Format

Produce a structured markdown digest:

```markdown
# Competitor Content Digest — Week of [DATE]

## Summary

- [N] new blog posts tracked across [N] competitors
- Top trending topic: [topic]
- Biggest content gap for you: [topic]

## [Competitor Name]

### Blog

- [Post Title] — [Date] — [URL]
  [One-sentence summary]

### LinkedIn (top post)

"[Post preview...]" — [Author], [Date] | [Reactions] reactions, [Comments] comments [URL]

### Twitter/X (top tweet)

"[Tweet text]" — [@handle], [Date] | [Likes] likes [URL]

**Themes this week:** [tag1], [tag2], [tag3]

## Content Gap Analysis

| Topic   | Competitors covering | You covering |
|---------|----------------------|--------------|
| [topic] | Clay, Apollo         | ❌ No        |
| [topic] | Nobody               | ✅ Yes       |

## Recommended Actions

1. [Specific content opportunity to act on this week]
2. [Topic to consider writing a response/alternative take on]
```

Save digest to `clients/<client-name>/intelligence/competitor-content-[YYYY-MM-DD].md`.
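A small hypothetical helper for building the dated digest path (the `clients/` layout follows the convention above; `client_name` is whatever slug you use for the client directory):

```python
from datetime import date
from pathlib import Path

def digest_path(client_name, base="clients", on=None):
    """Build clients/<client-name>/intelligence/competitor-content-YYYY-MM-DD.md."""
    on = on or date.today()
    return Path(base) / client_name / "intelligence" / f"competitor-content-{on.isoformat()}.md"
```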

## Scheduling

This skill is designed to run weekly (Mondays recommended). Set up a cron job:

```bash
# Every Monday at 8am
0 8 * * 1 python3 run_skill.py competitor-content-tracker --client <client-name>
```

## Cost

| Component                | Cost                           |
|--------------------------|--------------------------------|
| Blog scraping (RSS mode) | Free                           |
| LinkedIn post scraping   | ~$0.05-0.20/profile (Apify)    |
| Twitter scraping         | ~$0.01-0.05 per run            |
| Total per weekly run     | ~$0.10-0.50 depending on scope |
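Assuming the table's rates, the per-run cost range scales mostly with the number of LinkedIn profiles tracked; a rough estimator (a sketch, not a billing guarantee):

```python
def estimate_weekly_cost(n_linkedin_profiles, n_twitter_runs=1):
    """Return a (low, high) USD range per weekly run, using the table's rates."""
    low = 0.05 * n_linkedin_profiles + 0.01 * n_twitter_runs
    high = 0.20 * n_linkedin_profiles + 0.05 * n_twitter_runs
    return low, high
```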

## Tools Required

  • Apify API token (`APIFY_API_TOKEN` env var)
  • Upstream skills: `blog-scraper`, `linkedin-profile-post-scraper`, `twitter-scraper`

## Trigger Phrases

  • "Run competitor content tracker for [client]"
  • "What did my competitors publish this week?"
  • "Give me a competitor content digest"
  • "What's [competitor] writing about?"