Podcast-to-Everything content pipeline. Takes a podcast RSS feed or raw transcript and generates a full cross-platform content calendar: short-form video clips, Twitter/X threads, LinkedIn articles, newsletter sections, quote cards, blog outlines with SEO keywords, and YouTube Shorts/TikTok scripts. Scores each piece by viral potential (novelty × controversy × utility) and deduplicates against recent output. Use when asked to: "repurpose this podcast", "turn this episode into content", "podcast content calendar", "extract clips from this episode", "podcast to social", "content from RSS feed", "batch process episodes", or any request to turn podcast/audio content into a multi-platform content plan.
## Installation

```bash
npx skill4agent add ericosiu/ai-marketing-skills podcast-pipeline
```

```bash
# Version check (silent if up to date)
python3 telemetry/version_check.py 2>/dev/null || true

# Telemetry opt-in (first run only, then remembers your choice)
python3 telemetry/telemetry_init.py 2>/dev/null || true
```

Privacy: this skill logs usage locally to `~/.ai-marketing-skills/analytics/`. Remote telemetry is opt-in only. No code, file paths, or repo content is ever collected. See `.telemetry/README.md`.
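The RSS modes fetch episode metadata from the feed before transcription. A minimal stdlib-only sketch of that step (hypothetical; the actual pipeline may use a dedicated feed library such as feedparser):

```python
import urllib.request
import xml.etree.ElementTree as ET

def parse_feed(xml_text: str, n: int = 1) -> list[dict]:
    """Return title and audio URL for the newest n items in an RSS feed."""
    root = ET.fromstring(xml_text)
    episodes = []
    for item in root.iter("item"):  # RSS items are conventionally newest-first
        enclosure = item.find("enclosure")
        episodes.append({
            "title": item.findtext("title", default=""),
            "audio_url": enclosure.get("url") if enclosure is not None else None,
        })
        if len(episodes) == n:
            break
    return episodes

def latest_episodes(rss_url: str, n: int = 1) -> list[dict]:
    """Fetch a feed over HTTP and parse its newest n episodes."""
    with urllib.request.urlopen(rss_url) as resp:
        return parse_feed(resp.read().decode("utf-8", errors="replace"), n)
```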
## Inputs

- `--rss <url>`: process the latest episode from an RSS feed
- `--episodes N`: number of episodes to pull
- `--transcript <file>`: process a local transcript file
- `--batch <rss_url> --episodes N`: batch-process the last N episodes

## Content Atoms

- Type: [narrative_arc | quote | controversial_take | data_point | story | framework | prediction]
- Content: [extracted text]
- Timestamp: [start - end, if available]
- Context: [what was being discussed]
- Viral Score: [0-100, see Step 4]
- Suggested platforms: [where this atom works best]

## Short-Form Video Clips

- Hook: [First 3 seconds — pattern interrupt or bold claim]
- Clip segment: [Timestamp range from transcript]
- Caption overlay: [Text for the screen]
- Platform: [YouTube Shorts / TikTok / Instagram Reels]
- Why it works: [What makes this clippable]

## Twitter/X Threads

- Thread hook (tweet 1): [Curiosity gap or bold opener]
- Thread body (5-10 tweets): [Each tweet is one complete thought]
- Thread closer: [CTA — follow, reply, retweet trigger]
- Source atoms: [Which content atoms feed this thread]

## LinkedIn Articles

- Headline: [Specific, benefit-driven]
- Hook paragraph: [Before the "see more" fold — must earn the click]
- Body: [3-5 sections with headers, 800-1200 words]
- CTA: [Engagement driver — question, not link]
- Hashtags: [3-5 relevant, not spammy]

## Newsletter Sections

- Section headline: [Scannable, specific]
- TL;DR: [One sentence, the core insight]
- Body: [3-5 bullet points, each with a takeaway]
- Pull quote: [The most shareable line from the episode]
- Link: [Back to full episode]

## Quote Cards

- Quote text: [Max 20 words — must work as text overlay]
- Attribution: [Speaker name]
- Background suggestion: [Color/mood that matches the tone]
- Platform sizing: [1080x1080 for IG, 1200x675 for Twitter, 1080x1920 for Stories]

## Blog Outlines (SEO)

- Title: [SEO-optimized, includes primary keyword]
- Primary keyword: [Search volume + difficulty estimate]
- Secondary keywords: [3-5 related terms]
- Meta description: [155 chars max]
- H2 sections: [5-7, each maps to a content atom]
- Internal linking opportunities: [Topics that connect to existing content]
- Estimated word count: [1500-2500]

## YouTube Shorts / TikTok Scripts

- HOOK (0-3s): [Pattern interrupt — question, bold claim, or visual]
- SETUP (3-15s): [Context — why should they care]
- PAYOFF (15-45s): [The insight, data, or story resolution]
- CTA (45-60s): [Follow, comment prompt, or part 2 tease]
- On-screen text: [Key phrases to overlay]
- B-roll suggestions: [Visual ideas if not talking-head]

## Viral Scoring (Step 4)

Each piece is scored by viral potential (novelty × controversy × utility):

| Dimension | What It Measures | Signals |
|---|---|---|
| Novelty | Is this new or surprising? | Contrarian takes, unexpected data, first-to-say |
| Controversy | Will people argue about this? | Strong opinions, challenges norms, picks a side |
| Utility | Can someone use this immediately? | Frameworks, how-tos, templates, specific numbers |
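A sketch of how these dimensions could combine into the 0-100 score. The multiplicative form follows the novelty × controversy × utility formula above, but the 0-10 sub-scales and the normalization are assumptions, not documented behavior:

```python
def viral_score(novelty: float, controversy: float, utility: float) -> int:
    """Combine three 0-10 dimension ratings into a 0-100 viral score.

    Multiplicative on purpose: a piece that is weak on any one dimension
    (e.g. useful but neither novel nor controversial) scores low overall.
    """
    for score in (novelty, controversy, utility):
        if not 0 <= score <= 10:
            raise ValueError("dimension scores must be in [0, 10]")
    raw = novelty * controversy * utility  # 0..1000
    return round(raw / 10)                 # normalize to 0..100
```

For example, a piece rated 8 / 6 / 9 lands at 43, which would fall below an `--min-score 80` cutoff.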
## Deduplication and Calendar

Generated pieces are deduplicated against recent output tracked in `output/content_history.json`. Running with `--calendar` aggregates existing outputs into a weekly schedule:

```json
{
  "week_of": "2024-01-15",
  "episode_source": "Episode Title - Guest Name",
  "content_pieces": [
    {
      "date": "2024-01-15",
      "time": "09:00 ET",
      "platform": "twitter",
      "type": "thread",
      "content": "...",
      "viral_score": 85,
      "status": "draft"
    }
  ],
  "total_pieces": 18,
  "avg_viral_score": 72,
  "coverage": {
    "twitter": 6,
    "linkedin": 3,
    "youtube_shorts": 3,
    "newsletter": 1,
    "blog": 1,
    "quote_cards": 4
  }
}
```

## Output Structure

All artifacts are written under `output/`:

```
output/
├── episodes/
│   ├── YYYY-MM-DD-episode-slug/
│   │   ├── transcript.txt
│   │   ├── atoms.json            # Extracted content atoms
│   │   ├── content_pieces.json   # All generated content
│   │   └── calendar.json         # Scheduled calendar
│   └── ...
├── calendar/
│   └── week-YYYY-WNN.json        # Aggregated weekly calendar
├── content_history.json          # Dedup tracking
└── pipeline_log.json             # Run history and stats
```

## Usage

```bash
# Process latest episode from RSS feed
python podcast_pipeline.py --rss "https://feeds.example.com/podcast.xml"

# Process a local transcript
python podcast_pipeline.py --transcript episode-42.txt

# Batch process last 5 episodes
python podcast_pipeline.py --batch "https://feeds.example.com/podcast.xml" --episodes 5

# Generate weekly calendar from existing outputs
python podcast_pipeline.py --calendar

# Process with custom dedup window
python podcast_pipeline.py --rss "https://feeds.example.com/podcast.xml" --dedup-days 60

# Process and only keep 80+ viral score content
python podcast_pipeline.py --rss "https://feeds.example.com/podcast.xml" --min-score 80
```

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| | Yes (for Whisper) | OpenAI API key for audio transcription |
| | Yes (for generation) | Anthropic API key for content generation |
| | Optional | Separate OpenAI key if using GPT for generation instead |
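Set the keys in your shell before running. The variable names below are the conventional defaults for the OpenAI and Anthropic SDKs and are an assumption here; confirm the exact names this pipeline reads in the setup guide:

```shell
# Assumed SDK-default variable names; verify against the setup guide.
export OPENAI_API_KEY="sk-..."          # Whisper audio transcription
export ANTHROPIC_API_KEY="sk-ant-..."   # Claude content generation
```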
| File | Purpose |
|---|---|
| `podcast_pipeline.py` | Main pipeline script |
| | Python dependencies |
| | Setup and usage guide |