# News Aggregator Skill

Fetch real-time hot news from multiple sources.
## Supported Data Sources

| Data Source | Identifier | Type |
|---|---|---|
| Hacker News | `hackernews` | Technology/Entrepreneurship |
| Weibo Hot Search | | Society/Entertainment |
| GitHub Trending | `github` | Open Source Projects |
| 36Kr | `36kr` | Technology/Business |
| Product Hunt | `producthunt` | Product Launch |
| V2EX | | Tech Community |
| Tencent News | | General News |
| Wallstreetcn | | Finance |
## Tool Usage

### Basic Commands

```bash
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py [parameters]
```
### Parameter Description

- `--source`
  - Single source: `hackernews`, `github`, `36kr`, `producthunt`, or any other identifier from the table above
  - Multiple sources (comma separated): `hackernews,github,producthunt`
  - All sources: `all`
- `--limit`: maximum number of entries returned per source (default: 10)
- `--keyword`: keyword filtering (comma separated)
  - Example: `--keyword "AI,LLM,GPT"`
  - Case insensitive; supports word-boundary matching
- `--deep`
  - Downloads and extracts the article body content (truncated to the first 3000 characters)
  - Scrapes concurrently to improve speed
  - Adds the article content as an extra field on each result
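The keyword filter's behavior (case-insensitive, word-boundary matching) can be sketched as follows. This is an illustration of the documented behavior, not the actual implementation in `scripts/fetch_news.py`, and the function name is hypothetical:

```python
import re

def matches_keywords(title: str, keywords: list[str]) -> bool:
    """Return True if any keyword matches the title on a word boundary,
    case-insensitively (illustrative sketch of the --keyword filter)."""
    for kw in keywords:
        # \b gives word-boundary matching for ASCII keywords; CJK keywords
        # have no word boundaries, so they match as plain substrings.
        pattern = r"\b" + re.escape(kw) + r"\b" if kw.isascii() else re.escape(kw)
        if re.search(pattern, title, re.IGNORECASE):
            return True
    return False

print(matches_keywords("OpenAI releases GPT-5", ["GPT"]))   # True
print(matches_keywords("A GPTX variant appears", ["GPT"]))  # False: word boundary
```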
## Output Format

A JSON array; each entry contains:

- Source name
- Title
- Link
- Heat indicator (points, number of replies, star count, etc.)
- Time information
- Article content (only present when `--deep` is used)
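As a sketch of consuming this output, the snippet below parses a hand-written sample entry and renders it as the Markdown link line the report format requires. The field names (`source`, `title`, `url`, `heat`, `time`) are hypothetical placeholders; the real names come from `scripts/fetch_news.py` and may differ:

```python
import json

# Hand-written sample mimicking the JSON array described above.
# Field names here are hypothetical; check the script's real output.
raw = """
[
  {"source": "Hacker News", "title": "Example story",
   "url": "https://example.com/story", "heat": "312 points", "time": "3h ago"}
]
"""

entries = json.loads(raw)
for entry in entries:
    # Render each entry as a Markdown link line for the final report.
    print(f"[{entry['title']}]({entry['url']}) | {entry['source']} | {entry['heat']}")
```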
## Usage Strategies

### 1. Global Scan (Wide Acquisition)

Applicable scenarios: daily news summaries; a comprehensive overview of developments across fields.

```bash
# Fetch from all sources, 15 entries per source, enable deep scraping
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source all --limit 15 --deep
```

Note: a global scan returns roughly 120 entries; filter and classify them semantically according to the user's interests.
### 2. Single Data Source

Applicable scenarios: focus on a specific platform or field.

```bash
# Top 10 entries from Hacker News
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source hackernews --limit 10 --deep

# Top 15 entries from GitHub Trending
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source github --limit 15 --deep
```
### 3. Keyword Search (Intelligent Expansion)

Key rule: automatically expand the user's keyword to cover the whole field.

- User says "AI" → use `"AI,LLM,GPT,Claude,DeepSeek,Gemini,机器学习,RAG,Agent,大模型"`
- User says "frontend" → use `"前端,React,Vue,Next.js,TypeScript,JavaScript,CSS,Vite"`
- User says "finance" → use `"金融,股票,市场,经济,加密货币,比特币,黄金,A股"`
```bash
# Example: user asks "What AI-related news is there?"
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py \
  --source hackernews,github,36kr \
  --limit 20 \
  --keyword "AI,LLM,GPT,Claude,DeepSeek,Agent,大模型" \
  --deep
```
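The expansion rules above amount to a simple lookup table. A minimal sketch, with a hypothetical helper name and mappings copied from the examples in this section:

```python
# Expansion table mirroring the examples in this section; extend as needed.
KEYWORD_EXPANSIONS = {
    "AI": "AI,LLM,GPT,Claude,DeepSeek,Gemini,机器学习,RAG,Agent,大模型",
    "frontend": "前端,React,Vue,Next.js,TypeScript,JavaScript,CSS,Vite",
    "finance": "金融,股票,市场,经济,加密货币,比特币,黄金,A股",
}

def expand_keyword(user_term: str) -> str:
    """Return the expanded --keyword argument; unknown terms pass through
    unchanged (precise proper nouns such as "DeepSeek" stay as-is)."""
    return KEYWORD_EXPANSIONS.get(user_term, user_term)

print(expand_keyword("AI"))
print(expand_keyword("DeepSeek"))  # unchanged: handled by precise search
```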
### 4. Precise Search

Use only for very specific proper nouns.

```bash
# Search news related to "DeepSeek"
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source all --limit 10 --keyword "DeepSeek" --deep
```
## Output Specifications

### Report Format Requirements

Language and style:

- Use Simplified Chinese
- Adopt a magazine/newsletter style (such as The Economist or Morning Brew)
- Professional, concise, and engaging

Report structure:

- Front Page Headlines (3-5 entries): the most important cross-domain news
- Technology & AI: a dedicated section for AI, LLM, and technology content
- Finance/Society: other important categories (sorted by relevance)
Single news item format:

```markdown
### <Number>. [Title Text](Original URL)
**Source**: <Data Source> | **Time**: <Time Information> | **Heat**: <Heat Indicator>
**Key Point**: a one-sentence "so what?" summary
**In-depth Interpretation**:
- Point 1: why it is important
- Point 2: technical details or background
- Point 3: impact and implications
```
Key rules:

- ✅ The title must be a Markdown link: `[OpenAI releases GPT-5](https://...)`
- ❌ Plain-text titles are prohibited
- The metadata line must include: source, time, and heat
- 2-3 interpretation points must be provided when deep scanning is enabled
### Time Filtering and Intelligent Supplement

When the user specifies a time window (such as "the past X hours") and results are scarce (fewer than 5 entries):

- Prioritize the user's window: first list entries that strictly meet the time requirement
- Intelligent supplement: if the list is too short, include high-value/hot entries from a wider range (such as the past 24 hours)
- Clear marking: explicitly mark supplementary entries (such as "⚠️ 18h ago", "🔥 24h Hot")
- Value first: even if it slightly exceeds the time window, prioritize showing SOTA work, major releases, or high-heat content
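The selection strategy above can be sketched as follows. This is an illustrative model of the rules, not code from the skill; the `published` and `heat` field names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def select_entries(entries, window_hours, min_count=5):
    """Keep entries inside the user's time window; if fewer than min_count
    remain, supplement with the hottest older entries, marked so the
    report can flag them. Field names are hypothetical."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    in_window = [e for e in entries if e["published"] >= cutoff]
    if len(in_window) >= min_count:
        return in_window
    # Supplement from outside the window, hottest first, and mark each one.
    older = sorted(
        (e for e in entries if e["published"] < cutoff),
        key=lambda e: e["heat"],
        reverse=True,
    )
    marked = [{**e, "marker": "🔥 24h Hot"} for e in older[: min_count - len(in_window)]]
    return in_window + marked
```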
GitHub Trending special case:

- Strictly return the valid entries in the crawl list (such as the Top 10)
- List all crawled entries
- Do not perform intelligent supplementation
- Conduct an in-depth analysis of each project:
  - Core value: what problem does it solve? Why is it popular?
  - Inspirational thinking: technical or product insights
  - Scenario tags: 3-5 keywords
## Output File

- Save location: the `reports/` folder in the workspace root directory
- File naming: includes a timestamp (such as `tech_news_20260131_1430.md`)
- Full path example: `/Users/dio/Documents/new_vault/reports/tech_news_20260131_1430.md`
- User display: present the full report content in the chat
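The naming convention above can be sketched as a small helper. The function name is hypothetical; the path layout and timestamp format follow the example:

```python
from datetime import datetime
from pathlib import Path

def report_path(workspace_root: str, prefix: str = "tech_news") -> Path:
    """Build a timestamped report path matching the convention above,
    e.g. <workspace_root>/reports/tech_news_20260131_1430.md."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    return Path(workspace_root) / "reports" / f"{prefix}_{stamp}.md"

print(report_path("/Users/dio/Documents/new_vault"))
```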
## Interactive Menu

When the user says "news-aggregator-skill 如意如意" (or a similar "menu/help" trigger phrase):

- Read the command file in the skill directory
- Show the user the list of available commands from that file
- Guide the user to select a number or copy a command to execute