# News Aggregator Skill
Fetch real-time hot news from multiple sources.
## Supported Data Sources

| Data Source | Identifier | Type |
|---|---|---|
| Hacker News | `hackernews` | Technology/Startups |
| Weibo Hot Search | `weibo` | Society/Entertainment |
| GitHub Trending | `github` | Open Source Projects |
| 36Kr | `36kr` | Technology/Business |
| Product Hunt | `producthunt` | Product Launches |
| V2EX | `v2ex` | Tech Community |
| Tencent News | `tencent` | General News |
| Wallstreetcn | `wallstreetcn` | Finance |
## Tool Usage

### Basic Commands
```bash
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py [parameters]
```

### Parameters
- `--source <source>`: specify data sources
  - Single source: `hackernews`, `weibo`, `github`, `36kr`, `producthunt`, `v2ex`, `tencent`, `wallstreetcn`
  - Multiple sources (comma-separated): `hackernews,github,producthunt`
  - All sources: `all`
- `--limit <count>`: maximum number of entries returned per source (default: 10)
- `--keyword <keywords>`: keyword filter (comma-separated)
  - Example: `"AI,LLM,GPT"`
  - Case-insensitive; supports word-boundary matching
- `--deep`: enable deep scraping
  - Downloads and extracts article body text (truncated to the first 3000 characters)
  - Fetches concurrently for speed
  - Adds a `content` field to each result
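The `--keyword` behavior described above (case-insensitive, word-boundary matching) can be sketched in Python. `matches_keywords` below is an illustrative helper, not the tool's actual implementation:

```python
import re

def matches_keywords(title: str, keywords: str) -> bool:
    """Illustrative sketch of --keyword filtering: case-insensitive,
    word-boundary matching against a comma-separated keyword list."""
    for kw in (k.strip() for k in keywords.split(",") if k.strip()):
        # \b keeps "AI" from matching inside "OpenAI" while still
        # matching "GPT" in "GPT-5" (the hyphen is a word boundary).
        if re.search(r"\b" + re.escape(kw) + r"\b", title, flags=re.IGNORECASE):
            return True
    return False

print(matches_keywords("OpenAI releases GPT-5", "AI,LLM,GPT"))  # True: "GPT" matches as a whole word
print(matches_keywords("Rainfall records broken", "AI"))        # False: "ai" inside "Rainfall" is not a whole word
```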
### Output Format

A JSON array in which each entry contains:

- `source`: source name
- `title`: title
- `url`: link
- `heat`: heat indicator (points, reply count, star count, etc.)
- `time`: time information
- `content`: article body (present only when `--deep` is used)
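A consumer of this output might load and rank the array as follows. Only the field names come from the format above; the sample values are made up for illustration:

```python
import json

# Illustrative sample of the tool's JSON output; field names match the
# documented format, the values are invented for this example.
raw = """[
  {"source": "Hacker News", "title": "Example story", "url": "https://example.com/story",
   "heat": 312, "time": "2 hours ago"},
  {"source": "GitHub Trending", "title": "example/repo", "url": "https://github.com/example/repo",
   "heat": 120, "time": "today"}
]"""

entries = json.loads(raw)

# Rank by the heat indicator before rendering the report.
entries.sort(key=lambda e: e.get("heat", 0), reverse=True)

for e in entries:
    # Matches the report's required linked-title + metadata-line shape.
    print(f"[{e['title']}]({e['url']})")
    print(f"Source: {e['source']} | Time: {e['time']} | Heat: {e['heat']}")
```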
## Usage Strategies
### 1. Global Scan (Broad Acquisition)

Use case: daily news digests, or a comprehensive view of what is happening across domains.

```bash
# Fetch from all sources, 15 entries per source, with deep scraping enabled
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source all --limit 15 --deep
```

**Note**: A global scan returns roughly 120 entries; filter and categorize them semantically according to the user's interests.
### 2. Single Data Source

Use case: focusing on a specific platform or domain.

```bash
# Top 10 entries from Hacker News
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source hackernews --limit 10 --deep

# Top 15 entries from GitHub Trending
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source github --limit 15 --deep
```
### 3. Keyword Search (Intelligent Expansion)

Key rule: automatically expand the user's keywords to cover the whole domain.

- User says "AI" → use: `"AI,LLM,GPT,Claude,DeepSeek,Gemini,机器学习,RAG,Agent,大模型"`
- User says "frontend" → use: `"前端,React,Vue,Next.js,TypeScript,JavaScript,CSS,Vite"`
- User says "finance" → use: `"金融,股票,市场,经济,加密货币,比特币,黄金,A股"`

```bash
# Example: the user asks "What AI-related news is there?"
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py \
    --source hackernews,github,36kr \
    --limit 20 \
    --keyword "AI,LLM,GPT,Claude,DeepSeek,Agent,大模型" \
    --deep
```
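The expansion rule can be kept as a simple lookup table. The mapping below mirrors the examples in this document and is an illustrative sketch, not part of the tool:

```python
# Keyword-expansion table mirroring the examples above; an illustrative
# sketch, not the tool's actual implementation.
EXPANSIONS = {
    "AI": "AI,LLM,GPT,Claude,DeepSeek,Gemini,机器学习,RAG,Agent,大模型",
    "frontend": "前端,React,Vue,Next.js,TypeScript,JavaScript,CSS,Vite",
    "finance": "金融,股票,市场,经济,加密货币,比特币,黄金,A股",
}

def expand_keyword(user_term: str) -> str:
    """Return the expanded comma-separated list for --keyword,
    falling back to the user's own term when no expansion is known."""
    return EXPANSIONS.get(user_term, user_term)
```

An unknown term such as `"DeepSeek"` passes through unchanged, which matches the precise-search strategy below.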
### 4. Precise Search

Use only for very specific proper nouns.

```bash
# Search for news related to "DeepSeek"
uv run --directory .agents/skills/news-aggregator-skill python scripts/fetch_news.py --source all --limit 10 --keyword "DeepSeek" --deep
```
## Output Specifications

### Report Format Requirements
**Language and style:**

- Write in Simplified Chinese
- Use a magazine/newsletter style (in the vein of The Economist or Morning Brew)
- Professional, concise, and engaging

**Report structure:**

1. **Front-page headlines** (3-5 entries): the most important cross-domain news
2. **Technology & AI**: a dedicated section for AI, LLM, and technology-related content
3. **Finance/Society**: other important categories, as relevance warrants

**Single news item format:**
undefined序号. 标题文本
Number. [Title Text](Original URL)
来源:<数据源> | 时间:<时间信息> | 热度:<热度指标>
核心要点:一句话概括"所以呢?"
深度解读:
- 要点 1:为什么重要
- 要点 2:技术细节或背景
- 要点 3:影响和启示
**关键规则**:
- ✅ **标题必须是 Markdown 链接**:`[OpenAI 发布 GPT-5](https://...)`
- ❌ **禁止纯文本标题**:`OpenAI 发布 GPT-5`
- 元数据行必须包含:来源、时间、热度
- 深度扫描时必须提供 2-3 条解读要点Source: <Data Source> | Time: <Time Information> | Heat: <Heat Indicator>
Key Point: One sentence summary of "so what?"
In-depth Interpretation:
- Point 1: Why it is important
- Point 2: Technical details or background
- Point 3: Impact and implications
**Key Rules**:
- ✅ **Title must be a Markdown link**: `[OpenAI releases GPT-5](https://...)`
- ❌ **Plain text titles are prohibited**: `OpenAI releases GPT-5`
- Metadata line must include: source, time, heat
- 2-3 interpretation points must be provided when deep scanning is enabled时间过滤与智能补充
Time Filtering and Intelligent Supplement
When the user specifies a time window (e.g. "the past X hours") and results are scarce (< 5 entries):

- **Prioritize the user's window**: first list entries that strictly fit the time requirement
- **Supplement intelligently**: if the list is too short, include high-value/high-heat entries from a wider range (e.g. the past 24 hours)
- **Mark clearly**: label supplementary entries explicitly (e.g. "⚠️ 18h ago", "🔥 24h Hot")
- **Value first**: surface SOTA work, major releases, and high-heat content even if it slightly exceeds the window

**GitHub Trending special case:**

- Strictly return the valid entries from the crawled list (e.g. the Top 10)
- List every crawled entry
- Do not apply intelligent supplementation
- Analyze each project in depth:
  - **Core value**: what problem does it solve? Why is it popular?
  - **Insight**: technical or product takeaways
  - **Scenario tags**: 3-5 keywords (e.g. `#RAG #LocalFirst #Rust`)
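The supplementation rule above can be sketched as follows. It assumes each entry carries a `published_at` datetime, which is not part of the tool's documented output; this is an illustration of the policy, not the tool's code:

```python
from datetime import datetime, timedelta

def select_entries(entries, window_hours, now, min_count=5, fallback_hours=24):
    """Sketch of the time-window supplementation rule: keep strict matches
    first, then top up with high-heat entries from a wider range."""
    window = timedelta(hours=window_hours)
    in_window = [e for e in entries if now - e["published_at"] <= window]
    if len(in_window) >= min_count:
        return in_window
    # Too few strict matches: pull high-heat entries from the wider range
    # and mark them so the report can label them (e.g. "⚠️ 18h ago").
    wider = [e for e in entries
             if window < now - e["published_at"] <= timedelta(hours=fallback_hours)]
    wider.sort(key=lambda e: e.get("heat", 0), reverse=True)
    extras = wider[: min_count - len(in_window)]
    for e in extras:
        e["supplementary"] = True
    return in_window + extras
```

With a 6-hour window, one fresh entry, and a high-heat 18-hour-old entry, the result is the fresh entry followed by the older one flagged as supplementary.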
## Output File

- **Save location**: the `reports/` folder in the workspace root
- **File naming**: timestamped (e.g. `hn_news_20260131_1430.md`)
- **Full path example**: `/Users/dio/Documents/new_vault/reports/tech_news_20260131_1430.md`
- **User display**: present the full report content in the chat
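The naming convention can be built as below. The `strftime` pattern is inferred from the example filenames, and `prefix` (e.g. `"tech_news"`) is a caller-chosen assumption, not something the tool specifies:

```python
from datetime import datetime
from pathlib import Path

def report_path(workspace_root: str, prefix: str, now: datetime) -> Path:
    """Build a timestamped report path following the naming convention above.
    The YYYYMMDD_HHMM pattern is inferred from the example filenames."""
    return Path(workspace_root) / "reports" / f"{prefix}_{now.strftime('%Y%m%d_%H%M')}.md"

print(report_path("/Users/dio/Documents/new_vault", "tech_news", datetime(2026, 1, 31, 14, 30)))
```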
## Interactive Menu

When the user says "news-aggregator-skill 如意如意" (or a similar "menu/help" trigger phrase):

- Read the `templates.md` file in the skill directory
- Show the user the list of available commands from that file
- Guide the user to pick a number or to copy a command and run it