Found 47 Skills
Data collection for evidence-based learning research and evaluation.
Chapter 2: Data Collection Quality Standards and Verification Methods
Crawl any website and save pages as local markdown files. Use when you need to download documentation, knowledge bases, or web content for offline access or analysis. No code required - just provide a URL.
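A minimal sketch of the crawl-to-markdown idea, assuming the Python packages requests and html2text are installed; the URLs and output directory are placeholders, and the skill itself requires no code at all.

```python
# Sketch: download a few pages and save each as a local markdown file.
# The URLs and output directory are placeholders, not a real target site.
from pathlib import Path
from urllib.parse import urlparse

import requests
import html2text

PAGES = [
    "https://example.com/docs/intro",   # placeholder URLs
    "https://example.com/docs/usage",
]
OUT_DIR = Path("crawl_output")
OUT_DIR.mkdir(exist_ok=True)

for url in PAGES:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    markdown = html2text.html2text(resp.text)   # convert HTML to markdown
    name = urlparse(url).path.strip("/").replace("/", "_") or "index"
    (OUT_DIR / f"{name}.md").write_text(markdown, encoding="utf-8")
```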
Crawl websites and extract content from multiple pages via the Tavily CLI. Use this skill when the user wants to crawl a site, download documentation, extract an entire docs section, bulk-extract pages, save a site as local markdown files, or says "crawl", "get all the pages", "download the docs", "extract everything under /docs", "bulk extract", or needs content from many pages on the same domain. Supports depth/breadth control, path filtering, semantic instructions, and saving each page as a local markdown file.
Automated collection workflow for WeChat Channels search and result traversal on Android, supporting scenarios such as comprehensive-page search and personal-page search.
Profile-guided optimization skill for C/C++ with GCC and Clang. Use when squeezing maximum runtime performance after standard optimization plateaus, implementing two-stage PGO builds, collecting profile data, or applying BOLT for post-link optimization. Activates on queries about PGO, profile-guided optimization, -fprofile-generate, -fprofile-use, instrumented builds, or BOLT.
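The two-stage flow this entry describes can be sketched as a small driver script. The -fprofile-generate and -fprofile-use flags are standard GCC options; the source file, binary name, and training input are placeholders for a real project.

```python
# Sketch of a two-stage PGO build with GCC via subprocess.
# `app.c`, `./app`, and the training input are placeholders.
import subprocess

CC = "gcc"

# Stage 1: build an instrumented binary that writes profile data when run.
subprocess.run([CC, "-O2", "-fprofile-generate", "app.c", "-o", "app"], check=True)

# Run a representative workload so the instrumented binary records branch and call counts.
subprocess.run(["./app", "--training-input", "sample.dat"], check=True)

# Stage 2: rebuild using the collected profile to guide inlining, layout, and branch decisions.
subprocess.run([CC, "-O2", "-fprofile-use", "app.c", "-o", "app"], check=True)
```

With Clang the same two-stage idea applies, but the raw profiles generally need to be merged with llvm-profdata before the -fprofile-use build.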
Build a fully automated AI-powered data collection agent for any public source — job boards, prices, news, GitHub, sports, anything. Scrapes on a schedule, enriches data with a free LLM (Gemini Flash), stores results in Notion/Sheets/Supabase, and learns from user feedback. Runs 100% free on GitHub Actions. Use when the user wants to monitor, collect, or track any public data automatically.
Fetch recent posts from one or more X/Twitter accounts through twitterapi.io, output structured JSON/CSV records, optionally sync records to Feishu/Lark Bitable through feishu-cli, and optionally guide recurring execution through OpenClaw, Codex automations, cron, or launchd. Use when the user wants to monitor X bloggers, collect recent tweets, export tweet metrics, append tweets to Feishu Bitable, or set up a scheduled Twitter/X account tracking workflow.
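A rough sketch of the fetch-and-export step only. The endpoint URL, auth header, query parameter, and response fields below are assumptions standing in for the real twitterapi.io API, which is not documented here; the point is the JSON-to-CSV flow.

```python
# Sketch: pull recent posts for a list of accounts and write them to CSV.
# ENDPOINT, the auth header, the query parameter, and the response fields are
# placeholders, not the documented twitterapi.io API; adjust before use.
import csv
import requests

ENDPOINT = "https://api.twitterapi.io/example/recent-posts"  # placeholder URL
API_KEY = "YOUR_API_KEY"
ACCOUNTS = ["some_account", "another_account"]               # placeholder handles

rows = []
for handle in ACCOUNTS:
    resp = requests.get(
        ENDPOINT,
        params={"username": handle},          # assumed parameter name
        headers={"X-API-Key": API_KEY},       # assumed header name
        timeout=30,
    )
    resp.raise_for_status()
    for post in resp.json().get("posts", []): # assumed response shape
        rows.append({"account": handle,
                     "text": post.get("text", ""),
                     "likes": post.get("like_count", 0)})

with open("recent_posts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["account", "text", "likes"])
    writer.writeheader()
    writer.writerows(rows)
```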
Implement a web crawler pipeline covering URL discovery, fetching, parsing, and storage. Use this skill when the user needs to build a site crawler, audit website structure, or collect web data systematically — even if they say 'scrape a website', 'crawl all pages', or 'site audit spider'.
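The four stages this entry names map onto a small breadth-first loop. This sketch assumes requests and BeautifulSoup, a placeholder seed URL, and a same-domain filter; "storage" is reduced to an in-memory dict of page titles.

```python
# Sketch of the four pipeline stages: URL discovery, fetching, parsing, storage.
# The seed URL and page limit are placeholders; assumes `requests` and `beautifulsoup4`.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

SEED = "https://example.com/"      # placeholder seed
MAX_PAGES = 20
domain = urlparse(SEED).netloc

queue, seen, store = deque([SEED]), {SEED}, {}
while queue and len(store) < MAX_PAGES:
    url = queue.popleft()
    resp = requests.get(url, timeout=30)                        # fetch
    if resp.status_code != 200:
        continue
    soup = BeautifulSoup(resp.text, "html.parser")              # parse
    store[url] = (soup.title.string or "") if soup.title else ""  # store (here: just the title)
    for a in soup.find_all("a", href=True):                     # discover same-domain URLs
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == domain and link not in seen:
            seen.add(link)
            queue.append(link)

print(f"Crawled {len(store)} pages")
```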
Unified macro intelligence feed — reads 7 sources, classifies events, scores sentiment, generates AI insights, exposes signals via HTTP API
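Only the last clause ("exposes signals via HTTP API") is concrete enough to sketch. This toy server uses Python's standard library, and the signal payload is invented illustration data, not output of the real feed's sources or scoring.

```python
# Toy sketch of exposing signals over HTTP, using only the standard library.
# The /signals path and the payload are placeholders for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SIGNALS = [  # placeholder signal records
    {"event": "rate decision", "sentiment": -0.4, "insight": "placeholder text"},
]

class SignalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/signals":
            body = json.dumps(SIGNALS).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), SignalHandler).serve_forever()
```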
Typeform integration. Manage Forms, Workspaces. Use when the user wants to interact with Typeform data.
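A minimal sketch of listing forms, assuming the public Typeform REST API base at api.typeform.com and a personal access token; the "items" and "title" response fields are taken from the public Create API docs and should be verified against current documentation.

```python
# Sketch: list forms via the Typeform REST API.
# The token is a placeholder; the "items"/"title" fields are assumed response shape.
import requests

TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://api.typeform.com/forms",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for form in resp.json().get("items", []):   # assumed response shape
    print(form.get("id"), form.get("title"))
```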
Automatically discover research methodology skills when working with research methodology, literature review, systematic review, evidence synthesis, academic research, or experimental design. Activates for research tasks.