Found 44 Skills
Calculate text similarity using lexical and semantic methods for matching and deduplication. Use this skill when the user needs to find similar documents, detect near-duplicates, or measure semantic closeness between texts — even if they say 'how similar are these texts', 'find duplicates', or 'semantic matching'.
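The lexical side of this skill can be sketched with stdlib tools alone; a minimal example, assuming word-level Jaccard overlap and `difflib`'s character ratio as the two lexical signals (function names and the 0.8 threshold are illustrative, and true semantic matching would additionally need an embedding model):

```python
from difflib import SequenceMatcher

def jaccard_similarity(a: str, b: str) -> float:
    """Lexical similarity as overlap of lowercased word sets (0.0-1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def char_similarity(a: str, b: str) -> float:
    """Character-level similarity via difflib's ratio (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_near_duplicate(a: str, b: str, threshold: float = 0.8) -> bool:
    """Flag near-duplicates when either lexical signal clears the threshold."""
    return max(jaccard_similarity(a, b), char_similarity(a, b)) >= threshold
```

Combining a token-set measure with a character-level measure catches both reworded and lightly edited duplicates.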
Document business rules, technical patterns, and service interfaces discovered during analysis or implementation. Use when you find reusable patterns, external integrations, domain-specific rules, or API contracts. Always check existing documentation before creating new files. Handles deduplication and proper categorization.
Reviews, curates, and maintains the Forge library of agents, skills, and templates. Performs deduplication analysis, staleness detection, quality promotion, and orphan reference checking. Produces structured review reports with actionable recommendations for merging, archiving, or promoting library items. Use this skill when the user wants to review the library, clean up agents or skills, check what's available, find duplicates, trim unused items, see library statistics, or says "what's in my library?" Also triggers on scheduled review intervals or when the library grows beyond 20 items. Do NOT use for creating new agents (use Agent Creator), creating skills (use Skill Creator), or planning teams (use Mission Planner).
Secures webhook receivers with signature verification, retry handling, deduplication, idempotency keys, and error responses. Provides verification code, a dedupe storage strategy, and an incident runbook. Use when implementing "webhooks", "webhook security", "event receivers", or "third-party integrations".
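The two core pieces this entry names — signature verification and delivery deduplication — can be sketched as follows. The `sha256=<hexdigest>` header format is an assumption (GitHub-style; adjust per provider), and the in-memory store is a stand-in for what would normally be Redis with a TTL:

```python
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw body and compare to the header."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, which prevents timing attacks
    return hmac.compare_digest(expected, signature_header)

class DedupeStore:
    """In-memory idempotency-key store; production would use Redis with a TTL."""
    def __init__(self):
        self._seen = set()

    def first_delivery(self, event_id: str) -> bool:
        """True only the first time a given provider event ID is seen."""
        if event_id in self._seen:
            return False
        self._seen.add(event_id)
        return True
```

Verifying against the raw request bytes (not a re-serialized parse) matters: most verification failures in practice come from hashing a body the framework has already mutated.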
Conducts systematic literature reviews, related-work surveys, and literature research: AI automatically generates search terms, then performs multi-source retrieval → deduplication → reads and scores each paper individually (1–10 for semantic relevance, with sub-topic grouping) → selects papers by high-score priority ratio → automatically generates a word budget for the review (70% cited sections + 30% non-cited sections, averaged over three samplings) → writes freely in the style of senior domain experts (fixed sections: abstract, introduction, sub-topics, discussion, future outlook, conclusion), with strict verification of main-text word count and reference count, and mandatory export to PDF and Word. Supports multilingual translation and intelligent compilation (en/zh/ja/de/fr/es).
Next.js performance optimization and best practices. Use when writing Next.js code (App Router or Pages Router); implementing Server Components, Server Actions, or API routes; optimizing RSC serialization, data fetching, or server-side rendering; reviewing Next.js code for performance issues; fixing authentication in Server Actions; or implementing Suspense boundaries, parallel data fetching, or request deduplication.
Merge multiple CSV/Excel files with intelligent column matching, data deduplication, and conflict resolution. Handles different schemas, formats, and combines data sources. Use when users need to merge spreadsheets, combine data exports, or consolidate multiple files into one.
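A minimal sketch of the merge-with-column-matching-and-dedup idea, using only the stdlib `csv` module. The `synonyms` map stands in for "intelligent column matching", the key column identifies duplicate rows, and later non-empty values winning is one assumed conflict-resolution policy among several the skill might offer:

```python
import csv
import io

def merge_csv(sources, key, synonyms=None):
    """Merge CSV texts with differing schemas, deduplicating on `key`.

    `synonyms` maps alternate column names to canonical ones
    (e.g. {"e-mail": "email"}); on conflict, later non-empty values win.
    Returns (ordered column list, merged rows).
    """
    synonyms = synonyms or {}
    merged = {}    # key value -> accumulated row dict
    columns = []   # union of columns, in first-seen order
    for text in sources:
        for row in csv.DictReader(io.StringIO(text)):
            row = {synonyms.get(col, col): val for col, val in row.items()}
            for col in row:
                if col not in columns:
                    columns.append(col)
            merged.setdefault(row[key], {}).update(
                {c: v for c, v in row.items() if v != ""}
            )
    return columns, list(merged.values())
```

A real implementation would also handle Excel input and fuzzier header matching, but the accumulate-by-key structure is the same.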
Coordinate parallel code reviews across multiple quality dimensions with finding deduplication, severity calibration, and consolidated reporting. Use this skill when organizing multi-reviewer code reviews, calibrating finding severity, or consolidating review results.
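The finding-deduplication step can be sketched by keying each finding on (file, line, rule) and keeping the most severe report when reviewers overlap. The four-level severity scale and dict-shaped findings are assumptions for illustration:

```python
def deduplicate_findings(findings):
    """Collapse duplicate findings from parallel reviewers.

    Findings sharing (file, line, rule) are merged, keeping the highest
    severity reported; results come back sorted most-severe first.
    """
    order = {"info": 0, "minor": 1, "major": 2, "critical": 3}
    merged = {}
    for f in findings:
        k = (f["file"], f["line"], f["rule"])
        if k not in merged or order[f["severity"]] > order[merged[k]["severity"]]:
            merged[k] = f
    return sorted(merged.values(), key=lambda f: -order[f["severity"]])
```

Keeping the highest severity on collision is one simple calibration policy; a fuller version would reconcile reviewers' rationales rather than just their labels.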
Implementation workflows for Frappe scheduled tasks and background jobs (v14/v15/v16). Covers hooks.py scheduler_events, frappe.enqueue, queue selection, job deduplication, and error handling. Triggers: how to schedule task, background job, cron job, async processing, queue selection, job deduplication, scheduler implementation.
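The job-deduplication idea here — deriving a stable job ID from the method and arguments and skipping enqueues that match a pending job — can be shown framework-free. This is a stand-alone sketch of the pattern, not Frappe's implementation (in Frappe itself you would pass a `job_id` to `frappe.enqueue` rather than maintain your own pending set):

```python
import hashlib
import json

class JobQueue:
    """Sketch of enqueue-time deduplication via a derived job ID."""

    def __init__(self):
        self._pending = set()
        self.jobs = []

    def job_id(self, method: str, kwargs: dict) -> str:
        """Stable hash of the job's method and sorted arguments."""
        raw = json.dumps([method, kwargs], sort_keys=True)
        return hashlib.sha1(raw.encode()).hexdigest()

    def enqueue(self, method: str, **kwargs) -> bool:
        """Queue the job unless an identical one is already pending."""
        jid = self.job_id(method, kwargs)
        if jid in self._pending:
            return False  # identical job already queued; skip
        self._pending.add(jid)
        self.jobs.append((method, kwargs))
        return True
```

A production version would also clear IDs from the pending set when jobs finish, so the same job can run again later.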
Centralized TypeScript API client with typed namespaces, automatic token refresh with request deduplication, TanStack Query integration, and consistent error handling.
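The "token refresh with request deduplication" part of this entry is the single-flight pattern: concurrent callers needing a token share one in-flight refresh instead of each firing their own. A sketch of the pattern (shown here in Python with asyncio for consistency with the other examples, though the skill itself targets TypeScript, where a shared refresh `Promise` plays the same role; all names are illustrative):

```python
import asyncio

class TokenClient:
    """Caches a token; concurrent callers share a single refresh task."""

    def __init__(self):
        self._token = None
        self._refresh_task = None
        self.refresh_calls = 0  # counts actual refreshes, for demonstration

    async def _do_refresh(self):
        self.refresh_calls += 1
        await asyncio.sleep(0.01)  # simulate the network round-trip
        return f"token-{self.refresh_calls}"

    async def get_token(self):
        if self._token is None:
            # Deduplicate: only the first caller creates the refresh task;
            # everyone else awaits the same one.
            if self._refresh_task is None:
                self._refresh_task = asyncio.create_task(self._do_refresh())
            try:
                self._token = await self._refresh_task
            finally:
                self._refresh_task = None
        return self._token
```

Without the shared task, five concurrent requests hitting an expired token would trigger five refresh calls; with it, exactly one.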
Multi-source AI news aggregation and digest generation with deduplication, classification, and source tracing. Supports 20+ sources, 5 theme categories, multi-language output (ZH/EN/JA), and image export.
Generate and curate evaluation datasets — structured generation via dimensions-tuples-NL, quick from description, expansion from existing data, plus dataset maintenance through deduplication, rebalancing, and gap-filling. Use when creating eval data, expanding test coverage, or cleaning datasets. Do NOT use when sufficient real production data exists (use analyze-trace-failures instead). Do NOT use for evaluator creation (use build-evaluator).