Authoritative reference for the neo4j-agent-memory Python package — a graph-native memory system for AI agents built on Neo4j — and for the hosted service (NAMS) at memory.neo4jlabs.com. Use this skill whenever the user mentions neo4j-agent-memory, agent memory with Neo4j, context graphs, the POLE+O model, MemoryClient/MemorySettings, the memory MCP server, or any of the framework integrations (LangChain, PydanticAI, CrewAI, AWS Strands, Google ADK, Microsoft Agent Framework, OpenAI Agents, LlamaIndex). Also use when the user mentions the hosted service at memory.neo4jlabs.com, NAMS, the Neo4j Agent Memory Service, the `nams_` API key prefix, or the hosted MCP endpoint. Also use when writing documentation, blog posts, tutorials, PRDs, or code samples for the project, when comparing agent memory approaches, or when positioning graph-native memory against vector-only approaches — even if the user doesn't explicitly name the package.
Turn an article or script into a click-driven 16:9 web presentation that "looks like a video", with optional voiceover audio synthesis. Workflow: Original Article → **One-time Output** of Script + Outline Development Plan → User **One-time Alignment** on 5 items (Script / Outline / Theme / Assets / Development Mode) → Web Development (Chapter-by-Chapter / Sequential / Parallel) → Optional Audio Synthesis (default: MiniMax CLI mmx-cli). **The outline only plans rhythm and information density, not animations**; animations are designed on the fly during chapter development, following the PRINCIPLES + ANTI-AI rules. Each click advances one beat of the script, each step occupies the full screen, and the progress bar is hidden by default, appearing only on hover. Application scenarios: using web pages to make videos (a dynamic PPT that doesn't feel like PPT), turning scripts/articles into interactive explanations, creating screen-recording tutorials for Bilibili / YouTube / Video Channels, and making cinematic product/talk demos. This skill embodies a design methodology plus a collaboration process; it is not bound to any specific styles, fonts, or colors, so it can be reused for any theme and aesthetic.
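For illustration, a minimal sketch of the click-to-advance mechanism described above, in plain TypeScript; the `.step` class, full-screen layout, and arrow-key handling are assumptions, not part of the skill itself:

```typescript
// Minimal click-driven step advance: each full-screen ".step" element is one
// beat of the script; a click anywhere (or the right-arrow key) reveals the
// next one. Markup and class names here are illustrative assumptions.
const steps = Array.from(document.querySelectorAll<HTMLElement>(".step"));
let current = 0;

function showStep(index: number): void {
  steps.forEach((step, i) => {
    // Only the active beat is visible and fills the viewport.
    step.style.display = i === index ? "flex" : "none";
  });
}

function advance(): void {
  if (current < steps.length - 1) {
    current += 1;
    showStep(current);
  }
}

showStep(current);
document.addEventListener("click", advance);
document.addEventListener("keydown", (event) => {
  if (event.key === "ArrowRight") advance();
});
```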
Control a real browser through a Chrome extension. Use when you need to visit web pages, extract web data, click buttons, fill forms, run browser automation, capture rendered component evidence, or operate pages programmatically. Returns token-efficient structured results via DOM diff, simplified HTML, and a component evidence pack. Applicable to browser control, web automation, page scraping, web data extraction, execute JS in browser, web_scan, web_execute_js, open browser, navigate to URL, get page content, fill form, click button, extract component, rendered DOM, computed styles, component evidence.
Implements Syncfusion SfSmartRichTextEditor, an AI-enhanced WYSIWYG editor extending SfRichTextEditor in Blazor. Use this when configuring AI backends (OpenAI, Azure OpenAI, Ollama, custom IChatClient), Smart Action toolbar, AI query dialog, AssistViewSettings, AI popup events and methods, or any inherited Rich Text Editor features in Blazor Server and Web App.
Guide for implementing Syncfusion Blazor Carousel component for image galleries, product showcases, and content sliders. Use when user mentions carousel, image slider, slideshow, content rotation, product galleries, rotating banners, slide transitions, or needs to display multiple items in a cycling presentation format.
Implement Syncfusion Blazor ContextMenu component for building right-click menus, context-sensitive navigation, or popup menu systems. Use this when you need hierarchical menu structures with nesting support, data binding, or custom templates. This skill covers setup, customization, events, and dynamic menu management.
Implements Syncfusion React ContextMenu (SfContextMenu) for right-click interactions and context-sensitive popup menus. Use this when adding menu items, handling selection events, or customizing templates and styling. Covers setup, data binding, accessibility, keyboard navigation, common methods and properties, and integration patterns.
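As a rough sketch of the setup this entry covers, assuming the EJ2 React wrapper (`ContextMenuComponent` from `@syncfusion/ej2-react-navigations`); item ids, texts, and the target selector are placeholders and may differ from the exact component the skill targets:

```tsx
// Right-click context menu bound to a target element; selection is handled in
// the select event. Package layout assumed to be the EJ2 React wrappers.
import * as React from "react";
import { ContextMenuComponent } from "@syncfusion/ej2-react-navigations";
import { MenuItemModel, MenuEventArgs } from "@syncfusion/ej2-navigations";

const items: MenuItemModel[] = [
  { id: "cut", text: "Cut" },
  { id: "copy", text: "Copy" },
  { id: "paste", text: "Paste" },
];

export function FileContextMenu() {
  const onSelect = (args: MenuEventArgs) => {
    console.log("Selected item:", args.item.id); // handle the chosen action here
  };

  return (
    <div>
      {/* Right-clicking inside this element opens the menu */}
      <div id="target" style={{ height: 200, border: "1px dashed #999" }}>
        Right-click here
      </div>
      <ContextMenuComponent target="#target" items={items} select={onSelect} />
    </div>
  );
}
```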
Guide for implementing Syncfusion Angular Breadcrumb components for navigation trails. Covers installation, data binding, navigation setup, icons, overflow modes, and template customization. Use this when building breadcrumb navigation that displays user location in hierarchies, enables clicking parent items for navigation, adds icons for visual context, or customizes appearance with templates.
Implement Syncfusion Angular ContextMenu component for right-click and touch-hold menus. Use this skill when user needs to create context menus, add/remove/enable menu items, handle menu clicks, customize animations, apply templates, handle data binding, trigger dialogs from menu items, show/hide items dynamically, add icons, create scrollable menus, or customize menu appearance.
Use this skill when working on an Expo or React Native app that uses, adds, debugs, or migrates to Convex. It covers `npx convex dev`, `EXPO_PUBLIC_CONVEX_URL` and EAS envs, `ConvexReactClient` and provider wiring in `expo-router` or `App.tsx`, generated `api` imports, schema and index design, queries, mutations, actions, auth (Clerk, Convex Auth, JWT or OIDC), file uploads from Expo URIs, pagination, migrations, and common `useQuery` or `_generated` failures. Do not use it for generic Expo UI or navigation work, or for non-Expo Convex frontends unless the task is specifically about adapting them to this mobile stack.
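As a rough sketch of the provider wiring the entry above refers to (the file path, `Slot` usage, and option values are assumptions about a typical expo-router setup):

```tsx
// app/_layout.tsx — wrap the app in ConvexProvider so useQuery/useMutation
// hooks anywhere in the tree can reach the deployment. The env var follows the
// EXPO_PUBLIC_CONVEX_URL convention mentioned above.
import { ConvexProvider, ConvexReactClient } from "convex/react";
import { Slot } from "expo-router";

const convex = new ConvexReactClient(process.env.EXPO_PUBLIC_CONVEX_URL!, {
  // The unsaved-changes warning targets web pages and is usually disabled on native.
  unsavedChangesWarning: false,
});

export default function RootLayout() {
  return (
    <ConvexProvider client={convex}>
      <Slot />
    </ConvexProvider>
  );
}
```

Screens can then call `useQuery(api.someModule.someFunction)` against the generated `api` import from `convex/_generated/api` (module and function names here are placeholders).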
Adds production-safe Motion for React or Framer Motion animations to Next.js apps, including reveal, hover and tap micro-interactions, whileInView, stagger, AnimatePresence, layout and layoutId transitions, reorder, scroll-linked UI, and lightweight route-content transitions. Use when the user asks to add, refactor, or debug Motion or Framer Motion in App Router or Pages Router codebases, especially around server/client boundaries, reduced motion, LazyMotion, bundle size, hydration, or route transitions. Avoid for GSAP-style timelines, WebGL or 3D scenes, heavy scroll storytelling, or CSS-only effects unless Motion is explicitly requested.
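For illustration, a minimal sketch of a Motion reveal as a client component, assuming the `framer-motion` import path (newer releases expose the same API from `motion/react`); the component name and offsets are illustrative:

```tsx
"use client"; // Motion components must live on the client side of the App Router boundary

import { motion, useReducedMotion } from "framer-motion";
import type { ReactNode } from "react";

// Fades content up as it scrolls into view, and skips the vertical offset when
// the user prefers reduced motion. Values here are illustrative defaults.
export function Reveal({ children }: { children: ReactNode }) {
  const prefersReducedMotion = useReducedMotion();

  return (
    <motion.div
      initial={{ opacity: 0, y: prefersReducedMotion ? 0 : 24 }}
      whileInView={{ opacity: 1, y: 0 }}
      viewport={{ once: true, amount: 0.4 }}
      transition={{ duration: 0.5, ease: "easeOut" }}
    >
      {children}
    </motion.div>
  );
}
```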
DeepEval evaluation workflow for AI agents and LLM applications. TRIGGER when the user wants to evaluate or improve an AI agent, tool-using workflow, multi-turn chatbot, RAG pipeline, or LLM app; add evals; generate datasets or goldens; use deepeval generate; use deepeval test run; add tracing or @observe; send results to Confident AI; monitor production; run online evals; inspect traces; or iterate on prompts, tools, retrieval, or agent behavior from eval failures. AI agents are the primary use case. Covers Python SDK, pytest eval suites, CLI generation, tracing, Confident AI reporting, and agent-driven improvement loops. DO NOT TRIGGER for unrelated generic pytest, non-AI test setup, or non-DeepEval observability work unless the user asks to compare or migrate to DeepEval.