Found 244 Skills
Implements Motion (Framer Motion) animations in React applications. Covers animation presets, page transitions, modals, stagger effects, and skeleton loaders. Use when adding animations, transitions, or interactive hover effects.
Deep learning for single-cell analysis using scvi-tools. This skill should be used when users need (1) data integration and batch correction with scVI/scANVI, (2) ATAC-seq analysis with PeakVI, (3) CITE-seq multi-modal analysis with totalVI, (4) multiome RNA+ATAC analysis with MultiVI, (5) spatial transcriptomics deconvolution with DestVI, (6) label transfer and reference mapping with scANVI/scArches, (7) RNA velocity with veloVI, or (8) any deep learning-based single-cell method. Triggers include mentions of scVI, scANVI, totalVI, PeakVI, MultiVI, DestVI, veloVI, sysVI, scArches, variational autoencoder, VAE, batch correction, data integration, multi-modal, CITE-seq, multiome, reference mapping, latent space.
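A minimal scVI integration sketch, assuming the scvi-tools and scanpy packages; it uses the bundled synthetic dataset so it runs end-to-end, and real data would need its own QC and highly-variable-gene selection first.

```python
import scanpy as sc
import scvi

# Synthetic AnnData with a "batch" column, standing in for real count data.
adata = scvi.data.synthetic_iid()

# Register the data and train scVI for batch-corrected integration.
scvi.model.SCVI.setup_anndata(adata, batch_key="batch")
model = scvi.model.SCVI(adata)
model.train(max_epochs=50)

# Batch-corrected latent space for neighbors/UMAP/clustering downstream.
adata.obsm["X_scVI"] = model.get_latent_representation()
sc.pp.neighbors(adata, use_rep="X_scVI")
sc.tl.umap(adata)
```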
Use the Gemini API (Nano Banana image generation, Veo video generation, Gemini TTS speech, and audio understanding) to deliver end-to-end multimodal media workflows and code templates covering both generation and understanding.
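A hedged sketch of one leg of such a workflow (image generation) with the google-genai Python SDK; the model id "gemini-2.5-flash-image" and the response handling are assumptions about the current API surface.

```python
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

resp = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed id for the "Nano Banana" image model
    contents="A watercolor sketch of a lighthouse at dawn",
)

# Generated images come back as inline binary parts alongside any text.
for part in resp.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("lighthouse.png", "wb") as f:
            f.write(part.inline_data.data)
    elif part.text:
        print(part.text)
```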
Implement the Syncfusion Angular Dialog component with complete API coverage. Build modal/modeless dialogs, confirmation popups, forms in dialogs, draggable windows, and overlaid content. Use this skill when users need dialog implementation, positioning, animations, WCAG 2.2 accessibility, forms integration, or event handling.
AnyCap CLI -- capability runtime for AI agents. One CLI for image generation, image read, video analysis, audio analysis, music composition, text-to-speech, web search, web crawling, file download, static site hosting, and cloud file storage. Use when the agent needs to generate images, analyze images, video, or audio, produce audio/music, search or crawl the web, download remote files, deploy static sites, or store and share files. Also use when the agent needs to authenticate with AnyCap (login, API key, credentials), or when encountering errors from AnyCap to submit feedback via 'anycap feedback'. Trigger on mentions of AnyCap, multimodal capabilities, AI-generated media, page hosting, or drive storage.
On-device, real-time multimodal AI voice and vision assistant powered by Gemma 4 E2B and Kokoro TTS, running entirely locally via a FastAPI WebSocket server.
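A minimal sketch of only the FastAPI WebSocket transport this description implies; run_gemma and synthesize_speech are hypothetical stand-ins for the local Gemma inference and Kokoro TTS calls.

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

def run_gemma(audio: bytes) -> str:
    """Hypothetical placeholder for on-device Gemma inference."""
    return "stub reply"

def synthesize_speech(text: str) -> bytes:
    """Hypothetical placeholder for Kokoro text-to-speech."""
    return text.encode()

@app.websocket("/ws")
async def assistant(ws: WebSocket) -> None:
    await ws.accept()
    while True:
        audio_chunk = await ws.receive_bytes()               # microphone audio from the client
        reply_text = run_gemma(audio_chunk)                  # local model produces a reply
        await ws.send_bytes(synthesize_speech(reply_text))   # send TTS audio back
```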
Use this skill whenever deciding what features to extract from raw marketplace assets — listing photos, owner-entered listing metadata, sitter wizard responses — to power item-to-item (similar listings), user-to-item (homefeed ranking), or user-to-user (mutual-fit matching) recommenders in a two-sided trust marketplace. Covers asset auditing, first-principles feature decomposition from the decision the user is making, vision-feature extraction (CLIP, room-type classification, amenity detection, aesthetic and quality scoring), listing text and metadata encoding (categoricals, multi-hot amenities, H3 geo-hashing, sentence-transformer description embeddings, structured pet triples), sitter wizard design (information-gain ordering, multiple-choice over free text, genuine skippability, hard constraint versus soft preference), derived-composition patterns for i2i / u2i / u2u (precomputed ANN shelves, multi-modal fusion, two-tower affinity, symmetric mutual-fit scoring, interpretable subscores), feature quality governance (single registry, training-serving parity, coverage and drift alarms, PII scrubbing, schema versioning), and incremental value proof (one feature at a time, ablation A/B, kill reviews, exploration slice, permanent feature-free baseline). Trigger even when the user does not explicitly say "feature engineering" but is asking how to get more signal out of listing photos, listing metadata, or the sitter onboarding wizard, or how to improve i2i / u2i / u2u quality without blindly ingesting a new model.
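A hedged sketch of a few of the listed listing-metadata encoders (multi-hot amenities, an H3 geo cell, a sentence-transformer description embedding); the amenity vocabulary, the h3 v4 API, and the all-MiniLM-L6-v2 checkpoint are illustrative choices, not requirements of the skill.

```python
import h3
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative amenity vocabulary; a real feature registry would version this schema.
AMENITIES = ["fenced_yard", "no_other_pets", "air_conditioning", "crate_available"]
text_encoder = SentenceTransformer("all-MiniLM-L6-v2")

def encode_listing(amenities: list[str], lat: float, lng: float, description: str) -> dict:
    multi_hot = np.array([a in amenities for a in AMENITIES], dtype=np.float32)
    geo_cell = h3.latlng_to_cell(lat, lng, 8)        # ~0.7 km^2 hexagon at resolution 8
    desc_vec = text_encoder.encode(description)      # dense description embedding
    return {"amenities": multi_hot, "h3_cell": geo_cell, "description": desc_vec}

features = encode_listing(["fenced_yard"], 52.37, 4.90, "Sunny flat with a calm resident cat.")
```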
Comprehensive psychoeducation on mental health conditions, therapy modalities, evidence-based coping techniques, psychiatric medications, and self-assessment frameworks. Educational resource only — not medical advice, diagnosis, or treatment. Use when learning about mental health concepts, understanding therapy options, exploring coping strategies, or recognizing when to seek professional help. Trigger on "mental health", "therapy types", "coping strategies", "anxiety", "depression", "ADHD", "psychiatric medication", "when should I see a therapist".
CLIP, OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on 400M image-text pairs. Use for image search, content moderation, or vision-language tasks without fine-tuning. Best for general-purpose image understanding.
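A minimal zero-shot classification sketch with the Hugging Face CLIP port; the checkpoint name and candidate labels are illustrative.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# CLIP scores the image against each caption; softmax gives zero-shot probabilities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```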
This skill should be used when working with single-cell omics data analysis using scvi-tools, including scRNA-seq, scATAC-seq, CITE-seq, spatial transcriptomics, and other single-cell modalities. Use this skill for probabilistic modeling, batch correction, dimensionality reduction, differential expression, cell type annotation, multimodal integration, and spatial analysis tasks.
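A hedged totalVI sketch for the CITE-seq case, again on the bundled synthetic dataset (which carries a protein_expression obsm field); real CITE-seq data needs its own preprocessing, and the epoch count is arbitrary.

```python
import scvi

adata = scvi.data.synthetic_iid()  # includes obsm["protein_expression"]

scvi.model.TOTALVI.setup_anndata(
    adata,
    batch_key="batch",
    protein_expression_obsm_key="protein_expression",
)
model = scvi.model.TOTALVI(adata)
model.train(max_epochs=50)

# Joint latent space plus denoised RNA and protein expression.
adata.obsm["X_totalVI"] = model.get_latent_representation()
rna, protein = model.get_normalized_expression(n_samples=10, return_mean=True)
```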
Builds AI chat interfaces and conversational UI with streaming responses, context management, and multi-modal support. Use when creating ChatGPT-style interfaces, AI assistants, code copilots, or conversational agents. Handles streaming text, token limits, regeneration, feedback loops, tool usage visualization, and AI-specific error patterns. Provides battle-tested components from leading AI products with accessibility and performance built in.
Z.ai API integration for building applications with GLM models. Use when working with Z.ai/ZhipuAI APIs for: (1) Chat completions with GLM-4.7/4.6/4.5 models, (2) Vision/multimodal tasks with GLM-4.6V, (3) Image generation with GLM-Image or CogView-4, (4) Video generation with CogVideoX-3 or Vidu models, (5) Audio transcription with GLM-ASR-2512, (6) Function calling and tool use, (7) Web search integration, (8) Translation, slide/poster generation agents. Triggers: Z.ai, ZhipuAI, GLM, BigModel, Zhipu, CogVideoX, CogView, Vidu.
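A hedged chat-completion sketch with the ZhipuAI Python SDK; the package, the model id "glm-4.6", and the response shape are assumptions about the current Z.ai offering.

```python
from zhipuai import ZhipuAI

client = ZhipuAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="glm-4.6",  # assumed id; the entry lists GLM-4.7/4.6/4.5
    messages=[{"role": "user", "content": "Summarize what the GLM model family can do."}],
)
print(response.choices[0].message.content)
```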