Deep generative models for single-cell omics. Use when you need probabilistic batch correction (scVI), transfer learning, differential expression with uncertainty, or multimodal integration (TOTALVI, MultiVI). Best for advanced modeling, batch effects, and multimodal data. For standard analysis pipelines, use scanpy.
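A minimal sketch of the scVI batch-correction workflow, assuming an AnnData file named `pbmc.h5ad` with a `batch` column in `.obs` (both names are placeholders for your own data):

```python
import scanpy as sc
import scvi

# Load a single-cell dataset; "pbmc.h5ad" and the "batch" column are assumptions.
adata = sc.read_h5ad("pbmc.h5ad")

# Register the batch covariate so scVI can model and correct for it.
scvi.model.SCVI.setup_anndata(adata, batch_key="batch")

# Train the variational autoencoder and store the batch-corrected latent space.
model = scvi.model.SCVI(adata)
model.train()
adata.obsm["X_scVI"] = model.get_latent_representation()

# Downstream: cluster on the corrected embedding with scanpy.
sc.pp.neighbors(adata, use_rep="X_scVI")
sc.tl.leiden(adata)
```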
Azure AI Content Understanding SDK for Python. Use for multimodal content extraction from documents, images, audio, and video. Triggers: "azure-ai-contentunderstanding", "ContentUnderstandingClient", "multimodal analysis", "document extraction", "video analysis", "audio transcription".
Expert guidance for working with Hugging Face Transformers library for NLP, computer vision, and multimodal AI tasks.
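As a quick illustration, the library's `pipeline` API covers many of these tasks in a few lines; default checkpoints download on first use, and the input strings below are arbitrary:

```python
from transformers import pipeline

# Text classification with a default pretrained checkpoint.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes multimodal AI approachable."))

# Image classification works the same way; the image path is a placeholder.
vision = pipeline("image-classification")
print(vision("photo.jpg"))
```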
Multimodal image processing skill supporting text-to-image, image-to-image, image-to-text, long-image stitching, marketing material packs, product design images, element disassembly diagrams, and social media image sets. Triggered when the user mentions keywords such as "draw", "generate image", "draw XX", "image processing", "image-to-image", "OCR", "image recognition", "stitch long image", "infographic", "illustration", "product image", "material pack", "marketing material", "detail page", "e-commerce image", "design drawing", "exploded view", "disassembly", "image set", "nine-grid", and similar terms. Note: if the user requests a video (including illustrations with voiceover), use the video-creator skill instead.
Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations. Use PROACTIVELY for LLM features, chatbots, AI agents, or AI-powered applications.
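Since this entry names no specific stack, here is a library-agnostic sketch of the vector-search retrieval step at the core of a RAG system; `embed()` is a hypothetical stand-in for whatever embedding model you plug in:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; swap in a real model's encoder."""
    raise NotImplementedError

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Embed the corpus and the query, then rank documents by cosine similarity.
    doc_vecs = np.stack([embed(d) for d in docs])
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# The top-k passages are then prepended to the LLM prompt as grounding context.
```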
Integrate Gemini API with @google/genai SDK (NOT deprecated @google/generative-ai). Text generation, multimodal (images/video/audio/PDFs), function calling, thinking mode, streaming. 1M input tokens. Prevents 14 documented errors. Use when: Gemini integration, multimodal AI, reasoning with thinking mode. Troubleshoot: SDK deprecation, model not found, context window, function calling errors, streaming corruption, safety settings, rate limits.
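The skill targets the TypeScript @google/genai SDK; Google's parallel Python package, google-genai, exposes the same client pattern. A minimal sketch, assuming that Python package and one currently documented model name:

```python
from google import genai

# Reads the API key from the environment (GEMINI_API_KEY) by default.
client = genai.Client()

# Single-shot text generation; "gemini-2.0-flash" is one documented model.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Explain function calling in two sentences.",
)
print(response.text)
```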
Expert prompt engineering for Seedance 2.0. Use when the user wants to generate a video with multimodal assets (images, videos, audio) and needs the best possible prompt.
Expert guidance for fine-tuning LLMs with LLaMA-Factory: no-code WebUI, 100+ supported models, 2/3/4/5/6/8-bit QLoRA, and multimodal support.
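As a hedged sketch of a 4-bit QLoRA run: the config keys below follow LLaMA-Factory's published example YAMLs, but the model, dataset, and paths are placeholders to adapt:

```python
import pathlib
import subprocess

# Minimal SFT + 4-bit QLoRA config; keys follow LLaMA-Factory's example files.
config = """\
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4
dataset: alpaca_en_demo
template: llama3
output_dir: saves/llama3-8b-qlora
"""
pathlib.Path("qlora_sft.yaml").write_text(config)

# Documented CLI entry point; `llamafactory-cli webui` launches the no-code UI instead.
subprocess.run(["llamafactory-cli", "train", "qlora_sft.yaml"], check=True)
```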
Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.
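BLIP-2 checkpoints are also available through Hugging Face Transformers; a minimal captioning sketch, where the image path is a placeholder and the OPT-2.7B variant is one published checkpoint:

```python
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Load a published BLIP-2 checkpoint (frozen ViT + Q-Former + OPT-2.7B).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Unconditional captioning; pass a text prompt for VQA-style questions instead.
image = Image.open("photo.jpg")
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```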
Comprehensive biosignal processing toolkit for analyzing physiological data including ECG, EEG, EDA, RSP, PPG, EMG, and EOG signals. Use this skill when processing cardiovascular signals, brain activity, electrodermal responses, respiratory patterns, muscle activity, or eye movements. Applicable for heart rate variability analysis, event-related potentials, complexity measures, autonomic nervous system assessment, psychophysiology research, and multi-modal physiological signal integration.
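This description matches the NeuroKit2 Python package; assuming that is the intended toolkit, a minimal ECG-to-HRV sketch looks like this (a simulated signal stands in for real recordings so the example is self-contained):

```python
import neurokit2 as nk

# Simulate 60 s of ECG at 1000 Hz in place of a real recording.
ecg = nk.ecg_simulate(duration=60, sampling_rate=1000)

# Clean the signal, detect R-peaks, and compute derived signals in one call.
signals, info = nk.ecg_process(ecg, sampling_rate=1000)

# Time-domain heart rate variability from the detected R-peaks.
hrv = nk.hrv_time(info["ECG_R_Peaks"], sampling_rate=1000)
print(hrv[["HRV_RMSSD", "HRV_SDNN"]])
```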
Create interactive dialogs and modal windows in React with Syncfusion DialogComponent. Implement modal/modeless dialogs with custom positioning, dragging, resizing, animations, templating, and keyboard navigation. Use this skill whenever the user needs to display dialog boxes, modal windows, confirmation prompts, forms in popups, floating panels, or complex windowed interactions.
This skill should be used when working with pre-trained transformer models for natural language processing, computer vision, audio, or multimodal tasks. Use for text generation, classification, question answering, translation, summarization, image classification, object detection, speech recognition, and fine-tuning models on custom datasets.
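For the fine-tuning use case, a compact sketch with the `Trainer` API; DistilBERT and the IMDB dataset are illustrative choices, and the small training subset keeps the run short:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate long reviews; padding is applied per-batch by the collator.
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="imdb-distilbert", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```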