Found 11 Skills
Generate and edit images with OpenAI GPT Image 2 (ChatGPT Images 2.0) on RunComfy. Documents GPT Image 2's strengths (embedded text, logos, multilingual typography, instruction precision), its 3 fixed sizes, edit-with-preservation language, and when to route to a sibling (Flux 2 / Nano Banana Pro / Seedream) instead. Calls `runcomfy run openai/gpt-image-2/text-to-image` or `/edit` through the local RunComfy CLI. Triggers on "gpt image 2", "gpt-image-2", "ChatGPT Images 2", "image 2", or any explicit ask to generate or edit with this model.
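A minimal invocation sketch. Only the `runcomfy run openai/gpt-image-2/text-to-image` and `/edit` subcommand paths come from the skill description; the flag names and values below are assumptions about the CLI's interface, not documented options.

```sh
# Text-to-image: the subcommand path is documented above; the
# --prompt flag is an assumption about the CLI's interface.
runcomfy run openai/gpt-image-2/text-to-image \
  --prompt "A storefront sign reading 'OPEN 24 HOURS' in neon"

# Edit with preservation: the /edit path is documented; the input
# image flag name is an assumption.
runcomfy run openai/gpt-image-2/edit \
  --image ./storefront.png \
  --prompt "Change the sign text to 'CLOSED' but keep everything else"
```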
Generate high-quality images from text prompts using fal.ai's text-to-image models. Supports intelligent model selection, style transfer, and professional-grade outputs.
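For reference, a hedged sketch of what a direct call to fal.ai's synchronous REST gateway looks like; the `fal-ai/flux/dev` model slug is one plausible choice, not something this skill prescribes, and the skill's own model selection replaces this manual step.

```sh
# POST a prompt to a fal.ai text-to-image endpoint. FAL_KEY must be
# set in the environment; the model slug is an assumption.
curl -s https://fal.run/fal-ai/flux/dev \
  -H "Authorization: Key $FAL_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a watercolor lighthouse at dusk"}'
```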
Generate and edit images using TensorsLab's AI models. Supports text-to-image, image-to-image generation, plus advanced editing: avatar generation, watermark removal, object erasure, face replacement, and general image editing. Features automatic prompt enhancement, progress tracking, and local file saving. Requires browser-based authorization before first use.
Install and configure Ollama for local embeddings with GrepAI. Use this skill when setting up private, local embedding generation.
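Setup typically amounts to pulling an embedding model and confirming Ollama answers on its default local port; the model name below is a common choice, not mandated by the skill, and the GrepAI wiring itself is what the skill configures.

```sh
# Pull a local embedding model (nomic-embed-text is one common
# choice; substitute whatever GrepAI is configured to use).
ollama pull nomic-embed-text

# Smoke-test the local embeddings endpoint on Ollama's default port.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```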
Find AI models on Replicate using search and curated collections.
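One way to explore the curated collections this skill draws on, via Replicate's public REST API; the `text-to-image` collection slug is a guess at a representative collection, not confirmed by the skill description.

```sh
# List Replicate's curated collections, then fetch one by slug.
curl -s -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/collections

curl -s -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  https://api.replicate.com/v1/collections/text-to-image
```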
Add, update, or remove text/image/video models. Handles any provider.
Query OpenRouter for available AI models, pricing, capabilities, throughput, and provider performance. Use when the user asks about available OpenRouter models, model pricing, model context lengths, model capabilities, provider latency or uptime, throughput limits, or supported parameters; wants to search, filter, or compare models; or needs to find the fastest provider for a model.
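Most of the pricing and capability questions above can be answered from OpenRouter's public, unauthenticated model listing. A minimal sketch; the `jq` field paths reflect an assumption about the response shape.

```sh
# Fetch the public model catalog and pull out each model's id,
# context length, and prompt price (the field paths are assumed).
curl -s https://openrouter.ai/api/v1/models \
  | jq '.data[] | {id, context_length, prompt_price: .pricing.prompt}'
```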
Generate images directly using the Runway API via runnable scripts. Supports text-to-image with optional reference images.
Choose the right fal.ai endpoint for a given task. Modality-organized catalog of production endpoint defaults: text-to-image, image-to-image, text-to-video, image-to-video, and more. Use when the user has not named a specific model, or asks "which model for X", "best endpoint for Y", "what should I use for Z".
Generate AI images, videos, music, and audio from the terminal via muapi.ai. Supports 100+ models including Flux, Midjourney v7, Kling 3.0, Veo3, and Suno V5.
Search and discover 50,000+ AI models on ModelsLab, check usage analytics, and monitor generation history via the Agent Control Plane API.