Found 85 Skills
This skill should be used when the user asks to "build an MCP server", "create an MCP tool", "expose resources with MCP", "write an MCP client", or needs guidance on the Model Context Protocol Python SDK best practices, transports, server primitives, or LLM context integration.
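A minimal sketch of what such a server looks like with the SDK's FastMCP interface; the server name, tool, and resource below are illustrative, not part of this skill's definition.

```python
# Minimal MCP server sketch using the Python SDK's FastMCP interface.
# The server name, tool, and resource are illustrative examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """Expose a templated greeting resource."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```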
Ultra-lightweight AI assistant in Go that runs on $10 hardware with <10MB RAM, supporting multiple LLM providers, tools, and single-binary deployment across RISC-V, ARM, MIPS, and x86.
OpenMAIC — Open Multi-Agent Interactive Classroom platform for generating immersive AI-powered learning experiences with slides, quizzes, simulations, and multi-agent discussions.
Novita AI: LLM, Image Generation & Editing, Video Generation, Audio (TTS/ASR), and GPU Cloud. Use this skill whenever the user wants to call Novita AI APIs — chat with LLMs (DeepSeek, Llama, Qwen), generate images (FLUX, Stable Diffusion, Seedream, Hunyuan Image), edit images (remove background, upscale, inpainting, img2img, outpainting, reimagine, merge face, replace background, remove text), generate videos (Kling, Wan, Hunyuan, Minimax Hailuo, Vidu, PixVerse, Seedance), do text-to-speech or speech-to-text (MiniMax TTS, GLM TTS, Fish Audio, ASR, voice cloning), run OpenAI-compatible batch jobs, manage GPU cloud instances and serverless endpoints, or check account balance and billing. Also trigger when the user mentions novita.ai, Novita AI, Novita API key, or wants to use any Novita platform service — even if they just say "generate an image" or "run an LLM" and Novita is available as a provider.
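A hedged sketch of the most common case, chatting with a Novita-hosted LLM through an OpenAI-compatible chat completions endpoint; the base URL and model id are assumptions to verify against Novita's documentation.

```python
# Hedged sketch: chat with a Novita-hosted LLM via an OpenAI-compatible API.
# The base URL and model id are assumptions; confirm both in Novita's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.novita.ai/v3/openai",  # assumed OpenAI-compatible base URL
    api_key="<NOVITA_API_KEY>",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v3",  # placeholder model id from Novita's catalog
    messages=[{"role": "user", "content": "Write a one-line summary of pgvector."}],
)
print(response.choices[0].message.content)
```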
TensorLake SDK for building agentic workflows, sandboxed code execution, and document parsing/extraction. Use when the user mentions tensorlake, or asks about TensorLake APIs/docs/capabilities. Also use when the user is building AI agents or agentic applications that need serverless workflow orchestration (parallel map/reduce DAGs), sandboxed execution of LLM-generated code, or document parsing, structured extraction, and OCR from PDFs/images. Works with any LLM provider (OpenAI, Anthropic), agent framework (LangChain, CrewAI, LlamaIndex), database, or API as the infrastructure layer.
Use this skill when a PinMe project (Worker TypeScript) needs to call OpenRouter-backed LLM APIs, including model listing, chat/completions, streaming, and OpenRouter web search. Guides the AI to generate correct Worker TypeScript code.
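The skill itself emits Worker TypeScript, but the wire protocol is the same OpenAI-compatible shape in any language; the sketch below shows the chat/completions request in Python for consistency with the other examples in this listing, with a placeholder model id.

```python
# Protocol sketch for OpenRouter's chat/completions endpoint (OpenAI-compatible).
# Shown in Python for consistency with the other sketches here; a Worker would
# issue the same request with fetch(). The model id is a placeholder.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "openai/gpt-4o-mini",  # placeholder; any OpenRouter model id works
        "messages": [{"role": "user", "content": "Hello from a PinMe Worker project."}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```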
Persistent memory layer for AI agents using Postgres/pgvector with MCP server support
Create a Mastra project using create-mastra and smoke test the studio in Chrome
Build AI agents on Cloudflare Workers with MCP integration, tool use, and LLM providers.
Consult external LLMs (Gemini, OpenAI/Codex, Qwen) for second opinions, alternative plans, independent reviews, or delegated tasks. Use when a user asks for another model's perspective, wants to compare answers, or requests delegating a subtask to Gemini/Codex/Qwen.
Motto: The LLM is the dice. It narrates the outcome.
Instructions for using the ModelMix Node.js library to interact with multiple LLM providers through a unified interface. Use when integrating AI models (OpenAI, Anthropic, Google, Groq, Perplexity, Grok, etc.), chaining models with fallback, getting structured JSON from LLMs, adding MCP tools, streaming responses, or managing multi-provider AI workflows in Node.js.