Skill4Agent

AI Agent Skills Directory with categorization, English/Chinese translation, and script security checks.

Sitemap

  • Home
  • All Skills
  • Search
  • Tools

About

  • About Us
  • Disclaimer
  • Copyright

Help

  • FAQ
  • Privacy
  • Terms
Contact Us: osulivan147@qq.com

© 2026 Skill4Agent. All rights reserved.

All Skills

42,909 skills total; AI & Machine Learning has 6,876 skills

Categories

Showing 12 of 6,876 skills

AI & Machine Learning · softaworks/agent-toolkit

command-creator

This skill should be used when creating a Claude Code slash command. Use when users ask to "create a command", "make a slash command", "add a command", or want to document a workflow as a reusable command. Essential for creating optimized, agent-executable slash commands with proper structure and best practices.

🇺🇸 English | Translated
3.3k
AI & Machine Learning · softaworks/agent-toolkit

plugin-forge

Create and manage Claude Code plugins with proper structure, manifests, and marketplace integration. Use when creating plugins for a marketplace, adding plugin components (commands, agents, hooks), bumping plugin versions, or working with plugin.json/marketplace.json manifests.

🇺🇸 English | Translated
3.3k
2 scripts · Checked
AI & Machine Learning · am-will/codex-skills

codex-subagent

Spawn Codex subagents via background shell to offload context-heavy work. Use for: deep research (3+ searches), codebase exploration (8+ files), multi-step workflows, exploratory tasks, long-running operations, documentation generation, or any other task where the intermediate steps will use large numbers of tokens.

🇺🇸 English | Translated
3.2k
2 scripts · Attention
AI & Machine Learning · inference-shell/skills

p-video

Generate videos with Pruna P-Video and WAN models via inference.sh CLI. Models: P-Video, WAN-T2V, WAN-I2V. Capabilities: text-to-video, image-to-video, audio support, 720p/1080p, fast inference. Pruna optimizes models for speed without quality loss. Triggers: pruna video, p-video, pruna ai video, fast video generation, optimized video, wan t2v, wan i2v, economic video generation, cheap video generation, pruna text to video, pruna image to video

🇺🇸 English | Translated
2.8k
AI & Machine Learning · pilioai/skills

nano-banana-2

Create or edit images with Pilio Nano Banana 2 through the unified Pilio developer API. Use when the user wants Nano Banana 2 text-to-image generation, reference-image editing, product posters, or image composition from local inputs.

🇺🇸 English | Translated
2.6k
AI & Machine Learning · inference-shell/skills

p-image

Generate images with Pruna P-Image models via inference.sh CLI. Models: P-Image, P-Image-LoRA, P-Image-Edit, P-Image-Edit-LoRA. Capabilities: text-to-image, image editing, LoRA styles, multi-image compositing, fast inference. Pruna optimizes models for speed without quality loss. Triggers: pruna, p-image, pruna image, fast image generation, optimized flux, pruna ai, p image, fast ai image, economic image generation, cheap image generation

🇺🇸 English | Translated
2.6k
AI & Machine Learning · higgsfield-ai/skills

higgsfield-generate

Generate images and videos via Higgsfield AI through 30+ models including Nano Banana 2, Soul V2, Veo 3.1, Kling 3.0, Seedance 2.0, Flux 2, GPT Image 2, plus Marketing Studio for branded ad video/image with curated avatars and imported products. Use when: "generate an image", "make a picture", "create artwork", "make a video", "animate this photo", "image-to-video", "img2vid", "edit this image with AI", "stylize a photo", "remix this image", "produce a clip", "render a scene", "create an ad", "make a UGC video", "generate marketing video", "make a product demo", "create unboxing", "TV spot", "virtual try-on", "product showcase", "brand video", "presenter video for product", "import product from URL", "create avatar for ad". Supports text-to-image, image-to-image, image-to-video, reference-based generation, and Marketing Studio (avatars + products + ad modes). Auto-detects whether passed IDs are uploads or previous jobs. Chain with higgsfield-soul-id when the user wants their face in the output. NOT for: training Soul Character (use higgsfield-soul-id), professional product photoshoots with mode-specific prompt enhancement (use higgsfield-product-photoshoot), text-only / chat / TTS tasks.

🇺🇸 English | Translated
2.2k
AI & Machine Learning · higgsfield-ai/skills

higgsfield-product-photoshoot

Generate brand-quality product images via mode-specific prompt enhancement on Higgsfield's gpt_image_2 model. The single entry point for any professional brand visual involving a product. Use when: "make a product photo", "studio shot", "lifestyle photo", "in use", "Pinterest pin", "hero banner", "website header", "carousel", "Meta ads", "ad creatives", "model wearing", "virtual try-on", "person holding product", "closeup with hands", "levitating product", "floating", "splash shot", "CGI style", "surreal product", "restyle", "Christmas version", "in [aesthetic] style", or any request involving a product, brand, or paid social creative. Modes: product_shot, lifestyle_scene, closeup_product_with_person, pinterest_pin, hero_banner, social_carousel, ad_creative_pack, virtual_model_tryout, conceptual_product, restyle. Backend assembles the final prompt — never write gpt_image_2 prompts freehand. Always go through this skill. NOT for: raw text-to-image with no brand/product (use higgsfield-generate), branded marketing video with avatars (use higgsfield-generate's Marketing Studio), Soul Character training (use higgsfield-soul-id).

🇺🇸 English | Translated
2.0k
AI & Machine Learning · higgsfield-ai/skills

higgsfield-soul-id

Train a Soul Character — a personalized model on a person's face that Higgsfield uses for identity-faithful image and video generation. Use when: "create my Soul", "train my face", "make my digital twin", "build me an avatar", "learn my appearance", "create a character of me", "set up identity for video", "I want my face in generated images". Chain: train Soul (one-time, returns reference_id) → use in higgsfield-generate via `--soul-id <id>` with models like `text2image_soul_v2` or `soul_cinema_studio`. NOT for: one-shot face swaps (use higgsfield-generate with --image), named-character / non-photo avatars (use higgsfield-generate with prompt).

🇺🇸 English | Translated
2.0k
AI & Machine Learning · juliusbrussee/caveman

caveman-stats

Show real token usage and estimated savings for the current session. Reads directly from the Claude Code session log — no AI estimation. Triggers on /caveman-stats. Output is injected by the mode-tracker hook; the model itself does not compute the numbers.

🇺🇸 English | Translated
1.9k
AI & Machine Learning · juliusbrussee/caveman

cavecrew

Decision guide for delegating to caveman-style subagents. Tells the main thread WHEN to spawn `cavecrew-investigator` (locate code), `cavecrew-builder` (1-2 file edit), or `cavecrew-reviewer` (diff review) instead of doing the work inline or using vanilla `Explore`. Subagent output is caveman-compressed so the tool-result injected back into main context is ~60% smaller — main context lasts longer across long sessions. Trigger: "delegate to subagent", "use cavecrew", "spawn investigator/builder/reviewer", "save context", "compressed agent output".

🇺🇸 English | Translated
1.9k
AI & Machine Learning · gargantuax/openskills

gpt-image-2

Full OpenAI-compatible GPT Image 2 coverage across images/generations, images/edits, and responses with the image_generation tool. Use when the one-shot image helper is not enough: text-to-image, mask edits, multi-image batches, streaming, partial_images, and mixed text+image Responses flows. Reads .env and respects process environment variables; works with any OpenAI-compatible gateway.

🇺🇸 English | Translated
1.9k
2 scripts · Attention
Pages: 1 … 8 9 10 11 12 … 573