Found 1,288 Skills
Expert skill for using TileKernels, a library of optimized GPU kernels for LLM operations (MoE routing, quantization, transpose, engram gating, Manifold HyperConnection) built with TileLang.
OpenGame is an open-source agentic framework for end-to-end web game creation from a single text prompt, using LLMs, Game Skill (Template + Debug), and headless browser evaluation.
Framework for collective skill evolution in multi-user LLM agent ecosystems — automatically distills session experience into reusable SKILL.md files and shares them across agent clusters.
Expert skill for building AI systems with Weft, a Rust-based programming language where LLMs, humans, APIs, and infrastructure are first-class primitives with typed connections and durable execution.
Lossless DFlash speculative decoding for MLX on Apple Silicon — 1.7–4x faster LLM inference using block diffusion drafting with target model verification.
Knowledge-management skill based on the Karpathy LLM Wiki pattern. Supports both code projects and Obsidian notes. Reads raw sources (code and documents) to build and maintain a cumulative knowledge base under docs/wiki/. Triggered by keywords such as "wiki", "위키" (wiki), "ingest", "인제스트" (ingest), "wiki 점검" (wiki check), "wiki lint", "wiki 업데이트" (wiki update), "문서화해줘" (document this), "아키텍처 설명해줘" (explain the architecture), and "어떻게 동작해?" (how does it work?). Integrates with the qmd search tool for token savings and high retrieval accuracy.
Expert guidance for building conversational AI applications with Chainlit framework in Python. Use when (1) creating chat interfaces for LLM applications, (2) building apps with OpenAI, LangChain, LlamaIndex, or Mistral AI, (3) implementing streaming responses, (4) adding UI elements like images, files, charts, (5) handling user file uploads, (6) implementing authentication (OAuth, password), (7) creating multi-step workflows with visible steps, (8) building RAG applications with document upload, or (9) deploying chat apps to web, Slack, Discord, or Teams.
Expert skill for using Future AGI — the open-source end-to-end platform for evaluating, observing, and improving LLM and AI agent applications with tracing, evals, simulations, datasets, gateway, and guardrails.
Guide for creating high-quality MCP (Model Context Protocol) servers that enable LLMs to interact with external services through well-designed tools. Use when building MCP servers to integrate external APIs or services, whether in Python (FastMCP) or Node/TypeScript (MCP SDK).
Implement a task with automated LLM-as-Judge verification for critical steps
Makes text more genuine and natural, so it does not read as written by an AI or LLM, by removing AI tropes and clichés. Use when asked to deslopify, naturalize, or remove AI tropes from text.
Generative Engine Optimization review: evaluate your content's visibility to AI-powered search engines — citation-worthiness, content structure, authority signals, llms.txt, entity clarity, and AI retrieval readiness.