Found 302 Skills
LLM observability platform for tracing, evaluation, prompt management, and cost tracking. Use when setting up Langfuse, monitoring LLM costs, tracking token usage, or implementing prompt versioning.
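A minimal sketch of the kind of instrumentation this setup produces, assuming the Langfuse Python SDK's drop-in OpenAI client; the model, prompt, and environment variables are placeholders:

```python
# Minimal sketch: Langfuse's drop-in OpenAI client records tokens, cost, and
# latency per call. Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and
# LANGFUSE_HOST are set in the environment.
from langfuse.openai import OpenAI  # drop-in replacement for openai.OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our observability stack."}],
)
print(response.choices[0].message.content)
```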
Use when working with AWS Strands Agents SDK or Amazon Bedrock AgentCore platform for building AI agents. Provides architecture guidance, implementation patterns, deployment strategies, observability, quality evaluations, multi-agent orchestration, and MCP server integration.
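As a rough sketch of the SDK this skill targets (assuming the strands-agents package is installed and AWS credentials grant access to a Bedrock-hosted model; the prompt is illustrative):

```python
# Minimal sketch of a Strands agent; assumes the strands-agents package and
# Bedrock model access via standard AWS credentials.
from strands import Agent

agent = Agent()  # uses the SDK's default Bedrock model unless configured otherwise
result = agent("List three signals worth tracing in a multi-agent workflow.")
print(result)
```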
Principal backend engineering intelligence for Python AI/ML systems. Actions: plan, design, build, implement, review, fix, optimize, refactor, debug, secure, and scale ML services and pipelines. Focus: data quality, reproducibility, reliability, performance, security, observability, model evaluation, and MLOps.
Monitor Granola usage, analytics, and meeting insights. Use when tracking meeting patterns, analyzing team productivity, or building meeting analytics dashboards. Trigger with phrases like "granola analytics", "granola metrics", "granola monitoring", "meeting insights", "granola observability".
Set up Spanora AI observability in any project (JavaScript/TypeScript or Python). Use when user asks to "add spanora", "setup spanora", "integrate spanora", "add AI observability", "monitor LLM calls with spanora", "track AI costs", or mentions spanora in the context of adding observability to their project. Detects the language and installed AI SDKs (Vercel AI, Anthropic, OpenAI, LangChain) and configures the optimal integration pattern.
Instruments Python and TypeScript code with MLflow Tracing for observability. Triggers on questions about adding tracing, instrumenting agents/LLM apps, getting started with MLflow tracing, or tracing specific frameworks (LangGraph, LangChain, OpenAI, DSPy, CrewAI, AutoGen). Examples: "How do I add tracing?", "How to instrument my agent?", "How to trace my LangChain app?", "Getting started with MLflow tracing", "Trace my TypeScript app".
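For example, a minimal sketch of the instrumentation this skill adds, assuming mlflow >= 2.14; the tracking URI, experiment name, and model are placeholders:

```python
# Minimal sketch: MLflow Tracing around an OpenAI call.
import mlflow
from openai import OpenAI

mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("agent-tracing")
mlflow.openai.autolog()  # auto-instrument OpenAI client calls as spans

@mlflow.trace  # wraps the function in a parent span
def answer(question: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

answer("How do I add tracing?")
```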
Production observability with structured logging, metrics collection, distributed tracing, and alerting
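As one illustration of the structured-logging piece (a sketch using structlog; the service name and event fields are made up, not prescribed by the skill):

```python
# Minimal sketch: JSON-structured logging with structlog.
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger(service="checkout-api")
log.info("payment_processed", order_id="ord_123", latency_ms=87, amount_usd=42.50)
```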
Azure Application Insights SDK for .NET. Application performance monitoring and observability resource management. Use for creating Application Insights components, web tests, workbooks, analytics items, and API keys. Triggers: "Application Insights", "ApplicationInsights", "App Insights", "APM", "application monitoring", "web tests", "availability tests", "workbooks".
Guides Docker, CI/CD pipelines, deployment strategies, infrastructure as code, and observability setup. Use when writing Dockerfiles, configuring GitHub Actions, planning deployments, setting up monitoring, or when asked about containers, pipelines, Terraform, or production infrastructure.
Add observability to any repo: Sentry (errors), PostHog (analytics), Helicone (LLM costs). Auto-detects language/framework. Creates Sentry project via MCP. Installs SDKs, writes config, updates .env.example, opens PR. Supports: Next.js, Node/Express/Hono, Go, Python, Swift, Rust, React Native.
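The Sentry portion of that setup typically reduces to an init call; a minimal sketch for a Python service, with a placeholder DSN and sample rate:

```python
# Minimal sketch: Sentry error tracking in a Python service. The DSN is a
# placeholder; traces_sample_rate controls performance-trace sampling.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    traces_sample_rate=0.1,  # sample 10% of transactions
    send_default_pii=False,
)

try:
    1 / 0
except ZeroDivisionError:
    sentry_sdk.capture_exception()
```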
Create Post Incident Records (PIRs) by analysing incidents discovered from PagerDuty. Orchestrates the pagerduty-oncall, datadog-analyser, and traffic-spikes-investigator skills to enrich each incident with observability and traffic data, auto-determines severity, and outputs completed PIR forms. Use when asked to "create a PIR", "write a post incident record", "fill out PIR form", "incident report", "analyse incidents", or when an on-call shift needs documentation.
Analyse Datadog observability data including metrics, logs, monitors, incidents, SLOs, APM traces, RUM, security signals, and more. Use when asked to investigate infrastructure health, query metrics, search logs, check monitors, diagnose errors, or analyse any Datadog data.
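For instance, a metrics query of the kind this skill runs, sketched with the official datadog-api-client; the query itself is illustrative:

```python
# Minimal sketch: query the last hour of a timeseries metric via the Datadog
# API client. Assumes DD_API_KEY and DD_APP_KEY are set in the environment.
from datetime import datetime, timedelta

from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api = MetricsApi(api_client)
    now = datetime.now()
    response = api.query_metrics(
        _from=int((now - timedelta(hours=1)).timestamp()),
        to=int(now.timestamp()),
        query="avg:system.cpu.user{*}",
    )
    print(response)
```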