Create and configure surveys in PostHog through guided conversation. Use this skill when a user wants to create a survey, collect user feedback, run NPS/CSAT/CES/PMF surveys, gather product feedback, or understand user sentiment. The skill guides Product Managers through survey design by matching their goals to proven templates (or creating custom surveys), then configuring targeting and scheduling before creating via PostHog MCP tools.
Resolves experiment references from natural language to concrete experiment IDs. Handles name lookups, fuzzy descriptions ('the signup experiment', 'my latest experiment'), status filtering, and disambiguation when multiple experiments match. TRIGGER when: user refers to an experiment by name, description, or relative reference ('latest', 'most recent', 'the one I created yesterday') and you don't already have the experiment ID. DO NOT TRIGGER when: user provides an experiment ID directly, or you already resolved the experiment earlier in the conversation.
HogQL queries for PostHog analytics
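As a rough illustration of what this skill produces, a HogQL query is ClickHouse-flavored SQL over PostHog's `events` table, with event properties accessed via dot notation. A minimal sketch (the event name `$pageview` and property `$browser` are standard PostHog defaults; the seven-day window is an arbitrary example choice):

```sql
-- Hypothetical HogQL query: pageviews per browser over the last 7 days
SELECT
    properties.$browser AS browser,
    count() AS pageviews
FROM events
WHERE event = '$pageview'
  AND timestamp > now() - INTERVAL 7 DAY
GROUP BY browser
ORDER BY pageviews DESC
```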
Evaluate and respond to inbound PostHog sales leads from Salesforce. Use this skill when any PostHog TAE needs to triage an inbound lead — deciding whether to qualify for a call, route to self-serve, or disqualify — and then draft an appropriate response email. Checks Vitally for existing account context before qualifying. Triggers on "respond to this lead", "triage this inbound", "write a response to this lead", "disposition this lead", "evaluate this Salesforce lead", or any request involving an inbound sales inquiry that needs qualification and a reply. Also trigger when a TAE pastes or describes lead details and asks what to do with them.
PostHog integration for Expo applications
PostHog integration for React Router v7 - Data mode applications
PostHog feature flags for Go applications
PostHog error tracking for Web (JavaScript)
PostHog integration for React Router v7 - Framework mode applications
Investigate LLM analytics evaluations of both types — `hog` (deterministic code-based) and `llm_judge` (LLM-prompt-based). Find existing evaluations, inspect their configuration, run them against specific generations, query individual pass/fail results, and generate AI-powered summaries of patterns across many runs. Use when the user asks to debug why an evaluation is failing, surface common failure modes, compare results across filters, dry-run a Hog evaluator, prototype a new LLM-judge prompt, or manage the evaluation lifecycle (create, update, enable/disable, delete).
Help existing PostHog customers improve their PostHog instance. Triggers on "help [customer] improve their PostHog setup", "audit [company]'s PostHog instance", "create tracking plan for [company]", "design data schema for [customer]", or requests to improve analytics coverage, fix instrumentation gaps, expand PostHog usage, or build better insights for customers already using PostHog. Use when working with a customer who already has PostHog installed.
PostHog feature flags for Node.js applications