Analytics Strategy
Design measurement frameworks that produce decisions, not just dashboards. Stack-agnostic. Tool-agnostic.
This skill is for measurement planning. For conversion optimization, use the conversion optimization skill. For SEO measurement specifically, use the SEO skills.
When to use
- Setting up analytics on a new product or site
- Auditing existing analytics setup
- Designing dashboards for a team or business
- Defining KPIs and a north star metric
- Building event taxonomies for product analytics
- Designing attribution models for marketing
- Translating business questions into measurement plans
When NOT to use
- Conversion testing or optimization (use the conversion optimization skill)
- SEO performance measurement (use SEO skills)
- Pure data infrastructure decisions (different domain)
Required inputs
- The business or product context (what does success look like)
- The audience for the analytics (who needs to make what decisions)
- The current measurement state (existing tools, tracking, gaps)
- The questions the team needs to answer
The framework: 4 layers
A complete measurement strategy covers all four. Each layer feeds the next.
1. North star and KPI hierarchy
The single metric that captures the most important outcome, plus the supporting metrics.
North star metric:
- One metric. Singular.
- Captures customer-perceived value.
- Leads to revenue, but isn't revenue itself (revenue is too far downstream).
- Examples: weekly active users, completed jobs, revenue-generating sessions, hours of value delivered.
Underneath the north star, the KPI hierarchy:
North star metric
├── Acquisition KPI (how new users enter)
├── Activation KPI (when new users get value)
├── Engagement KPI (how often users return)
├── Retention KPI (how many stick over time)
└── Monetization KPI (how value translates to revenue)
This is the "AARRR" or "pirate metrics" framework. It works because it covers the full lifecycle.
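The hierarchy above can be sketched as a small metric registry. This is a minimal illustration; the metric names and definitions are assumptions, not prescriptions.

```python
# Sketch: the north star plus AARRR hierarchy as a metric registry.
# All metric names here are illustrative examples, not requirements.
KPI_HIERARCHY = {
    "north_star": {
        "name": "weekly_active_users",
        "definition": "Unique users with >= 1 value event in a 7-day window",
    },
    "acquisition": {"name": "new_signups_per_week"},
    "activation": {"name": "pct_signups_reaching_first_value_event"},
    "engagement": {"name": "median_sessions_per_active_user"},
    "retention": {"name": "week_4_retention_rate"},
    "monetization": {"name": "paid_conversion_rate"},
}

def describe(hierarchy: dict) -> list[str]:
    """Flatten the hierarchy into review-ready lines, north star first."""
    lines = [f"North star: {hierarchy['north_star']['name']}"]
    for stage in ("acquisition", "activation", "engagement",
                  "retention", "monetization"):
        lines.append(f"  {stage}: {hierarchy[stage]['name']}")
    return lines
```

Keeping the hierarchy in one structure like this makes it easy to render into dashboards and review docs from a single source of truth.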
2. Event taxonomy
The vocabulary the product uses to describe what users do.
Event design principles:
- Verb + noun. `user_signed_up`, `project_created`. Past tense, snake_case.
- One event per discrete action. Not "interacted_with_modal" - too vague. Specifically `modal_opened`, `modal_dismissed`, `modal_cta_clicked`.
- Properties capture context. Each event has properties (key-value pairs) for context. `user_signed_up` has properties like `source`, `plan`, `referrer`.
- Standardize property names. `user_id` everywhere, not `userId` here and `uid` there.
- Document everything. A tracking plan that lives nowhere is a tracking plan no one follows.
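These naming rules can be enforced mechanically. A minimal sketch, assuming the snake_case and past-tense conventions above; the past-tense check is a crude heuristic, and irregular verbs (e.g. "sent") would need an explicit allowlist.

```python
import re

# Matches snake_case names with at least two lowercase words.
SNAKE_CASE = re.compile(r"^[a-z]+(_[a-z]+)+$")

def lint_event_name(name: str) -> list[str]:
    """Return a list of problems; an empty list means the name passes."""
    problems = []
    if not SNAKE_CASE.match(name):
        problems.append("not snake_case with at least two words")
        return problems
    # Heuristic past-tense check: some token ends in "ed".
    if not any(token.endswith("ed") for token in name.split("_")):
        problems.append("no past-tense verb (expected e.g. 'created')")
    return problems
```

Running a check like this in CI against the tracking plan catches naming drift before events ship.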
Event coverage:
- All key user actions tracked
- All conversion points tracked
- All errors tracked
- All page views tracked (with consistent properties)
- All button clicks that matter (not all button clicks - that's noise)
Anti-patterns:
- 500+ events with no documentation
- Inconsistent naming (`sign_up`, `signup_completed`, `userSignedUp`)
- Property keys that vary across events
- Events fired client-side that should be server-side (and vice versa)
- PII in event properties (privacy issue and tooling issue)
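The PII anti-pattern can be guarded against before events are sent. A minimal sketch; the denylist keys and the email pattern are illustrative assumptions, and real compliance needs a reviewed denylist, not a regex.

```python
import re

# Illustrative: a naive email pattern and a starter denylist of
# property keys that should never reach the analytics tool.
EMAIL = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
DENYLIST_KEYS = {"email", "name", "phone", "ip_address"}

def scrub_properties(props: dict) -> dict:
    """Drop properties with known-PII keys or email-looking values."""
    clean = {}
    for key, value in props.items():
        if key in DENYLIST_KEYS:
            continue  # drop known-PII keys outright
        if isinstance(value, str) and EMAIL.search(value):
            continue  # drop values that look like email addresses
        clean[key] = value
    return clean
```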
3. Dashboards and reports
The interface between data and decisions.
Dashboard design principles:
- One audience per dashboard. Executive dashboard != product team dashboard. Different metrics, different cadence.
- One question per chart. A chart should answer one question, not three.
- Annotations matter. Note launches, experiments, holidays, outages. A spike means nothing without context.
- Context comparisons. "10,000 signups this month" - compared to what? Last month, last year, target?
- Lead with the action. What does this dashboard help someone decide?
Common dashboard types:
| Dashboard | Audience | Metrics | Cadence |
|---|---|---|---|
| Executive | Leadership | North star, top 3 KPIs, big-picture trends | Weekly review |
| Product | Product team | Funnel metrics, feature adoption, retention | Daily / weekly |
| Marketing | Marketing team | Acquisition by channel, CAC, attribution | Daily / weekly |
| Operations | Ops / on-call | Performance, errors, capacity | Real-time |
| Custom (per team) | Specific team | Their specific KPIs | Their cadence |
4. Attribution and segmentation
How to connect cause and effect.
Attribution models:
- First-touch. Credit the first interaction. Useful for awareness understanding.
- Last-touch. Credit the final interaction before conversion. Default in many tools, often misleading.
- Linear. Spread credit equally across touches. Avoids over-crediting any single channel.
- Time-decay. Recent touches get more credit. Reasonable middle ground.
- Position-based. First and last get more credit, middle touches less.
- Data-driven (algorithmic). Tools like Google Analytics 4 use ML. Black box but increasingly the default.
For most businesses: pick one primary attribution model, use multiple secondary models for validation.
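Four of the models above are simple enough to sketch directly. Touches are ordered oldest first; credits sum to 1.0 per model. The 40/20/40 split for position-based is a common default, assumed here, not a standard.

```python
def first_touch(touches):
    # All credit to the first interaction.
    return {touches[0]: 1.0}

def last_touch(touches):
    # All credit to the final interaction before conversion.
    return {touches[-1]: 1.0}

def linear(touches):
    # Equal credit per touch; repeated channels accumulate.
    credits = {}
    for t in touches:
        credits[t] = credits.get(t, 0.0) + 1.0 / len(touches)
    return credits

def position_based(touches, endpoint_share=0.4):
    # First and last get endpoint_share each; middle splits the rest.
    if len(touches) <= 2:
        return linear(touches)  # no middle touches to down-weight
    credits = {t: 0.0 for t in touches}
    credits[touches[0]] += endpoint_share
    credits[touches[-1]] += endpoint_share
    middle = touches[1:-1]
    for t in middle:
        credits[t] += (1 - 2 * endpoint_share) / len(middle)
    return credits
```

Comparing the outputs of these on the same conversion paths is a cheap way to see where a single model would mislead.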
Segmentation principles:
- Segment by what causes different behavior, not by what's easy to track
- Useful segments: source/channel, plan tier, geography, device, cohort (signup date)
- Less useful: demographic guesses without behavioral validation
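Cohorting by signup date, one of the useful segments above, can be sketched as bucketing users by ISO week. The field names (`id`, `signup_date`) are illustrative assumptions.

```python
from datetime import date

def cohort_key(signup: date) -> str:
    """ISO year-week label for a signup date, e.g. '2024-W01'."""
    year, week, _ = signup.isocalendar()
    return f"{year}-W{week:02d}"

def segment_by_cohort(users: list[dict]) -> dict[str, list[str]]:
    """Group user ids by signup-week cohort."""
    cohorts: dict[str, list[str]] = {}
    for u in users:
        cohorts.setdefault(cohort_key(u["signup_date"]), []).append(u["id"])
    return cohorts
```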
The tracking plan document
Output of the analytics strategy. A living document.
Structure:
- Goals and KPIs. Business objectives, north star, KPI hierarchy.
- Event catalog. Every event, with properties, when fired, why tracked.
- User properties. Persistent attributes (plan, signup_date, role).
- Page taxonomy. Page categories, page properties.
- Naming conventions. Snake_case, verb_noun, etc.
- Implementation notes. Client-side vs server-side, SDK details, sampling.
- Privacy and compliance. PII rules, consent handling, data retention.
- Governance. Who can add events, review process, change log.
Workflow
- Define the questions. What does the team need to answer? Working backward from questions to metrics works better than starting from metrics.
- Define the north star. One metric. Tested against the criteria above.
- Build the KPI hierarchy. Acquisition, activation, engagement, retention, monetization.
- Audit existing tracking. What's there? What's broken? What's missing?
- Design the event taxonomy. Cover the user journey. Document everything.
- Implement with care. Test each event. Verify properties. Catch issues in staging.
- Build dashboards. One per audience. Lead with action.
- Establish review cadence. Weekly business review, monthly KPI review, quarterly strategy review.
- Govern. Who adds events, who reviews, how changes propagate.
Failure patterns
- Tracking everything. Noise overwhelms signal.
- Tracking nothing strategic. Page views and that's it. Cannot answer real questions.
- No documentation. Tracking plan lives in someone's head.
- Inconsistent naming. Same concept, three names. Reports become detective work.
- Events fired but never reviewed. Tracking debt accumulates.
- Dashboards no one looks at. Built for vanity, not decisions.
- Single attribution model treated as truth. All models lie. Some lie usefully.
- PII in events. Compliance and tooling problems.
- Client-side only. Critical business events should be tracked server-side too; ad blockers, network failures, and edge cases silently drop client-side events.
- No connection to business outcomes. Metrics exist in a silo, never connected to revenue, retention, or strategic decisions.
Output format
Default output: a markdown tracking plan at `analytics-tracking-plan.md`, plus a dashboard inventory.
Tracking plan structure:
```markdown
# Tracking Plan

## North star metric
[Definition, calculation, target]

## KPI hierarchy
[Each KPI with definition, calculation, owner]

## Event catalog
| Event | Fired when | Properties | Owner | Status |
|---|---|---|---|---|
| user_signed_up | After successful signup form submit | source, plan, referrer | Marketing | Live |
| project_created | When user clicks Create Project | project_type, template_used | Product | Live |
| ... | | | | |

## User properties
[List with definitions]

## Naming conventions
[Rules]

## Privacy and compliance
[Rules]

## Governance
[Process]
```
Reference files
- `references/event-taxonomy-template.md` - Starter event catalog with patterns for common product types.