# write-context-rules

The context rules file is loaded with every message to the nao agent — keep it lean. Two purposes only:
- Orchestrator — point the agent to the right context fast (which metric → which table, which topic → which file, which question type → which skill).
- Broad rules — how to query and how to answer.
Anything else (per-table schema, full metric semantics, domain-specific rules) belongs in a referenced file or a domain file. Reference: docs.getnao.io/nao-agent/context-builder/rules-context.
## Standard sections (see the template)
- — Product + Business model.
- — Warehouse, data stack, layers, sources.
- — (one-line pointers) + (Purpose / Granularity / Key Columns ≤10 / Use For).
- — grouped by category; **metric** → table, column, formula.
- — three example formulas (last X weeks / last X days / current month). Don't enumerate every period.
- — 5 subsections: Understand → Select Table → Write Query → Validate → Context.
## Flow
Generate section by section. Write each section to the file, show the user, then move on. Don't read everything and write everything in one batch — the user needs to see progress and catch wrong inferences early.
If the file already has content, run the audit-and-fill flow at the bottom instead.
### Step 1
Sources: web search for the company name/domain, then the relevant project files. Output two paragraphs: Product (what the company does) + Business model (revenue + customer journey).
### Step 2
From the warehouse and project files: Warehouse type/project/dataset, Data stack (e.g. dlt, dbt, no semantic layer), Data layers, Data sources (numbered list with prefix + one-line description).
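A Data sources list could look like this (the source names and descriptions are hypothetical — derive the real ones from the project):

```
1. stripe_ — billing and subscription data
2. app_ — production app database replica
3. ga4_ — web analytics events
```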
### Step 3
One-line pointers — one line per in-scope table:
- `dim_users` — user dimension. See `databases/.../table=dim_users/`.
Per-table blocks — Purpose, Granularity, Key Columns (cap at 10), Use For. Per-table detail beyond the top 10 columns lives in the referenced table files, not here.
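A per-table block could look like this (the column names are hypothetical, shown only to illustrate the shape):

```
### dim_users
- Purpose: user dimension for joins and segmentation
- Granularity: one row per user
- Key Columns: user_id, email, signup_date, plan, country
- Use For: user counts, plan breakdowns, cohort joins
```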
### Step 4
Group by category (Revenue / Activity / Conversion). Format:
```
### Revenue
- **MRR** → `fct_stripe_mrr.mrr_amount`, `SUM(mrr_amount) WHERE status='active'`
```
If a semantic layer is configured, route through it: **ARR** → query via dbt MCP `query_metric` (semantic layer).
### Step 5 (placeholder until Step 7)
Leave a TODO block:
> TODO: filled in via the user-validation step below.
### Step 6
Use the template's 5 subsections verbatim. The project-specific bit is subsection 2 (Select Right Tables): map each major question category to its starting table, derived from Steps 3-4.
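Subsection 2 might map question categories to starting tables like this (hypothetical — derive the real mapping from Steps 3-4):

```
- Revenue / MRR questions → start from `fct_stripe_mrr`
- User / signup questions → start from `dim_users`
```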
### Step 7 — Validate metrics with the user
For each metric in the metrics section, ask the user to confirm or correct the source-of-truth pointer. Update in place.
### Step 8 — Date filtering, with the user
Two questions decide most of it:
- Week boundary: does a week start Sunday (BigQuery's default `WEEK`) or Monday (`ISOWEEK`)? Applies to "last week", "last N weeks", week-over-week.
- Current period inclusion: when the user says "last 8 weeks" / "last 30 days", include the current incomplete period or exclude it? Rolling-from-now vs. boundary-aligned.
Then: fiscal year start if non-calendar; anything else org-specific.
Write three example formulas only — Last X weeks, Last X days, Current month. The agent extrapolates other periods from these. Each block gets a one-line note above stating the convention used.
```sql
-- Last X weeks (Monday-start, excludes current incomplete week)
WHERE date >= DATE_TRUNC(CURRENT_DATE - INTERVAL (X * 7) DAY, ISOWEEK)
  AND date < DATE_TRUNC(CURRENT_DATE, ISOWEEK)
```
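The other two blocks can follow the same shape — BigQuery sketches with the convention stated in the leading comment; confirm both conventions with the user first:

```sql
-- Last X days (rolling window ending yesterday, excludes today)
WHERE date >= CURRENT_DATE - INTERVAL X DAY
  AND date < CURRENT_DATE

-- Current month (month-to-date, includes today)
WHERE date >= DATE_TRUNC(CURRENT_DATE, MONTH)
```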
## Audit-and-fill flow (when the file is not empty)
- Read it. Compare against the six standard sections. Produce a one-line gap report (present / missing / thin per section).
- Ask the user which sections to fill.
- Run only the relevant generation steps above. Show diffs before saving.
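The gap report can be as terse as one line (hypothetical):

```
1 present · 2 present · 3 thin (no per-table blocks) · 4 missing · 5 missing · 6 present
```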
For deeper diagnostics (MECE, schema drift, test failure root causes), route to the dedicated diagnostics skill.
## Guardrails
- Section by section, not all-at-once. Show progress, let the user course-correct.
- Show diffs, don't auto-overwrite.
- Don't bloat the rules file. Per-table detail lives in the referenced table files.
- Don't invent metric sources. Unclear → list for user validation in Step 7.
- The date-filtering section keeps three examples max.
## Templates
- The template file — six-section scaffold. This skill is the only one that writes to the rules file.