# Brand Extraction

Extract brand identity from guidelines and produce a visual brand board for designer approval.
## When to use this skill

- The user provides brand guidelines (URL, PDF, or conversational description) and asks to extract, create, or capture brand identity.
- The user asks to change, refine, review, critique, or iterate on `stardust/brand-profile.json`, `stardust/brand-board.html`, or .
- The user types .
## Do NOT use this skill
- For page copy, headlines, or intent — those live in briefings. Hand off to .
- For visual layout, spacing, or prototype styling — those belong to .
- To render wireframes. Hand off to .
## Pre-flight

Run the procedure in  first.
 is optional input; if absent, this skill may create it.
## Contract

Needs (reads if present):

- Brand URL, PDF, or conversational description from the user
- (design personality, if any)

Produces:

- `stardust/brand-profile.json`
- `stardust/brand-board.html`
- `stardust/assets/logo.<ext>` (extracted or synthesized)
- (created or updated)
If missing:

- No URL/PDF/anchor-set/description → deliver the "no reference" warning in the Inputs section below before synthesizing. If the designer proceeds without any reference, roll a deterministic random seed from `../_shared/divergence-toolkit.md` §2, stamp `_divergence.divergence_warning = true` on the profile, and use the seed as a hard constraint on visual decisions. Stamp provenance per `../_shared/skill-contract.md`.
- No  → prefer the designer-authored path (see Phase 3). Only fall back to synthesis after the "last chance" prompt has been declined, and stamp  in frontmatter.
## Inputs
The designer provides ONE of:
- Brand guidelines URL — a web-based brand guide (Corebook, Frontify, Brandfolder) or a live marketing site.
- Brand guidelines PDF — uploaded document.
- Reference anchor set — a moodboard URL (Are.na, Pinterest, Dribbble collection) OR 3–5 uploaded reference images with short context notes. This is NOT a brand guideline — it is a visual anchor that keeps the extraction honest.
- No reference (conversational) — the explicit escape hatch. See warning below.
### When the designer has no reference
The skill defaults to requiring at least option 3 (anchor set). Before proceeding without any reference, stop and say, verbatim:
Without a reference, visual decisions will be synthesized from the assistant's defaults. The assistant has known recurring moves (see `../_shared/divergence-toolkit.md` §1) that tend to appear across unrelated brands.
Options:

- Provide a reference anchor set (option 3 above) — 2 minutes, biggest divergence payoff.
- Run  yourself first — produces a personal  that pushes back on assistant defaults.
- Proceed anyway with . The profile will be stamped with a divergence warning and the skill will roll a random seed per `../_shared/divergence-toolkit.md` §2.
If the designer proceeds anyway, set `_provenance.source = "conversation"` and `_divergence.divergence_warning = true` in the emitted profile. The skill then rolls a deterministic random seed from §2 of the toolkit (decade × craft × register) and uses it as a hard constraint on visual synthesis. Downstream skills read both flags and adjust.
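The deterministic roll can be sketched as below. This is a minimal sketch: the axis lists here are invented placeholders (the real values live in `../_shared/divergence-toolkit.md` §2); only the determinism matters, meaning the same input always yields the same triple.

```python
import hashlib

# Placeholder axis values — the real lists live in the divergence toolkit §2.
DECADES = ["1960s", "1970s", "1980s", "1990s"]
CRAFTS = ["folded-paper", "risograph", "letterpress", "collage"]
REGISTERS = ["travel-brochure", "field-guide", "zine", "broadsheet"]

def roll_seed(brand_key: str) -> dict:
    """Deterministic seed: the same brand_key always yields the same triple."""
    digest = hashlib.md5(brand_key.encode("utf-8")).digest()
    return {
        "decade": DECADES[digest[0] % len(DECADES)],
        "craft": CRAFTS[digest[1] % len(CRAFTS)],
        "register": REGISTERS[digest[2] % len(REGISTERS)],
    }
```

Because the seed is a hash of stable input rather than a true random draw, re-running the skill reproduces the same visual constraint.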
## Phase 1: Extract

**If guidelines URL or PDF provided:**
Always use a real browser (Playwright or equivalent) to extract brand signals from a URL. Do NOT rely on WebFetch or raw HTML — it misses JS-rendered copy, computed styles, and the visual identity cues that actually make a brand recognizable.
WebFetch inference produces generic brand profiles ("Inter body, pill buttons, deep blue accent") that could describe any product. The goal here is specificity — the quirks that make the brand itself, not a generic version of its category.
### 1. Drive a real browser

Use Playwright (Chromium, viewport 1440×900, deviceScaleFactor 2) to load the URL, wait for network idle + ~1.5s, then do both of the following:
- Take screenshots — a hero clip (1440×900) and a full-page screenshot. Save to a scratch dir.
- Evaluate computed styles in the page context and capture:
  - / CSS custom properties (brand often exposes design tokens here)
  - computed , ,
  - All unique  values in use (walk )
  - First 10 headings: tag, text, , , , , ,
  - First body sample with the same style props — note  carefully (Arc uses , not pure ink)
  - , , or  elements — the italic display accent often lives here and is brand-specific (Exposure VAR, etc.)
  - First 8 CTAs (, , ): text, , , , , , . Capture multiple CTA variants — primary/inverted/inked patterns matter.
  - Section background colors (walk `section, header, main > div`) — surface a variant cream, a variant ink, or an off-white you'd otherwise miss
  -  and  with logo/brand/hero classes — source paths
  - Meta tags, especially  (the brand's official color), ,
  - Hero copy text (first 6 h1/h2/p contents)
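Once the evaluate step returns raw per-element style records, a small helper can collapse them into the unique token sets the profile needs. A sketch under assumed field names (`fontFamily`, `color`, `borderRadius`), not the skill's actual payload shape:

```python
def dedupe_style_records(records: list[dict]) -> dict:
    """Collapse raw computed-style records into sorted unique token sets.

    Each record is assumed to look like:
    {"fontFamily": "...", "color": "...", "borderRadius": "..."}
    """
    tokens = {"fontFamilies": set(), "colors": set(), "borderRadii": set()}
    for r in records:
        if r.get("fontFamily"):
            tokens["fontFamilies"].add(r["fontFamily"])
        if r.get("color"):
            tokens["colors"].add(r["color"])
        # Ignore empty and zero radii; only signature values are brand signal.
        if r.get("borderRadius") not in (None, "", "0px"):
            tokens["borderRadii"].add(r["borderRadius"])
    return {k: sorted(v) for k, v in tokens.items()}
```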
### 2. Identify identity traits, not just tokens
After extraction, explicitly look for and record:
- Signature border-radius — brands often pick a specific non-round value (10px, 14px). Record it as `componentStyle.borderRadius.default`.
- Body text opacity — is body copy at full ink or softened (e.g. 65%)? This is a recurring brand trait.
- Display metrics — if headings run tight (, ), capture both values verbatim. These feel-of-type details are what separate a real brand from a generic clone.
- Multiple CTA patterns — if you see inked-black, branded-color, and inverted buttons, record all three with their exact styles.
- Color variants for context — capture "cream on blue" as a separate color if the value differs from "cream on cream" (e.g. vs ).
- Visual motifs beyond logos/colors — look at the screenshot and name them: dashed dividers, aurora gradients, squiggle separators, noise textures, hand-drawn elements. Add a array to the brand profile with , , . These motifs are what signal the brand before a word is read.
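The motif array's field names are elided above; assuming `name`, `description`, and `where` purely for illustration, recording a motif might look like:

```python
def add_motif(profile: dict, name: str, description: str, where: str) -> dict:
    """Append a visual-motif entry to the brand profile.

    Field names (name/description/where) are illustrative assumptions;
    the actual schema fields may differ.
    """
    profile.setdefault("motifs", []).append(
        {"name": name, "description": description, "where": where}
    )
    return profile

profile = add_motif({}, "dashed-divider",
                    "1px dashed rule between sections", "landing page sections")
```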
### 2.5 Locate and save the real logo
The logo is a real asset — download it, don't synthesize it. Walk this fixed order and stop at the first hit:
- Inline inside the page header or — look for , , . Serialize the outer HTML and save as .
- with a logo-shaped src or class — , , , , . Download the resolved URL. Prefer SVG; fall back to the highest-resolution PNG or WebP.
- , `<link rel="apple-touch-icon">`,  — in that order. These are branded marks even if not full wordmarks.
- Favicon —  or  at the site root. Last resort.
- No logo found — create a minimal placeholder SVG with the brand name as text, save to , and add `"logo: synthesized placeholder — no mark found on source"` to `_provenance.synthesized_inputs`.
Save all downloads to  (not  — the design phase is platform-agnostic;  is a convention some downstream systems use, e.g. AEM Edge Delivery Services, which would leak structure into stardust).

Record the outcome in the brand profile's  object:
```json
"logo": {
  "path": "stardust/assets/logo.svg",
  "format": "svg",
  "source": "inline svg in <header> | <img src=...> | apple-touch-icon | favicon | synthesized"
}
```
For the PDF path, extract embedded images — prefer vector — and save the same way.
For the conversational path, synthesize a placeholder as described above and stamp `_provenance.synthesized_inputs`.
### 3. Voice examples from live copy

Pull real copy from the rendered page (hero headlines, CTAs, micro-copy) for . Real examples beat invented ones. Note rhetorical devices — em-dashes, rhythmic triplets, single-word italic accents — and call them out in .
### 4. PDFs and non-URL sources
If the input is a PDF or uploaded asset, read it directly (PDF viewer / Read tool) and extract palette, typography, and voice cues inline from the rendered pages. Playwright is the primary path for URLs; direct read is the path for static assets.
### 5. Write the profile

Map everything to the brand profile schema — consult `reference/brand-profile-schema.md`.
Every  MUST start with a  block per `../_shared/skill-contract.md`. Populate:
- — today, ISO format
- — URL, PDF path, or
- — the actual method used (e.g. Playwright string, PDF read, conversation only)
- — enumerate every field the skill filled in but did not extract from source. Examples: "personas (mottos, values) — composed from extracted voice signals, not read from the page"; "contentPillars descriptions — inferred from nav + hero copy"; "voice.examples.dont — LLM-composed to match extracted voice rules". If you genuinely extracted every field from source, pass an empty array and note it.
- — scratch paths
Readers use  to decide what to trust. Be honest and specific. "Some fields" is not acceptable — list each one.

Do not emit a separate  block. All source-of-extraction info lives inside .
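A provenance block built per the rules above might be assembled as below. Key names other than `synthesized_inputs` are assumptions where the source elides them:

```python
import datetime

def build_provenance(source: str, method: str, synthesized: list[str]) -> dict:
    """Assemble a _provenance block. Key names are assumed where the doc
    elides them; only synthesized_inputs is confirmed by the source."""
    return {
        "extracted_at": datetime.date.today().isoformat(),  # today, ISO format
        "source": source,                   # URL, PDF path, or conversation
        "extraction_method": method,        # the actual method used
        "synthesized_inputs": synthesized,  # every field filled but not extracted
    }

prov = build_provenance(
    "https://example.com", "playwright",
    ["personas (mottos, values) — composed from extracted voice signals"],
)
```

If every field was genuinely extracted from source, pass `[]` so readers can see the empty list explicitly.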
Pay special attention to:
- Voice examples — do/don't copy pairs. Critical for content generation. Extract from live copy.
- Photography direction — style rules, composition, subject matter. Feeds image generation.
- Logo variants — primary mark always saved to `stardust/assets/logo.<ext>` (see Step 2.5). Additional variants (white-on-dark, mono, stacked) also go under  with descriptive names like .
- Color roles — don't just capture hex, capture what each color is FOR (CTAs, headings, backgrounds, on-blue text).
- Motifs — the non-obvious visual gestures. Without these the board reads as a generic palette.
**If no guidelines (conversational):**

- Run the soft-deps discovery decision before asking anything inline:
  - If  is registered in this session (detect per ): hand off to  with a seeded prompt naming the brand (if known) and the discovery topics below. Wait for its output; use the result to populate . Do NOT run an inline interview once this delegation is made.
  - Otherwise (superpowers not installed): announce the fallback exactly once per session, using the verbatim text from  ("superpowers announcement"). Then run the inline interview.

  Either path covers the same topics: brand name, mission, target audience, personality (3–5 adjectives), colors they like/dislike, typography preference (serif/sans/mixed), photography style, competitive positioning.

  The user must be able to tell which path ran — either by 's visible hand-off UI or by the one-time announcement. Silent inline interviews are a bug.
- From the conversation, construct `stardust/brand-profile.json` with whatever was discussed.
- Mark optional fields as null — the designer can fill them in later.
## Phase 2: Palette Selection

Before writing the palette into , the designer picks from a set of candidate palettes pulled from the bundled library at . Every candidate palette's colors come from coolors.co/palettes/trending and carry a source URL back to the original — zero assistant-invented hexes.

See `../_shared/palette-picker.md` for the full classifier vocabulary, filter scoring rules, and HSL helpers. See `reference/pick-ui-template.md` for the UI structure and template.
### Skip this phase when

- The brand has authoritative guidelines and the URL/PDF extraction produced a real palette — use the extracted palette as-is (record `_divergence.palette_source.method = "extracted-from-source"`).
- The designer explicitly provides a palette file at `stardust/palettes/brand.json` — use it verbatim (`method = "designer-provided"`).
- Running in fully-automated pipeline mode with no description — fall back to auto-classification from brand voice + seed (`method = "auto-classified"`).

Otherwise, run all six steps below.
### Step A · Derive the palette description

- If the designer provides a natural-language palette brief (e.g. "freaking bold and shocking", "clean and superbly engineered", "muted sage, considered"), use it verbatim.
- Otherwise synthesize one short descriptor from:
  - the brand's content pillars,
  - the brand voice traits,
  - the seed triple from  (decade × craft × register),
  - any ground-family hint from .

Example: for a Lagos dance collective with seed `1960s × folded-paper × travel-brochure × saturated`, a reasonable synthesized description is "bold youthful outdoor, 1960s travel brochure, saturated".

Stamp the description in `_divergence.palette_source.description_used`.
### Step B · Classify the description

Apply the keyword vocabulary from `../_shared/palette-picker.md` § 1 to extract the five descriptor dimensions:

- `energy` (1–5)
- `contrast` (1–5)
- `saturation_level` (1–5)
- `hue_bias` (hot / warm / mustard / green / teal / cool / violet / neutral / rainbow)
- `ground_family` (cream / stark-white / pale-gray / saturated / dark / monochrome-tint)

Any dimension the description doesn't trigger keywords for stays null (no constraint). Keyword matching is whole-word, case-insensitive. Record the full classification in `_divergence.palette_source.classification`.
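The classifier is plain whole-word matching. A sketch with a tiny invented vocabulary excerpt (the real keyword table lives in `../_shared/palette-picker.md` § 1):

```python
import re

# Tiny illustrative excerpt of a keyword vocabulary — not the real table.
VOCAB = {
    "energy": {"bold": 5, "shocking": 5, "muted": 2, "calm": 1},
    "hue_bias": {"sage": "green", "teal": "teal", "fiery": "hot"},
    "ground_family": {"saturated": "saturated", "cream": "cream"},
}

def classify(description: str) -> dict:
    """Whole-word, case-insensitive matching; unmatched dimensions stay None."""
    words = set(re.findall(r"[a-z-]+", description.lower()))
    result = {"energy": None, "contrast": None, "saturation_level": None,
              "hue_bias": None, "ground_family": None}
    for dim, table in VOCAB.items():
        for keyword, value in table.items():
            if keyword in words:
                result[dim] = value
    return result
```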
### Step C · Filter and score the library

Load palettes from  — either from the consolidated  or by walking each  file. Apply the filter scoring from `../_shared/palette-picker.md` § 2:

-  exact match: +100
-  exact match: +50 / loose match (hue group): +20
-  difference:
-  difference:

Keep palettes with score > 0. Sort descending by score. Take the top 5 as candidates.
Compute the recommended pick via  of `MD5(description + YYYY-MM-DD)` mod . This is the default the designer can accept with a single keypress; the others are alternatives at the same score tier.
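Both the scoring and the recommended pick are deterministic. In this sketch, which dimension earns +100 versus +50 and which byte of the digest is used are assumptions, since the source elides them:

```python
import hashlib

def score_palette(palette: dict, wanted: dict) -> int:
    """Filter-scoring sketch: the +100/+50/+20 shape follows the doc, but
    the dimension assignments are assumed for illustration."""
    score = 0
    if wanted.get("ground_family") and palette.get("ground_family") == wanted["ground_family"]:
        score += 100
    if wanted.get("hue_bias"):
        if palette.get("hue_bias") == wanted["hue_bias"]:
            score += 50
        elif wanted.get("hue_group") and palette.get("hue_group") == wanted["hue_group"]:
            score += 20  # loose match: same hue group
    return score

def recommended_index(description: str, today: str, n_candidates: int) -> int:
    """Deterministic daily default: a digest byte of MD5(description + date)
    mod the candidate count. Same description + date, same recommendation."""
    digest = hashlib.md5((description + today).encode("utf-8")).digest()
    return digest[0] % n_candidates
```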
### Step D · Render the pick UI

Write `stardust/_palette-pick.html` following the template structure in `reference/pick-ui-template.md`:
- Shows the description and the classifier output at the top
- Renders the 5 candidates as numbered cards (1 = recommended, 2–5 = alternatives)
- Card 1 gets a gold border and larger swatches
- Each card: swatches with hex labels, anchor marker (★), cream-family flag if present, palette name linked to Coolors source, classification tags
- Footer instruction: "Tell the assistant a number (1–5), a palette name, or 'refine' to change the description."
Immediately after writing the file, open it in the designer's default browser per `../_shared/skill-contract.md` § Opening HTML artifacts. On macOS: `open stardust/_palette-pick.html`. Do not require the designer to open it manually.
Then say:
"Your palette pick UI is open. Five candidate palettes from the library. Tell me a number (1–5), a palette name, or 'refine' to change the description."
### Step E · Designer picks
Wait for the designer's input. Valid responses:
- Integer 1–5 → pick that card
- Palette name substring match → pick the matching card
- / → re-ask Step A for a new description, re-run B–D
- / Enter → accept the recommended (index 1)
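Parsing the designer's reply per the rules above can be sketched as (a minimal sketch; the return shape is an assumption):

```python
def parse_pick(reply: str, names: list[str], recommended: int = 1) -> dict:
    """Interpret the designer's reply. 'refine' re-runs Steps A–D;
    an empty reply accepts the recommended card."""
    reply = reply.strip()
    if reply == "":
        return {"action": "pick", "index": recommended}
    if reply.isdigit() and 1 <= int(reply) <= len(names):
        return {"action": "pick", "index": int(reply)}
    if reply.lower() == "refine":
        return {"action": "refine"}
    # Palette-name substring match, case-insensitive.
    for i, name in enumerate(names, start=1):
        if reply.lower() in name.lower():
            return {"action": "pick", "index": i}
    return {"action": "unrecognized"}
```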
Pipeline automation: when invoked as part of a full end-to-end pipeline run (no interactive designer), auto-accept the recommended pick and continue.
### Step F · Record and write to brand-profile

Write the chosen palette into the in-memory brand profile (not yet on disk — Phase 4 writes the file). Each Coolors hex becomes a  entry:

```json
{
  "name": "Ankara Burnt Orange", // brand-native if the designer names it, else derived from Coolors palette name + swatch role
  "hex": "#DD5B26", // from the picked palette
  "role": "Ankara", // brand-native role (see brand-profile-schema.md token-level enforcement)
  "use": "primary saturated ground, CTA fill" // technical qualifier
}
```
The role-naming rule from `reference/brand-profile-schema.md` § Role Naming is enforced: no / / / /  tokens in the  field. If the designer hasn't provided brand-native role names, derive them from the brand's subject matter (e.g., "Bar Beach", "Grove", "Lamination").
Record the full chain in `_divergence.palette_source`:

```json
"palette_source": {
  "method": "library-pick",
  "library_version": "v0.6.0",
  "library_source": "coolors.co/palettes/trending (scraped 2026-04-24)",
  "description_used": "freaking bold and shocking",
  "classification": {
    "energy": 5, "contrast": 5,
    "saturation_level": null, "hue_bias": null,
    "ground_family": "saturated"
  },
  "candidates_shown": [
    { "index": 1, "name": "...", "source": "https://coolors.co/..." },
    { "index": 2, "name": "...", "source": "..." },
    "...up to 5..."
  ],
  "recommended_index": 1,
  "picked_index": 1,
  "picked_palette_name": "Autumn Glow",
  "picked_palette_source": "https://coolors.co/780116-f7b538-db7c26-d8572a-c32f27"
}
```
Leave the `stardust/_palette-pick.html` file on disk as an audit record.
## Phase 3: Design Personality

The design personality file captures the designer's taste — references, pet peeves, which rules to bend — signal that can't be inferred from the brand profile alone. It's produced by an interactive interview and used as a quality gate by downstream skills.
### Authorship matters

The file has two authorship paths that downstream skills treat differently:
- Designer-authored (via or the inline interview answered by the designer) — strong quality gate. Downstream skills enforce its rules.
- Synthesized (fallback, LLM-authored without designer input) — weak hint. Downstream skills render a visible banner on brand boards and prototypes: "Design personality was synthesized by the assistant, not authored by the designer. The rules below reflect assistant defaults. Run to replace."
Every  file MUST open with frontmatter that declares authorship:

```yaml
---
authored_by: designer # or: synthesized
author_date: YYYY-MM-DD
source: "/impeccable teach interview" # or: "brand skill inline interview" | "brand skill synthesis fallback"
strength: strong # designer → strong; synthesized → weak
---
```
Downstream skills key off . A file without this frontmatter is treated as weak.
### Path selection
This phase is delegated when possible; otherwise it runs a lighter inline interview.
If the  plugin is installed ( is registered), pause and recommend to the designer:

> Before we render the brand board, run  in the prompt. It will interview you about design personality (references, do's and don'ts, taste) and write . Downstream quality gates read this file. This is optional but recommended. You can also skip it — we can always run  later and re-refine. Reply "skip" to continue without it, or run the command now and then ask me to continue.

Wait for the designer to either run  (then resume) or explicitly say "skip". When  runs, it stamps . On "skip" without  running, proceed to the "last chance" prompt below.
If  is not installed, use the "For brand discovery" variant in `../_shared/fallback-brainstorm.md` to run a short personality interview. The designer answers — stamp `authored_by: designer`, `source: "brand skill inline interview"`.
### Last-chance prompt before synthesis

Before the skill synthesizes  on its own (no designer interview), stop and ask:

> Before I synthesize  myself, would you rather answer three quick questions? References you like, things that annoy you, one rule you want broken. Three minutes, and the file reflects your taste instead of my defaults. Reply "ok" to do the interview, or "synthesize" to proceed without.
- Reply "ok" → run the three-question inline interview, write the file with .
- Reply "synthesize" → write the file with , , and include the banner language in its body so downstream skills surface it.
Never invent content from the brand profile alone without declaring it synthesized — that provides no signal beyond what's already captured, and stamping it as "designer" would mislead downstream quality gates into enforcing assistant defaults as if they were designer taste.
### Skip this phase when
- already exists. (Read its frontmatter; do not overwrite.)
- Invoked as part of a full end-to-end pipeline run — but still present the last-chance prompt before synthesis unless the designer has explicitly opted out of it for the pipeline run.
## Phase 4: Render Brand Board

- Read `stardust/brand-profile.json`.
- Generate `stardust/brand-board.html` following the template in `reference/brand-board-template.md`. Render the data contract sections in the canonical order.
- The board must:
  - Use the brand's own extracted colors and fonts
  - Derive the page ground from the brand palette (see `../_shared/divergence-toolkit.md` § 2.5 Ground-color seed). Do not default to cream or any cream rebrand (vellum, kami, bone, ivory, eggshell).
  - Include sticky navigation for section jumping on long boards
  - Render only sections that have data (omit sections for null fields)
  - Be self-contained HTML with embedded CSS (no external JS)
  - Render the logo with `<img src="assets/logo.svg">` per `reference/brand-board-template.md` — never inline the SVG.
- Open the file in the designer's default browser per `../_shared/skill-contract.md` § Opening HTML artifacts. On macOS: `open stardust/brand-board.html`.
- Then say: "Your brand board is open. Review and approve, or tell me what to change."
  - In SLICC: the board also renders in the browser panel automatically; the open command is harmless.
  - In pipeline-automation mode (end-to-end auto-approve): skip the open.
## Phase 5: Approval Gate
This is a hard gate in interactive mode. Do not proceed until the designer approves.
Pipeline automation: When invoked as part of a full pipeline run (e.g., the user asked to run all stardust stages end-to-end with auto-approve), skip the interactive approval loop. Auto-approve and continue to the next stage. The user can always come back to refine later.
Present the brand board and ask: "Does this accurately represent your brand? What needs to change?"
Common feedback and how to handle it:
- "That color is wrong" → Update the hex in brand-profile.json, re-render the board
- "The voice feels too [formal/casual/etc]" → Update voice traits and examples, re-run to update .impeccable.md
- "Missing our secondary font" → Add to typography section, re-render
- "Photography direction is off" → Update photography rules, re-render
Iterate until the designer says the board looks right. Then:
- Confirm the brand profile is saved
- Confirm .impeccable.md exists
- Tell the designer: "Brand extraction complete. Run to see your next step. Briefings () can be written in parallel; once brand and briefings are ready, choose for a grey structural pass, or jump straight to ."
## Artifacts Written

| File | Description |
|---|---|
| `stardust/brand-profile.json` | Structured brand tokens (source of truth) |
| `stardust/brand-board.html` | Visual brand board (rendered view) |
|  | Design personality for quality gates |
| `stardust/assets/logo.<ext>` | Primary logo mark — SVG preferred, PNG fallback. Additional variants also under . |