Use this skill when you optimize VTEX IO backends (typically Node with `@vtex/api` / Koa-style middleware, or .NET services) for performance and resilience: caching, deduplicating work, parallel I/O, and efficient configuration loading—not only “add a cache.”
- Adding an in-memory LRU (per pod) for hot keys
- Adding VBase persistence for shared cache across pods, optionally with stale-while-revalidate (return stale, refresh in background)
- Loading AppSettings (or similar) once at startup or on a TTL refresh vs every request
- Parallelizing independent client calls (`Promise.all`) instead of serial waterfalls
- Passing `ctx.clients` (e.g. `vbase`) into client helpers or resolvers so caches are testable and explicit
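A minimal sketch of the first two layers combined: an in-process LRU in front of VBase with stale-while-revalidate. `VBaseLike`, the 60-second freshness window, and the unbounded `Map` are illustrative stand-ins, not the real `@vtex/api` client types (a production LRU must be bounded):

```typescript
interface CacheEntry<T> { value: T; storedAt: number }
interface VBaseLike {
  getJSON<T>(bucket: string, key: string): Promise<T | null>
  saveJSON<T>(bucket: string, key: string, data: T): Promise<void>
}

const FRESH_MS = 60_000
const lru = new Map<string, CacheEntry<unknown>>() // per-pod; bound the size in real code

async function readThrough<T>(
  vbase: VBaseLike,
  bucket: string,
  key: string,
  fetchOrigin: () => Promise<T>
): Promise<T> {
  // Layer 1: in-process LRU
  const local = lru.get(key) as CacheEntry<T> | undefined
  if (local && Date.now() - local.storedAt < FRESH_MS) return local.value

  // Layer 2: VBase (shared across pods)
  const stored = await vbase.getJSON<CacheEntry<T>>(bucket, key)
  if (stored) {
    lru.set(key, stored)
    if (Date.now() - stored.storedAt >= FRESH_MS) {
      // Stale-while-revalidate: return immediately, refresh in the background
      void fetchOrigin()
        .then((value) => {
          const entry = { value, storedAt: Date.now() }
          lru.set(key, entry)
          return vbase.saveJSON(bucket, key, entry)
        })
        .catch(() => undefined) // refresh failures must not surface to the request
    }
    return stored.value
  }

  // Miss everywhere: blocking fetch, then populate both layers
  const value = await fetchOrigin()
  const entry = { value, storedAt: Date.now() }
  lru.set(key, entry)
  await vbase.saveJSON(bucket, key, entry)
  return value
}
```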
Do not use this skill for:

- Choosing `/_v/private` vs public paths or `Cache-Control` at the edge → vtex-io-service-paths-and-cdn
- GraphQL `@cacheControl` field semantics only → vtex-io-graphql-api
## Decision rules
- Layer 1 — LRU (in-process) — Fastest; lost on cold start and not shared across replicas. Use bounded size + TTL for hot keys (organization, cost center, small config slices).
- Layer 2 — VBase — Shared across pods; platform data is partitioned by account/workspace like other IO resources. Pair with hash or `trySaveIfhashMatches` when the client supports concurrency-safe updates (see Clients).
- Stale-while-revalidate — On a VBase hit with expired freshness, return stale immediately and revalidate asynchronously (fetch origin → write VBase + LRU). Reduces tail latency vs blocking on origin every time.
- TTL-only — Simpler: cache until TTL expires, then do a blocking fetch. Prefer when staleness is unacceptable or the origin is cheap.
- AppSettings — If values are account-wide and rarely change, load once (or refresh on an interval) and hold in module memory; if workspace-dependent or they must reflect admin changes quickly, use a per-request read or a short-TTL cache. Never cache secrets in logs or global state without guardrails.
- Context — Use `ctx.state` for per-request deduplication (e.g. “already loaded org for this request”). Use a global module cache only for immutable or TTL-refreshed app data; account and workspace live on `ctx.vtex`—always include them in in-memory cache keys when the same pod serves multiple tenants.
- Parallel requests — When resolvers need independent upstream calls, run them in parallel; combine only when outputs depend on each other.
- Timeouts on every outbound call — Every `ctx.clients` call and external HTTP request must have an explicit timeout. Use `@vtex/api` client options (`timeout`, `retries`, `exponentialTimeoutCoefficient`) to tune per-client behavior. Unbounded waits are the top cause of cascading failures in distributed systems.
- Graceful degradation — When an upstream is slow or down, fail open where the business allows (return cached/default data, skip optional enrichment) rather than blocking the response. Consider circuit-breaker patterns for chronically failing dependencies.
- Never cache real-time transactional state — Order forms, cart simulations, payment responses, full session state, and commitment pricing must never be served from cache. They reflect live, mutable state that changes on every interaction. Caching these creates stale prices, phantom inventory, or duplicate charges.
- Resolver chain deduplication — When a resolver chain calls the same client method multiple times (e.g. `getCostCenter` in the resolver and again inside a helper), deduplicate: call once, pass the result through, or stash it in `ctx.state`. Serial waterfalls of 7+ calls that could be 3 parallel + 1 sequential are the top performance sink.
- Phased `Promise.all` — Group independent calls into parallel phases instead of running six calls sequentially when only two of them depend on each other.
- Batch mutations — When setting multiple values (e.g. `setManualPrice` per cart item), use `Promise.all` instead of a sequential loop. Each `await` in a loop adds a full round-trip.
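The phased shape can be sketched as follows; the loader names (`getOrganization`, `simulateForOrg`, etc.) are hypothetical placeholders passed in as functions, not real client methods:

```typescript
// Phase 1 runs everything with no inter-dependencies in parallel;
// Phase 2 runs only the call that needs a Phase 1 result.
async function loadCheckoutData(
  getOrganization: () => Promise<string>,
  getCostCenter: () => Promise<string>,
  getAppSettings: () => Promise<Record<string, unknown>>,
  simulateForOrg: (org: string) => Promise<number>
) {
  // Phase 1: three independent upstream calls, one round-trip of latency
  const [org, costCenter, settings] = await Promise.all([
    getOrganization(),
    getCostCenter(),
    getAppSettings(),
  ])

  // Phase 2: depends on `org`, so it must wait for Phase 1
  const total = await simulateForOrg(org)

  return { org, costCenter, settings, total }
}
```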
## VBase deep patterns
- Per-entity keys, not blob keys — Cache individual entities (e.g. `sku:{region}:{skuId}`) instead of composite blobs (e.g. `allSkus:{sortedCartSkuIds}`). Per-entity keys dramatically increase cache hit rates when items are added or removed.
- Minimal DTOs — Store only the fields the consumer needs (e.g. `{ skuId, mappedId, isSpecialItem }` at ~50 bytes) instead of the full API response (~10-50 KB per product). This reduces VBase storage, serialization time, and transfer size.
- Sibling prewarming — When a search API returns a product with 4 SKU variants, cache all 4 individual SKUs even if only 1 was requested. The next request for a sibling is a VBase hit instead of an API call.
- Pass `vbase` as a parameter — Clients don't have direct access to other clients. Pass `ctx.clients.vbase` as a parameter to client methods or utilities that need it. This keeps code testable and explicit about dependencies.
- VBase state machines — For long-running operations (scans, imports, batch processing), use VBase as a state store with `current-operation.json` (lock + progress), heartbeat extensions, checkpoint/resume, and TTL-based lock expiry to prevent zombie locks.
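Per-entity keys, minimal DTOs, and sibling prewarming combine naturally. A sketch, assuming a hypothetical search-payload shape (`SearchProduct`) and a minimal VBase-like interface rather than the real client types:

```typescript
// Only the fields downstream consumers actually read — not the full API response
interface SkuDto { skuId: string; mappedId: string; isSpecialItem: boolean }

interface SearchSku { skuId: string; mappedId: string; isSpecialItem: boolean; [extra: string]: unknown }
interface SearchProduct { skus: SearchSku[] }

interface VBaseLike {
  saveJSON<T>(bucket: string, key: string, data: T): Promise<void>
}

// Per-entity key: one entry per SKU, so cart changes don't invalidate a blob
const skuKey = (region: string, skuId: string) => `sku:${region}:${skuId}`

// Cache every SKU variant the search returned, not just the one requested:
// the next request for a sibling becomes a VBase hit instead of an API call.
async function prewarmSiblings(vbase: VBaseLike, region: string, product: SearchProduct) {
  await Promise.all(
    product.skus.map((sku) => {
      // Project down to the minimal DTO before storing
      const dto: SkuDto = {
        skuId: sku.skuId,
        mappedId: sku.mappedId,
        isSpecialItem: sku.isSpecialItem,
      }
      return vbase.saveJSON('skus', skuKey(region, sku.skuId), dto)
    })
  )
}
```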
## `service.json` tuning
- `timeout` — Maximum seconds before the platform kills a request. Set it based on the longest expected operation; do not leave it at the default if your resolver calls slow upstreams.
- `memory` — MB per worker. Increase it if LRU caches or large payloads cause OOM; monitor actual usage before over-provisioning.
- `workers` — Concurrent request handlers per replica. More workers handle more concurrent requests, but each shares the memory budget and the in-process LRU.
- `minReplicas` / `maxReplicas` — Control horizontal scaling. For payment-critical or high-throughput apps, set `minReplicas >= 2` so cold starts don't hit production traffic.
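A `service.json` fragment reflecting these knobs; the values are illustrative starting points, not recommendations—tune them against your own load and memory profile:

```json
{
  "memory": 256,
  "timeout": 10,
  "workers": 1,
  "minReplicas": 2,
  "maxReplicas": 4
}
```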
## Tenancy and in-memory caches
IO runs per app version per shard, with pods shared across accounts: every request is still resolved in an `{account, workspace}` context. VBase, app buckets, and related platform stores partition data by account/workspace. An in-process LRU or module-level `Map` does not—you must key explicitly with `ctx.vtex.account` and `ctx.vtex.workspace` (plus the entity id) so that two consecutive requests for different accounts on the same pod cannot read each other's entries.
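A sketch of a module-level cache that stays safe on a shared pod. The bounded-`Map` eviction is a crude LRU stand-in, and `Vtex` mirrors only the two `ctx.vtex` fields used here:

```typescript
interface Vtex { account: string; workspace: string }

const MAX_ENTRIES = 500
const cache = new Map<string, unknown>() // insertion-ordered; used as a crude LRU

// Every key embeds account and workspace, so tenants on the same pod cannot collide
function tenantKey(vtex: Vtex, entityId: string): string {
  return `${vtex.account}:${vtex.workspace}:${entityId}`
}

function cacheSet(vtex: Vtex, entityId: string, value: unknown): void {
  const key = tenantKey(vtex, entityId)
  cache.delete(key) // re-insert to refresh recency
  cache.set(key, value)
  if (cache.size > MAX_ENTRIES) {
    // Evict the oldest entry (first in insertion order)
    const oldest = cache.keys().next().value as string
    cache.delete(oldest)
  }
}

function cacheGet(vtex: Vtex, entityId: string): unknown {
  return cache.get(tenantKey(vtex, entityId))
}
```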
## Hard constraints
### Constraint: Do not store sensitive or tenant-specific data in module-level caches without tenant keys

Global or module-level maps must not store PII, tokens, or authorization-sensitive blobs keyed only by user id or email without `account` and `workspace` (and any other dimension needed for isolation).
Why this matters — Pods are multi-tenant: the same process may serve many accounts in sequence. VBase and similar APIs are scoped to the current account/workspace, but an in-memory `Map` is your responsibility. Missing `account`/`workspace` in the key risks cross-tenant reads from a warm cache.
Detection — A module-scope `Map` keyed only by `userId` or `email`; or cache keys that omit `ctx.vtex.account`/`ctx.vtex.workspace` when the value is tenant-specific.
Correct — Build keys from `ctx.vtex.account`, `ctx.vtex.workspace`, and the entity id; never store app tokens in VBase/LRU as plain cache values; prefer `ctx.clients` and platform auth.
```typescript
// Pseudocode: in-memory key must mirror tenant scope (same pod, many accounts)
function cacheKey(ctx: Context, subjectId: string) {
  return `${ctx.vtex.account}:${ctx.vtex.workspace}:${subjectId}`
}
```
Wrong — `globalUserCache.set(email, profile)` keyed only by email, with no `account`/`workspace` segment—unsafe on shared pods even though a later VBase read would be account-scoped, because this map is not partitioned by the platform.
### Constraint: Do not use fire-and-forget VBase writes in financial or idempotency-critical paths
When VBase serves as an idempotency store (e.g. payment connectors storing transaction state) or a data-integrity store, writes must be awaited. Fire-and-forget writes risk silent failure: a successful upstream operation (e.g. a charge) whose VBase record is lost causes a duplicate on the next retry.
Why this matters — VTEX Gateway retries payment calls with the same `paymentId`. If the VBase write fails silently after a successful authorization, the connector cannot find the previous result and sends another payment request—causing a duplicate charge.
Detection — A VBase `saveJSON` or `saveOrUpdate` call without `await` in a payment, settlement, refund, or any flow where the stored value is the only record preventing re-execution.
Correct — Await the write; accept the latency cost for correctness.
```typescript
// Critical path: await guarantees the idempotency record is persisted
await ctx.clients.vbase.saveJSON<Transaction>('transactions', paymentId, transactionData)
return Authorizations.approve(authorization, { ... })
```
Wrong — Fire-and-forget in a payment flow.
```typescript
// No await — if this fails silently, the next retry creates a duplicate charge
ctx.clients.vbase.saveJSON('transactions', paymentId, transactionData)
return Authorizations.approve(authorization, { ... })
```
### Constraint: Do not cache real-time transactional data
Order forms, cart simulation responses, payment statuses, full session state, and commitment prices must never be served from LRU, VBase, or any cache layer. They reflect live mutable state.
Why this matters — Serving a cached order form shows phantom items, stale prices, or wrong quantities. Caching payment responses could return a previous transaction's status for a different payment. Caching cart simulations returns stale availability and pricing.
Detection — LRU or VBase keys like `orderForm:{id}`, `cartSim:{hash}`, `paymentResponse:{id}`, or `session:{token}` used for read-through caching; or a resolver that caches the result of `checkout.orderForm()`.
Correct — Always call the live API for transactional data; cache reference data (org, cost center, config, seller lists) around it.
Wrong — Caching the order form or cart simulation.
```typescript
const cacheKey = `orderForm:${orderFormId}`
const cached = orderFormCache.get(cacheKey)
if (cached) return cached // Stale cart state served to the user
```
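By contrast, the safe split keeps the transactional read live while caching only the reference data around it. A sketch; `getOrderForm` and `loadCostCenterCached` are hypothetical loaders standing in for a live checkout call and an LRU/VBase-backed reference read:

```typescript
interface OrderForm { items: unknown[] }

async function enrichOrderForm(
  getOrderForm: (id: string) => Promise<OrderForm>,       // always hits the live API
  loadCostCenterCached: (id: string) => Promise<string>,  // reference data: cache freely
  orderFormId: string,
  costCenterId: string
) {
  // Both calls are independent, so they run in parallel —
  // but only the reference-data loader is backed by a cache.
  const [orderForm, costCenter] = await Promise.all([
    getOrderForm(orderFormId),
    loadCostCenterCached(costCenterId),
  ])
  return { ...orderForm, costCenter }
}
```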
### Constraint: Do not block the purchase path on slow or unbounded cache refresh
Stale-while-revalidate or origin calls must not add unbounded latency to checkout-critical middleware if the platform SLA requires a fast response.
Why this matters — Blocking checkout on optional enrichment breaks conversion and reliability.
Detection — A cart or payment resolver awaits VBase refresh or external API before returning; no timeout or fallback.
Correct — Return stale or default; enqueue refresh; fail open where business rules allow.
Wrong — `await fetchHeavyPartner()` in the hot path with no timeout.
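One way to bound the wait and fail open, sketched as a generic helper; the timeout value and the fallback are per-call business decisions, and a real implementation would also log or meter the timeout:

```typescript
// Race the real work against a timer that resolves to a safe fallback.
// The caller gets an answer within `ms` no matter what the upstream does.
async function withTimeout<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms)
  })
  try {
    // Upstream errors also fail open to the fallback
    return await Promise.race([work.catch(() => fallback), timeout])
  } finally {
    if (timer) clearTimeout(timer)
  }
}
```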
## Preferred pattern
1. Classify data: reference data (org, cost center, config, seller lists → cacheable) vs transactional data (order form, cart sim, payment → never cache) vs user-private (never in shared cache without encryption and keying).
2. Choose LRU only, VBase only, or LRU → VBase → origin (two-layer) for read-heavy reference data.
3. Deduplicate within a request: set `ctx.state` flags when a resolver chain might call the same loader twice.
4. Parallelize independent `ctx.clients` calls in phased `Promise.all` groups.
5. Use per-entity VBase keys with minimal DTOs for high-cardinality data (SKUs, users, org records).
6. Document TTLs and invalidation (who writes, when refresh runs).
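Per-request deduplication via `ctx.state` can be sketched as follows. `State` and `Ctx` mirror only the fields used here, and `orgPromise` is a hypothetical name; storing the in-flight promise (rather than a done-flag) also dedupes concurrent callers within the same request:

```typescript
interface State { orgPromise?: Promise<string> }
interface Ctx { state: State }

// First caller stores the in-flight promise synchronously; every later caller
// in the same resolver chain awaits the same promise instead of refetching.
function loadOrgOnce(ctx: Ctx, fetchOrg: () => Promise<string>): Promise<string> {
  if (!ctx.state.orgPromise) {
    ctx.state.orgPromise = fetchOrg()
  }
  return ctx.state.orgPromise
}
```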