MongoDB Atlas Streams
Build, operate, and debug Atlas Stream Processing (ASP) pipelines using four MCP tools from the MongoDB MCP Server.
Prerequisites
This skill requires the MongoDB MCP Server connected with:
- Atlas API credentials (a service account client ID and client secret)
All operations require an Atlas project ID. If unknown, list the Atlas projects first (atlas-list-projects) to find your project ID.
If MCP tools are unavailable
If the MongoDB MCP Server is not connected or the streams tools are missing, see references/mcp-troubleshooting.md for diagnostic steps and fallback options.
Tool Selection Matrix
atlas-streams-discover — ALL read operations
| Action | Use when |
|---|---|
| list workspaces | See all workspaces in a project |
| inspect workspace | Review workspace config, state, region |
| list connections | See all connections in a workspace |
| inspect connection | Check connection state, config, health |
| list processors | See all processors in a workspace |
| inspect processor | Check processor state, pipeline, config |
| diagnose processor | Full health report: state, stats, errors |
| inspect networking | PrivateLink and VPC peering details. Optionally request Atlas account details for PrivateLink setup |
Pagination (all list actions): page size 1-100 (default 20), page number (default 1).
Response format: condensed (default for list actions) or detailed (default for inspect/diagnose).
atlas-streams-build — ALL create operations
| Resource | Key parameters |
|---|---|
| workspace | name, cloud provider, region, tier (default SP10) |
| connection | name, type (Kafka/Cluster/S3/Https/Kinesis/Lambda/SchemaRegistry/Sample), type-specific config |
| processor | name, pipeline (must start with `$source` and end with a sink), DLQ |
| privatelink | provider details (project-level, not tied to a specific workspace) |
Field mapping — only fill fields for the selected resource type:
- resource = "workspace": Fill the workspace fields (name, cloud provider, region, tier). Leave empty: all connection and processor fields.
- resource = "connection": Fill the connection fields (name, type, type-specific config). Leave empty: all workspace and processor fields. (See references/connection-configs.md for type-specific schemas.)
- resource = "processor": Fill the processor fields (name, pipeline, DLQ recommended, options optional). Leave empty: all workspace and connection fields. (See references/pipeline-patterns.md for pipeline examples.)
- resource = "privatelink": Fill the PrivateLink fields only. Note: PrivateLink is project-level, not workspace-level — the workspace name is not required; omit it. Leave empty: all connection and processor fields.
atlas-streams-manage — ALL update/state operations
| Action | Notes |
|---|---|
| start-processor | Begins billing. Optional checkpoint/start-time overrides |
| stop-processor | Stops billing. Retains state 45 days |
| modify-processor | Processor must be stopped first. Change pipeline, DLQ, or name |
| modify workspace | Change tier or region |
| modify connection | Update config (networking is immutable — must delete and recreate) |
| peering create / delete | VPC peering management |
Field mapping — always fill the project ID and workspace name, then by action:
- start-processor → the processor name. Optional: resumeFromCheckpoint and a start timestamp (ISO 8601) to resume from a specific point
- stop-processor → the processor name
- modify-processor → the processor name, plus at least one of: new pipeline, DLQ config, new name
- modify workspace → new tier or region
- modify connection → the connection name and updated config. Exception: networking config (e.g., PrivateLink) cannot be modified after creation — delete and recreate.
- peering create → the peering network parameters
- peering delete → the peering resource ID
State pre-checks:
- start-processor → errors if the processor is already STARTED
- stop-processor → no-ops if already STOPPED or CREATED (not an error)
- modify-processor → errors if the processor is STARTED (must stop first)
Processor states: CREATED → STARTED (via start) → STOPPED (via stop). A processor can also enter FAILED on runtime errors. Modify requires the STOPPED or CREATED state.
Teardown safety checks:
- Processor deletion → auto-stops before deleting (no need to stop manually first)
- Connection deletion → blocks if any running processor references it. Stop/delete referencing processors first.
- Workspace deletion → see the detailed teardown workflow below.
atlas-streams-teardown — ALL delete operations
| Resource | Safety behavior |
|---|---|
| processor | Auto-stops before deleting |
| connection | Blocks if referenced by a running processor |
| workspace | Cascading delete of all connections and processors |
| privatelink / VPC peering | Remove networking resources |
Field mapping — always fill the project ID and workspace name, then:
- workspace → no additional fields (the workspace name above identifies it)
- connection or processor → the resource's name
- privatelink or VPC peering → the resource ID. These are project-level resources, not tied to a specific workspace.
Before deleting a workspace, inspect it first:
- atlas-streams-discover → inspect the workspace — get connection/processor counts
- Present to user: "Workspace X contains N connections and M processors. Deleting permanently removes all. Proceed?"
- Wait for confirmation before calling the delete
CRITICAL: Validate Before Creating Processors
You MUST validate stage field names against the official documentation before composing any processor pipeline. This is not optional.
- Field validation: Query with the sink/source type, e.g. "Atlas Stream Processing $emit S3 fields" or "Atlas Stream Processing Kafka $source configuration". This catches errors like using the wrong bucket field name in the S3 `$emit` stage.
- Pattern examples: Query with dataSources: [{"name": "devcenter"}] for working pipelines, e.g. "Atlas Stream Processing tumbling window example".
Also fetch examples from the official ASP examples repo when building non-trivial processors: https://github.com/mongodb/ASP_example (quickstarts, example processors, Terraform examples). Start with example_processors/README.md for the full pattern catalog.
Key quickstarts:
- Inline with `sp.process` (zero infra, ephemeral)
- 01_changestream_basic.json — change stream → tumbling window → `$merge` to Atlas
- Kafka source → tumbling window rollup → `$merge` to Atlas
- Chained processors: rollup → archive to a separate collection
- Real-time Kafka topic monitoring (sinkless, like `sp.process`)
Pipeline Rules & Warnings
Invalid constructs — these are NOT valid in streaming pipelines:
- Server-clock time operators and variables — NOT available in stream processing. NEVER use these. Use the document's own timestamp field or stream metadata for event time instead.
- HTTPS connections as `$source` — HTTPS is for enrichment or sink only, NOT a data source
- Kafka `$source` without `topic` — the topic field is required
- Pipelines without a sink — a terminal stage (`$merge`, `$emit`, `$https`, or async `$externalFunction`) is required for deployed processors (sinkless only works via `sp.process`)
- Lambda as a `$merge` target — Lambda uses `$externalFunction` (mid-pipeline enrichment or async sink), not `$merge`
- `$validate` with validationAction: "error" — crashes the processor; use validationAction: "dlq" instead
Required fields by stage:
- `$source` (change stream): include fullDocument: "updateLookup" to get the full document content
- `$source` (Kinesis): uses Kinesis-specific field names — do NOT reuse Kafka's topic field
- `$emit` (Kinesis): has its own required stream field — validate the exact name against the docs
- `$emit` (S3): the bucket field name is easy to get wrong — validate the exact name against the docs
- `$tumblingWindow`: must include an interval (size and unit) and an inner pipeline
- `$hoppingWindow`: must include an interval, a hopSize, and an inner pipeline
- `$merge`: must include an `into` object naming the connection, database, and collection
- `$https` / `$externalFunction`: include a parallelism setting for concurrent I/O
- AWS connections (S3, Kinesis, Lambda): the IAM role ARN must be registered via Atlas Cloud Provider Access first. Always confirm this with the user. See references/connection-configs.md for details.
See references/pipeline-patterns.md for stage field examples with JSON syntax.
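As a sketch of how these stage requirements compose — the connection name ("myCluster"), namespace ("sales.orders"), and amount field are assumptions for illustration, and stage fields should still be validated against the docs:

```javascript
// Hypothetical change-stream -> tumbling-window -> $merge pipeline.
const pipeline = [
  {
    $source: {
      connectionName: "myCluster",               // assumed Atlas Cluster connection
      db: "sales",
      coll: "orders",
      config: { fullDocument: "updateLookup" }   // full document content on updates
    }
  },
  {
    $tumblingWindow: {
      interval: { size: 1, unit: "minute" },     // required: interval (size + unit)
      pipeline: [                                // required: inner pipeline
        { $group: { _id: null, total: { $sum: "$fullDocument.amount" } } }
      ]
    }
  },
  {
    // Terminal sink: $merge names the connection, database, and collection.
    $merge: {
      into: { connectionName: "myCluster", db: "sales", coll: "order_totals" }
    }
  }
];
```

Note the single terminal sink and the DLQ-friendly shape: everything between `$source` and `$merge` operates per-window or per-document.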
SchemaRegistry connection: the connection type string must match the expected form exactly. Schema type values are case-sensitive (use the lowercase form, e.g. avro, not AVRO). See references/connection-configs.md for required fields and auth types.
MCP Tool Behaviors
Elicitation: When creating connections, the build tool auto-collects missing sensitive fields (passwords, bootstrap servers) via MCP elicitation. Do NOT ask the user for these — let the tool collect them.
Auto-normalization:
- Array values given for comma-separated string fields → auto-converted to a comma-separated string
- A single value given where an array is expected → auto-wrapped in an array
- The Cluster connection's database role → defaults to {role: "readWriteAnyDatabase", type: "BUILT_IN"}
Workspace creation: the sample stream connection (`sample_stream_solar`) is created automatically.
Region naming: The region field uses Atlas-specific names that differ by cloud provider — they are NOT the raw cloud region strings. Each cloud region (e.g., AWS us-east-1, us-east-2, eu-west-1; GCP us-central1, europe-west1; Azure eastus, westeurope) maps to a distinct streams value, and using the wrong format returns a cryptic error. See references/connection-configs.md for the full region mapping table. If unsure, inspect an existing workspace with atlas-streams-discover and check its region field.
Connection Capabilities — Source/Sink Reference
Know what each connection type can do before creating pipelines:
| Connection Type | As Source ($source) | As Sink ($merge / $emit) | Mid-Pipeline | Notes |
|---|---|---|---|---|
| Cluster | ✅ Change streams | ✅ $merge to collections | ✅ $lookup | Change streams monitor insert/update/delete/replace operations |
| Kafka | ✅ Topic consumer | ✅ $emit to topics | ❌ | Source MUST include the `topic` field |
| Sample Stream | ✅ Sample data | ❌ Not valid | ❌ | Testing/demo only |
| S3 | ❌ Not valid | ✅ $emit to buckets | ❌ | Sink only — use the S3-specific `$emit` fields. Supports AWS PrivateLink. |
| Https | ❌ Not valid | ✅ $https as sink | ✅ $https enrichment | Can be used mid-pipeline for enrichment OR as final sink stage |
| AWSLambda | ❌ Not valid | ✅ $externalFunction (async only) | ✅ $externalFunction (sync or async) | Sink: async execution required. Mid-pipeline: sync or async |
| AWS Kinesis | ✅ Stream consumer | ✅ $emit to streams | ❌ | Similar to Kafka pattern |
| SchemaRegistry | ❌ Not valid | ❌ Not valid | ✅ Schema resolution | Metadata only - used by Kafka connections for Avro schemas |
Common connection usage mistakes to avoid:
- ❌ Using Cluster as a sink with `$emit` → Must use `$merge` for the sink stage
- ❌ Forgetting change streams exist → Atlas Cluster is a powerful source, not just a sink
- ❌ Using `$merge` with Kafka → Use `$emit` for Kafka sinks
See references/connection-configs.md for detailed connection configuration schemas by type.
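A minimal sketch of the sink-stage distinction above — connection, topic, and namespace names are assumptions:

```javascript
// Kafka sinks use $emit with a topic; Atlas Cluster sinks use $merge with
// an `into` target. "myKafka" and "myCluster" are illustrative names.
const kafkaSink = {
  $emit: { connectionName: "myKafka", topic: "enriched-orders" }
};
const clusterSink = {
  $merge: { into: { connectionName: "myCluster", db: "app", coll: "orders_out" } }
};
```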
Core Workflows
Setup from scratch
- atlas-streams-discover → list workspaces (check existing)
- atlas-streams-build → create the workspace (region near data, SP10 for dev)
- atlas-streams-build → create connections (for each source/sink/enrichment)
- Validate connections: atlas-streams-discover → list + inspect each connection — verify names match targets, present a summary to the user
- Search the docs to validate field names. Fetch relevant examples from https://github.com/mongodb/ASP_example
- atlas-streams-build → create the processor (with DLQ configured)
- atlas-streams-manage → start-processor (warn about billing)
Workflow Patterns
Incremental pipeline development (recommended):
See references/development-workflow.md for the full 5-phase lifecycle.
- Start with a basic `$source` → sink pipeline (validate connectivity)
- Add `$match` stages (validate filtering)
- Add `$project` / `$addFields` transforms (validate reshaping)
- Add windowing or enrichment (validate aggregation logic)
- Add error handling / DLQ configuration
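The phases above can be sketched as pipeline snapshots — all connection, topic, and field names here are assumptions:

```javascript
// Incremental development: each phase adds one validated concern.
const source = { $source: { connectionName: "myKafka", topic: "events" } };
const sink   = { $merge: { into: { connectionName: "myCluster", db: "app", coll: "events_out" } } };

const phase1 = [source, sink];                                 // 1. validate connectivity
const phase2 = [source, { $match: { level: "error" } }, sink]; // 2. validate filtering
const phase3 = [                                               // 3. validate reshaping
  source,
  { $match: { level: "error" } },
  { $project: { service: 1, level: 1, ts: 1 } },
  sink
];
// 4-5: add windowing/enrichment, then DLQ config on the processor itself.
```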
Modify a processor pipeline:
- atlas-streams-manage → stop-processor — the processor MUST be stopped first
- atlas-streams-manage → action: "modify-processor" — provide the new pipeline
- atlas-streams-manage → action: "start-processor" — restart
Debug a failing processor:
- atlas-streams-discover → diagnose the processor — one-shot health report. Always call this first.
- Commit to a specific root cause. Match symptoms to diagnostic patterns:
- Error 419 + "no partitions found" → Kafka topic doesn't exist or is misspelled
- State: FAILED + multiple restarts → connection-level error (bypasses DLQ), check connection config
- State: STARTED + zero output + windowed pipeline → likely idle Kafka partitions blocking window closure; add an idle-partition timeout to the Kafka `$source` (e.g., {"size": 30, "unit": "second"})
- State: STARTED + zero output + non-windowed → check if source has data; inspect Kafka offset lag
- High memoryUsageBytes approaching tier limit → OOM risk; recommend higher tier
- DLQ count increasing → per-document errors; query the DLQ collection directly with MongoDB find/aggregate
See references/output-diagnostics.md for the full pattern table.
- Classify processor type before interpreting output volume (alert vs transformation vs filter).
- Provide concrete, ordered fix steps specific to the diagnosed root cause. Do NOT present a list of hypothetical scenarios.
- If detailed logs are needed, direct the user to the Atlas UI: Atlas → Stream Processing → Workspace → Processor → Logs tab.
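For the DLQ case, a sketch of the inspection query — the database and collection names are assumptions; use whatever DLQ target the processor was configured with:

```javascript
// Shape of a "most recent DLQ failures" query. ObjectId _id encodes insert
// time, so sorting on it descending returns the newest errors first.
const dlqQuery = {
  filter: {},          // optionally narrow to one processor's errors
  sort: { _id: -1 },   // newest first
  limit: 5
};
// mongosh equivalent (names assumed):
//   db.getSiblingDB("dlq_db").dlq_errors.find({}).sort({ _id: -1 }).limit(5)
```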
Chained processors (multi-sink pattern)
CRITICAL: A single pipeline can only have ONE terminal sink (`$merge` or `$emit`). When users request multiple output destinations (e.g., "write to Atlas AND emit to Kafka"), you MUST acknowledge the single-sink constraint and propose chained processors using an intermediate destination. See references/pipeline-patterns.md for the full pattern with examples.
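A sketch of the chained pattern — processor A writes rollups to an intermediate Atlas collection, and processor B change-streams that collection and emits to Kafka. All names are assumptions:

```javascript
// Processor A: Kafka -> windowed rollup -> single $merge sink (Atlas).
const processorA = [
  { $source: { connectionName: "myKafka", topic: "orders" } },
  { $tumblingWindow: {
      interval: { size: 1, unit: "minute" },
      pipeline: [ { $group: { _id: "$region", n: { $sum: 1 } } } ]
  } },
  { $merge: { into: { connectionName: "myCluster", db: "app", coll: "rollups" } } }
];
// Processor B: change stream on the intermediate collection -> $emit to Kafka.
const processorB = [
  { $source: { connectionName: "myCluster", db: "app", coll: "rollups" } },
  { $emit: { connectionName: "myKafka", topic: "rollups-out" } }
];
```

Each pipeline still has exactly one terminal sink; the second destination is reached by the second processor.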
Pre-Deploy & Post-Deploy Checklists
See references/development-workflow.md for the complete pre-deploy quality checklist (connection validation, pipeline validation) and post-deploy verification workflow.
Tier Sizing & Performance
See references/sizing-and-parallelism.md for tier specifications, parallelism formulas, complexity scoring, and performance optimization strategies.
Troubleshooting
See references/development-workflow.md for the complete troubleshooting table covering processor failures, API errors, configuration issues, and performance problems.
Billing & Cost
Atlas Stream Processing has no free tier. All deployed processors incur continuous charges while running.
- Charges are per-hour, calculated per-second, only while the processor is running
- stop-processor stops billing; stopped processors retain state for 45 days at no charge
- For prototyping without billing: use `sp.process()` in mongosh — it runs pipelines ephemerally without deploying a processor
- See references/sizing-and-parallelism.md for tier pricing and cost optimization strategies
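The ephemeral route can be sketched as follows, assuming the workspace's sample connection (`sample_stream_solar`) is present:

```javascript
// Prototype a pipeline without deploying (or paying for) a named processor.
// Sinkless pipelines are allowed only in this ephemeral mode.
const pipeline = [
  { $source: { connectionName: "sample_stream_solar" } },
  { $match: { group_id: 5 } }   // assumed field from the sample stream
];
// In mongosh, connected to the stream processing workspace:
//   sp.process(pipeline)   // streams results to the shell until interrupted
```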
Safety Rules
- Destructive deletions and processor starts require user confirmation — do not bypass
- BEFORE calling atlas-streams-teardown for a workspace, you MUST first inspect the workspace with atlas-streams-discover to count connections and processors, then present this information to the user before requesting confirmation
- BEFORE creating any processor, you MUST validate all connections per the "Pre-Deployment Validation" section in references/development-workflow.md
- Deleting a workspace removes ALL connections and processors permanently
- After stopping a processor, state is preserved for 45 days — then checkpoints are discarded
- Starting with resumeFromCheckpoint: false drops all window state — warn the user first
- Moving processors between workspaces is not supported (must recreate)
- Dry-run / simulation is not supported — explain what you would do and ask for confirmation
- Always warn users about billing before starting processors
- Store API authentication credentials in connection settings, never hardcode in processor pipelines
Reference Files
| File | Read when... |
|---|---|
| references/pipeline-patterns.md | Building or modifying processor pipelines |
| references/connection-configs.md | Creating connections (type-specific schemas) |
| references/development-workflow.md | Following lifecycle management or debugging decision trees |
| references/output-diagnostics.md | Processor output is unexpected (zero, low, or wrong) |
| references/sizing-and-parallelism.md | Choosing tiers, tuning parallelism, or optimizing cost |