Set up orq.ai observability for LLM applications. Use when setting up tracing, adding the AI Router proxy, integrating OpenTelemetry, auditing existing instrumentation, or enriching traces with metadata.
```shell
npx skill4agent add orq-ai/assistant-plugins setup-observability
```

Related skills: `analyze-trace-failures`, `build-evaluator`, `run-experiment`, `optimize-prompt`.

Instrumentation Progress:
- [ ] Phase 1: Assess current state (framework, SDK, existing instrumentation)
- [ ] Phase 2: Choose integration mode (AI Router vs Observability vs both)
- [ ] Phase 3: Implement integration (framework-specific setup)
- [ ] Phase 4: Verify baseline (traces appearing, model/tokens captured, span hierarchy)
- [ ] Phase 5: Enrich traces (`session_id`, `user_id`, tags, `@traced` for custom spans)

Endpoints: AI Router at `https://api.orq.ai/v2/router`; Observability (OpenTelemetry) at `https://api.orq.ai/v2/otel`.

`@traced` span types: `agent`, `llm`, `tool`, `retrieval`, `embedding`, `function`.

Frameworks with integrations include `openai`, `langchain`, `crewai`, `autogen`, `vercel/ai`, `llamaindex`, `pydantic_ai`, `smolagents`, `agno`, and `dspy`.

| Situation | Recommendation |
|---|---|
| No tracing yet, framework supports AI Router | AI Router — fastest path, traces are automatic |
| Already calling providers directly, don't want to change LLM calls | Observability only — add OTEL instrumentors |
| Want multi-provider routing AND framework-level span detail | Both — AI Router for routing, OTEL for orchestration spans |
| Framework only supports Observability (BeeAI, Haystack, LiteLLM, Google AI) | Observability only |
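To make the AI Router path concrete, here is a minimal stdlib-only sketch of an OpenAI-compatible chat request against the router. The `/chat/completions` path and the payload shape are assumptions based on OpenAI compatibility; verify the exact route in the orq.ai documentation before relying on it.

```python
import json
import os
import urllib.request

# Assumed route: the AI Router base URL plus an OpenAI-compatible path.
ROUTER_URL = "https://api.orq.ai/v2/router/chat/completions"

def build_router_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completion request; models are addressed as provider/model."""
    body = json.dumps({
        "model": model,  # e.g. "openai/gpt-4o" or "anthropic/claude-sonnet-4-5-20250929"
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        ROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('ORQ_API_KEY', 'your-key-here')}",
            "Content-Type": "application/json",
        },
    )

req = build_router_request("openai/gpt-4o", "Hello")
# Send with urllib.request.urlopen(req) once ORQ_API_KEY is set.
```

In practice most teams point an existing OpenAI-compatible SDK at the router base URL instead of hand-building requests; the sketch just shows the shape of what goes over the wire.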
```shell
export ORQ_API_KEY=your-key-here
```

Point the client at the AI Router base URL, `https://api.orq.ai/v2/router`. Models are addressed as `provider/model`, e.g. `openai/gpt-4o` or `anthropic/claude-sonnet-4-5-20250929`.

Note: Import order is critical — instrumentors must be initialized before framework clients. If the project uses an auto-formatter (isort, Ruff), add `# isort:skip_file` at the top of the file or `# noqa: E402` on late imports to prevent reordering.
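To see why initialization order matters, here is a toy stand-in (not a real OTEL instrumentor) that patches a module function the way instrumentors patch SDK clients; `math.sqrt` stands in for a framework client method:

```python
import functools
import math

def instrument(module, name):
    """Toy instrumentor: wrap module.<name> so calls are recorded.
    Real OTEL instrumentors patch SDK clients in the same spirit."""
    calls = []
    original = getattr(module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        calls.append(name)
        return original(*args, **kwargs)

    setattr(module, name, wrapper)
    return calls

calls = instrument(math, "sqrt")   # patch BEFORE anyone binds math.sqrt
result = math.sqrt(9.0)            # routed through the wrapper

# A module that ran `from math import sqrt` before instrument() would hold the
# unpatched function; that is why instrumentors must be initialized first.
```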
| Requirement | How to Check |
|---|---|
| Traces appearing | At least one trace visible in the Traces view |
| Model name captured | Open an LLM span → the model attribute shows the `provider/model` name |
| Token usage tracked | LLM span shows prompt and completion token counts |
| Span hierarchy | Trace View shows nested spans for multi-step operations |
| Correct span types | LLM calls show as `llm` spans, not generic `function` spans |
| No sensitive data | Spot-check span inputs/outputs for PII or secrets |
| If You See in Code... | Suggest Adding |
|---|---|
| Conversation history, chat endpoints, message arrays | `session_id` |
| User authentication, per-user flows | `user_id` |
| Multiple distinct features or endpoints | Tags (one per feature/endpoint) |
| Customer/tenant identifiers | Tags or metadata carrying the tenant ID |
| Feedback collection, ratings | Score annotations |
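To show where a custom span decorator fits into this enrichment, here is a toy stand-in for `@traced` — the real orq.ai decorator's signature and behavior may differ, and `SPAN_LOG` is just a placeholder for the exporter:

```python
import functools

SPAN_LOG = []  # stand-in for the span exporter

def traced(type="function", name=None):
    """Toy @traced stand-in: records one span per call.
    Assumption: the real decorator takes a span type and optional name."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            SPAN_LOG.append({"name": name or fn.__name__, "type": type})
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@traced(type="tool", name="classify-intent")
def classify(text):
    # A descriptive span name ("classify-intent") beats a generic one ("step1").
    return "billing" if "invoice" in text else "general"

label = classify("my invoice is overdue")
```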
Use `@traced` to create custom spans; span types include `agent`, `function`, `tool`, and `retrieval`.

| Anti-Pattern | What to Do Instead |
|---|---|
| Manual tracing when framework instrumentor exists | Use the framework instrumentor — it captures model, tokens, spans automatically |
| Instrumentor imported AFTER framework client creation | Initialize instrumentor BEFORE creating SDK clients |
| Generic trace names (`trace-1`, `default`, `step1`) | Use descriptive names: `chat-response`, `classify-intent` |
| Logging PII/secrets in trace inputs | Use `capture_input=False` / `capture_output=False` on sensitive spans |
| No `service.name` set | Always set `service.name` so traces are attributable to a service |
| Adding all enrichment before verifying baseline | Get traces working first, explore in UI, then add context |
| Flat spans (no hierarchy) for multi-step pipelines | Nest `@traced` spans so the trace mirrors the pipeline structure |
| Overloading traces with every possible attribute | Only add attributes the user will actually filter or analyze by |
| No graceful shutdown in Node.js | Call `provider.shutdown()` on exit so the `BatchSpanProcessor` flushes pending spans |
| Env vars loaded AFTER SDK import | Load `.env` (e.g. via `dotenv`) before importing the SDK; document required keys in `.env.example` |
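The graceful-shutdown anti-pattern is easiest to see with a toy model of a batching processor; `BatchBuffer` is an illustrative stand-in, not the real `BatchSpanProcessor`:

```python
import atexit

class BatchBuffer:
    """Toy model of a BatchSpanProcessor: spans wait in memory until flushed."""
    def __init__(self):
        self.pending = []
        self.exported = []

    def add(self, span):
        self.pending.append(span)

    def shutdown(self):
        # Export whatever is still buffered; a real processor does this
        # when the provider's shutdown is invoked.
        self.exported.extend(self.pending)
        self.pending.clear()

buffer = BatchBuffer()
atexit.register(buffer.shutdown)  # without a registered shutdown, buffered spans die with the process

buffer.add("chat-response")
buffer.shutdown()  # flushed explicitly here so the effect is visible
```

The real SDKs follow the same shape: register the provider's shutdown (or call it in a `finally`/exit handler) so the last batch of spans is exported before the process terminates.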