Already using OpenTelemetry or a framework with built-in tracing (Vercel AI SDK, LangChain, LlamaIndex)? You can send traces straight to Avido with no custom integration code required. Avido accepts standard OTLP JSON payloads and automatically maps OpenInference span attributes and OTel GenAI semantic conventions into its trace model.
## Quick setup
Point your OpenTelemetry exporter at Avido by setting three environment variables:
```shell
OTEL_EXPORTER_OTLP_PROTOCOL="http/json"
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.avidoai.com/v0/otel/traces"
OTEL_EXPORTER_OTLP_HEADERS="x-application-id=<application-id>,x-api-key=<api-key>"
```
| Variable | Description |
|---|---|
| `OTEL_EXPORTER_OTLP_PROTOCOL` | Must be `http/json`. Avido does not support gRPC or protobuf. |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `https://api.avidoai.com/v0/otel/traces` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Your Avido `x-application-id` and `x-api-key`, comma-separated. |
You can find your Application ID and API key in the Avido dashboard under Settings > API Keys.
**Disabling instrumentation:** If `OTEL_EXPORTER_OTLP_ENDPOINT` is not set, the Avido OpenTelemetry integration is automatically disabled. This lets you turn tracing on and off per environment without code changes.
## Sending traces
Once your exporter is configured, traces are sent automatically by your instrumentation library.
You can also send an OTLP payload manually:
```shell
curl -X POST https://api.avidoai.com/v0/otel/traces \
  -H "Content-Type: application/json" \
  -H "x-application-id: <application-id>" \
  -H "x-api-key: <api-key>" \
  -d '{
    "resourceSpans": [
      {
        "scopeSpans": [
          {
            "spans": [
              {
                "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
                "spanId": "00f067aa0ba902b7",
                "name": "llm.generate",
                "startTimeUnixNano": "1737052800000000000",
                "endTimeUnixNano": "1737052800500000000",
                "attributes": [
                  { "key": "openinference.span.kind", "value": { "stringValue": "LLM" } },
                  { "key": "llm.model_name", "value": { "stringValue": "gpt-4o-2024-08-06" } },
                  { "key": "input.value", "value": { "stringValue": "Tell me a joke." } },
                  { "key": "output.value", "value": { "stringValue": "Why did the chicken cross the road?" } },
                  { "key": "llm.token_count.prompt", "value": { "intValue": 12 } },
                  { "key": "llm.token_count.completion", "value": { "intValue": 18 } }
                ]
              }
            ]
          }
        ]
      }
    ]
  }'
```
A successful request returns the created trace and step IDs, the same response shape as the `/v0/ingest` endpoint.
## How spans are mapped

Avido reads the `openinference.span.kind` attribute on each span and converts it into the matching Avido step type:
| OpenInference span kind | Avido step type | What gets extracted |
|---|---|---|
| `LLM` | `llm` | Model, input/output messages, token usage, finish reason, cost |
| `TOOL` | `tool` | Tool name, parameters, output, tool call ID |
| `RETRIEVER` / `RERANKER` | `retriever` | Query, retrieved documents |
| `AGENT` | `group` | Agent name, agent ID, child spans nested underneath |
| `CHAIN` | `group` | Orchestration chain name, child spans nested underneath |
| `EMBEDDING`, `GUARDRAIL`, `EVALUATOR` | `log` | Name and metadata |

Spans without a recognised `openinference.span.kind` are stored as `log` steps so nothing is lost.
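The mapping above amounts to a lookup with a `log` fallback. A minimal sketch (hypothetical helper names; Avido's actual converter is internal):

```python
# Sketch of the span kind -> Avido step type mapping described above.
SPAN_KIND_TO_STEP_TYPE = {
    "LLM": "llm",
    "TOOL": "tool",
    "RETRIEVER": "retriever",
    "RERANKER": "retriever",
    "AGENT": "group",
    "CHAIN": "group",
    "EMBEDDING": "log",
    "GUARDRAIL": "log",
    "EVALUATOR": "log",
}

def step_type_for(attributes: dict) -> str:
    kind = attributes.get("openinference.span.kind")
    # Unrecognised (or missing) kinds become log steps so nothing is lost.
    return SPAN_KIND_TO_STEP_TYPE.get(kind, "log")
```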
**Agentic trace support:** `AGENT` and `CHAIN` spans are mapped to `group` steps, preserving the hierarchical structure of agentic workflows. This means multi-turn agent loops, tool-calling chains, and orchestration flows are displayed with their full parent-child relationships in the Avido trace viewer.
## Attribute reference

The tables below list every attribute Avido extracts from spans. Any attributes not listed here are preserved in the step's `metadata` field.
### LLM spans

| Attribute | Mapped to |
|---|---|
| `llm.model_name` | Model ID |
| `input.value` | Input |
| `output.value` | Output |
| `llm.input_messages` | Input (preferred over `input.value`) |
| `llm.output_messages` | Output (preferred over `output.value`) |
| `llm.token_count.prompt` | Prompt token count |
| `llm.token_count.completion` | Completion token count |
| `gen_ai.response.finish_reasons` | Finish reason on the LLM end step (`"stop"`, `"tool_calls"`, `"tool_use"`, `"max_tokens"`, etc.) |
### Tool spans

| Attribute | Mapped to |
|---|---|
| `tool.name` | Step name |
| `tool.parameters` | Tool input |
| `tool.output` | Tool output |
| `tool_call.function.name` | Step name (fallback) |
| `tool_call.function.arguments` | Tool input (fallback) |
| `gen_ai.tool.call.id` | Tool call ID (links the tool invocation to the LLM's `tool_use` request) |
### Group spans (Agent / Chain)

| Attribute | Mapped to |
|---|---|
| `gen_ai.agent.name` | Step name (falls back to span name) |
| `gen_ai.agent.id` | Group key (falls back to agent name, then span name) |
### Retriever spans

| Attribute | Mapped to |
|---|---|
| `retrieval.query` | Query |
| `retrieval.documents` | Result |
### Common attributes

| Attribute | Mapped to |
|---|---|
| `session.id` | Trace reference ID (links conversations in a session) |
| `gen_ai.conversation.id` | Trace reference ID (alternative to `session.id`) |
| `avido.test.id` | Test ID (connects the trace to an Avido test run) |
**Linking test runs:** `avido.test.id` is a custom Avido attribute; it is not part of the OpenInference spec. If you're running Avido tests via webhooks, set this span attribute to the `testId` from the webhook payload so the trace is automatically connected to the test run and evaluation results are linked.
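In OTLP JSON, that attribute is one more entry in the span's `attributes` array. A sketch (the helper name is illustrative):

```python
def avido_test_id_attribute(test_id: str) -> dict:
    """Build the OTLP JSON attribute entry linking a span to an Avido test run."""
    return {"key": "avido.test.id", "value": {"stringValue": test_id}}

# Append it to a span before exporting, e.g.:
# span["attributes"].append(avido_test_id_attribute(webhook_payload["testId"]))
```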
## Error and status tracking

Avido maps the OTel span status to structured error fields on each step:

| OTel span `status.code` | Avido step status |
|---|---|
| `0` (UNSET) | `success` |
| `1` (OK) | `success` |
| `2` (ERROR) | `error` |
When a span has `status.code = 2` (ERROR):

- The step's `status` is set to `error`
- The span's `status.message` is stored in the step's `error` field
- The numeric status code is preserved in `statusCode`
This means failed LLM calls, tool errors, and timeout spans are automatically flagged in
Avido’s trace viewer without any extra instrumentation on your side.
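This status mapping can be sketched as follows (a hypothetical helper assuming the OTLP JSON span shape, not Avido's actual code):

```python
def step_status_fields(span: dict) -> dict:
    status = span.get("status", {})
    code = status.get("code", 0)  # 0 = UNSET, 1 = OK, 2 = ERROR
    fields = {
        "status": "error" if code == 2 else "success",
        "statusCode": code,  # numeric code is preserved
    }
    if code == 2 and status.get("message"):
        fields["error"] = status["message"]  # status.message -> step error field
    return fields
```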
## Cost tracking

Avido automatically computes the cost of LLM steps when token counts are present.
### How it works

- When an LLM span includes `llm.token_count.prompt` and `llm.token_count.completion`, Avido looks up the model in the Model Pricing table (configurable in your dashboard).
- Cost is computed as `(promptTokens x inputCostPer1kTokens + completionTokens x outputCostPer1kTokens) / 1000`.
- The resulting `costAmount` is stored on the step.
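As a worked example, take the token counts from the curl payload above (12 prompt, 18 completion) with hypothetical pricing of $0.0025 / $0.010 per 1k tokens (illustrative values, not real rates):

```python
def llm_step_cost(prompt_tokens: int, completion_tokens: int,
                  input_cost_per_1k: float, output_cost_per_1k: float) -> float:
    # (promptTokens x inputCostPer1kTokens + completionTokens x outputCostPer1kTokens) / 1000
    return (prompt_tokens * input_cost_per_1k
            + completion_tokens * output_cost_per_1k) / 1000

# 12 * 0.0025 = 0.03, 18 * 0.010 = 0.18, (0.03 + 0.18) / 1000 = 0.00021
cost = llm_step_cost(12, 18, 0.0025, 0.010)
```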
### Trace-level aggregation
After all steps are ingested, Avido computes summary fields on the trace:
| Field | Description |
|---|---|
| `totalCost` | Sum of all step costs |
| `totalPromptTokens` | Sum of prompt tokens across all LLM steps |
| `totalCompletionTokens` | Sum of completion tokens across all LLM steps |
| `totalDurationMs` | End-to-end trace duration |
| `hasError` | `true` if any step has an error status |
| `stepCount` | Total number of steps in the trace |
These pre-computed fields power the trace list view and enable filtering by cost, duration,
and error state without scanning individual steps.
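A sketch of the aggregation over ingested steps (hypothetical step field names; duration is omitted since it comes from span timestamps):

```python
def aggregate_trace(steps: list[dict]) -> dict:
    """Compute trace-level summary fields from a list of steps (illustrative)."""
    return {
        "totalCost": sum(s.get("costAmount", 0) for s in steps),
        "totalPromptTokens": sum(s.get("promptTokens", 0) for s in steps),
        "totalCompletionTokens": sum(s.get("completionTokens", 0) for s in steps),
        "hasError": any(s.get("status") == "error" for s in steps),
        "stepCount": len(steps),
    }
```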
Set up Model Pricing in the Avido dashboard or via the API to enable automatic cost
computation. If no pricing entry exists for a model, the step is ingested without a cost value.
## Trace structure

Each OTLP batch creates one trace in Avido:

- If a root span (no `parentSpanId`) is present, it becomes the trace container. Its `session.id` or `gen_ai.conversation.id` attribute is used as the trace's `referenceId`.
- If no root span exists, the first span in the batch is used.
- All spans become steps nested under the trace, preserving parent-child relationships via `parentSpanId`.
- Timing fields (`startTimeUnixNano`, `endTimeUnixNano`) are stored as step timestamps with millisecond duration.
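Root-span selection and the nanosecond-to-millisecond conversion can be sketched as (hypothetical helpers, assuming the OTLP JSON span shape):

```python
def trace_container(spans: list[dict]) -> dict:
    # Prefer the root span (no parentSpanId); otherwise fall back to the first span.
    roots = [s for s in spans if not s.get("parentSpanId")]
    return roots[0] if roots else spans[0]

def duration_ms(span: dict) -> float:
    # OTLP timestamps are strings of Unix nanoseconds; 1 ms = 1,000,000 ns.
    return (int(span["endTimeUnixNano"]) - int(span["startTimeUnixNano"])) / 1_000_000
```

For the example span in the curl payload above, `duration_ms` gives 500.0 ms.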
## Understanding testId and traceId

Two IDs can appear on OTel traces. Here's what each one does:

| Field | How to set it | When to include |
|---|---|---|
| `avido.test.id` | Set as a span attribute | Only when the trace originates from an Avido test. Pass the `testId` from the webhook payload. Do not set it for traces that come from real user interactions. |
| `traceId` (OTLP) | Set on the span | The OTLP `traceId` is converted to a UUID and used to group all spans into a single Avido trace. Keep the same `traceId` across all spans in the batch. |

`avido.test.id` links the trace to an Avido test run for evaluation. The OTLP `traceId` groups spans together. They serve different purposes; do not confuse them.
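The exact UUID conversion is not specified here; one plausible sketch is a direct reinterpretation of the 32-character hex trace ID, since a 128-bit OTLP trace ID is exactly the size of a UUID (an assumption, not necessarily Avido's implementation):

```python
import uuid

def otlp_trace_id_to_uuid(trace_id_hex: str) -> str:
    # Reinterpret the 32-char OTLP trace ID hex as a canonical UUID string.
    return str(uuid.UUID(hex=trace_id_hex))

otlp_trace_id_to_uuid("4bf92f3577b34da6a3ce929d0e0e4736")
# -> "4bf92f35-77b3-4da6-a3ce-929d0e0e4736"
```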
## Agentic trace patterns

Avido is designed to capture complex agentic workflows. Here's how common patterns map through the OTel converter:

```
AGENT span (openinference.span.kind=AGENT)
+-- LLM span (kind=LLM, finishReason="tool_calls")
+-- TOOL span (kind=TOOL, gen_ai.tool.call.id="call_abc123")
+-- TOOL span (kind=TOOL, gen_ai.tool.call.id="call_def456")
+-- LLM span (kind=LLM, finishReason="stop")
```
In Avido, this becomes:

```
group step (agent)
+-- llm start/end (finishReason: "tool_calls")
+-- tool step (toolCallId: "call_abc123")
+-- tool step (toolCallId: "call_def456")
+-- llm start/end (finishReason: "stop")
```
### Orchestration chain

```
CHAIN span (kind=CHAIN)
+-- RETRIEVER span (kind=RETRIEVER)
+-- LLM span (kind=LLM)
```

Both the outer CHAIN and inner spans are preserved with their full hierarchy.
### Error handling in agents

When a span has `status.code = 2`, the step is marked as an error. This is useful for tracking retry patterns:

```
LLM span (status.code=2, status.message="Rate limited") -> status: error
LLM span (status.code=1)                                -> status: success
```
## Vercel AI SDK

If you're using the Vercel AI SDK, Avido also recognises its telemetry attributes as fallbacks:

| Vercel AI SDK attribute | Mapped to |
|---|---|
| `ai.response.model` | Model ID (highest priority) |
| `ai.model.id` | Model ID (fallback) |
| `ai.response.text` | Output (fallback) |
| `ai.usage.promptTokens` | Prompt token count (fallback) |
| `ai.usage.completionTokens` | Completion token count (fallback) |
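Fallback resolution like this is a first-non-null lookup over an ordered list of attribute keys. A sketch using the precedence from the table above (the helper and the sample attributes are illustrative):

```python
def first_present(attrs: dict, keys: list[str]):
    # Return the value of the first key that is present, or None.
    for key in keys:
        if attrs.get(key) is not None:
            return attrs[key]
    return None

attrs = {"ai.model.id": "gpt-4o", "ai.usage.promptTokens": 12}
model_id = first_present(attrs, ["ai.response.model", "ai.model.id"])
prompt_tokens = first_present(attrs, ["llm.token_count.prompt", "ai.usage.promptTokens"])
```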
## Next steps
Need help wiring up your stack? Contact us and we’ll help you get connected.