Already using OpenTelemetry or a framework with built-in tracing (Vercel AI SDK, LangChain, LlamaIndex)? You can send traces straight to Avido — no custom integration code required. Avido accepts standard OTLP JSON payloads and automatically maps OpenInference span attributes and OTel GenAI semantic conventions into its trace model.

Quick setup

Point your OpenTelemetry exporter at Avido by setting three environment variables:
OTEL_EXPORTER_OTLP_PROTOCOL="http/json"
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.avidoai.com/v0/otel/traces"
OTEL_EXPORTER_OTLP_HEADERS="x-application-id=<application-id>,x-api-key=<api-key>"
| Variable | Description |
|---|---|
| OTEL_EXPORTER_OTLP_PROTOCOL | Must be http/json. Avido does not support gRPC or protobuf. |
| OTEL_EXPORTER_OTLP_ENDPOINT | https://api.avidoai.com/v0/otel/traces |
| OTEL_EXPORTER_OTLP_HEADERS | Your Avido x-application-id and x-api-key, comma-separated. |
You can find your Application ID and API key in the Avido dashboard under Settings > API Keys.
Disabling instrumentation: If OTEL_EXPORTER_OTLP_ENDPOINT is not set, the Avido OpenTelemetry integration is automatically disabled. This lets you turn tracing on and off per environment without code changes.
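The enable/disable rule can be mirrored in application code. A minimal sketch (the helper name otel_export_config is illustrative, not part of any SDK):

```python
import os
from typing import Optional


def otel_export_config() -> Optional[dict]:
    """Return exporter settings, or None when tracing should stay disabled.

    Mirrors the rule above: if OTEL_EXPORTER_OTLP_ENDPOINT is unset,
    the integration is off for this environment.
    """
    endpoint = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
    if not endpoint:
        return None  # tracing disabled; no exporter is created
    # Headers arrive as "key1=value1,key2=value2"
    headers = dict(
        pair.split("=", 1)
        for pair in os.environ.get("OTEL_EXPORTER_OTLP_HEADERS", "").split(",")
        if "=" in pair
    )
    return {
        "protocol": os.environ.get("OTEL_EXPORTER_OTLP_PROTOCOL", "http/json"),
        "endpoint": endpoint,
        "headers": headers,
    }
```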

Sending traces

Once your exporter is configured, traces are sent automatically by your instrumentation library. You can also send an OTLP payload manually:
cURL
curl -X POST https://api.avidoai.com/v0/otel/traces \
  -H "Content-Type: application/json" \
  -H "x-application-id: <application-id>" \
  -H "x-api-key: <api-key>" \
  -d '{
  "resourceSpans": [
    {
      "scopeSpans": [
        {
          "spans": [
            {
              "traceId": "4bf92f3577b34da6a3ce929d0e0e4736",
              "spanId": "00f067aa0ba902b7",
              "name": "llm.generate",
              "startTimeUnixNano": "1737052800000000000",
              "endTimeUnixNano": "1737052800500000000",
              "attributes": [
                {
                  "key": "openinference.span.kind",
                  "value": { "stringValue": "LLM" }
                },
                {
                  "key": "llm.model_name",
                  "value": { "stringValue": "gpt-4o-2024-08-06" }
                },
                {
                  "key": "input.value",
                  "value": { "stringValue": "Tell me a joke." }
                },
                {
                  "key": "output.value",
                  "value": { "stringValue": "Why did the chicken cross the road?" }
                },
                {
                  "key": "llm.token_count.prompt",
                  "value": { "intValue": 12 }
                },
                {
                  "key": "llm.token_count.completion",
                  "value": { "intValue": 18 }
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}'
A successful request returns the created trace and step IDs — the same response shape as the /v0/ingest endpoint.
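If you assemble the payload yourself, the resourceSpans/scopeSpans nesting and the typed attribute wrappers (stringValue, intValue) are easy to get wrong. A sketch of building the same minimal payload in Python (helper names are illustrative):

```python
def otlp_attr(key, value):
    """Wrap a plain Python value in OTLP's typed attribute encoding."""
    if isinstance(value, bool):  # check bool first: bool is a subclass of int
        typed = {"boolValue": value}
    elif isinstance(value, int):
        typed = {"intValue": value}
    else:
        typed = {"stringValue": str(value)}
    return {"key": key, "value": typed}


def single_span_payload(trace_id, span_id, name, start_ns, end_ns, attrs):
    """Build a one-span OTLP JSON body with the required nesting."""
    return {
        "resourceSpans": [{
            "scopeSpans": [{
                "spans": [{
                    "traceId": trace_id,
                    "spanId": span_id,
                    "name": name,
                    "startTimeUnixNano": str(start_ns),
                    "endTimeUnixNano": str(end_ns),
                    "attributes": [otlp_attr(k, v) for k, v in attrs.items()],
                }],
            }],
        }],
    }
```

POST the result as JSON to /v0/otel/traces with the x-application-id and x-api-key headers shown above.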

How spans are mapped

Avido reads the openinference.span.kind attribute on each span and converts it into the matching Avido step type:
| OpenInference span kind | Avido step type | What gets extracted |
|---|---|---|
| LLM | llm | Model, input/output messages, token usage, finish reason, cost |
| TOOL | tool | Tool name, parameters, output, tool call ID |
| RETRIEVER / RERANKER | retriever | Query, retrieved documents |
| AGENT | group | Agent name, agent ID, child spans nested underneath |
| CHAIN | group | Orchestration chain name, child spans nested underneath |
| EMBEDDING, GUARDRAIL, EVALUATOR | log | Name and metadata |
Spans without a recognised openinference.span.kind are stored as log steps so nothing is lost.
Agentic trace support: AGENT and CHAIN spans are mapped to group steps, preserving the hierarchical structure of agentic workflows. This means multi-turn agent loops, tool-calling chains, and orchestration flows are displayed with their full parent-child relationships in the Avido trace viewer.
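The kind-to-step mapping in the table amounts to a small lookup with a log fallback. An illustrative sketch (not the converter's actual code):

```python
# Mapping taken from the table above; unrecognised kinds become "log".
SPAN_KIND_TO_STEP = {
    "LLM": "llm",
    "TOOL": "tool",
    "RETRIEVER": "retriever",
    "RERANKER": "retriever",
    "AGENT": "group",
    "CHAIN": "group",
    "EMBEDDING": "log",
    "GUARDRAIL": "log",
    "EVALUATOR": "log",
}


def step_type_for(span_kind):
    """Resolve an openinference.span.kind value to an Avido step type."""
    return SPAN_KIND_TO_STEP.get((span_kind or "").upper(), "log")
```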

Attribute reference

The tables below list every attribute Avido extracts from spans. Any attributes not listed here are preserved in the step’s metadata field.

LLM spans

| Attribute | Mapped to |
|---|---|
| llm.model_name | Model ID |
| input.value | Input |
| output.value | Output |
| llm.input_messages | Input (preferred over input.value) |
| llm.output_messages | Output (preferred over output.value) |
| llm.token_count.prompt | Prompt token count |
| llm.token_count.completion | Completion token count |
| gen_ai.response.finish_reasons | Finish reason on the LLM end step ("stop", "tool_calls", "tool_use", "max_tokens", etc.) |
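The "preferred over" rules read as a fallback chain. An illustrative extraction sketch (not Avido's actual converter) over the OTLP attribute-list format shown earlier:

```python
def extract_llm_fields(attributes):
    """Flatten OTLP attributes, then apply the precedence rules above."""
    attrs = {
        a["key"]: next(iter(a["value"].values()))  # unwrap stringValue/intValue/...
        for a in attributes
    }
    return {
        "model": attrs.get("llm.model_name"),
        # Structured message attributes win over the plain value attributes.
        "input": attrs.get("llm.input_messages", attrs.get("input.value")),
        "output": attrs.get("llm.output_messages", attrs.get("output.value")),
        "promptTokens": attrs.get("llm.token_count.prompt"),
        "completionTokens": attrs.get("llm.token_count.completion"),
    }
```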

Tool spans

| Attribute | Mapped to |
|---|---|
| tool.name | Step name |
| tool.parameters | Tool input |
| tool.output | Tool output |
| tool_call.function.name | Step name (fallback) |
| tool_call.function.arguments | Tool input (fallback) |
| gen_ai.tool.call.id | Tool call ID (links the tool invocation to the LLM’s tool_use request) |

Group spans (Agent / Chain)

| Attribute | Mapped to |
|---|---|
| gen_ai.agent.name | Step name (falls back to span name) |
| gen_ai.agent.id | Group key (falls back to agent name, then span name) |

Retriever spans

| Attribute | Mapped to |
|---|---|
| retrieval.query | Query |
| retrieval.documents | Result |

Common attributes

| Attribute | Mapped to |
|---|---|
| session.id | Trace reference ID (links conversations in a session) |
| gen_ai.conversation.id | Trace reference ID (alternative to session.id) |
| avido.test.id | Test ID (connects the trace to an Avido test run) |
Linking test runs: avido.test.id is a custom Avido attribute — it is not part of the OpenInference spec. If you’re running Avido tests via webhooks, set this span attribute to the testId from the webhook payload so the trace is automatically connected to the test run and evaluation results are linked.

Error and status tracking

Avido maps the OTel span status to structured error fields on each step:
| OTel span status.code | Avido step status |
|---|---|
| 0 (UNSET) | success |
| 1 (OK) | success |
| 2 (ERROR) | error |
When a span has status.code = 2 (ERROR):
  • The step’s status is set to error
  • The span’s status.message is stored in the step’s error field
  • The numeric status code is preserved in statusCode
This means failed LLM calls, tool errors, and timeout spans are automatically flagged in Avido’s trace viewer without any extra instrumentation on your side.
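The status rules condense to a few lines. An illustrative sketch (field names follow the bullets above):

```python
def map_status(status):
    """Convert an OTel span status object into Avido-style step fields."""
    status = status or {}
    code = status.get("code", 0)  # absent status means UNSET (0)
    step = {
        "status": "error" if code == 2 else "success",
        "statusCode": code,  # numeric code is preserved as-is
    }
    if code == 2 and status.get("message"):
        step["error"] = status["message"]  # e.g. "Rate limited"
    return step
```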

Cost tracking

Avido automatically computes the cost of LLM steps when token counts are present.

How it works

  1. When an LLM span includes llm.token_count.prompt and llm.token_count.completion, Avido looks up the model in the Model Pricing table (configurable in your dashboard).
  2. Cost is computed as: (promptTokens x inputCostPer1kTokens + completionTokens x outputCostPer1kTokens) / 1000
  3. The resulting costAmount is stored on the step.
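As a worked example of step 2 (the per-1k rates here are made up; real rates come from your Model Pricing table):

```python
def step_cost(prompt_tokens, completion_tokens,
              input_cost_per_1k, output_cost_per_1k):
    """Apply the per-1k-token cost formula from step 2."""
    return (prompt_tokens * input_cost_per_1k
            + completion_tokens * output_cost_per_1k) / 1000


# 12 prompt + 18 completion tokens at 2.50 / 10.00 per 1k tokens:
# (12 * 2.50 + 18 * 10.00) / 1000 = 0.21
```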

Trace-level aggregation

After all steps are ingested, Avido computes summary fields on the trace:
| Field | Description |
|---|---|
| totalCost | Sum of all step costs |
| totalPromptTokens | Sum of prompt tokens across all LLM steps |
| totalCompletionTokens | Sum of completion tokens across all LLM steps |
| totalDurationMs | End-to-end trace duration |
| hasError | true if any step has an error status |
| stepCount | Total number of steps in the trace |
These pre-computed fields power the trace list view and enable filtering by cost, duration, and error state without scanning individual steps.
Set up Model Pricing in the Avido dashboard or via the API to enable automatic cost computation. If no pricing entry exists for a model, the step is ingested without a cost value.
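The aggregation itself is a straightforward fold over the ingested steps. A sketch with hypothetical step dictionaries (field names follow the table above):

```python
def summarize_trace(steps):
    """Compute the pre-aggregated trace fields from a list of step dicts."""
    return {
        "totalCost": sum(s.get("costAmount", 0) for s in steps),
        "totalPromptTokens": sum(s.get("promptTokens", 0) for s in steps),
        "totalCompletionTokens": sum(s.get("completionTokens", 0) for s in steps),
        "hasError": any(s.get("status") == "error" for s in steps),
        "stepCount": len(steps),
    }
```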

Trace structure

Each OTLP batch creates one trace in Avido:
  • If a root span (no parentSpanId) is present, it becomes the trace container. Its session.id or gen_ai.conversation.id attribute is used as the trace’s referenceId.
  • If no root span exists, the first span in the batch is used.
  • All spans become steps nested under the trace, preserving parent-child relationships via parentSpanId.
  • Timing fields (startTimeUnixNano, endTimeUnixNano) are stored as step timestamps with millisecond duration.
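The container-selection rule from the first two bullets can be sketched as follows (illustrative, not the converter's actual code):

```python
def pick_trace_container(spans):
    """Root span (no parentSpanId) wins; otherwise fall back to the first span."""
    for span in spans:
        if not span.get("parentSpanId"):
            return span
    return spans[0]
```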

Understanding testId and traceId

Two IDs can appear on OTEL traces — here’s what each one does:
| Field | How to set it | When to include |
|---|---|---|
| avido.test.id | Set as a span attribute | Only when the trace originates from an Avido test. Pass the testId from the webhook payload. Do not set it for traces that come from real user interactions. |
| traceId (OTLP) | Set on the span | The OTLP traceId is converted to a UUID and used to group all spans into a single Avido trace. Keep the same traceId across all spans in the batch. |
avido.test.id links the trace to an Avido test run for evaluation. The OTLP traceId groups spans together. They serve different purposes — do not confuse them.
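For reference, a 32-hex-character OTLP traceId maps directly onto the canonical UUID layout. One such conversion sketched in Python (the exact scheme Avido applies internally is not specified here):

```python
import uuid


def trace_id_to_uuid(otlp_trace_id: str) -> str:
    """Reformat a 32-hex-char OTLP traceId into canonical 8-4-4-4-12 UUID form."""
    return str(uuid.UUID(hex=otlp_trace_id))
```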

Agentic trace patterns

Avido is designed to capture complex agentic workflows. Here’s how common patterns map through the OTEL converter:

Multi-turn tool-calling agent

AGENT span (openinference.span.kind=AGENT)
  +-- LLM span (kind=LLM, finishReason="tool_calls")
  +-- TOOL span (kind=TOOL, gen_ai.tool.call.id="call_abc123")
  +-- TOOL span (kind=TOOL, gen_ai.tool.call.id="call_def456")
  +-- LLM span (kind=LLM, finishReason="stop")
This becomes in Avido:
group step (agent)
  +-- llm start/end (finishReason: "tool_calls")
  +-- tool step (toolCallId: "call_abc123")
  +-- tool step (toolCallId: "call_def456")
  +-- llm start/end (finishReason: "stop")

Orchestration chain

CHAIN span (kind=CHAIN)
  +-- RETRIEVER span (kind=RETRIEVER)
  +-- LLM span (kind=LLM)
Both the outer CHAIN and inner spans are preserved with their full hierarchy.

Error handling in agents

When a span has status.code = 2, the step is marked as error. This is useful for tracking retry patterns:
LLM span (status.code=2, status.message="Rate limited")  -> status: error
LLM span (status.code=1)                                  -> status: success

Vercel AI SDK

If you’re using the Vercel AI SDK, Avido also recognises its telemetry attributes as fallbacks:
| Vercel AI SDK attribute | Mapped to |
|---|---|
| ai.response.model | Model ID (highest priority) |
| ai.model.id | Model ID (fallback) |
| ai.response.text | Output (fallback) |
| ai.usage.promptTokens | Prompt token count (fallback) |
| ai.usage.completionTokens | Completion token count (fallback) |
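Fallback resolution is a simple precedence scan. An illustrative sketch; the exact ordering between llm.model_name and the Vercel attributes is an assumption for this example:

```python
def resolve_model_id(attrs):
    """Return the first model-identifying attribute that is present."""
    for key in ("ai.response.model", "llm.model_name", "ai.model.id"):
        if attrs.get(key):
            return attrs[key]
    return None  # no model attribute present on the span
```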

Next steps

Need help wiring up your stack? Contact us and we’ll help you get connected.