Last updated: May 21, 2025
Mastering the OpenTelemetry Transformation Language (OTTL)
The OpenTelemetry ecosystem continues to evolve with powerful tools that enhance your observability strategy. Among these, the OpenTelemetry Transformation Language (OTTL) stands out as an incredible capability for manipulating and transforming telemetry data.
This guide explores what OTTL is, how it works, and how you can leverage it to maximize the value of your observability data with minimal effort.
What is OpenTelemetry Transformation Language?
OpenTelemetry Transformation Language is a domain-specific language designed for transforming, filtering, and manipulating telemetry data within the OpenTelemetry Collector. It allows you to modify traces, metrics, and logs during the collection process before they're exported to your observability backend.
OTTL enables you to:
- Transform attribute names and values
- Filter unwanted telemetry data
- Redact sensitive information
- Add contextual information to your telemetry signals
- Convert between different format types
This capability is particularly valuable when you need to adapt your telemetry data to meet specific requirements or enhance it with additional context without modifying your application code. OTTL is also handy when you need to enforce the existence or format of attributes in a centralized location, e.g., as part of your work as a platform engineer.
Prerequisites
Before diving into OTTL, you should have:
- Familiarity with basic OpenTelemetry concepts such as metrics, logs and traces.
- Understanding of your telemetry data structure and the transformations you want to apply.
Ideally, you also have an OpenTelemetry Collector installed and configured so you can follow along. If you don’t have one, consider checking out our example that shows you how to run one locally using Docker.
If you just want to quickly validate an OTTL expression, you may also find ottl.run handy. It allows you to validate a filter or transform processor configuration right in your web browser (more on this down below).
Common OTTL use cases
Attribute manipulation
One of the most common uses of OTTL is to standardize attribute names or values across your telemetry data. For example, you can use the transform processor to set an attribute on all telemetry that identifies the Kubernetes cluster:
resource.attributes["k8s.cluster.name"] = "prod-aws-us-west-2"
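On its own, such a statement is only half the story: it needs to be embedded in the configuration of the transform processor. The snippet below is a minimal sketch of how this could be wired up, using the equivalent set(…) Editor; the processor placement is illustrative, and complete pipeline examples follow later in this guide.

processors:
  transform:
    trace_statements:
      - context: span
        statements:
          # Sketch: attach the cluster name to the resource of every span (value is illustrative)
          - set(resource.attributes["k8s.cluster.name"], "prod-aws-us-west-2")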
Redacting sensitive data
OTTL provides elegant ways to redact sensitive information from your telemetry via the transform processor.
span.attributes["http.request.header.authorization"] = "REDACTED" where span.attributes["http.request.header.authorization"] != nil
This simple statement replaces authorization header values with a safe placeholder, but only when this attribute is actually on the source data. Also see our dedicated guide to this topic.
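If you would rather drop sensitive attributes entirely instead of overwriting them, the delete_matching_keys Editor is an alternative. The following sketch removes every span attribute whose key matches a deliberately broad, illustrative regular expression:

delete_matching_keys(span.attributes, "(?i).*authorization.*")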
Dropping data
You can drop data you don’t care for. A common example is to drop metrics you are never querying via the filter processor:
IsMatch(metric.name, "^k8s\.replicaset.*$")
Or dropping telemetry that is older than six hours:
time_unix_nano < UnixNano(Now()) - 21600000000000
Understanding the OTTL Syntax
OTTL is a domain-specific language with its own syntax and semantics designed specifically for telemetry data transformation. Before diving into how it integrates with the OpenTelemetry Collector, let's understand the core language elements.
Path expressions
Path expressions in OTTL allow you to navigate and select specific elements within your telemetry data structure. They use a dot notation similar to many programming languages:
span.name
span.attributes["http.method"]
resource.attributes["service.name"]
These expressions point to specific parts of your telemetry data that you want to read or modify.
The first segment of a path expression is referred to as the context. Contexts directly map to the signals existing within OpenTelemetry, and to higher-level constructs such as resources and scopes. The best way to learn about the supported contexts and possible path expressions is the OTTL context reference documentation itself.
Always watch out for the types that path expressions resolve to. For example, the path expression log.time resolves to a Go time.Time type. Comparing this value using log.time > 5000 won’t work, because the left and right side of the operator don’t have matching types. Instead, you could use the path expression log.time_unix_nano to get an int64 value.
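As a concrete example, a filter condition that drops log records older than one hour could compare the two int64 values directly (a sketch; the one-hour threshold, written out in nanoseconds, is arbitrary):

log.time_unix_nano < UnixNano(Now()) - 3600000000000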
Enumerations
Several OTTL fields accept int64 values that are in fact enumerations. Span kind, span status code, and log severity numbers are examples of this. OTTL exposes these enumeration values through global constants you can access:
span.status.code == STATUS_CODE_ERROR
The available enumerations are listed at the end of the OTTL context documentation that we linked above.
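The same pattern applies to log severity numbers. For instance, a condition that matches only warnings and more severe log records could look like this (a sketch using the log context):

log.severity_number >= SEVERITY_NUMBER_WARN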
Operators
OTTL supports several operators for different transformation needs:
- Assignment (=): Sets values for telemetry fields
- Comparison operators (==, !=, >, <, etc.): Used in conditional statements
- Logical operators (and, or, not): Combine multiple conditions
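In practice, these operators are combined within a single statement. The following sketch marks server errors on spans using the set Editor; the error.source attribute name, and the assumption that http.response.status_code is recorded as an integer, are illustrative:

set(span.attributes["error.source"], "server") where span.status.code == STATUS_CODE_ERROR and span.attributes["http.response.status_code"] >= 500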
Functions
OTTL provides a rich set of built-in functions for data manipulation:
- Convert to uppercase: ToUpperCase(span.attributes["http.request.method"])
- Limit string length: Substring(log.body.string, 0, 1024)
- Combine strings: Concat(["prefix", span.attributes["request.id"]], "-")
- Match against regular expressions: IsMatch(metric.name, "^k8s\..*$")
- Limit the number of attributes: limit(log.attributes, 10, [])
You might have noticed a difference between these functions: Some start with an uppercase character, whereas others begin with a lowercase one. This difference is not accidental (albeit somewhat arbitrary). The lowercase ones are what OTTL calls Editors. Editors can manipulate an existing piece of data in place and therefore have side-effects (think of mutating a map or array). The uppercase ones are Converters. Converters are plain functions, taking an input parameter and generating an output.
Don’t worry too much about this semantic difference. Instead, take a look at the OTTL function reference to learn about the supported Editors and Converters.
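To make the distinction tangible, here is a sketch of two transform statements: the first uses the set Editor together with the ToUpperCase Converter, the second uses the limit Editor to mutate the attribute map in place (the attribute name and the limit of 50 are illustrative):

set(span.attributes["http.request.method"], ToUpperCase(span.attributes["http.request.method"])) where span.attributes["http.request.method"] != nil
limit(span.attributes, 50, [])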
Conditional statements
The where clause allows you to apply transformations conditionally:
1span.attributes["db.statement"] = "REDACTED" where resource.attributes["service.name"] == "accounting"
Nil vs null
The age-old difference between (programming) languages: OTTL uses nil. If you want to check whether an attribute is set, you would therefore use:
1resource.attributes["service.name"] != nil
Using OTTL in the OpenTelemetry Collector
OTTL is part of the OpenTelemetry Collector Contrib repository. It exists as a reusable Go module that you can use within your applications. However, most users will interact with OTTL through the filter and transform processors in the OpenTelemetry Collector.
When telemetry data flows through a Collector with such processors configured, OTTL expressions are evaluated against each telemetry signal (span, metric, or log record), and transformations are applied based on the conditions you define.
If you want to run the following examples yourself, consider running our OpenTelemetry Collector example locally.
Dropping telemetry in the OpenTelemetry Collector
This (partial) Collector configuration defines a metrics pipeline that accepts data via OTLP, filters it using OTTL, batches it, and emits logs for these batches. The filter itself will drop any metric whose name starts with k8s.replicaset. This is a common pattern for users collecting Kubernetes information who want to get rid of details about Kubernetes ReplicaSet versions.
processors:
  batch:
  filter:
    metrics:
      datapoint:
        - 'IsMatch(ConvertCase(String(metric.name), "lower"), "^k8s\\.replicaset\\.")'

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [filter, batch]
      exporters: [debug]
Presented visually using OTelBin, this Collector pipeline configuration looks like this:
Transforming telemetry in the OpenTelemetry Collector
The transform processor is even more advanced than the filter processor: it does not just drop telemetry, it lets you fully manipulate it. Let's look at an example showing how to backfill missing log timestamps.
Depending on how you collect your logs, you may encounter situations where the timestamps on the log records are not set. In that case, without any better hint about the correct timestamp, you can fall back to the current time.
processors:
  batch:
  transform:
    log_statements:
      - context: log
        statements:
          - set(log.observed_time, Now()) where log.observed_time_unix_nano == 0
          - set(log.time, log.observed_time) where log.time_unix_nano == 0

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch, transform]
      exporters: [debug]
The previous example showcases several critical OTTL aspects:
- Differences between the types that path expressions resolve to (int64 vs. time.Time).
- Chaining two successive statements that depend on each other.
- Using Editor (set(…)) and Converter (Now()) functions.
In this case, the observed timestamp is backfilled when missing. In the next step, the log's timestamp is backfilled with the observed timestamp when missing.
Presented visually using OTelBin, this Collector pipeline configuration looks like this:
What about OTTL Errors?
When working with OTTL in the OpenTelemetry Collector, you'll encounter two distinct types of errors that require different handling approaches: compilation and runtime errors.
Compilation errors
Compilation errors occur when the OTTL processor initializes and attempts to parse your statements. These errors indicate issues with the syntax or structure of your OTTL expressions and prevent the collector from starting. Examples include:
- Invalid syntax (missing quotes, incorrect operators)
- Unknown functions
- Invalid path expressions
- Type mismatches in function arguments
For example, this statement would cause a compilation error due to a missing closing double quote:
processors:
  filter:
    traces:
      span:
        - 'span.name == "drop me'
The Collector will fail to start and log an error message like this. Note that the configured error_mode has no impact on compilation errors.
Error: invalid configuration: processors::filter: unable to parse OTTL condition "span.name == \"drop me": condition has invalid syntax: 1:14: lexer: invalid input text "\"drop me"
Runtime errors
Runtime errors occur during the execution of OTTL statements when processing telemetry data. These happen after successful compilation when the collector is processing actual data. Common runtime errors include:
- Accessing attributes that don't exist
- Type conversion failures
- Function execution errors (like division by zero)
- Conditional evaluations on missing fields
For example, this statement always causes a runtime error because span.status.code is actually an int64 value and thus an invalid parameter for ToUpperCase.
processors:
  filter:
    traces:
      span:
        - 'ToUpperCase(span.status.code) == "ERROR"'
The Collector will generate a log record such as the following one when an OTTL runtime error occurs.
2025-05-20T05:10:58.035Z warn ottl@v0.126.0/parser.go:468 failed to eval condition {"resource": {}, "otelcol.component.id": "filter", "otelcol.component.kind": "processor", "otelcol.pipeline.id": "traces", "otelcol.signal": "traces", "error": "expected string but got int64", "condition": "ToUpperCase(span.status.code) == \"ERROR\""}
Runtime error mode configuration
The OpenTelemetry Collector's transform and filter processors that use OTTL provide an important configuration option called error_mode that controls how the processor handles runtime errors.
The error_mode setting accepts three values:
- propagate (default): Any runtime error stops processing the current telemetry item (span, metric, or log) and propagates the error up the pipeline. This can potentially halt processing for the entire batch containing the problematic item.
- ignore: Runtime errors are logged but ignored, allowing processing to continue for the current telemetry item and subsequent statements. The transformation that caused the error won't be applied, but other valid transformations will proceed.
- silent: Similar to ignore, but errors are not logged, which can improve performance but reduce visibility into issues.
It is generally recommended to use ignore as the error_mode in production environments. In the worst case, this means more telemetry, or telemetry with wrong/incomplete data, but it ensures service continuity even when encountering unexpected data patterns.
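As a sketch of where this setting lives, a filter processor configuration could be made more forgiving like this (the condition itself is purely illustrative; with ignore, a span that triggers a runtime error is simply not dropped and processing continues):

processors:
  filter:
    # Log runtime errors, but keep processing the affected telemetry
    error_mode: ignore
    traces:
      span:
        - 'ToUpperCase(span.attributes["http.request.method"]) == "DELETE"'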
As in any other programming language, applying some defensive programming patterns is good practice. For example, check the type of an attribute before modifying it. Following on from that, you will want to monitor your Collectors' logs for errors and warnings.
This example shows how to verify that an attribute exists and that its value was converted to a string before applying regular expression matching.
resource.attributes["service.namespace"] != nil
and
IsMatch(ConvertCase(String(resource.attributes["service.namespace"]), "lower"), "^platform.*$")
By understanding the difference between compilation and runtime errors and configuring the appropriate error mode for your environment, you can build robust telemetry transformation pipelines that gracefully handle unexpected conditions while maintaining visibility into potential issues.
OTTL performance
When implementing OTTL at scale, understanding the performance implications can help you optimize your telemetry pipeline.
OTTL statements compile once at Collector startup, similar to regular expressions. At runtime, these compiled expressions efficiently execute as function chains against your telemetry data. Let's look at IsMatch to understand what is happening:
IsMatch(span.attributes["user.id"], "^[A-Z]{2}\\d{6}$")
The implementation of IsMatch looks like this (source, some spacing applied for readability):
func isMatch[K any](target ottl.StringLikeGetter[K], pattern string) (ottl.ExprFunc[K], error) {
	compiledPattern, err := regexp.Compile(pattern)
	if err != nil {
		return nil, fmt.Errorf("the pattern supplied to IsMatch is not a valid regexp pattern: %w", err)
	}
	return func(ctx context.Context, tCtx K) (any, error) {
		val, err := target.Get(ctx, tCtx)
		if err != nil {
			return nil, err
		}
		if val == nil {
			return false, nil
		}
		return compiledPattern.MatchString(*val), nil
	}, nil
}
The regular expression is compiled during the OTTL statement compilation and reused for all evaluations, making runtime matching highly efficient. This mirrors how you would optimize a manual implementation—compiling patterns once and reusing them for multiple matches. So, at runtime, these OTTL functions translate 1:1 to Go functions, making it easy to verify their performance characteristics.
OTTL Builder in Dash0
One of the challenges with OTTL is constructing the correct syntax for your transformation needs. Dash0 provides an intuitive way to generate OTTL statements through our spam filter feature, even if your goal isn't actually to filter data but to learn how to write proper OTTL expressions.
Dash0's spam filter allows you to visually define filtering conditions and then export those conditions as OpenTelemetry Collector configurations using OTTL. Here's how to leverage this feature:
- Start with defining your criteria: In the Dash0 interface, navigate to the tracing, logging, or metrics explorer and use the query interface to filter for the telemetry data you want to transform.
- Trigger the spam filter dialog: Once you've defined your filtering criteria, click the flag icon (🏳️) in the interface to bring up the spam filter dialog. You don't need to actually create a spam filter - this is just to access the OTTL generator.
- Review the generated filter: The dialog will show you a preview of what data would be filtered based on your criteria. This helps validate that your expression is targeting the correct telemetry data.
- Export to OpenTelemetry Collector configuration: From the spam filter dialog, you can directly export your filter as an OpenTelemetry Collector configuration. Dash0 automatically translates your visual filtering criteria into proper OTTL syntax.
Final thoughts
OpenTelemetry Transformation Language represents a significant advancement in observability data management. By mastering OTTL, you gain fine-grained control over your telemetry data without modifying application code—making your observability strategy more flexible and adaptable.
As you implement OTTL in your environment, consider how a comprehensive observability solution like Dash0 can help you maximize the value of your transformed data. Dash0 not only provides a powerful backend for your OpenTelemetry data but also offers advanced visualization, correlation, and analysis capabilities that complement your OTTL transformations.
Ready to take your observability to the next level? Explore how Dash0's platform can help you leverage the full potential of OpenTelemetry and OTTL by signing up for a free trial today.
