OpenTelemetry-native observability for LLM applications with automatic instrumentation and monitoring.
OpenLIT is an OpenTelemetry-native observability platform for LLM applications. It provides automatic, zero-code instrumentation for 50+ LLM providers, vector databases, and agent frameworks, capturing telemetry such as prompts, completions, token usage, latency, costs, and GPU metrics with minimal configuration. Monitoring LLM applications with OpenLIT helps you track performance, costs, token usage, and model behavior in production environments.
For a complete working example, see the OpenLIT observability example in the dash0-examples repository.
Before setting up OpenLIT, you'll need an OpenTelemetry Collector deployed to receive telemetry from OpenLIT. Consider using the Helm chart for the OpenTelemetry Collector.
For local development, see the OpenLIT observability example for a complete setup with docker-compose and Kubernetes.
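As a sketch, a minimal Collector configuration that accepts OTLP over HTTP from OpenLIT could look like the following. The debug exporter is a stand-in; replace it with the exporter for your observability backend.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # OpenLIT sends OTLP over HTTP here

processors:
  batch: {}

exporters:
  debug: {}   # placeholder; swap in your backend's exporter

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
```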
The OpenLIT SDK provides automatic instrumentation with a single line of code.
Install the OpenLIT package:
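The SDK is distributed on PyPI as openlit:

```shell
pip install openlit
```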
Add the following code at the start of your application:
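A minimal initialization looks like this; the endpoint assumes a Collector listening locally on the default OTLP/HTTP port:

```python
import openlit

# Initialize once at application startup. Supported LLM and vector-DB
# libraries used afterwards are instrumented automatically.
openlit.init(otlp_endpoint="http://localhost:4318")
```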
For HTTPS endpoints or authentication:
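A sketch with an HTTPS endpoint and an auth header; the endpoint, header name, and environment variable are placeholders, and the exact header your Collector expects depends on its authentication setup:

```python
import os
import openlit

openlit.init(
    otlp_endpoint="https://collector.example.com:4318",
    # Keep secrets in environment variables rather than in code.
    otlp_headers={"Authorization": f"Bearer {os.environ['OTLP_TOKEN']}"},
)
```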
The openlit.init() function supports several parameters:
- otlp_endpoint: OpenTelemetry Collector endpoint (HTTP)
- otlp_headers: Dictionary of headers for authentication
- application_name: Custom service name (defaults to auto-detected)
- environment: Deployment environment (e.g., "production")
- collect_gpu_stats: Enable GPU monitoring (default: False)

Example with full configuration:
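Putting these parameters together in one call; the service name and Collector hostname below are illustrative:

```python
import openlit

openlit.init(
    otlp_endpoint="http://otel-collector:4318",
    application_name="chat-backend",   # overrides the auto-detected service name
    environment="production",
    collect_gpu_stats=True,            # enable GPU metrics collection
)
```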
The OpenLIT Operator provides zero-code auto-instrumentation for Python applications running in Kubernetes.
Add the OpenLIT Helm repository and install the operator:
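A sketch of the install commands; the repository URL, chart name, and namespace are assumptions, so check the OpenLIT documentation for the canonical values:

```shell
# Repository URL and chart name are illustrative placeholders.
helm repo add openlit https://openlit.github.io/helm/
helm repo update
helm install openlit-operator openlit/openlit-operator \
  --namespace openlit --create-namespace
```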
Create an AutoInstrumentation custom resource to configure how pods are instrumented:
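A hypothetical AutoInstrumentation resource illustrating the shape such a configuration could take; the apiVersion, field names, and endpoint are assumptions, so consult the operator's CRD reference for the exact schema:

```yaml
# Field names below are illustrative, not the authoritative schema.
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: openlit-python
  namespace: default
spec:
  selector:
    matchLabels:
      instrumentation: openlit   # pods with this label get instrumented
  otlp:
    endpoint: http://otel-collector.observability:4318
```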
Apply the configuration:
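Assuming the resource above was saved to a file (the filename here is hypothetical):

```shell
kubectl apply -f autoinstrumentation.yaml
```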
Add the instrumentation label to your deployment to match the AutoInstrumentation selector:
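A sketch of a Deployment carrying the matching label; the app name and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: chat-backend
spec:
  selector:
    matchLabels:
      app: chat-backend
  template:
    metadata:
      labels:
        app: chat-backend
        instrumentation: openlit   # matched by the AutoInstrumentation selector
    spec:
      containers:
        - name: app
          image: chat-backend:latest
```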
The operator will automatically inject instrumentation into pods matching the instrumentation: openlit label.
OpenLIT automatically captures comprehensive telemetry following OpenTelemetry semantic conventions for GenAI:
Example captured attributes include gen_ai.usage.input_tokens, gen_ai.prompt.0.content, gen_ai.request.model, and provider-specific metrics.
After setting up OpenLIT, verify that spans from instrumented operations (e.g., anthropic.messages.create, openai.chat.completions) are arriving in your observability backend.