Monitor and observe LangChain applications with OpenTelemetry.
LangChain is a framework for building applications powered by large language models (LLMs). It enables developers to create sophisticated AI applications by chaining together LLM calls, tools, and data sources. Monitoring LangChain applications is essential for understanding LLM performance, tracking token usage, identifying bottlenecks in chain executions, and troubleshooting errors in production.
LangChain applications can be instrumented with OpenTelemetry to automatically capture traces, metrics, and logs. This provides visibility into LLM calls, chain executions, token usage, latency, and errors. OpenTelemetry's auto-instrumentation approach (recommended) requires zero code changes, while manual instrumentation offers full control for custom spans and attributes.
For a complete working example, see the LangChain observability example in the dash0-examples repository.
Before setting up LangChain monitoring, ensure:
You'll need an OpenTelemetry Collector deployed to receive telemetry from your LangChain application. Consider using:
Helm chart for the OpenTelemetry Collector
For local development, see the LangChain observability example for a complete setup with docker-compose.
Auto-instrumentation automatically captures telemetry from LangChain with zero code changes.
Install the OpenTelemetry distribution and LangChain instrumentation:
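For example, with pip. The LangChain instrumentation package name below is an assumption based on the community `opentelemetry-instrumentation-langchain` package; verify it against the distribution you use:

```shell
# OpenTelemetry distro and OTLP exporter
pip install opentelemetry-distro opentelemetry-exporter-otlp

# LangChain instrumentation (package name is an assumption -- check your distribution)
pip install opentelemetry-instrumentation-langchain
```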
Run the bootstrap command to install instrumentation packages for your installed libraries:
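The bootstrap command inspects the libraries installed in your environment and installs the matching instrumentation packages:

```shell
# Detect installed libraries and install their instrumentations
opentelemetry-bootstrap -a install
```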
Set the following environment variables to send telemetry to an OpenTelemetry Collector:
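A minimal configuration, assuming a Collector on localhost with the OTLP/HTTP receiver enabled (the endpoint and service name are placeholders for your own values):

```shell
export OTEL_SERVICE_NAME="my-langchain-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```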
Execute your application with the opentelemetry-instrument wrapper:
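For example, if your entry point is a script named `app.py` (a hypothetical filename):

```shell
opentelemetry-instrument python app.py
```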
For frameworks like Flask or FastAPI:
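For example, wrapping the server process instead of a script (module and app names are placeholders):

```shell
# FastAPI served by uvicorn
opentelemetry-instrument uvicorn main:app --host 0.0.0.0 --port 8000

# Flask development server
opentelemetry-instrument flask run
```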
Your LangChain application now exports telemetry automatically with no code changes required.
Auto-instrumentation does not prevent you from adding custom spans: you can always obtain the global tracer with trace.get_tracer(__name__) and attach business-specific context.
Manual instrumentation provides full control and allows custom spans and attributes.
Add the following setup code to your application:
You can add custom spans to capture business logic:
The LangChain instrumentation automatically captures comprehensive telemetry following the OpenTelemetry Semantic Conventions for GenAI, including full conversation context, token usage metrics, and chain execution flows.
Example captured attributes include gen_ai.usage.input_tokens and gen_ai.usage.output_tokens for cost tracking, gen_ai.prompt.0.content and gen_ai.completion.0.content for debugging conversations, and gen_ai.request.model for tracking which models are being used.
After running your instrumented application, you should see a trace for each request in your backend. A typical trace for the service my-langchain-app contains spans such as ChatAnthropic.invoke, RunnableSequence.invoke, ChatPromptTemplate.invoke, and StrOutputParser.parse.