Last updated: April 13, 2026
Production Logging in Laravel with OpenTelemetry
Laravel's logging system is more capable than its documentation lets on. Under the surface, the framework wraps Monolog with its own conventions for contextual data, channel routing, and exception reporting.
Recent versions introduced the Context facade, which changes how you should
think about attaching metadata to log entries. And the OpenTelemetry ecosystem
now provides first-class support for PHP and Laravel, making it possible to
connect your logs to distributed traces without writing custom plumbing.
But there's a significant gap between calling Log::info() in a controller and
having logs that actually help you understand what your application is doing in
production. Bridging that gap requires deliberate choices about output format,
contextual enrichment, exception handling, channel architecture, and telemetry
integration.
This guide focuses on exactly those choices. It assumes you understand the basics of Laravel's channel system and Monolog's role behind it. If you need a refresher on Monolog itself, including handlers, processors, formatters, and the PSR-3 interface, our PHP Logging with Monolog guide covers the library in depth.
By the end, your Laravel logs will be structured, enriched with request-scoped context, and flowing through an OpenTelemetry pipeline into a centralized platform where you can correlate them with traces and metrics across your entire infrastructure.
Prerequisites
Before working through the examples in this guide, make sure your environment includes:
- PHP 8.2 or later
- Composer (latest version)
- Laravel 11 or newer (the Context facade was introduced in Laravel 11; examples also work on Laravel 12 and 13)
- Docker and Docker Compose for the companion playground
You should also be comfortable running Artisan commands and editing Laravel configuration files.
Setting up the demo (optional)
If you want to follow along with a running application, the companion repository
contains a Docker Compose playground that pairs a Laravel application with an
OpenTelemetry Collector. It exercises every pattern covered in this guide, from
JSON-formatted output and the Context facade to OTel log export with trace
correlation.
```shell
git clone https://github.com/dash0hq/dash0-examples
cd dash0-examples/laravel-logging
docker compose up --build
```
The application is a small API that processes fictional orders. You can modify it as you follow along and re-run the container to see results immediately.
How Laravel's logging system works
Laravel's logging infrastructure lives in the config/logging.php configuration
file. This file defines a set of channels, each of which describes a
destination and format for log output. When your application calls Log::info()
or Log::error(), Laravel routes the message through whichever channel is
currently active.
Understanding the default configuration
A fresh Laravel installation ships with several pre-configured channels, but the
one that matters on day one is stack. The default key at the top of
config/logging.php determines which channel is used when you don't specify one
explicitly, and it reads from the LOG_CHANNEL environment variable:
```php
// config/logging.php

return [
    'default' => env('LOG_CHANNEL', 'stack'),

    'channels' => [
        'stack' => [
            'driver' => 'stack',
            'channels' => explode(',', (string) env('LOG_STACK', 'single')),
            'ignore_exceptions' => false,
        ],

        'single' => [
            'driver' => 'single',
            'path' => storage_path('logs/laravel.log'),
            'level' => env('LOG_LEVEL', 'debug'),
            'replace_placeholders' => true,
        ],

        'daily' => [
            'driver' => 'daily',
            'path' => storage_path('logs/laravel.log'),
            'level' => env('LOG_LEVEL', 'debug'),
            'days' => env('LOG_DAILY_DAYS', 14),
            'replace_placeholders' => true,
        ],

        // ... additional channels omitted
    ],
];
```
The stack driver is a meta-channel that fans out to one or more other channels
listed in its channels array, which Laravel populates from the LOG_STACK
environment variable. By default it points to single, which writes everything
to a single file at storage/logs/laravel.log. You can add more channels to the
stack (or swap single for daily, which rotates files automatically) by
changing LOG_STACK.
Channels, drivers, and Monolog
Each channel specifies a driver that determines its behavior. Laravel ships
with several built-in drivers:
- single writes to one file that grows indefinitely.
- daily creates a new file each day (e.g. laravel-2026-04-12.log) and removes files older than the configured days value.
- syslog and errorlog route messages to PHP's native syslog() and error_log() functions respectively.
Under the hood, all of these drivers create
Monolog handler instances.
For example, the single driver uses Monolog's StreamHandler, daily uses
RotatingFileHandler, and so on.
If you need to use a Monolog handler that doesn't have a dedicated Laravel
driver, you can use the monolog driver to reference any handler class
directly, or the custom driver to build a channel from scratch in a factory
class. You'll see both of these in later sections.
Log levels
Laravel follows the eight severity levels defined in
RFC 5424, from DEBUG (least
severe) through INFO, NOTICE, WARNING, ERROR, CRITICAL, ALERT, to
EMERGENCY (most severe). Each channel has a level setting that acts as a
minimum threshold: any message below that level is silently discarded.
The LOG_LEVEL environment variable in your .env file controls this threshold
for channels that reference it. In development, debug is a sensible default.
In production, most teams set it to info or warning to reduce noise and
storage costs, then lower it temporarily during incidents.
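As a quick illustration (a sketch, not output from the demo app), with LOG_LEVEL=warning only calls at warning severity or above survive the threshold:

```php
use Illuminate\Support\Facades\Log;

// With LOG_LEVEL=warning on the active channel, the first three
// calls fall below the threshold and are silently discarded.
Log::debug('Cache miss for key.');       // discarded
Log::info('Order created.');             // discarded
Log::notice('Falling back to replica.'); // discarded
Log::warning('Retrying payment call.');  // written
Log::error('Payment declined.');         // written
Log::critical('Database unreachable.');  // written
```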
Where the .env file fits in
Your .env file is where you control logging behavior without touching the
configuration code:
```ini
LOG_CHANNEL=stack
LOG_LEVEL=debug
LOG_DAILY_DAYS=14
LOG_DEPRECATIONS_CHANNEL=null
```
LOG_CHANNEL selects the active channel, LOG_LEVEL sets the minimum severity,
and LOG_DAILY_DAYS controls how many days of rotated log files to retain (when
using the daily driver). LOG_DEPRECATIONS_CHANNEL routes PHP and Laravel
deprecation warnings to a specific channel, which is useful when preparing for
major version upgrades.
Writing your first log entry
With the default setup, you can immediately begin creating log entries in your application:
```php
// routes/web.php

use Illuminate\Support\Facades\Log;

Route::get('/test-logging', function () {
    Log::info('Application started.');

    Log::error('Something went wrong.', [
        'component' => 'payment-gateway',
    ]);

    return response()->json([
        'message' => 'Logs written.',
    ]);
});
```
The second argument is an optional context array that gets appended to the log
entry. Check storage/logs/laravel.log and you'll see output like this:
```
[2026-04-12 20:14:29] local.INFO: Application started.
[2026-04-12 20:14:29] local.ERROR: Something went wrong. {"component":"payment-gateway"}
```
The format is [timestamp] environment.LEVEL: message {context}. It's
human-readable, which is fine for local development. But this format has real
limitations once your application reaches production, which is exactly what the
next section addresses.
Switching to structured JSON output
The default format you saw above is produced by Monolog's LineFormatter. It's
readable in a terminal, but it falls apart in production: multi-line stack
traces break line-based tooling, context values are hard to search reliably, and
parsing requires fragile regular expressions.
Structured JSON output solves all of this by turning each log entry into a well-defined object that downstream systems can index, filter, and aggregate reliably.
Laravel's tap mechanism lets you customize how Monolog is configured for
any channel without replacing the channel driver. Create a class that receives
the logger instance and swaps in the JsonFormatter:
```php
<?php
// app/Logging/JsonFormatter.php

namespace App\Logging;

use Monolog\Formatter\JsonFormatter as MonologJsonFormatter;

class JsonFormatter
{
    public function __invoke($logger)
    {
        foreach ($logger->getHandlers() as $handler) {
            $handler->setFormatter(new MonologJsonFormatter());
        }
    }
}
```
In Docker or Kubernetes, writing to stdout or stderr is the recommended
approach because container runtimes capture these streams natively and
orchestrators handle collection, rotation, and forwarding for you. This aligns
with the twelve-factor app methodology and keeps
log management out of your application code.
Laravel already ships with a stderr channel in config/logging.php. Add the
tap key to it so it uses your new formatter:
```php
// config/logging.php

'stderr' => [
    'driver' => 'monolog',
    'level' => env('LOG_LEVEL', 'debug'),
    'handler' => StreamHandler::class,
    'formatter' => env('LOG_STDERR_FORMATTER'),
    'with' => [
        'stream' => 'php://stderr',
    ],
    'tap' => [App\Logging\JsonFormatter::class],
    'processors' => [PsrLogMessageProcessor::class],
],
```
Then set LOG_STACK=stderr in your .env file:
```ini
LOG_STACK=stderr
```
Now visit /test-logging again and you'll see JSON output directly in your
terminal:
```
{"message":"Application started.","context":{},"level":200,"level_name":"INFO","channel":"local","datetime":"2026-04-12T20:30:42.918064+00:00","extra":{}}
{"message":"Something went wrong.","context":{"component":"payment-gateway"},"level":400,"level_name":"ERROR","channel":"local","datetime":"2026-04-12T20:30:42.920222+00:00","extra":{}}
```
The context field carries the data you passed explicitly with the log call,
while extra is reserved for data injected automatically by Monolog processors.
Both are now first-class, queryable fields rather than a string appended to the
end of a line.
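To see the distinction in practice, here's a minimal Monolog 3 processor sketch (the HostnameProcessor name is hypothetical, not part of this guide's demo app); anything a processor writes to $record->extra surfaces in the extra field of the JSON output:

```php
<?php

use Monolog\LogRecord;

// Hypothetical processor: adds the host name to every record's
// "extra" payload while leaving "context" untouched.
class HostnameProcessor
{
    public function __invoke(LogRecord $record): LogRecord
    {
        $record->extra['hostname'] = gethostname();

        return $record;
    }
}
```

You could attach a processor like this to a channel via the tap mechanism shown earlier, calling $logger->pushProcessor() inside the tap class.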
Contextual logging in Laravel
Logs only become valuable when they carry enough context to explain what happened, to whom, and as part of which operation.
Laravel provides several mechanisms for attaching this context. Understanding which one to use and when is one of the most important production logging decisions you'll make.
Per-message context
The simplest form of context is the array you pass as the second argument to any
Log method:
```php
use Illuminate\Support\Facades\Log;

Log::error('Payment processing failed.', [
    'order_id' => $order->id,
    'gateway' => 'stripe',
    'error_code' => $exception->getCode(),
    'customer_id' => $order->customer_id,
]);
```
This data appears in the context field of the resulting JSON record and should
be reserved for event-specific details. The quality of the resulting logs
largely depends on the attributes you choose to include here.
An error log that says "processing failed" tells you almost nothing; one that includes the order ID, gateway name, and error code gives you the necessary details to aid your investigation.
The Context facade (Laravel 11+)
Per-message context doesn't scale when the same attributes need to appear on
every log entry in a request. Passing cross-cutting attributes into every Log
call is tedious, inconsistent, and easy to forget in the one place where it
matters most.
Laravel 11 introduced the Context facade to solve exactly this problem. It
provides a request-scoped data store that's automatically appended to every
subsequent log entry without you needing to pass it explicitly.
This is the recommended approach for request-scoped metadata like request IDs, authenticated user details, and tenant identifiers. A middleware is the natural place to set it up:
```php
<?php
// app/Http/Middleware/AttachRequestContext.php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Context;
use Illuminate\Support\Str;
use Symfony\Component\HttpFoundation\Response;

class AttachRequestContext
{
    public function handle(Request $request, Closure $next): Response
    {
        Context::add('request_id', Str::uuid()->toString());

        if ($user = $request->user()) {
            Context::add('user_id', $user->id);
        }

        return $next($request);
    }
}
```
Register this middleware globally in your application's bootstrap/app.php:
```php
->withMiddleware(function (Middleware $middleware) {
    $middleware->append(
        \App\Http\Middleware\AttachRequestContext::class
    );
})
```
From this point forward, every log call made during the request will
automatically include the request_id and user_id fields (if available). You
don't need to pass them explicitly.
You'll see these fields in the extra attribute:
```
{"message":"Application started.",..., "extra":{"request_id":"2321d78f-2212-4b04-bd81-c0d064022655"}}
{"message":"Something went wrong.",..., "extra":{"request_id":"2321d78f-2212-4b04-bd81-c0d064022655"}}
```
Context propagation across queued jobs
One of the most powerful features of the Context facade is its automatic
propagation to queued jobs. When Laravel dispatches a job, it serializes the
current context data through a dehydration step. Then when a queue worker picks
up the job, the context is hydrated back into the worker's process.
This means that if your middleware sets a request_id and user_id via
Context::add(), and the request dispatches a ProcessOrder job to the queue,
the log entries emitted by that job will automatically include the same
request_id and user_id. You get a thread of correlation from the initial
HTTP request through to the background processing without any manual wiring.
```php
// In your controller or service
Context::add('request_id', Str::uuid()->toString());
Context::add('user_id', $user->id);

// Later in the same request
ProcessOrder::dispatch($order);

// Inside the job's handle() method, logs automatically
// carry request_id and user_id from the originating request
Log::info('Processing order.', [
    'order_id' => $order->id,
]);
```
If you need to add job-specific context without overwriting the inherited
request context, you can call Context::add() inside the job's handle()
method.
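For instance, here's a sketch of what that might look like inside a job class (the ProcessOrder shape below is illustrative, not the demo app's actual implementation):

```php
<?php

namespace App\Jobs;

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Support\Facades\Context;
use Illuminate\Support\Facades\Log;

class ProcessOrder implements ShouldQueue
{
    use Queueable;

    public function __construct(public string $orderId)
    {
    }

    public function handle(): void
    {
        // Merges with the hydrated request context rather than
        // replacing it, so request_id and user_id stay attached.
        Context::add('job', self::class);

        Log::info('Processing order.', [
            'order_id' => $this->orderId,
        ]);
    }
}
```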
Preventing sensitive data from leaking into your logs
The more context you attach to your logs, the more useful they become for debugging. But that same habit creates a risk: it's easy to inadvertently log API keys, bearer tokens, passwords, or personally identifiable information that should never be exposed in your logs.
Laravel's Context facade provides one built-in safeguard through hidden
context, and you can supplement it with a sanitization layer in your own code.
Using hidden context for sensitive data
The Context facade supports hidden context, which is data that's available to
your application code but never written to log entries. This is useful for
values that need to flow through the request lifecycle for authorization or
routing decisions but shouldn't appear in logs.
```php
Context::addHidden('api_key', $request->bearerToken());
Context::addHidden(
    'session_token',
    $request->session()->getId()
);
```
Hidden context propagates to queued jobs just like regular context, but it's excluded from log output by design.
Beyond hidden context, you should also be deliberate about what ends up in your
per-message context arrays. A careless
Log::info('Request received.', $request->all()) will dump every form field
into your logs, including passwords and tokens.
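One hedge against that mistake is logging from an explicit allow-list instead; Laravel's Request offers only() and except() for this (the field names below are illustrative):

```php
// Prefer an explicit allow-list over $request->all()
Log::info('Request received.', $request->only(['email', 'plan']));

// Or strip known-sensitive keys when you must log broadly
Log::info('Request received.', $request->except([
    'password',
    'password_confirmation',
    'token',
]));
```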
Redacting sensitive keys with a Monolog processor
Hidden context still requires developers to remember to use it. A stronger safety net is a Monolog processor that intercepts every log record and redacts specified keys before they reach any handler, regardless of how the developer wrote the log call.
The redact-sensitive package provides exactly this. Install it via Composer:
```shell
composer require leocavalcante/redact-sensitive
```
The processor takes a map of key names and how many characters to leave visible:
```php
<?php
// app/Logging/RedactSensitiveFields.php

namespace App\Logging;

use RedactSensitive\RedactSensitiveProcessor;

class RedactSensitiveFields
{
    public function __invoke($logger)
    {
        $logger->pushProcessor(new RedactSensitiveProcessor([
            'password' => 0,
            'password_confirmation' => 0,
            'api_key' => 4,
            'token' => 0,
            'secret' => 0,
            'ssn' => 0,
            'credit_card' => -4,
            'authorization' => 0,
        ]));
    }
}
```
A positive number like 4 shows the first four characters
(mysu***************). A negative number like -4 shows the last four
(************1142), which is useful for credit card numbers. And 0 replaces
the entire value.
To wire this into your Laravel channel, add it to its tap array:
```php
// config/logging.php

'stderr' => [
    // ...existing config...
    'tap' => [
        App\Logging\JsonFormatter::class,
        App\Logging\RedactSensitiveFields::class,
    ],
],
```
Now even if someone writes Log::info('Login attempt.', $request->all()),
sensitive fields are masked before they're written anywhere. You can test this
with a quick route:
```php
Route::get('/test-redaction', function () {
    Log::info('login attempt', [
        'user_id' => 'usr-1234',
        'password' => 'secret123',
        'api_key' => 'sk-live-abc123xyz',
    ]);

    return response()->json([
        'message' => 'Check logs for redacted output.',
    ]);
});
```
The resulting log entry will show password fully masked and api_key
partially visible, while user_id passes through unchanged:
```json
{
  "message": "login attempt",
  "context": {
    "user_id": "usr-1234",
    "password": "*********",
    "api_key": "sk-l*************"
  },
  "level": 200,
  "level_name": "INFO",
  "channel": "local",
  "datetime": "2026-04-13T07:12:04.584784+00:00"
}
```
Keep in mind that this approach isn't foolproof: a developer can still log a sensitive value under a key name that isn't in the redaction list, so it doesn't remove the need for code review and team awareness around what gets logged.
It's also worth noting that sensitive values can leak through exception stack
traces, not just context arrays. PHP 8.2's
#[\SensitiveParameter]
attribute lets you mark function parameters so their values are replaced with a
placeholder in stack traces. Apply it to any parameter that accepts credentials,
tokens, or secrets.
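A minimal standalone sketch of the attribute in action (plain PHP, independent of Laravel):

```php
<?php

function authenticate(
    string $username,
    #[\SensitiveParameter] string $password,
): never {
    throw new RuntimeException('Auth service unavailable.');
}

try {
    authenticate('alice', 'hunter2');
} catch (RuntimeException $e) {
    // When argument capture is enabled, the trace typically shows the
    // password as Object(SensitiveParameterValue) rather than 'hunter2'.
    echo $e->getTraceAsString(), PHP_EOL;
}
```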
If your logs flow through an OpenTelemetry Collector (which we'll set up later in this guide), you can add further layers of redaction that strip sensitive attributes from log records and other telemetry signals before they leave your infrastructure.
This catches anything the application layer missed and handles cases where redaction rules make more sense to manage outside the application itself.
Capturing debug context only when errors occur
One of the hardest trade-offs in logging is choosing between verbosity and cost.
You want DEBUG-level detail when investigating a failure, but emitting it
continuously in production generates massive volumes of data that's expensive to
store and rarely needed.
Monolog's FingersCrossedHandler
solves this
by buffering all log records in memory without writing them anywhere. The moment
a record at or above a configurable trigger level arrives (typically ERROR),
the handler flushes the entire buffer to the wrapped handler.
To set it up, define a buffered channel in your logging configuration:
```php
// config/logging.php

'buffered' => [
    'driver' => 'monolog',
    'level' => 'debug',
    'handler' => \Monolog\Handler\FingersCrossedHandler::class,
    'handler_with' => [
        'handler' => new \Monolog\Handler\StreamHandler('php://stderr'),
        'activationStrategy' => new \Monolog\Handler\FingersCrossed\ErrorLevelActivationStrategy(
            \Monolog\Level::Error
        ),
    ],
    'tap' => [
        App\Logging\JsonFormatter::class,
        App\Logging\RedactSensitiveFields::class,
    ],
],
```
This channel uses FingersCrossedHandler to hold all log records in memory and
only flush them to stderr when a record at ERROR level or above arrives. If
the request completes without an error, the buffer is discarded and nothing is
written.
For this channel to receive your logs, you need to include it in your active
channel configuration. The most practical approach is to combine it with your
existing stderr channel through the LOG_STACK environment variable:
```ini
LOG_STACK=stderr,buffered
```
Note that LOG_LEVEL controls the minimum level for the stderr channel (which
references it), but the buffered channel hardcodes level: debug so it always
accepts every record into its buffer regardless of your production LOG_LEVEL
setting. Only the activationStrategy determines when the buffer is flushed.
You can test this by adding a route that either succeeds or fails:
```php
// routes/web.php

Route::get('/test-buffered/{fail?}', function (string $fail = 'no') {
    Log::debug('Step 1: validating input.');
    Log::debug('Step 2: checking inventory.');
    Log::info('Step 3: charging payment.');

    if ($fail === 'yes') {
        Log::error('Step 4: payment failed.');
    } else {
        Log::info('Step 4: payment succeeded.');
    }

    return response()->json([
        'message' => "Done (fail=$fail). Check logs.",
    ]);
});
```
Make sure your LOG_LEVEL environment variable is set to info, then hit the
/test-buffered/no endpoint. You'll see two INFO messages from the stderr
channel:
```
{"message":"Step 3: charging payment.","level_name":"INFO", ...}
{"message":"Step 4: payment succeeded.","level_name":"INFO", ...}
```
The buffered channel accumulated all four entries in-memory but discarded them
because no error occurred. Now, hit /test-buffered/yes (to simulate error
conditions) and you'll see an ERROR message from stderr as before, plus all
four entries (including the two DEBUG messages) flushed by the buffered
channel:
```
// stderr channel
{"message":"Step 3: charging payment.","level_name":"INFO", ...}
{"message":"Step 4: payment failed.","level_name":"ERROR", ...}

// buffered channel (the error + full context)
{"message":"Step 1: validating input.","level_name":"DEBUG", ...}
{"message":"Step 2: checking inventory.","level_name":"DEBUG", ...}
{"message":"Step 3: charging payment.","level_name":"INFO", ...}
{"message":"Step 4: payment failed.","level_name":"ERROR", ...}
```
That's the value: zero noise on success, full context on failure.
You'll notice that INFO and ERROR entries appear in both channels when the
buffer flushes. This duplication is inherent to the pattern: the buffered
channel needs to see the ERROR to trigger its flush, and stderr writes
INFO+ as its normal baseline.
If deduplication matters to you, use Log::channel() to route INFO and
ERROR entries to specific channels explicitly:
```php
Route::get('/test-buffered/{fail?}', function (string $fail = 'no') {
    Log::debug('Step 1: validating input.');
    Log::debug('Step 2: checking inventory.');
    Log::channel('stderr')->info('Step 3: charging payment.');

    if ($fail === 'yes') {
        Log::channel('buffered')->error('Step 4: payment failed.');
    } else {
        Log::channel('stderr')->info('Step 4: payment succeeded.');
    }

    return response()->json([
        'message' => "Done (fail=$fail). Check logs.",
    ]);
});
```
The Log::debug() calls still go through the default stack, where stderr
drops them (because LOG_LEVEL=info) and buffered accepts them into its
memory buffer. But INFO goes directly to stderr only, and ERROR goes
directly to buffered only, so there are no duplicate entries:
```
// stderr channel
{"message":"Step 3: charging payment.","level_name":"INFO", ...}

// buffered channel (flushed by the ERROR)
{"message":"Step 1: validating input.","level_name":"DEBUG", ...}
{"message":"Step 2: checking inventory.","level_name":"DEBUG", ...}
{"message":"Step 4: payment failed.","level_name":"ERROR", ...}
```
The out-of-order appearance is only a visual issue in your terminal. Each JSON entry includes a datetime field with microsecond precision, so any log aggregation platform will sort the records correctly regardless of the order in which they arrive.
Exception handling and error logging
Laravel's exception handler is the single most important piece of your logging infrastructure, because it's the last line of defense for errors that aren't caught by application code. Getting it right means the difference between having a complete record of every failure and having errors disappear silently.
How Laravel handles exceptions by default
When an unhandled exception occurs, Laravel's exception handler logs it at the
ERROR level and renders an appropriate response. The default behavior already
does several useful things: it serializes the exception class, message, file,
line, and full stack trace into the log entry.
In most cases, you don't need to override this behavior. What you do need is to ensure that the surrounding context is rich enough to make the logged exception actionable.
Adding context to exceptions
Laravel supports a context() method on exception classes. If your custom
exception defines this method, the returned array is automatically merged into
the log entry's context when the exception is reported:
```php
<?php

namespace App\Exceptions;

use RuntimeException;

class PaymentFailedException extends RuntimeException
{
    public function __construct(
        string $message,
        private string $orderId,
        private string $gateway,
        private string $errorCode,
        ?\Throwable $previous = null,
    ) {
        parent::__construct($message, 0, $previous);
    }

    public function context(): array
    {
        return [
            'order_id' => $this->orderId,
            'gateway' => $this->gateway,
            'error_code' => $this->errorCode,
        ];
    }
}
```
When this exception is thrown and reaches the handler, the resulting log entry
includes both the exception details and your structured business context
automatically. This is cleaner than catching the exception just to add context
to a manual Log::error() call.
Controlling exception reporting
Laravel's bootstrap/app.php lets you customize exception reporting behavior.
You can use reportable() to add logic for specific exception types, or
dontReport() to suppress exceptions that you know are benign:
```php
use App\Exceptions\PaymentFailedException;

->withExceptions(function (Exceptions $exceptions) {
    $exceptions->reportable(function (PaymentFailedException $e) {
        Log::channel('critical-alerts')->error(
            'Payment failure requires attention.',
            $e->context()
        );

        // Return false to prevent default reporting
        // (which would double-log), or omit the return
        // to let both the custom and default reporting
        // happen.
        return false;
    });
})
```
Avoiding duplicate logging
A common mistake is catching an exception, logging it manually, and then re-throwing it so that the exception handler logs it again:
```php
// This produces two log entries for the same error
try {
    $this->processPayment($order);
} catch (PaymentFailedException $e) {
    Log::error('Payment failed.', [
        'order_id' => $order->id,
    ]);

    throw $e; // The exception handler also logs this
}
```
The cleaner approach is to let the exception carry its own context (via the
context() method shown above) and let Laravel's exception handler do the
logging. If you need additional custom behavior, use reportable() in the
exception configuration rather than catch-and-rethrow patterns.
When you do need to catch an exception for control flow but still want it
reported, use Laravel's report() helper:
```php
try {
    $this->processPayment($order);
} catch (PaymentFailedException $e) {
    report($e);

    // Handle the failure gracefully without re-throwing
    return $this->fallbackResponse($order);
}
```
The report() helper sends the exception through the standard reporting
pipeline exactly once.
Connecting Laravel logs to OpenTelemetry
Everything covered so far (structured output, contextual enrichment, and disciplined exception handling) makes your logs useful within a single service. But production applications rarely consist of a single service. Your Laravel API might talk to a payment gateway, a notification service, a queue worker, and a database. When something fails, you need to follow the thread across all of them.
This is where OpenTelemetry changes the equation. By routing your Laravel logs through an OpenTelemetry pipeline, they're normalized into a vendor-neutral data model that can be correlated with distributed traces and metrics from every other component in your infrastructure.
Why OpenTelemetry matters for Laravel
The key benefit isn't just centralization (any log aggregator can do that) but semantic unification. When your logs conform to the OpenTelemetry log data model, every record carries standardized resource attributes (service name, version, deployment environment), severity levels mapped to a common scale, and trace/span identifiers that link the log entry to the distributed trace it belongs to.
This means you can start from a log entry that says "payment failed," jump to the distributed trace for that request, see every service involved in the transaction, identify which step introduced the latency or error, and drill into the specific span where the failure originated. Without OpenTelemetry, reconstructing this picture requires manually correlating timestamps and request IDs across separate logging systems.
Installing the OpenTelemetry PHP extension
OpenTelemetry's auto-instrumentation for PHP relies on a C extension that hooks into the engine to intercept function calls. Install it before adding any Composer packages:
```shell
pecl install opentelemetry
```
Enable it in your php.ini:
```ini
[opentelemetry]
extension=opentelemetry.so
```
For Docker-based deployments, add this to your Dockerfile:
```dockerfile
RUN pecl install opentelemetry \
    && docker-php-ext-enable opentelemetry
```
Verify the extension is loaded:
```shell
php -m | grep opentelemetry
```
Zero-code PSR-3 log export
The fastest way to get your Laravel logs into an OpenTelemetry pipeline requires
no application code changes at all. Because Laravel uses Monolog, and Monolog
implements the PSR-3 LoggerInterface, OpenTelemetry's PSR-3
auto-instrumentation can intercept every log call and forward it to the OTel
SDK.
Install the required packages:
```shell
composer require \
  open-telemetry/sdk \
  open-telemetry/exporter-otlp \
  open-telemetry/opentelemetry-auto-psr3 \
  google/protobuf \
  guzzlehttp/guzzle \
  guzzlehttp/psr7
```
Then configure the instrumentation through environment variables:
```ini
OTEL_PHP_AUTOLOAD_ENABLED=true
OTEL_PHP_PSR3_MODE=inject,export
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME=my-laravel-app
```
The OTEL_PHP_PSR3_MODE variable controls how the instrumentation behaves.
Setting it to inject,export enables both modes simultaneously:
inject adds traceId and spanId fields to the context of every log call, so
these identifiers appear in your local JSON log output. This is useful even if
you aren't exporting logs through OTel yet, because it gives you correlation
identifiers you can search for manually.
export converts each log record into an OpenTelemetry LogRecord and sends it
to the configured OTLP endpoint. Your Monolog JSON output continues to work as
before, and the OTel pipeline receives a parallel copy of every log entry in the
standardized data model.
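For example, with inject enabled, the earlier payment-failure entry would carry the identifiers in its context field, along these lines (illustrative output; the surrounding fields depend on your formatter):

```
{"message":"Payment processing failed.","context":{"order_id":"ord-817","traceId":"4bf92f3577b34da6a3ce929d0e0e4736","spanId":"00f067aa0ba902b7"}, ...}
```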
Adding Laravel-specific tracing
The PSR-3 instrumentation captures your logs, but it doesn't create traces for Laravel's framework-level operations. For that, install the Laravel auto-instrumentation package:
```shell
composer require \
  open-telemetry/opentelemetry-auto-laravel
```
This package automatically creates spans for HTTP requests routed through Laravel, database queries via Eloquent and the query builder, cache operations, and queue job processing. It doesn't require code changes; the OpenTelemetry PHP extension hooks into Laravel's internals at the engine level.
With both packages installed, your logs (via PSR-3 auto-instrumentation) carry
the trace_id and span_id of the active trace created by the Laravel
auto-instrumentation. The correlation happens automatically.
Explicit log export with the OTel Monolog handler
If you prefer more control over which channels export logs to OpenTelemetry, or if you want to keep the OTel export separate from your local logging pipeline, you can use the dedicated Monolog handler instead of the PSR-3 auto-instrumentation:
```shell
composer require \
  open-telemetry/opentelemetry-logger-monolog \
  open-telemetry/sdk \
  open-telemetry/exporter-otlp \
  google/protobuf \
  guzzlehttp/guzzle \
  guzzlehttp/psr7
```
Then create a custom channel that uses this handler:
```php
<?php
// app/Logging/OtelChannel.php

namespace App\Logging;

use Monolog\Logger;
use OpenTelemetry\API\Globals;
use OpenTelemetry\Contrib\Logs\Monolog\Handler as OTelHandler;

class OtelChannel
{
    public function __invoke(array $config): Logger
    {
        $logger = new Logger('otel');

        $loggerProvider = Globals::loggerProvider();

        $logger->pushHandler(new OTelHandler(
            $loggerProvider,
            \Monolog\Level::Info,
        ));

        return $logger;
    }
}
```
Register the channel in your logging configuration:
```php
// config/logging.php

'channels' => [
    'stack' => [
        'driver' => 'stack',
        'channels' => ['stderr', 'otel'],
        'ignore_exceptions' => false,
    ],

    'stderr' => [
        'driver' => 'monolog',
        'handler' => \Monolog\Handler\StreamHandler::class,
        'with' => [
            'stream' => 'php://stderr',
        ],
        'level' => env('LOG_LEVEL', 'info'),
        'tap' => [App\Logging\JsonFormatter::class],
    ],

    'otel' => [
        'driver' => 'custom',
        'via' => App\Logging\OtelChannel::class,
    ],
],
```
With this setup, your application writes JSON logs to stderr for local
collection and simultaneously exports the same records through the OpenTelemetry
SDK. Each pipeline is independently configurable: you can set different minimum
levels, apply different processors, or disable one without affecting the other.
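Because the pipelines are decoupled, switching off the OTel export doesn't require touching the stderr channel. One option, assuming you bootstrapped the SDK through its environment-based autoloading, is the standard SDK kill switch:

```shell
# Turns the OpenTelemetry SDK into a no-op. Globals::loggerProvider()
# then returns a no-op provider, so the 'otel' channel silently drops
# records while the 'stderr' channel keeps writing JSON logs.
OTEL_SDK_DISABLED=true
```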
What your logs look like in the OTel data model
When a Laravel log record reaches the OpenTelemetry Collector (whether via the PSR-3 auto-instrumentation or the explicit Monolog handler), it's transformed into the OTel log data model. If you enable the debug exporter on the Collector, the output looks something like this:
```
ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> service.name: Str(my-laravel-app)
     -> host.name: Str(web-01)
     -> os.type: Str(linux)
     -> process.runtime.name: Str(cli-server)
     -> process.runtime.version: Str(8.4.18)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.language: Str(php)
     -> telemetry.sdk.version: Str(1.13.0)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope io.opentelemetry.contrib.php.psr3
LogRecord #0
ObservedTimestamp: 2026-04-10 14:22:03.49... +0000 UTC
SeverityText: ERROR
SeverityNumber: Error(17)
Body: Str(Payment processing failed.)
Attributes:
     -> order_id: Str(ord-817)
     -> gateway: Str(stripe)
     -> error_code: Str(card_declined)
     -> customer_id: Str(cust-2041)
     -> request_id: Str(a3f8c9e1-...)
     -> user_id: Str(42)
Trace ID: 4bf92f3577b34da6a3ce929d0e0e4736
Span ID: 00f067aa0ba902b7
Flags: 1
```
Several things are worth noting here. The Resource attributes section contains
metadata about the service and runtime environment that the OTel SDK attaches
automatically. Your Laravel context data (order_id, request_id, user_id,
etc.) appears under Attributes, flattened as top-level keys rather than nested
inside context and extra objects. And the Trace ID and Span ID fields
link this log entry directly to the distributed trace for the request that
triggered the payment failure.
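The flattening step can be pictured with plain PHP. This is a simplified illustration of the mapping, not the instrumentation's actual code, and the exact key-collision behavior may differ:

```php
<?php
// Simplified sketch: a Monolog-style record's nested context and extra
// become one flat map of top-level OTel log attributes.
$record = [
    'message' => 'Payment processing failed.',
    'context' => ['order_id' => 'ord-817', 'gateway' => 'stripe'],
    'extra'   => ['request_id' => 'a3f8c9e1', 'user_id' => 42],
];

// Merge both buckets into a single flat attribute map.
$attributes = array_merge($record['context'], $record['extra']);

print_r($attributes);
```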
This is the payoff for everything you set up earlier. The structured JSON output
ensures your context survives the transformation. The Context facade ensures
every log entry carries request-scoped identifiers. And the OpenTelemetry
pipeline ensures those identifiers connect to the broader trace that spans your
entire distributed system.
Routing logs through the OpenTelemetry Collector
In production, your Laravel application shouldn't send logs directly to a backend. Instead, route them through an OpenTelemetry Collector that acts as a local aggregation point. The Collector can batch records, retry failed exports, add resource attributes, filter or transform data, and route telemetry to one or more backends.
A minimal Collector configuration for receiving Laravel logs and forwarding them looks like this:
```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    send_batch_size: 512
    timeout: 5s
  resource:
    attributes:
      - key: deployment.environment.name
        value: production
        action: upsert

exporters:
  otlphttp:
    endpoint: https://ingress.eu-west-1.aws.dash0.com
    headers:
      Authorization: "Bearer ${DASH0_AUTH_TOKEN}"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch, resource]
      exporters: [otlphttp]
```
The batch processor accumulates records and flushes them in groups, reducing the number of outbound HTTP requests. The resource processor adds deployment metadata that applies uniformly to every record, keeping this concern out of your application code.
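While you're wiring this up, it helps to see exactly what the Collector receives. One way is to fan the logs pipeline out to the Collector's built-in debug exporter alongside the real backend (a temporary tweak to the configuration above):

```yaml
# Add to otel-collector-config.yaml while debugging:
exporters:
  debug:
    verbosity: detailed   # print full resource, scope, and record details

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch, resource]
      exporters: [otlphttp, debug]   # backend and console, side by side
```

Remove the debug exporter once you've confirmed records are arriving, since detailed verbosity is noisy in production.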
Seeing it come together in Dash0
Once your Laravel logs are flowing through the OpenTelemetry Collector, sending them to Dash0 gives you a platform that treats the OTel data model as a first-class citizen. Because Dash0 is OpenTelemetry-native, your logs retain their full semantic structure from ingestion through to query time, without translation layers or proprietary schema mapping.
In practice, this means you can filter logs by any context attribute you
attached in your middleware (request_id, user_id, path), search across
services using the same field names, and click through from a log entry to the
distributed trace that produced it. If the trace spans multiple services, you
see the full picture: which service was called, how long each step took, and
where the error originated.
For teams that want to go further, Dash0's
Agent0 can investigate incidents using the
same structured context your logs carry. When every log entry includes a
request_id, order_id, and trace correlation, an AI agent has enough signal
to narrow down root causes without requiring you to manually reconstruct event
timelines.
To try this with the companion playground, sign up for a
free Dash0 trial and configure the Collector's
OTLP exporter to point at your Dash0 ingress endpoint. The playground's
docker-compose.yml includes a commented-out exporter configuration that you
can uncomment and fill in with your credentials.
Final thoughts
Laravel's logging system is significantly more capable than its default
configuration suggests. The Context facade, introduced in Laravel 11, changes
the game for contextual logging by making it trivial to attach request-scoped
metadata that propagates across log entries and queued jobs automatically.
Combined with structured JSON output and disciplined exception handling, this
gives you logs that are genuinely useful for production debugging rather than a
wall of text you scroll through hoping to spot something relevant.
OpenTelemetry takes this further by connecting your Laravel logs to the broader observability ecosystem. With the PSR-3 auto-instrumentation or the explicit Monolog handler, your logs become part of a unified data model where they can be correlated with distributed traces across every service in your infrastructure. The investment is minimal (a Composer package, a few environment variables, and a Collector configuration), but the debugging capability it unlocks is transformative.
For deeper coverage of Monolog's internals, including handlers, processors,
formatters, and the FingersCrossedHandler pattern, see our companion guide:
PHP Logging with Monolog.
The
companion repository
for this guide provides a runnable playground where you can experiment with
every pattern covered here.
Then sign up for a free Dash0 trial to see your Laravel logs alongside traces and metrics in a single, unified view.
Thanks for reading!
