Last updated: March 20, 2026

PHP Logging with Monolog: A Practitioner's Guide

If you write PHP apps, you likely already use Monolog. It ships with Laravel, Symfony, and most other frameworks that matter in the PHP ecosystem. It implements the PSR-3 LoggerInterface so your application code can remain decoupled from any specific logging backend. And it's been around long enough that the community has collectively figured out how to configure it for just about any use case you can think of.

But there is a gap between using Monolog and using it well. Most codebases configure a single handler that writes to a file or to stdout, scatter a few $logger->info() calls around the codebase, and call it a day. The result is logs that technically exist but fail to answer the one question you care about when something goes wrong: why did this request fail?

This guide takes a different approach. We will build up a logging system in PHP that produces structured, contextual, and observable log data from the very beginning. By the end, your Monolog logs will flow through an OpenTelemetry pipeline into a platform like Dash0 where they can be searched, filtered, and correlated with traces and metrics.

Let's begin!

Prerequisites

Before proceeding, make sure the following are installed on your machine:

  • PHP 8.4 or later (8.5 is the latest stable release at the time of writing, but 8.4 remains under active support and works perfectly).
  • Composer (latest).

Some familiarity with PHP will be helpful, but you do not need prior experience with Monolog or OpenTelemetry to follow along.

Getting started with Monolog

Create a new directory for the project and install Monolog:

bash
mkdir php-monolog-logging && cd php-monolog-logging
composer require monolog/monolog

This installs Monolog 3.x (the current major version) along with its PSR-3 dependency. Composer generates the usual vendor directory and lock file.

Create an index.php file:

php
<?php

declare(strict_types=1);

require __DIR__ . '/vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Level;

$logger = new Logger('app');
$logger->pushHandler(
    new StreamHandler('php://stdout', Level::Debug)
);

$logger->info('Application started.');

Run it:

bash
php index.php

You should see output like:

text
[2026-03-19T10:15:22.438291+00:00] app.INFO: Application started. [] []

This single line already tells you a few things about how Monolog structures a log record. The timestamp comes first, followed by the channel name (app) and the severity level (INFO). The message comes next, and the two empty brackets at the end represent the context and extra arrays, both of which are empty for now.

A note on PSR-3

When you installed Monolog, Composer also pulled in the psr/log package. This is the PSR-3 LoggerInterface, a standard contract published by the PHP-FIG that defines how a logger should behave.

The interface specifies the eight severity-level methods you will use throughout this guide (debug(), info(), notice(), warning(), error(), critical(), alert(), emergency()) along with a generic log() method and a convention for passing contextual data.

This keeps your code decoupled from any specific logging implementation, so if you ever need to swap Monolog for a different PSR-3 logger, none of your application code changes. It's also the reason OpenTelemetry's zero-code PSR-3 instrumentation (which we'll cover later) can hook into Monolog without any Monolog-specific configuration.
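To make this concrete, here is a minimal sketch of what that decoupling looks like in practice. The `OrderService` class is a hypothetical illustration; the point is that it depends only on the PSR-3 interface, never on Monolog itself:

```php
<?php

declare(strict_types=1);

use Psr\Log\LoggerInterface;

// Hypothetical service: it only knows about the PSR-3 contract,
// so any compliant logger (Monolog or otherwise) can be injected.
final class OrderService
{
    public function __construct(
        private readonly LoggerInterface $logger
    ) {
    }

    public function placeOrder(string $orderId): void
    {
        $this->logger->info('Placing order.', ['order_id' => $orderId]);
    }
}
```

Swapping the logging backend now means changing one line in your bootstrap code, not touching every class that logs.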

Understanding how Monolog works

Before going further, it's worth spending a moment on how Monolog's internals fit together. The library is built around a pipeline model with three key concepts: the Logger, the Handler, and the Processor.

The Logger is the object you interact with in your application code. When you call a method like $logger->info(), it constructs a log record object containing the timestamp, the channel name, the severity level, the message, and any context you provided. That record is then passed down the handler stack.

Handlers are the most versatile part of the system. At their simplest, a handler writes a record to a destination: a file, stdout (like the StreamHandler above), a database or some remote API.

But Monolog handlers do much more than route data. They control whether a record gets processed at all by enforcing minimum severity levels. They can buffer records in memory and flush them in a batch at the end of a request. They can wrap other handlers to add fault tolerance, conditional logic, or sampling.

Some handlers, like FingersCrossedHandler, fundamentally change the logging strategy by silently accumulating records and only releasing them when a trigger condition is met. Each logger can have multiple handlers arranged in a stack (evaluated in Last-In, First-Out (LIFO) order), which is how you build sophisticated logging pipelines from simple, composable pieces.

Processors sit between the logger and the handlers. Before a record reaches any handler, every registered processor gets a chance to enrich it with additional data. This is where you add things like request IDs, memory usage, or the current git commit hash.

Such cross-cutting context does not belong in individual log statements; it belongs in infrastructure that runs automatically. Monolog formalizes this split by keeping two separate arrays on every record: context for per-message data you pass explicitly, and extra for data that processors inject. This distinction is central to Monolog's design and will become important when we get to contextual logging.

There's also the Formatter, which each handler can use to control the shape of the final output. The default LineFormatter produces the human-readable text you saw above, but for production use we will switch to JsonFormatter almost immediately.

Controlling the signal-to-noise ratio with log levels

Monolog follows the RFC 5424 severity levels, which define eight levels in decreasing order of urgency. In Monolog 3, these are represented by the Monolog\Level backed enum:

Level     | Enum Value        | Numeric Value | Description
DEBUG     | Level::Debug      | 100           | Detailed diagnostic information for developers
INFO      | Level::Info       | 200           | Normal operational events worth recording
NOTICE    | Level::Notice     | 250           | Unusual but non-erroneous conditions
WARNING   | Level::Warning    | 300           | Potential problems that deserve attention
ERROR     | Level::Error      | 400           | A failed operation
CRITICAL  | Level::Critical   | 500           | A serious failure in a major component
ALERT     | Level::Alert      | 550           | Action must be taken immediately
EMERGENCY | Level::Emergency  | 600           | The system is unusable

Each level has a corresponding method on the logger:

php
$logger->debug('Detailed diagnostic information.');
$logger->info('Normal operational events.');
$logger->notice('Unusual but not erroneous conditions.');
$logger->warning('Potential problems worth attention.');
$logger->error('A failed operation that needs investigation.');
$logger->critical('A serious failure in a major component.');
$logger->alert('Action must be taken immediately.');
$logger->emergency('The system is unusable.');

When you assign a minimum level to a handler, any record with a severity below that threshold is silently discarded by that handler. For example, setting a handler to Level::Warning means it will only process WARNING, ERROR, CRITICAL, ALERT, and EMERGENCY records:

php
$logger->pushHandler(
    new StreamHandler('php://stdout', Level::Warning)
);

$logger->info('This will be ignored.');
$logger->warning('This will be logged.');

A practical guideline is to set your production handlers to Level::Info or Level::Warning and reserve Level::Debug for local development or short-lived diagnostic sessions, so you don't wake up to a costly observability bill.

Reddit user laments too much logging

Making levels configurable

Hard-coding the log level in your source code means you need a new deployment to change verbosity. A better approach is to read it from an environment variable:

php
function getLogLevel(): Level
{
    $level = strtoupper(
        getenv('LOG_LEVEL') ?: 'INFO'
    );

    return Level::fromName($level);
}

$logger->pushHandler(
    new StreamHandler('php://stdout', getLogLevel())
);

The Level::fromName() method accepts a case-insensitive string like "debug" or "WARNING" and returns the corresponding enum value. If the string is invalid, it throws an UnhandledMatchError, which lets you know that the configuration is wrong immediately.
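Failing fast is usually the right call for configuration errors, but if you'd rather fall back to a safe default than crash on a typo, one hedged alternative is a tolerant lookup. This sketch assumes Level::tryFromName() is available in your Monolog 3 version (it returns null instead of throwing):

```php
<?php

declare(strict_types=1);

use Monolog\Level;

// Fall back to Info when LOG_LEVEL is unset or misspelled,
// instead of letting an invalid name crash the bootstrap.
function getLogLevelSafe(): Level
{
    $name = getenv('LOG_LEVEL') ?: 'INFO';

    return Level::tryFromName($name) ?? Level::Info;
}
```

The trade-off: a silent fallback hides misconfiguration, so consider logging a warning when the fallback path is taken.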

Monolog handlers also expose a setLevel() method, so you can adjust verbosity at any point during execution if your application logic demands it.

php
$handler->setLevel(Level::Error);

Setting up structured logging with JSON

The default LineFormatter output is fine for local development, but it's not suitable for production. Plain text logs are difficult to parse programmatically, expensive to search at scale, and fragile when messages contain special characters or multi-line content like stack traces.

The industry standard for production logging is structured JSON, and switching to it in Monolog takes a single line:

php
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;
use Monolog\Level;

$handler = new StreamHandler('php://stdout', Level::Info);
$handler->setFormatter(new JsonFormatter());

$logger = new Logger('app');
$logger->pushHandler($handler);

$logger->info('Order placed successfully.', [
    'order_id' => 'ord-48291',
    'customer_id' => 'cust-1024',
    'total' => 79.99,
]);

This produces newline-delimited JSON output (formatted here for readability):

json
{
    "message": "Order placed successfully.",
    "context": {
        "order_id": "ord-48291",
        "customer_id": "cust-1024",
        "total": 79.99
    },
    "level": 200,
    "level_name": "INFO",
    "channel": "app",
    "datetime": "2026-03-19T15:00:33.836309+00:00",
    "extra": {}
}

By switching your PHP logs to JSON, you've laid the foundation for making them a genuinely useful observability signal rather than a stream of text that is of little help when debugging today's complex systems.

Enriching PHP logs with contextual attributes

Switching to JSON format alone will not make your logs useful. A perfectly structured JSON record that says {"message": "Something went wrong"} is no more helpful than its plain-text equivalent.

The value of structured logging only materializes when each record carries enough contextual data to answer the questions you'll inevitably ask during an incident: which request triggered this? Which user was affected? What were the inputs? Without that context, you'll be left correlating symptoms manually and guessing at causality.

Monolog provides two complementary mechanisms for attaching this context: the context array (per-message data you pass explicitly) and processors (cross-cutting data injected automatically into every record).

In the example above, we passed contextual data directly to the info() method as the second argument. This is the context array, and it's the right place for any data that is specific to the event being logged:

php
$logger->error('Payment processing failed.', [
    'order_id' => 'ord-48291',
    'gateway' => 'stripe',
    'error_code' => 'card_declined',
]);

But some context applies to more than a single log line. Consider a function that makes several log calls during its execution: you likely want every one of those calls to include the entity ID being processed, not just the first one.

Zoom out further and you will find context that should appear on every log produced during an entire request, such as distributed tracing IDs.

And zoom out once more and you will find truly global context, like the application version or deployment environment, that belongs on every record the process emits.

Manually passing all of this to every event would be tedious, error-prone, and almost certainly inconsistent across a team. You need a mechanism that injects context automatically at the right scope without touching individual log statements.

This is what processors are for. A processor is a callable that receives a LogRecord and returns a new LogRecord with additional data in the extra array.

Built-in processors

Monolog ships with several built-in processors. Here are some of the most commonly used ones for adding global context:

php
use Monolog\Processor\GitProcessor;
use Monolog\Processor\ProcessIdProcessor;
use Monolog\Processor\MemoryUsageProcessor;
use Monolog\Processor\HostnameProcessor;
use Monolog\Processor\IntrospectionProcessor;
// Adds the current git branch and commit SHA.
$logger->pushProcessor(new GitProcessor());
// Adds the process ID of the running PHP process.
$logger->pushProcessor(new ProcessIdProcessor());
// Adds current memory usage.
$logger->pushProcessor(new MemoryUsageProcessor());
// Adds the server hostname.
$logger->pushProcessor(new HostnameProcessor());
// Adds the file, line, class, and function
// that triggered the log call.
$logger->pushProcessor(new IntrospectionProcessor());

After adding these processors, a log record's extra array will look something like:

json
{
    [...]
    "extra": {
        "file": "/app/src/OrderService.php",
        "line": 51,
        "class": "App\\OrderService",
        "function": "processPayment",
        "hostname": "falcon",
        "memory_usage": "2 MB",
        "process_id": 70657,
        "git": {
            "branch": "master",
            "commit": "5e1d261efebe7ca251de526ea68444a46b03fa78"
        }
    }
}

Writing a custom processor

The built-in processors cover common use cases, but you'll often need to add application-specific context. A custom processor is any callable that receives a LogRecord and returns one; an invokable class works well for anything reusable:

php
use Monolog\LogRecord;

class AppVersionProcessor
{
    public function __invoke(LogRecord $record): LogRecord
    {
        return $record->with(
            extra: array_merge(
                $record->extra,
                [
                    'app_version' => getenv('APP_VERSION')
                        ?: 'unknown',
                    'environment' => getenv('APP_ENV')
                        ?: 'production',
                ]
            )
        );
    }
}

$logger->pushProcessor(new AppVersionProcessor());

Since LogRecord is readonly in Monolog 3, you cannot mutate its properties directly. The with() method returns a new instance with the specified changes, leaving the original untouched.

As long as you set APP_VERSION and APP_ENV in your environment, they will be reflected in the extra fields of your logs:

json
{
    [...],
    "extra": {
        "app_version": "1.2.3",
        "environment": "production"
    }
}
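A full class is not always necessary; a closure works just as well for one-off enrichment. This sketch attaches a per-request correlation ID (the request_id key is just an illustration, not a Monolog convention):

```php
<?php

declare(strict_types=1);

use Monolog\LogRecord;

// Generate one ID per process/request and stamp it on every record,
// so all log lines from the same request can be grouped together.
$requestId = bin2hex(random_bytes(8));

$logger->pushProcessor(
    function (LogRecord $record) use ($requestId): LogRecord {
        return $record->with(
            extra: array_merge($record->extra, [
                'request_id' => $requestId,
            ])
        );
    }
);
```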

Handlers control where and how logs are written

Monolog's handlers fall into two broad categories:

  1. The first are destination handlers that write records somewhere: a file, stdout, a socket, a database.

  2. The second are wrapper handlers that do not write anything themselves but instead modify the behavior of other handlers they wrap.

A wrapper might buffer records and flush them in a batch, or swallow exceptions so a failing handler does not take down the rest of the stack, or silently accumulate records and release them only when an error occurs.

The power of Monolog's handler system comes from composing these two categories together: you pick a destination, then wrap it in one or more behavioral handlers to build a logging strategy that matches your production requirements.

Monolog ships with dozens of handlers, but most applications only need a handful. Here are some of the ones worth knowing.

Writing to stdout/stderr with StreamHandler

In containerized deployments like Docker or Kubernetes, your application should write logs to standard output or standard error and let the container runtime handle collection and routing. This follows the twelve-factor app methodology and is the simplest, most portable approach:

php
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;
use Monolog\Level;

$stdout = new StreamHandler('php://stdout', Level::Info);
$stdout->setFormatter(new JsonFormatter());
$logger->pushHandler($stdout);

Writing to log files

StreamHandler can also write to a file path instead of a PHP stream. Simply pass a file path as the first argument and Monolog will create the file if it does not already exist:

php
$handler = new StreamHandler(
    '/path/to/app.log',
    Level::Info
);
$handler->setFormatter(new JsonFormatter());
$logger->pushHandler($handler);

The obvious problem with writing to a single file is that it grows without bound. For this reason, Monolog provides RotatingFileHandler, which automatically creates a new file each day and can remove files older than a configurable threshold:

php
use Monolog\Handler\RotatingFileHandler;

// Creates files like app-2026-03-19.log
// and keeps the most recent 10 days.
$handler = new RotatingFileHandler(
    '/path/to/app.log',
    10, // maximum number of files
    level: Level::Info
);
$handler->setFormatter(new JsonFormatter());
$logger->pushHandler($handler);

This is convenient for development and low-traffic applications, but for busy production systems on Linux, the dedicated logrotate utility is the more robust choice.

It handles compression, retention policies, and post-rotation hooks (like signaling a process to reopen its file descriptors) at the OS level, which keeps the concern of log rotation entirely outside your application code.

When using logrotate, pair it with a plain StreamHandler pointed at a fixed file path and let the OS take care of the rest.
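A minimal logrotate policy for this setup might look like the following. The path and retention values are illustrative; copytruncate is used here so the OS truncates the file in place rather than requiring PHP to reopen its file descriptor after rotation:

```text
/var/log/app/app.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

Note that copytruncate can drop a few log lines written during the copy window; for strict completeness, the alternative is a postrotate script that signals your process to reopen the file.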

Capturing detailed context only when it matters

One of the hardest trade-offs in logging is choosing between verbosity and cost. You want DEBUG-level detail when investigating a failure, but logging at that level continuously in production generates enormous volumes of data that is expensive to store and almost never read.

The FingersCrossedHandler eliminates this trade-off by silently accumulating all records in memory without writing them anywhere. The moment a record at or above a configurable trigger level (e.g., ERROR) arrives, the handler flushes the entire buffer to the wrapped handler.

The result is that during normal operation, nothing is written. But the instant something goes wrong, you get a complete timeline of everything that happened leading up to the failure:

php
use Monolog\Handler\FingersCrossedHandler;
use Monolog\Handler\StreamHandler;

$inner = new StreamHandler('php://stdout', Level::Debug);
$inner->setFormatter(new JsonFormatter());

$logger->pushHandler(
    new FingersCrossedHandler($inner, Level::Error)
);

This is particularly valuable for request-scoped logging in web applications. Each request starts with a clean buffer, and if the request completes successfully, nothing is written.

But if an error occurs, you get the full diagnostic context without having paid the cost of logging it on every successful request.
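One caveat: in long-running processes such as queue workers, the buffer does not clear itself between units of work. A sketch of the usual remedy, assuming the handler stack from above ($jobs and handle() are hypothetical placeholders for your worker loop):

```php
<?php

declare(strict_types=1);

// In a long-running worker, reset buffering state after each job so
// one job's accumulated DEBUG records never leak into the next
// job's flush. Logger::reset() propagates to resettable handlers
// (including FingersCrossedHandler) and processors.
foreach ($jobs as $job) {
    try {
        $job->handle();
    } finally {
        $logger->reset();
    }
}
```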

Other notable handlers to be aware of

Monolog ships with a large collection of handlers beyond the ones covered above. A few others worth knowing about:

  • DeduplicationHandler wraps another handler and suppresses duplicate log records within a configurable time window. This is useful for preventing log storms when the same error fires repeatedly in a tight loop.

  • SamplingHandler wraps another handler and only forwards a configurable fraction of records (such as 1 in 10). This is one way to keep high-volume logging enabled in production without paying the full storage cost.

  • FilterHandler lets you specify an exact range of levels that should be forwarded to the wrapped handler, rather than just a minimum threshold. For example, you could route only WARNING and NOTICE records to one destination while sending ERROR and above to another.

  • FallbackGroupHandler tries each wrapped handler in order and stops as soon as one succeeds. This is useful for building failover chains where a primary destination is preferred but a backup is available if it fails.

  • NullHandler discards all records. It is primarily useful for library authors who want to provide a default logger that does nothing unless the consuming application configures one.
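As a sketch of the FilterHandler routing described above (the destination paths are illustrative):

```php
<?php

declare(strict_types=1);

use Monolog\Handler\FilterHandler;
use Monolog\Handler\StreamHandler;
use Monolog\Level;

// Forward only NOTICE..WARNING to one file; a plain minimum-level
// handler takes care of ERROR and above separately.
$logger->pushHandler(new FilterHandler(
    new StreamHandler('/var/log/app/notices.log'),
    Level::Notice,  // minimum level forwarded
    Level::Warning  // maximum level forwarded
));

$logger->pushHandler(
    new StreamHandler('/var/log/app/errors.log', Level::Error)
);
```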

You can find the complete list of built-in handlers in the Monolog documentation.

Logging errors and exceptions

Errors and exceptions deserve special attention in any logging strategy. The goal is not just to record that something failed, but to capture enough context to understand the chain of events that led to the failure.

Since Monolog understands PHP exceptions natively, you can simply pass an exception in the context array under the exception key, and the JsonFormatter will serialize it, including the class name, message, code, file, line, and full stack trace:

php
function connectToDatabase(): void
{
    try {
        throw new \PDOException(
            'Connection refused on port 5432'
        );
    } catch (\PDOException $inner) {
        throw new \RuntimeException(
            'Database unavailable',
            previous: $inner
        );
    }
}

try {
    connectToDatabase();
} catch (\RuntimeException $e) {
    $logger->alert('Service degraded: database layer down.', [
        'exception' => $e,
    ]);
}

This produces a JSON record where the exception details are fully structured and queryable:

json
{
    "message": "Service degraded: database layer down.",
    "context": {
        "exception": {
            "class": "RuntimeException",
            "message": "Database unavailable",
            "code": 0,
            "file": "/home/ayo/dev/dash0/demo/php-monolog-logging/index.php:72",
            "previous": {
                "class": "PDOException",
                "message": "Connection refused on port 5432",
                "code": 0,
                "file": "/home/ayo/dev/dash0/demo/php-monolog-logging/index.php:68"
            }
        }
    },
    "level": 550,
    "level_name": "ALERT",
    "channel": "app",
    "datetime": "2026-03-19T17:34:35.294565+00:00",
    "extra": {}
}

Handling uncaught exceptions

Exceptions that escape all try/catch blocks cause PHP to terminate the script with a fatal error. You can intercept these with set_exception_handler() to ensure they are logged before the process exits:

php
set_exception_handler(
    function (\Throwable $e) use ($logger): void {
        $logger->critical(
            'Uncaught exception, application terminating.',
            ['exception' => $e]
        );
    }
);

In a web application framework like Laravel or Symfony, the framework's error handler already takes care of this, but for standalone scripts, CLI tools, or queue workers, setting a global exception handler is a sensible precaution.
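Monolog also ships a helper that wires up exception, error, and fatal-error handling in one call, which covers more failure modes than set_exception_handler() alone. A minimal sketch:

```php
<?php

declare(strict_types=1);

use Monolog\ErrorHandler;

// Registers handlers for uncaught exceptions, PHP errors, and
// fatal errors, routing them all through the given logger.
ErrorHandler::register($logger);
```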

Connecting Monolog to OpenTelemetry

In a production environment, your PHP application is only one source of log data among many. The database emits its own logs, the web server writes access logs, the container runtime captures stdout, background workers produce their own output, and each of these sources uses a different format, a different severity scheme, and a different set of field names.

When you need to investigate an incident that cuts across these boundaries, the lack of a common data model turns what should be a straightforward query into an exercise in manual correlation across incompatible formats.

OpenTelemetry solves this by providing a vendor-neutral standard for collecting and exporting telemetry data. When your Monolog logs flow through an OpenTelemetry pipeline, they are normalized into the OTel log data model alongside logs from every other component in your infrastructure.

Severity levels are mapped to a common scale, contextual attributes follow consistent naming conventions, and resource metadata (service name, version, deployment environment) is attached uniformly. The result is that every log record in your system, regardless of its origin, becomes queryable through the same interface with the same field names.

A second benefit is log-trace correlation. Once your logs participate in the OTel ecosystem, each record can carry a trace_id and span_id that link it to a specific distributed trace. Instead of searching for timestamps and guessing at causality across services, you click on a log entry and see the complete request path it belongs to. This connection between logs and traces is what transforms logging from a debugging aid into a full observability signal.

The fastest path: zero-code PSR-3 instrumentation

Since Monolog implements the PSR-3 LoggerInterface (as discussed earlier), you can use OpenTelemetry's automatic PSR-3 instrumentation to bridge your logs into an OTel pipeline with no code changes whatsoever.

The auto-instrumentation packages rely on the OpenTelemetry PHP extension to hook into function calls at the engine level. You need to install this extension before the Composer packages will work. The easiest way is via PECL:

bash
sudo pecl install opentelemetry

Then enable it in your php.ini:

ini
[opentelemetry]
extension=opentelemetry.so

Alternatively, you can use PHP's module system directly, which ensures the extension is loaded across all SAPIs (CLI, FPM, Apache) at once:

bash
# Adjust the PHP version to match yours (8.4, 8.5, etc.)
echo "extension=opentelemetry.so" | \
  sudo tee /etc/php/8.4/mods-available/opentelemetry.ini

sudo phpenmod opentelemetry

Verify it is loaded with:

bash
php -m | grep opentelemetry

With the extension in place, install the following Composer packages:

bash
composer require \
  open-telemetry/sdk \
  open-telemetry/exporter-otlp \
  open-telemetry/opentelemetry-auto-psr3 \
  google/protobuf \
  guzzlehttp/guzzle \
  guzzlehttp/psr7

The exporter-otlp package is responsible for sending your telemetry to an OTLP endpoint over HTTP, but it does not bundle its own HTTP client. Instead, it depends on the PSR-18 (HTTP Client) and PSR-17 (HTTP Factories) interfaces, so you need to provide a concrete implementation. Guzzle satisfies both. If you already have a different PSR-18 client in your project (like symfony/http-client), you can use that instead.

Then set the following environment variables to activate the SDK autoloader and configure the export:

bash
OTEL_PHP_AUTOLOAD_ENABLED=true
OTEL_PHP_PSR3_MODE=inject,export
OTEL_LOGS_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_SERVICE_NAME=my-php-app

That's it! Every call to a PSR-3 LoggerInterface method anywhere in your application is now automatically bridged to OpenTelemetry. The SDK autoloader bootstraps at the start of each PHP request (via Composer's file autoloading mechanism), configures the global LoggerProvider from the OTEL_* environment variables, and the PSR-3 auto-instrumentation hooks into every LoggerInterface call to route records through the OTel pipeline.
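Since these are plain environment variables, you can try the whole pipeline from the command line without touching any config files (this assumes the index.php script from earlier and a collector listening on localhost:4318):

```shell
# One-off run with the OTel SDK autoloader and PSR-3 bridge active.
OTEL_PHP_AUTOLOAD_ENABLED=true \
OTEL_PHP_PSR3_MODE=inject,export \
OTEL_LOGS_EXPORTER=otlp \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
OTEL_SERVICE_NAME=my-php-app \
php index.php
```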

The OTEL_PHP_PSR3_MODE variable controls how the auto-instrumentation behaves. The default inject mode adds traceId and spanId fields (if available) into the Monolog context array so they appear in your existing log output.

When export is included, each record is also converted to an OTel LogRecord and sent through the configured exporter.

With export mode active and a collector running with the debug exporter, you'll see output like this for each log record:

text
ResourceLog #0
Resource SchemaURL:
Resource attributes:
     -> service.name: Str(my-php-app)
     -> host.name: Str(falcon)
     -> host.arch: Str(x86_64)
     -> os.type: Str(linux)
     -> os.description: Str(6.6.87.2-...)
     -> process.runtime.name: Str(cli)
     -> process.runtime.version: Str(8.4.18)
     -> process.pid: Int(13830)
     -> process.executable.path: Str(/usr/bin/php8.4)
     -> telemetry.sdk.name: Str(opentelemetry)
     -> telemetry.sdk.language: Str(php)
     -> telemetry.sdk.version: Str(1.13.0)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope io.opentelemetry.contrib.php.psr3
LogRecord #0
ObservedTimestamp: 2026-03-20 13:44:44.551... +0000 UTC
Timestamp: 1970-01-01 00:00:00 +0000 UTC
SeverityText:
SeverityNumber: Info(9)
Body: Str(Incoming request.)
Attributes:
     -> http.request.method: Str(POST)
     -> http.route: Str(/api/v2/orders)
     -> server.address: Str(api.acme.io)
     -> user_agent.original: Str(AcmeMobile/3.4.1 ...)
     -> client.address: Str(203.0.113.42)
     -> user.id: Str(usr_8f3a2c91)
     -> user.role: Str(customer)
     -> tenant.id: Str(tenant_acme_corp)
Trace ID:
Span ID:
Flags: 0

The Resource attributes section contains metadata about the process and environment that the OTel SDK attaches automatically, such as the service name, hostname, OS details, PHP version, and SDK version.

Your Monolog context data appears under Attributes, flattened as top-level keys. The Trace ID and Span ID fields are empty in this example because there was no active trace; in a web application with tracing enabled, they would be populated automatically, linking every log entry to its originating request.

Explicit control: the OTel Monolog handler

If you want finer control over which loggers participate in the OTel pipeline, the open-telemetry/opentelemetry-logger-monolog package provides a dedicated Monolog handler that you can add to specific loggers as part of your handler stack:

bash
composer require \
  open-telemetry/opentelemetry-logger-monolog \
  open-telemetry/sdk \
  open-telemetry/exporter-otlp \
  google/protobuf \
  guzzlehttp/guzzle \
  guzzlehttp/psr7
php
<?php

declare(strict_types=1);

require __DIR__ . '/vendor/autoload.php';

use Monolog\Logger;
use Monolog\Level;
use Monolog\Handler\StreamHandler;
use Monolog\Formatter\JsonFormatter;
use OpenTelemetry\API\Globals;
use OpenTelemetry\Contrib\Logs\Monolog\Handler as OTelHandler;

$logger = new Logger('app');

// Local console output (always active)
$consoleHandler = new StreamHandler(
    'php://stdout',
    Level::Debug
);
$consoleHandler->setFormatter(new JsonFormatter());
$logger->pushHandler($consoleHandler);

// OTel export (active when the SDK is configured)
$loggerProvider = Globals::loggerProvider();
$otelHandler = new OTelHandler(
    $loggerProvider,
    Level::Info,
);
$logger->pushHandler($otelHandler);

$logger->info('This log goes to both stdout and OTel.', [
    'order_id' => 'ord-48291',
]);

The OTelHandler sits alongside your existing handlers as part of the normal stack. When a LogRecord reaches it, the handler converts it into the OpenTelemetry model and routes it through the OTel SDK's LoggerProvider (retrieved using Globals::loggerProvider()) for export.

The handler also converts Monolog's context and extra arrays into OTel log record attributes. By default, these are nested under context.* and extra.* prefixes:

text
Attributes:
     -> context: Map({"order_id":"ord-48291"})

If your keys already follow the OpenTelemetry semantic conventions and you want them as top-level attributes, set the OTEL_PHP_MONOLOG_ATTRIB_MODE=otel environment variable:

text
Attributes:
     -> order_id: Str(ord-48291)

Centralizing your Monolog logs in an observability platform

With your Monolog logs flowing through an OpenTelemetry pipeline, the next step is sending them to a backend that can make them useful at scale. Any OTLP-compatible observability platform will accept the data your collector exports, and because your logs already conform to the OTel data model, there is no proprietary translation step or vendor-specific agent required.

The real payoff comes from choosing a backend that is built around the OTel data model rather than one that bolts OTel support onto a legacy architecture. When logs, traces, and metrics all share the same resource attributes and semantic conventions natively, the barriers between signals disappear. You can start from a spike in error rate on a dashboard, drill into the logs that contributed to it, and jump from a single log entry to the distributed trace that produced it, all without mentally translating between incompatible schemas. Every structured attribute you attached in Monolog becomes a first-class filter, queryable across every service and every signal type.

Dash0 is one such platform, designed from the ground up around OpenTelemetry, which means there's no translation layer that loses fidelity and no penalty for using standard OTel attributes. The investment you made in log quality and OpenTelemetry integration pays off directly in faster debugging, root cause analysis, and the kind of cross-service visibility that standalone JSON logs simply cannot provide.

Sign up for a free Dash0 trial to see your PHP logs alongside traces and metrics in a single, unified view.

Final thoughts

Monolog has earned its position as the standard PHP logging library because it is flexible enough to handle almost any requirement. But that flexibility can be a double-edged sword if it leads to ad hoc configurations that produce noisy, unstructured, and ultimately useless log data.

The patterns in this guide are designed to prevent that outcome. Structured JSON output makes your logs machine-parsable from the get-go, processors inject consistent context without cluttering your application code, and OpenTelemetry connects your logs to the broader observability ecosystem, giving you a unified data model and trace correlation across your entire infrastructure.

The best time to invest in all of this is well before you need it. Your future self, debugging a production incident with well-structured context-rich logs and a trace ID to follow, will be immensely grateful.

For further reading, check out the Monolog documentation and the OpenTelemetry PHP documentation.

Thanks for reading!

Authors
Ayooluwa Isaiah