Last updated: April 19, 2026

Choosing a Python Logging Library in 2026

Python's logging ecosystem has always looked different from most languages. Rather than leaving developers to choose between competing third-party options, Python shipped a comprehensive logging module in the standard library from day one. That head start means most Python code already uses logging at some level, and every third-party alternative builds on top of it, wraps around it, or deliberately replaces it.

But the standard library module has real friction points. Configuration is verbose, structured output requires extra work, and the API carries design decisions from the early 2000s that feel dated against modern expectations. That friction is why other libraries exist and continue to grow.

This guide covers what's worth considering in 2026: which libraries matter, how they compare, where they overlap with the standard module, and when each one makes sense.

1. The standard library logging module

The standard logging module is where most Python applications start, and for good reason. It ships with Python, every third-party framework and library emits logs through it, and the entire ecosystem of handlers, formatters, and integrations (including OpenTelemetry) is built around its interfaces.

Even if you end up adopting a different library for your application code, you'll still interact with the logging module because that's what your dependencies use under the hood.

In production, most Python applications configure logging through logging.config.dictConfig. If you've worked with Django, you'll recognize the pattern: a LOGGING dictionary in your settings that declares formatters, handlers, and logger routing in one place. The same approach works in any Python application.

Here's a minimal configuration that logs JSON to stdout:

```python
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "json": {
            "()": "pythonjsonlogger.json.JsonFormatter",
            "format": "%(asctime)s %(name)s %(levelname)s %(message)s",
        },
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "formatter": "json",
            "stream": "ext://sys.stdout",
        },
    },
    "root": {
        "level": "INFO",
        "handlers": ["stdout"],
    },
}

logging.config.dictConfig(LOGGING)
```

This pairs the logging module with python-json-logger for structured JSON output. The dictConfig approach keeps logging configuration declarative, separate from application code, and easy to override per environment.

Beyond the basics, the standard module's handler ecosystem is where its flexibility shows. QueueHandler and QueueListener push logging off the main thread for latency-sensitive paths. MemoryHandler gives you ring-buffer behavior: it accumulates records in memory and only flushes when a record at or above a threshold severity arrives, so you capture the debug context leading up to an error without paying I/O cost on every call. And the Filter interface lets you selectively suppress, modify, or route records in flight before they reach any handler.
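As a minimal sketch of the queue-based pattern, the handler attached to the logger only enqueues records, while a background listener does the actual (potentially slow) I/O:

```python
import logging
import logging.handlers
import queue

log_queue: "queue.Queue[logging.LogRecord]" = queue.Queue(-1)

# The handler on the logger only enqueues records, so the
# calling thread never blocks on handler I/O.
queue_handler = logging.handlers.QueueHandler(log_queue)

# The listener drains the queue on a background thread and
# forwards each record to the real handlers.
stream_handler = logging.StreamHandler()
listener = logging.handlers.QueueListener(
    log_queue, stream_handler, respect_handler_level=True
)

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(queue_handler)

listener.start()
logger.info("handled off the main thread")
listener.stop()  # flushes remaining records before returning
```

The logger name and message here are illustrative; the same wiring works for any slow handler, such as an HTTP or syslog sink.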

The standard module also provides the cleanest path into OpenTelemetry-native logs. The opentelemetry-instrumentation-logging package hooks directly into the standard library's logging infrastructure, injecting trace and span IDs into every log record automatically. Because the OTel Python SDK was designed around logging.Handler, you can route your log records through the OpenTelemetry pipeline seamlessly:

```python
import logging

from opentelemetry.instrumentation.logging import LoggingInstrumentor

# Injects trace_id, span_id, and
# resource.service.name into log records
LoggingInstrumentor().instrument(
    set_logging_format=True,
)

# Or attach the OTel handler directly
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import (
    OTLPLogExporter,
)

logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter())
)
handler = LoggingHandler(logger_provider=logger_provider)
logging.getLogger().addHandler(handler)
```

When you pair this with active spans in your application, the resulting OTel log records carry matching trace and span IDs. Your observability backend can then correlate logs with the traces they belong to automatically.

Where logging falls short

Even with dictConfig handling the setup, the standard module demands more upfront ceremony than most alternatives. You need to understand the relationship between loggers, handlers, formatters, and filters before you can configure anything non-trivial, and small mistakes (a missing disable_existing_loggers: False, a handler attached to the wrong logger name) can silently swallow log output in ways that are hard to debug.

Per-request context propagation is possible through contextvars combined with a custom Filter that injects fields into every log record automatically (our Python logging guide covers this pattern in detail). It works, but it requires you to write and wire up the filter yourself while libraries like structlog and Loguru provide this out of the box with less ceremony.
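A minimal sketch of that pattern, with an illustrative request_id field (the variable and logger names are placeholders):

```python
import contextvars
import logging

request_id_var: contextvars.ContextVar[str] = contextvars.ContextVar(
    "request_id", default="-"
)

class ContextFilter(logging.Filter):
    """Copies the current contextvar value onto every record."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id_var.get()
        return True  # never drop the record, only enrich it

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(levelname)s %(request_id)s %(message)s")
)
handler.addFilter(ContextFilter())

logger = logging.getLogger("app.requests")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Set once per request (e.g. in middleware); every log call in
# the same context then carries the field automatically.
request_id_var.set("abc-123")
logger.info("order created")
```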

2. Loguru

Loguru is the most popular third-party Python logging library on GitHub, with over 21,000 stars, and its popularity comes from a simple proposition: it removes almost all of the configuration boilerplate that the standard module demands.

You only need to import a pre-configured logger object and start writing log statements immediately:

```python
from loguru import logger

logger.info(
    "Request processed",
    method="GET",
    status=200,
    latency_ms=47,
)
```

There's no handler setup, no formatter configuration, no getLogger(__name__) pattern. By default, Loguru writes colored, human-readable output to stderr, which is the right default for development. But when you need to change the output destination, format, or filtering behavior, the entire configuration API is a single add() function:

```python
import sys

from loguru import logger

# Remove the default stderr handler
logger.remove()

# JSON output to stdout for production
logger.add(
    sys.stdout,
    serialize=True,  # enables JSON output
    level="INFO",
)
```

The serialize=True flag is worth noting because it converts every log record to JSON before sending it to the configured destination. This means you can get structured output without writing a custom formatter or installing an extra package.

Exception handling is another area where Loguru shines. The @logger.catch decorator wraps a function and logs the full traceback with local variable values when an exception occurs, while the logger.exception() method does the same thing inline:

```python
@logger.catch
def process_order(order_id: str):
    # If this raises, Loguru logs the full
    # traceback with variable values
    order = db.get_order(order_id)
    return order.process()
```

Loguru also provides bind() for attaching contextual fields to a logger instance, and contextualize() as a context manager for scoped context that cleans itself up:

```python
with logger.contextualize(request_id="abc-123"):
    logger.info("Processing started")
    do_work()
    logger.info("Processing complete")

# request_id is removed from context here
```

For projects that already use the standard logging module, Loguru provides an InterceptHandler pattern that routes all standard library log records through Loguru's pipeline, so you can adopt it incrementally without rewriting existing code:

```python
import logging

from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        level = logger.level(record.levelname).name
        logger.opt(
            depth=6, exception=record.exc_info
        ).log(level, record.getMessage())

logging.basicConfig(
    handlers=[InterceptHandler()],
    level=0,
    force=True,
)
```

The main tradeoff with Loguru is that it uses a single global logger object, which means configuration is process-wide. In applications where different components need different logging behavior, you have to rely on sink-level filtering (the filter argument to add()) rather than the named logger hierarchy that the standard module provides.

Loguru also doesn't have native OpenTelemetry integration at the time of writing. You can bridge it through the standard library interceptor pattern, but it's an extra layer of indirection compared to libraries that hook into logging.Handler directly.

3. structlog

structlog takes a fundamentally different approach from both the standard module and Loguru. Instead of treating log messages as format strings that happen to carry some context, it treats every log entry as a dictionary of key-value pairs that passes through a configurable chain of processors.

```python
import structlog

log = structlog.get_logger()

log.info(
    "request_processed",
    method="GET",
    path="/api/orders",
    status=200,
    latency_ms=47.3,
)
```

The output you get depends entirely on your processor configuration. In development, structlog can render colorized, human-readable console output through its ConsoleRenderer (which uses Rich for pretty exception formatting if installed). In production, you swap to JSONRenderer and get machine-parseable structured logs without changing a single logging call:

```python
import logging

import structlog

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.processors.JSONRenderer(),
    ],
    wrapper_class=structlog.make_filtering_bound_logger(
        logging.INFO
    ),
    context_class=dict,
    logger_factory=structlog.PrintLoggerFactory(),
)
```

The processor chain is the core idea. Each processor is a callable that receives the logger, the method name, and the event dictionary, and returns a modified event dictionary (or raises DropEvent to suppress the record). This makes it straightforward to build custom processing logic like scrubbing sensitive data, sampling, enrichment, or conditional routing:

```python
def redact_sensitive_fields(logger, method, event_dict):
    for key in ("password", "token", "api_key"):
        if key in event_dict:
            event_dict[key] = "[REDACTED]"
    return event_dict
```

Context management in structlog uses Python's contextvars module, which means it works correctly across asyncio tasks and thread boundaries without manual propagation. You bind context at the start of a request and it flows through your entire call stack automatically:

```python
import structlog

log = structlog.get_logger()

structlog.contextvars.bind_contextvars(
    request_id="abc-123",
    user_id="user-456",
)

# All subsequent log calls in this context
# will include request_id and user_id
log.info("order_created", order_id="ord-789")

# Clean up when the request ends
structlog.contextvars.unbind_contextvars(
    "request_id", "user_id"
)
```

structlog also integrates tightly with the standard library. You can configure it to use logging as its output backend, which means all of logging's handler ecosystem (file rotation, syslog, queues, the OpenTelemetry handler) is available to structlog without any additional adapters:

```python
import structlog

structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
)
```

This dual-mode operation is structlog's biggest strength for production systems. You get structlog's ergonomic API and processor pipeline for your application code, with the standard library's mature handler infrastructure for output routing. It also means structlog inherits OpenTelemetry support "for free" through the standard library integration.

The main downside is the learning curve. structlog's documentation is thorough, but the concepts (bound loggers, processor chains, wrapper classes, logger factories) take time to internalize. The initial configuration can feel overwhelming compared to Loguru's single add() call, and getting the processor chain right for your specific needs requires understanding how all the pieces fit together.

4. picologging

picologging is a Microsoft-backed project that reimplements the standard library's logging module in C for raw speed. The goal is a drop-in replacement that runs 4 to 17 times faster without requiring any code changes:

```python
import picologging as logging

logging.basicConfig()

logger = logging.getLogger()
logger.info("A log message!")
logger.warning("A log message with %s", "arguments")
```

The idea is compelling, but the project has stalled. The last PyPI release (v0.9.3) was in September 2023, and the repository has seen minimal activity since. It never left beta, and not every feature of the standard module is implemented. Python 3.13 and 3.14 aren't supported in released builds.

It's mentioned here because it still appears in comparison articles and search results, and the approach itself remains interesting. If the project resumes development, it could become a meaningful option for applications where logging throughput is a genuine bottleneck. But as of 2026, it's not something you should depend on for production use.

Performance considerations

Python logging performance is rarely the bottleneck in a real application. Network I/O, database queries, and serialization overhead dominate most request lifecycles by orders of magnitude. That said, if you're logging in a tight loop or processing events at very high throughput, the differences between libraries do become measurable.

The standard logging module's main cost is LogRecord creation. Each call constructs a new object, resolves the caller's frame information (if enabled), and runs it through the handler chain. In benchmarks, a simple log call with a single handler runs in the low microseconds range.
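Two standard idioms keep that cost down when a level is disabled; a small sketch (the logger name and summarize helper are illustrative stand-ins):

```python
import logging

logger = logging.getLogger("perf.example")
logger.setLevel(logging.INFO)

payload = {"items": list(range(1000))}

def summarize(data: dict) -> str:
    # Stand-in for an expensive serialization step.
    return f"{len(data['items'])} items"

# %-style arguments are only interpolated if the record is
# actually emitted, so this disabled DEBUG call stays cheap
# (unlike an f-string, which formats eagerly).
logger.debug("payload=%s", payload)

# For genuinely expensive argument construction, guard the call
# so the work is skipped entirely at disabled levels.
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("summary=%s", summarize(payload))
```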

Loguru's overhead is comparable to the standard module for most use cases, with the serialize=True JSON mode adding the expected cost of JSON serialization. The library's internal dispatch is slightly more expensive than raw logging because of the global logger's sink routing, but the difference is negligible in practice.

structlog's performance depends heavily on the processor chain configuration. A minimal chain with JSONRenderer is competitive with the standard module. Adding processors for timestamp formatting, context merging, and exception rendering increases the per-call cost proportionally, but each processor is doing useful work that you'd otherwise be doing in a custom formatter.

For the vast majority of Python applications, any of these libraries is fast enough. If profiling shows logging as a bottleneck, the first thing to check is whether you're logging too much at too high a frequency, not which library you're using.

Picking the right Python logging library

For libraries and packages that other people will import, use the standard logging module directly. This is a hard rule. Anything else forces your dependency choices on downstream consumers.
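The conventional pattern for library code is a module-level named logger with a NullHandler, so the library stays silent unless the application configures output:

```python
import logging

# Module-level logger named after the package, so applications
# can route or silence it through the standard hierarchy.
logger = logging.getLogger(__name__)

# NullHandler swallows records when the application has not
# configured logging, without imposing any output destination.
logger.addHandler(logging.NullHandler())

def do_work() -> None:
    logger.debug("library internals, visible only if the app opts in")
```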

For new applications where developer experience matters and you want to start logging quickly with sensible defaults, Loguru is the fastest path to production-ready logging. Its single global logger and add() configuration model eliminate the boilerplate that slows teams down, and the InterceptHandler pattern means you can capture logs from third-party libraries that use the standard module.

For applications where structured logging is a first-class requirement, particularly in microservice architectures where logs need to be machine-parseable and carry rich context across request boundaries, structlog is the strongest choice. Its processor pipeline gives you the most control over what your log records contain and how they're processed, and its standard library integration means you don't lose access to the logging handler ecosystem.

If you're already using the standard module and need structured JSON output without adopting a new framework, a lightweight JSON formatter like python-json-logger gets you there with minimal effort and no API changes. It's not a standalone logger, but it solves the most common pain point with the standard module in a few lines of configuration.

If you're using OpenTelemetry, the standard logging module gives you the most direct integration path. The OTel Python SDK's LoggingHandler attaches to the root logger and routes log records through the OTel pipeline as first-class signals, correlating them with traces automatically. structlog achieves the same thing through its standard library backend. Loguru can get there through the InterceptHandler bridge, but it's an extra layer.

Final thoughts

The library matters less than the practices around it. Structured output, consistent context propagation, sensible log levels, and good field hygiene are what make logs useful in production. Get those right and any library on this list will serve you well.

If you're looking for an observability platform that's built around OpenTelemetry and treats logs, traces, and metrics as connected signals rather than separate tools, give Dash0 a try.

Authors
Ayooluwa Isaiah