
Last updated: April 16, 2026

Choosing a Go Logging Library in 2026

For most of Go's history, picking a logging library meant choosing between Logrus, zap, zerolog, and a handful of others that each brought their own API, idioms, and opinions about how structured logging should work.

That era is largely over. Since Go 1.21, log/slog provides a standard frontend that the ecosystem has converged around. That shift simplifies things, but it doesn't eliminate the decision entirely.

This guide covers what's worth considering in 2026: which libraries still matter, how they perform, where they differ, and when slog on its own is all you need.

1. Slog

If you're starting a new Go project, your application code should just use log/slog. Not because it's the fastest option or because it has the best API, but because it's in the standard library, the ecosystem has aligned behind it, and you can plug in a faster backend later if you actually need one.

The slog.Handler interface decouples your application code from the encoding engine underneath it. If slog's built-in JSON handler turns out to be too slow for a particular service, you can swap in a different backend by changing your initialization code without touching any of your logging statements.

A typical setup with the JSONHandler looks like this:

```go
opts := &slog.HandlerOptions{
	AddSource: true,
	Level:     slog.LevelInfo,
}
logger := slog.New(
	slog.NewJSONHandler(os.Stderr, opts),
)
slog.SetDefault(logger)
```

This gives you structured JSON output and source file attribution with no external dependencies. For runtime level control, set Level to a *slog.LevelVar instead of a fixed slog.Level; it can be updated safely from any goroutine.

The API supports three calling conventions:

```go
// Loosely typed key-value pairs (simplest to write)
slog.Info("request", "method", "GET", "status", 200)

// Context-accepting variant (recommended as the default)
slog.InfoContext(ctx, "request",
	slog.String("method", "GET"),
	slog.Int("status", 200),
)

// Typed attributes only (catches type errors at compile time)
logger.LogAttrs(ctx, slog.LevelInfo, "request",
	slog.String("method", "GET"),
	slog.Int("status", 200),
)
```

Another feature worth mentioning is the LogValuer interface, which lets you control how a type appears in log output. Its main use case is log redaction without relying on developers to remember to sanitize or omit sensitive values at every call site:

```go
type APIKey string

// LogValue is called by slog whenever an APIKey is logged,
// so the raw key never reaches the output.
func (APIKey) LogValue() slog.Value {
	return slog.StringValue("REDACTED")
}

slog.Info("auth", "api_key", APIKey("sk-secret")) // api_key=REDACTED
```

slog also provides the cleanest path into OpenTelemetry-native logs. The otelslog bridge implements slog.Handler and routes your log records through the OpenTelemetry Logs SDK.

Instead of writing JSON entries to stdout, your logs become first-class OTel signals, exported alongside traces and metrics through whatever pipeline you've configured:

```go
import (
	"log/slog"

	"go.opentelemetry.io/contrib/bridges/otelslog"
	"go.opentelemetry.io/otel/log/global"
)

// After initializing your OTel SDK and setting
// the global LoggerProvider:
logger := otelslog.NewLogger(
	"otelslog-demo",
	otelslog.WithLoggerProvider(global.GetLoggerProvider()),
)
slog.SetDefault(logger)
```

One important detail is that the bridge reads span context from the context.Context you pass in, which means you need to use the context-accepting methods (InfoContext(), ErrorContext(), and so on) and have an active span in that context.

When you do, the resulting OTel log records carry the matching trace and span IDs, allowing your observability backend to automatically correlate logs with the traces they belong to.

```go
// This gets trace correlation (context carries the active span):
logger.InfoContext(ctx, "processing request",
	slog.String("order_id", orderID),
)

// This doesn't (no context, no span to correlate):
logger.Info("processing request",
	slog.String("order_id", orderID),
)
```

Where slog falls short

slog gets a lot right, but its API has some real weaknesses. The loosely typed key-value API (slog.Info("msg", "key", val)) is the most convenient way to write logging calls, but it's also the most error-prone.

Passing an odd number of arguments, mistyping a key, or accidentally swapping a key and value all compile without complaint and produce silently malformed output at runtime. The typed slog.Attr constructors fix this but make every log call noticeably more verbose.

slog also doesn't include Trace or Fatal levels by default and omits features like deduplication, ring-buffer logging, and sampling that third-party libraries have long treated as standard.

The ecosystem has addressed most of these gaps. sloglint catches malformed log calls and enforces consistent argument styles, and a growing collection of community packages fills in everything else, from sampling and enrichment to log routing and testing.

2. Zerolog

Zerolog remains the top performer in encoding benchmarks, and its chained API is one of the more pleasant interfaces in the Go ecosystem. Each method returns the same *Event, so calls flow as a single expression:

```go
logger := zerolog.New(os.Stderr).
	With().
	Timestamp().
	Caller().
	Logger()

logger.Info().
	Str("method", "GET").
	Int("status", 200).
	Dur("latency", 47*time.Millisecond).
	Msg("request completed")
```

Its context integration is also first-class. You can attach a logger to a context.Context, accumulate fields across the lifetime of a request, and retrieve it anywhere:

```go
ctx = log.With().
	Str("request_id", "abc-123").
	Logger().
	WithContext(ctx)

log.Ctx(ctx).Info().Msg("processing")
```

It also provides a sampling system that permits a burst of messages per time window and then throttles to a probabilistic rate, which is useful in high-throughput systems where you need to cap log volume.

If raw encoding speed is your primary concern and you're willing to use zerolog's API directly, it's the fastest option available with a mature community behind it.

You can also use it as a slog backend through community adapters, which gives you slog's standard API with zerolog's encoder underneath. You'll lose some speed compared to calling zerolog natively (the slog handler abstraction adds some overhead), but it's still faster than slog's built-in JSON handler:

```go
zl := zerolog.New(os.Stderr).
	With().Timestamp().Logger()

slog.SetDefault(
	slog.New(
		slogzerolog.Option{
			Level:  slog.LevelDebug,
			Logger: &zl,
		}.NewZerologHandler(),
	),
)
```

The main footgun in zerolog's API is that if you forget to call .Msg() or .Send() at the end of a chain, the log entry is silently dropped. The zerolog.Event is also pooled, so an unterminated chain never returns its event to the pool.

3. Zap

Zap is the most widely deployed high-performance Go logger, battle-tested at Uber's scale for years. Its central design choice is offering two APIs in one package: the zero-allocation typed Logger for hot paths and the loosely-typed SugaredLogger for everything else.

```go
logger, _ := zap.NewProduction()
defer logger.Sync()

// Typed Logger: zero allocations
logger.Info("request completed",
	zap.String("method", "GET"),
	zap.Int("status", 200),
	zap.Duration("latency", 47*time.Millisecond),
)

// SugaredLogger: slightly slower, more concise
sugar := logger.Sugar()
sugar.Infow("request completed",
	"method", "GET", "status", 200,
)
```

Where zap distinguishes itself is the zapcore.Core interface, which separates encoding, output, and level filtering into composable pieces, allowing you to wire up sophisticated pipelines:

```go
// jsonEncoder, consoleEncoder, and fileOut are assumed to be
// constructed elsewhere (e.g. zapcore.NewJSONEncoder and an
// opened log file).
core := zapcore.NewTee(
	zapcore.NewCore(
		jsonEncoder, fileOut, zap.InfoLevel,
	),
	zapcore.NewCore(
		consoleEncoder, os.Stderr, zap.DebugLevel,
	),
)
logger := zap.New(
	core,
	zap.AddCaller(),
	zap.AddStacktrace(zap.ErrorLevel),
)
```

Its testing support is among the best in the ecosystem. The zaptest/observer package captures structured log entries for programmatic assertions, making it straightforward to verify that your code logs the right information under the right conditions:

go
12345678910
core, logs := observer.New(zap.DebugLevel)
logger := zap.New(core)
doSomething(logger)
require.Equal(t, 1,
logs.FilterField(
zap.String("event", "login"),
).Len(),
)

It also provides an slog adapter that makes using zap as an slog backend straightforward:

```go
zapL, _ := zap.NewProduction()
slog.SetDefault(
	slog.New(zapslog.NewHandler(zapL.Core())),
)
```

One notable omission: despite being arguably the most extensible logger in the ecosystem, zap doesn't support custom log levels, and there is no built-in TRACE level.

4. phuslu/log

phuslu/log is the fastest Go logging library available today. It's also one of the least known, with ~840 GitHub stars to zerolog's 12,000 and zap's 24,000, likely because searching for a library called log turns up everything except what you're looking for.

It started as a zerolog-inspired project and then systematically eliminated every remaining allocation. The API will feel immediately familiar to zerolog users:

```go
log.Info().
	Str("foo", "bar").
	Int("n", 42).
	Msg("hello world")
```

But where it differentiates itself is in three areas:

  1. Its printf-style logging achieves zero allocations even with interface{} arguments.

  2. It ships a capable FileWriter with built-in size-based rotation, max backup count, and timestamp-based filenames, saving you the dependency on lumberjack or similar:

    ```go
    logger := log.Logger{
    	Level: log.InfoLevel,
    	Writer: &log.FileWriter{
    		Filename:   "/var/log/app/service.log",
    		MaxSize:    50 * 1024 * 1024,
    		MaxBackups: 7,
    		LocalTime:  true,
    	},
    }
    ```
  3. It includes writers for syslog, journald, and Windows Event Log out of the box, plus an AsyncWriter backed by a channel for non-blocking writes.

phuslu/log also has built-in slog support through its .Slog() method, so you can use it as a slog backend without a third-party adapter:

```go
slog.SetDefault((&log.Logger{
	Level:      log.InfoLevel,
	TimeField:  "time",
	TimeFormat: log.TimeFormatUnixMs,
	Caller:     1,
}).Slog())
```

The primary limitation is community size: fewer examples, fewer integrations, fewer people answering questions, and a single maintainer. For something as important as logging, that's a meaningful risk to factor into the decision.

There's also no built-in sampling, and support for OpenTelemetry is a known gap that has been requested but not yet implemented.

5. Logrus

Logrus taught a generation of Go developers what structured logging could look like. With over 25k GitHub stars and over 249k importing packages, it remains the most-used Go logging library by raw count. But that number reflects historical adoption rather than current momentum.

The project's README is clear: logrus is in maintenance mode, with no new features planned. If you have an existing codebase with deep logrus integration and a mature hook setup, the best path forward is to migrate to slog incrementally.

Start by identifying the hot paths, the request handlers and background workers that log most frequently, and move those to slog with a performant backend. The rest of the codebase can continue using logrus until you get to it.

Definitely don't start new code against logrus. The performance gap is too large (~15x slower than slog and ~50x slower than zerolog), the map[string]interface{} architecture can't be fixed without breaking the API, and the Go ecosystem is broadly moving away from it.

6. charmbracelet/log

The libraries above are all optimized for production services where logs are consumed by machines. If you're building a CLI tool where a human reads the output directly in a terminal, charmbracelet/log is worth a look.

It's built by the Charm team (the people behind Bubble Tea, Lip Gloss, and the rest of the Charm TUI ecosystem) and it's designed specifically for terminal output that's pleasant to read. Logs get intelligent coloring, icons, and spacing that make them scannable at a glance:

```go
logger := log.NewWithOptions(os.Stderr, log.Options{
	ReportTimestamp: true,
	ReportCaller:    true,
	Level:           log.DebugLevel,
})

logger.Info("starting server", "host", "localhost",
	"port", 8080)
logger.Error("connection failed", "err", err,
	"retries", 3)
```

The v2 release brought automatic color downsampling through the colorprofile library, so output adapts to whatever terminal it's running in. Logs look correct whether you're in a true-color terminal, a basic 16-color SSH session, or piped to a file.

It also supports text, JSON, and logfmt output formats, and implements slog.Handler, so you can use it as a slog backend. That means you can still write your CLI application against *slog.Logger and get Charm's styled output without coupling your code to their API:

```go
import (
	"log/slog"
	"os"

	clog "github.com/charmbracelet/log"
)

handler := clog.NewWithOptions(os.Stderr, clog.Options{
	ReportTimestamp: true,
	Level:           clog.DebugLevel,
})
slog.SetDefault(slog.New(handler))

slog.Info("using slog with charm output")
```

It also integrates with Gum for logging in shell scripts, supports sub-loggers through log.With(), and includes a custom log.Fatal level that the standard slog deliberately omits.

charmbracelet/log isn't a replacement for zerolog or zap in a production backend. It's the right choice when your audience is a developer staring at a terminal, and you want the output to be as polished as the rest of your CLI.

How they compare in terms of performance

Now that you've seen what each library offers, here's how they stack up on raw performance. The numbers below were measured using their latest versions locally on a 16-core machine, logging a message with pre-accumulated context fields to io.Discard to isolate encoding overhead from disk I/O:

Library          ns/op     B/op     allocs/op
phuslu/log       26.60     0        0
zerolog          30.61     0        0
zap              53.69     0        0
zap (Sugared)    86.88     16       1
slog             116.70    0        0
logrus           1,750     924      20
charm            2,437     1,113    22

And as slog backends:

Backend       ns/op     allocs/op
phuslu/log    43.26     0
zerolog       57.67     0
zap           77.21     0
slog (std)    116.70    0

This second table is worth paying attention to as it shows the cost of using slog as your frontend with a faster encoder underneath. phuslu/log behind slog is still nearly 3x faster than slog's built-in JSON handler, and zerolog behind slog is roughly 2x faster. You give up some speed compared to calling these libraries natively, but you keep the slog API and its ecosystem benefits.

One thing most comparisons omit is that once you introduce real workloads and I/O, the differences between the top libraries largely disappear and they become nearly interchangeable on throughput.

Picking the right Go logging library

The majority of services should write all application code against *slog.Logger and start with slog.NewJSONHandler. That will usually be fast enough, and you won't need to think about it again.

If profiling shows logging as a bottleneck, swap in a faster backend without changing your application code. As the slog backend benchmarks show, phuslu/log and zerolog behind slog are 2-3x faster than the built-in handler while keeping the standard API. That's the simplest upgrade path.

If you need maximum throughput and are willing to forgo slog's frontend for a library-specific API directly, zerolog offers the best combination of speed and community size. Zap is the better choice when you value extensibility and production tooling like AtomicLevel, zapcore.Core composition, and zaptest/observer for testing.

If you're using OpenTelemetry, slog plus the otelslog bridge gives you the cleanest integration. Make sure to use the Context variants of the log methods with active spans for trace correlation.

For CLI tools, using charmbracelet/log as the backend for slog gives you polished terminal output without sacrificing the standard API.

Final thoughts

Whichever combination you choose, the important thing is that your logs are structured, correlated with traces where possible, and flowing into a backend that lets you actually use them to solve problems quickly.

If you're looking for an observability platform that's built around OpenTelemetry and treats logs, traces, and metrics as connected signals rather than separate tools, give Dash0 a try today.

Authors
Ayooluwa Isaiah