Last updated: July 21, 2025
Logging in Go with Slog: A Practitioner's Guide
Logging in Go has come a long way. For years, the community relied on the simple standard log package or turned to powerful third-party libraries like zap and zerolog.
With the introduction of log/slog in Go 1.21, the language now has a native, high-performance, structured logging solution designed to be the new standard.
slog isn't just another logger; it's a new foundation that provides a common API (the frontend) and separates logging logic from the final output, which is controlled by various logging implementations (the backend).
This guide will take you through slog from its fundamentals to advanced patterns, showing you how to make logging a useful signal for observing your applications.
Understanding slog fundamentals
The log/slog package is built around three core types: the Logger, the Handler, and the Record. The Logger is the frontend you'll interact with, the Handler is the backend that does the actual logging work, and the Record is the data passed between them.
Record
A Record represents a single log event. It contains all the necessary information about the event, including:

- The time of the event.
- The severity level (INFO, WARN, etc.).
- The log message.
- All structured key-value attributes.

Essentially, a Record is the raw data for each log entry before it's formatted.
Handler
A Handler is an interface that's responsible for processing Records. It's the engine that determines how and where logs are written. It's responsible for:

- Formatting the Record into a specific output, like JSON or plain text.
- Writing the formatted output to a destination like the console or a file.

The log/slog package includes built-in concrete TextHandler and JSONHandler implementations, but you can create custom handlers to meet any requirement. This interface is what makes slog so flexible.
Logger
The Logger is the entry point for creating logs, and it's what provides the user-facing API with methods like Info(), Debug(), and Error().
When you call one of these methods, the Logger creates a Record with the message, level, and attributes you provided. It then passes that Record to its configured Handler for processing.
Here’s how the entire process works:
```go
// Creates a new Logger that uses a JSONHandler to write to standard output
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

// This call creates a Record and passes it to the JSONHandler
logger.Info("user logged in", "user_id", 123)
```
Since the JSONHandler is configured to log to stdout, this yields:

```json
{"time":"...","level":"INFO","msg":"user logged in","user_id":123}
```
A closer look at the Logger API
The slog.Logger type offers a flexible API that's designed to handle various logging scenarios, from simple messages to complex, context-aware events. Let's explore its key methods below.
Level-based methods
The most common way to log is through the four level-based methods: Debug(), Info(), Warn(), and Error(), each corresponding to a specific severity level:

```go
logger.Info("an info message")
```

Output:

```json
{"time":"...","level":"INFO","msg":"an info message"}
```
slog also provides a context-aware version for each level, such as InfoContext(). These variants accept a context.Context as their first argument, allowing context-aware handlers (if configured) to extract and log values carried within the context:

```go
logger.InfoContext(context.Background(), "an info message")
```
Note that slog's context-aware methods will not automatically pull values from the provided context when using the built-in handlers. You must use a context-aware handler for this pattern to work.
For more programmatic control or when using custom levels, you can use the generic Log() and LogAttrs() methods, which require you to specify the level explicitly:

```go
logger.Log(context.Background(), slog.LevelInfo, "an info message")
logger.LogAttrs(context.Background(), slog.LevelInfo, "an info message")
```
Adding contextual attributes to your logs
After choosing a level and a log message for an event, the next step is to add contextual attributes, which let you enrich your log entries with structured, queryable data.
slog provides a few ways to do this. The most convenient is to pass them as a sequence of alternating keys and values after the log message:

```go
logger.Info("incoming request", "method", "GET", "status", 200)
```

Output:

```json
{
  "time": "...",
  "level": "INFO",
  "msg": "incoming request",
  "method": "GET",
  "status": 200
}
```
This convenience comes with a significant drawback. If you provide an odd number of arguments (e.g., a key without a value), slog doesn't panic or return an error. Instead, it silently creates a broken log entry by pairing the value-less field with a special !BADKEY key:

```go
// The `resource` key is missing a value
logger.Warn("permission denied", "user_id", 12345, "resource")
```

Output:

```json
{
  [...],
  "!BADKEY": "resource"
}
```
This silent failure is an API footgun that can corrupt your logging data, and you might only discover the problem during a critical incident when your observability tools fail you.
To guarantee correctness, you can use the strongly-typed slog.Attr helpers. They make it impossible to create an unbalanced pair by catching such errors at compile time:

```go
logger.Warn("permission denied",
    slog.Int("user_id", 12345),
    slog.String("resource", "/api/admin"),
)
```
While slightly more verbose, using slog.Attr is the most reliable way to log in Go. It ensures your logs are always well-formed and safe from runtime surprises.
Enforcing consistency with linters
While using slog.Attr is the safer approach, there's nothing stopping anyone from using the simpler key-value style in a different part of the codebase.
The solution is to turn this best practice into an automated, enforceable rule using a linter. For slog, the best tool for this is sloglint.
You’ll typically integrate it into your development environment and CI/CD pipeline through golangci-lint:
.golangci.yml:

```yaml
linters:
  default: none
  enable:
    - sloglint
  settings:
    sloglint:
      # Enforce using attributes only.
      # This will raise an error for any key-value pair arguments.
      attr-only: true
```
By adding this simple check, you guarantee that every log statement in your project adheres to the safest and most consistent style, preventing !BADKEY occurrences across the entire project.
A tour of slog levels
slog operates with four severity levels. Internally, each level is just an int, and the gaps between them are intentional to leave room for custom levels:

- slog.LevelDebug (-4)
- slog.LevelInfo (0)
- slog.LevelWarn (4)
- slog.LevelError (8)
All loggers are configured to log at slog.LevelInfo by default, meaning that DEBUG messages will be suppressed:

```go
logger.Debug("a debug message")
logger.Info("an info message")
logger.Warn("a warning message")
logger.Error("an error message")
```

Output:

```json
{"time":"2025-07-17T10:32:26.364917642+01:00","level":"INFO","msg":"an info message"}
{"time":"2025-07-17T10:32:26.364966625+01:00","level":"WARN","msg":"a warning message"}
{"time":"2025-07-17T10:32:26.36496905+01:00","level":"ERROR","msg":"an error message"}
```
If you need to run expensive operations to prepare data before logging it, check logger.Enabled() first to confirm that the desired log level is active:

```go
if logger.Enabled(context.Background(), slog.LevelDebug) {
    // This code will not run when the logger's level is INFO or higher
    logger.Debug("operation complete", "data", getExpensiveDebugData())
}
```
This simple check ensures that expensive operations only run when their output is guaranteed to be logged, thus preventing an unnecessary performance hit.
Setting the minimum level
You can control the minimum level that will be processed through slog.HandlerOptions:

```go
handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelWarn,
})
logger := slog.New(handler)
```
To set the level based on an environment variable, you can use this pattern:

```go
func getLogLevelFromEnv() slog.Level {
    levelStr := os.Getenv("LOG_LEVEL")
    switch strings.ToLower(levelStr) {
    case "debug":
        return slog.LevelDebug
    case "warn":
        return slog.LevelWarn
    case "error":
        return slog.LevelError
    default:
        return slog.LevelInfo
    }
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: getLogLevelFromEnv(),
    }))

    logger.Info("logger configured")
}
```
Dynamically updating log verbosity
For production services where you might need to change log verbosity without a restart, slog provides the slog.LevelVar type. It's a dynamic container for the log level that you can safely update at any time, from any goroutine, with Set():

```go
var logLevel slog.LevelVar // INFO is the zero value

// The initial value is set from the environment; you can call Set()
// at any time to update it
logLevel.Set(getLogLevelFromEnv())

logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: &logLevel,
}))
```
For even greater control of severity levels on a per-package basis, you can use the slog-env package, which provides a handler that allows setting the log level via the GO_LOG environment variable:

```go
logger := slog.New(slogenv.NewHandler(slog.NewJSONHandler(os.Stderr, nil)))
```
Let's say your program defaults to the INFO level and you're seeing the following logs:

```json
{"time":"...","level":"INFO","msg":"main: an info message"}
{"time":"...","level":"WARN","msg":"main: a warning message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```
You can enable DEBUG messages with:

```shell
GO_LOG=debug ./myapp
```

```json
{"time":"...","level":"DEBUG","msg":"app: a debug message"}
{"time":"...","level":"DEBUG","msg":"main: a debug message"}
{"time":"...","level":"INFO","msg":"main: an info message"}
{"time":"...","level":"WARN","msg":"main: a warning message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```
You can then raise the minimum level for the main package alone with:

```shell
GO_LOG=debug,main=error go run main.go
```
The DEBUG logs still show up for other packages, but package main is now raised to the ERROR level:

```json
{"time":"...","level":"DEBUG","msg":"app: a debug message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```
Creating custom levels
If you're missing a log level like TRACE or FATAL, you can easily create one by defining new constants:

```go
const (
    LevelTrace = slog.Level(-8) // More verbose than DEBUG
    LevelFatal = slog.Level(12) // More severe than ERROR
)
```
To use these custom levels, you must use the generic logger.Log() method:

```go
logger.Log(context.Background(), LevelFatal, "database connection lost")
```
However, their default output names aren't ideal (DEBUG-4, ERROR+4):

```json
{"time":"...","level":"ERROR+4","msg":"database connection lost"}
```
You can fix this by providing a ReplaceAttr() function in your HandlerOptions to map the level's integer value to a custom string:

```go
opts := &slog.HandlerOptions{
    ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
        if a.Key == slog.LevelKey {
            level := a.Value.Any().(slog.Level)
            switch level {
            case LevelTrace:
                a.Value = slog.StringValue("TRACE")
            case LevelFatal:
                a.Value = slog.StringValue("FATAL")
            }
        }
        return a
    },
}
```
The output now shows the custom level name:

```json
{"time":"...","level":"FATAL","msg":"database connection lost"}
```
Note that ReplaceAttr() is called once for every attribute on every log record, so keep its logic as fast as possible to avoid performance degradation.
Controlling the logger output with Handlers
The Handler is the backend of the logging system that's responsible for taking a Record, formatting it, and writing it to a destination.
A key feature of slog handlers is their composability. Since handlers are just interfaces, it's easy to create "middleware" handlers that wrap other handlers.
This allows you to build a processing pipeline to enrich, filter, or modify log records before they are finally written. You’ll see some examples of this pattern as we go along.
The log/slog package ships with two built-in handlers:

- JSONHandler, which formats logs as JSON.
- TextHandler, which formats logs as key=value pairs.

```go
jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
textLogger := slog.New(slog.NewTextHandler(os.Stdout, nil))

jsonLogger.Info("database connected", "db_host", "localhost", "port", 5432)
textLogger.Info("database connected", "db_host", "localhost", "port", 5432)
```

Output:

```text
{"time":"...","level":"INFO","msg":"database connected","db_host":"localhost","port":5432}
time=... level=INFO msg="database connected" db_host=localhost port=5432
```
This article will focus primarily on JSON logging since it's the de facto standard for production logging.
Customizing handlers with HandlerOptions
You can configure the behavior of the built-in handlers using slog.HandlerOptions, and you've already seen this approach for setting the Level and using ReplaceAttr to provide custom level names.
The final option is AddSource, which automatically includes the source code file, function, and line number in the log output:

```go
opts := &slog.HandlerOptions{
    AddSource: true,
}
logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))

logger.Warn("storage space is low")
```

Output:

```json
{
  "time": "...",
  "level": "WARN",
  "source": {
    "function": "main.main",
    "file": "/path/to/your/project/main.go",
    "line": 15
  },
  "msg": "storage space is low"
}
```
While source information is handy to have, it comes with a performance penalty: slog must call runtime.Caller() to obtain it, so keep that in mind.
That's pretty much all you can do to customize the built-in handlers. To go further, you'll need to use third-party handlers created by the community or create a custom one by implementing the Handler interface.
Some notable handlers you might find useful include:
- slog-sampling: A handler for dropping repetitive log entries.
- slog-json: Uses the JSON v2 library (coming in Go v1.25) for improved correctness and performance.
- tint: Writes colorized logs to the console for development environments.
- slog-multi: Provides advanced composition patterns for fanout, buffering, conditional routing, failover, and more.
A note on duplicate keys in logs
One notable behavior of the built-in handlers is that they do not de-duplicate keys, which can cause unpredictable or undefined behavior in telemetry pipelines and observability tools:

```go
jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
childLogger := jsonLogger.With("app", "my-service")
childLogger.Info("User logged in", slog.String("app", "auth-module"))
```

Output:

```json
{
  "time": "...",
  "level": "INFO",
  "msg": "User logged in",
  "app": "my-service",
  "app": "auth-module"
}
```

Note that the emitted line is not even valid JSON: the app key appears twice.
There’s currently no consensus on the “correct” behavior, though the relevant GitHub issue remains open and could still evolve.
For now, if de-duplication is needed, you must use a third-party “middleware” handler, like slog-dedup, to fix the keys before they are written.
It supports various strategies, including overwriting, ignoring, appending, and incrementing the duplicate keys. For example, you could overwrite duplicate keys as follows:
```go
jsonLogger := slog.New(slogdedup.NewOverwriteHandler(slog.NewJSONHandler(os.Stdout, nil), nil))
```

Output:

```json
{
  "time": "...",
  "level": "INFO",
  "msg": "User logged in",
  "app": "auth-module"
}
```
Logging to files
The best practice for modern applications is often to log to stdout or stderr and allow the runtime environment to manage the log stream.
However, if your application needs to write directly to a file, you can simply pass an *os.File instance to the slog handler:

```go
logFile, err := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
    panic(err)
}
defer logFile.Close()

logger := slog.New(slog.NewJSONHandler(logFile, nil))

logger.Info("Starting server...", "port", 8080)
logger.Warn("Storage space is low", "remaining_gb", 15)
logger.Error("Database connection failed", "db_host", "10.0.0.5")
```
For managing the rotation of log files, you can use the standard logrotate utility or the lumberjack package.
Contextual logging patterns with slog
Choosing how to make a logger available across your application is a key architectural decision. This involves trade-offs between convenience, testability, and explicitness. While there’s no single “right” answer, understanding the common patterns will help you select the best approach for your project.
This guide explores the three most common patterns for contextual logging in Go: using a global logger, embedding the logger in the context, and passing the logger explicitly as a dependency.
1. Using a global logger with a context handler
Using the global logger via slog.Info() is a convenient approach as it avoids the need to pass a logger instance through every function call.
You only need to configure the default logger once at the entry point of the program, and then you're free to use it anywhere in your application:

```go
func main() {
    // Configure the default logger once.
    slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))

    doSomething()
}

func doSomething() {
    // Use it anywhere without passing it.
    slog.Info("doing something")
}
```
When you want to log contextual attributes across scopes, you only need to use the context.Context type to carry the attributes and then use the Context variants of the level methods accordingly.
This requires a context-aware handler, and the community has already created a few of these. One example is slog-context, which allows you to place slog attributes into the context and have them show up anywhere that context is used.
Here's a detailed example showing this pattern:
```go
package main

import (
    "log/slog"
    "net/http"
    "os"

    "github.com/google/uuid"
    slogctx "github.com/veqryn/slog-context"
)

const (
    correlationHeader = "X-Correlation-ID"
)

func requestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()

        correlationID := r.Header.Get(correlationHeader)
        if correlationID == "" {
            correlationID = uuid.New().String()
        }

        ctx = slogctx.Prepend(ctx, slog.String("correlation_id", correlationID))
        r = r.WithContext(ctx)

        w.Header().Set(correlationHeader, correlationID)

        next.ServeHTTP(w, r)
    })
}

func requestLogger(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        slog.InfoContext(
            r.Context(),
            "incoming request",
            slog.String("method", r.Method),
            slog.String("path", r.RequestURI),
            slog.String("referrer", r.Referer()),
            slog.String("user_agent", r.UserAgent()),
        )

        next.ServeHTTP(w, r)
    })
}

func hello(w http.ResponseWriter, r *http.Request) {
    slog.InfoContext(r.Context(), "hello world!")
}

func main() {
    h := slogctx.NewHandler(slog.NewJSONHandler(os.Stdout, nil), nil)
    slog.SetDefault(slog.New(h))

    mux := http.NewServeMux()
    mux.HandleFunc("/", hello)

    wrappedMux := requestID(requestLogger(mux))

    http.ListenAndServe(":3000", wrappedMux)
}
```
The requestID() middleware intercepts every incoming request, generates a unique correlation_id, and uses slogctx.Prepend() to attach this ID as a logging attribute to the request's context.
The requestLogger() middleware and the final hello() handler both use slog.InfoContext(). They don't need to know about the correlation_id explicitly; they just pass the request's context to the global logger.
When slog.InfoContext() is called, the configured slogctx.Handler intercepts the call, inspects the provided context, finds the correlation_id attribute, and automatically adds it to the log record before it's written out by the JSONHandler:
Output:

```json
{"time":"...","level":"INFO","msg":"incoming request","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d","method":"GET","path":"/","referrer":"","user_agent":"curl/8.5.0"}
{"time":"...","level":"INFO","msg":"hello world!","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d"}
```
This pattern ensures that every log statement related to a single HTTP request is tagged with the same correlation_id, making it possible to connect a set of logs to a single request.
2. Embedding the logger in the context
Another common pattern is placing the logger itself in a context.Context instance. You can also use the slog-context package to implement this pattern:
```go
package main

import (
    "log/slog"
    "net/http"
    "os"

    "github.com/google/uuid"
    slogctx "github.com/veqryn/slog-context"
)

const (
    correlationHeader = "X-Correlation-ID"
)

func requestID(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := r.Context()

        correlationID := r.Header.Get(correlationHeader)
        if correlationID == "" {
            correlationID = uuid.New().String()
        }

        ctx = slogctx.With(ctx, slog.String("correlation_id", correlationID))
        r = r.WithContext(ctx)

        w.Header().Set(correlationHeader, correlationID)

        next.ServeHTTP(w, r)
    })
}

func requestLogger(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        logger := slogctx.FromCtx(r.Context())

        logger.Info(
            "incoming request",
            slog.String("method", r.Method),
            slog.String("path", r.RequestURI),
            slog.String("referrer", r.Referer()),
            slog.String("user_agent", r.UserAgent()),
        )

        next.ServeHTTP(w, r)
    })
}

func ctxLogger(logger *slog.Logger, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := slogctx.NewCtx(r.Context(), logger)
        r = r.WithContext(ctx)

        next.ServeHTTP(w, r)
    })
}

func hello(w http.ResponseWriter, r *http.Request) {
    logger := slogctx.FromCtx(r.Context())
    logger.Info("hello world!")
}

func main() {
    h := slogctx.NewHandler(slog.NewJSONHandler(os.Stdout, nil), nil)
    logger := slog.New(h)

    mux := http.NewServeMux()
    mux.HandleFunc("/", hello)

    wrappedMux := ctxLogger(logger, requestID(requestLogger(mux)))

    http.ListenAndServe(":3000", wrappedMux)
}
```
Here, the outermost middleware, ctxLogger(), takes the application's base logger and uses slogctx.NewCtx() to place it into the request's context. This makes the logger available to all subsequent handlers.
Next, the requestID middleware retrieves the logger from the context. It then uses slogctx.With to create a new child logger that includes the correlation_id. This new, more contextual logger is then placed back into the context, replacing the base logger.
Any subsequent middleware or handler, like requestLogger() and hello(), can now retrieve the fully contextualized child logger using slogctx.FromCtx(). They can log messages without needing to know anything about the correlation_id; it's automatically included because it's part of the logger instance that was retrieved.
The result is exactly the same as before:
Output:

```json
{"time":"...","level":"INFO","msg":"incoming request","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d","method":"GET","path":"/","referrer":"","user_agent":"curl/8.5.0"}
{"time":"...","level":"INFO","msg":"hello world!","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d"}
```
What happens if you use slogctx.FromCtx() but there's no associated logger? The default logger (slog.Default()) will be returned.
3. Explicitly passing the logger
This approach treats the logger as a formal dependency, which is provided to components either through function parameters or as a field in a struct.
The logger is provided once when the struct is created, and all its methods can then access it via the receiver:
```go
type UserService struct {
    logger *slog.Logger
    db     *sql.DB
}

func NewUserService(logger *slog.Logger, db *sql.DB) *UserService {
    return &UserService{
        // Create a child logger for the component
        logger: logger.With(slog.String("component", "UserService")),
        db:     db,
    }
}

func (s *UserService) CreateUser(ctx context.Context, user *User) {
    l := s.logger.With(slog.Any("user", user))

    l.InfoContext(ctx, "creating new user")
    // ...
    l.InfoContext(ctx, "user created successfully")
}
```
For context-aware logging, you would then rely on adding attributes to the context with slogctx.Prepend() as shown earlier.
Which should you use?
slog's design encourages handlers to read contextual values from a context.Context. This makes putting the Logger instance itself in the context unnecessary, and thus not recommended.
The initial slog proposal originally included helper functions like slog.NewContext() and slog.FromContext() for adding and retrieving the logger from the context, but they were removed from the final version due to strong community opposition from the "anti-pattern" camp.
The key decision is thus between two patterns: using a global logger or using dependency injection. The former is extremely convenient but adds a hidden dependency that’s hard to test, while the latter is more verbose but makes dependencies explicit, resulting in highly testable and flexible code.
You can use sloglint to enforce whichever style you choose throughout your codebase, so do check out the full list of options that it provides.
Controlling log output with the LogValuer interface
The LogValuer interface provides a powerful mechanism for controlling how your custom types appear in log output.
This becomes particularly important when dealing with sensitive data, complex structures, or when you want to provide consistent representation of domain objects across your logging.
The interface is elegantly simple:
```go
type LogValuer interface {
    LogValue() slog.Value
}
```
When slog encounters a value that implements LogValuer, it calls the LogValue() method instead of using the default representation. This gives you complete control over what information appears in your logs.
Consider an application where you frequently log user information. Without implementing LogValuer, logging a User struct directly might expose more information than intended:
```go
type User struct {
    ID           string
    Email        string
    FirstName    string
    LastName     string
    PasswordHash string
    CreatedAt    time.Time
    LastLogin    time.Time
    IsActive     bool
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    user := &User{
        ID:           "user-123",
        Email:        "john@example.com",
        FirstName:    "John",
        LastName:     "Doe",
        PasswordHash: "encrypted-password-hash",
        CreatedAt:    time.Now(),
        LastLogin:    time.Now().Add(-24 * time.Hour),
        IsActive:     true,
    }

    // This logs all fields, including sensitive ones
    logger.Info("user operation", slog.Any("user", user))
}
```

Output:

```json
{
  "time": "2025-07-17T17:18:22.090974193+01:00",
  "level": "INFO",
  "msg": "user operation",
  "user": {
    "ID": "user-123",
    "Email": "john@example.com",
    "FirstName": "John",
    "LastName": "Doe",
    "PasswordHash": "encrypted-password-hash",
    "CreatedAt": "2025-07-17T17:18:22.090965054+01:00",
    "LastLogin": "2025-07-16T17:18:22.090965107+01:00",
    "IsActive": true
  }
}
```
By implementing LogValuer, you can control exactly what information appears. For example, you can limit it to just the id:

```go
// Implement LogValuer to control log representation
func (u *User) LogValue() slog.Value {
    return slog.GroupValue(
        slog.String("id", u.ID),
    )
}
```
This now produces clean, controlled output that hides all sensitive or unnecessary fields:
Output:

```json
{
  "time": "2024-01-15T10:30:45.123Z",
  "level": "INFO",
  "msg": "User operation",
  "user": {
    "id": "user-123"
  }
}
```
If you add a new field later, it won't be logged until you specifically add it to the LogValue() method. While this adds some extra work, it guarantees that sensitive data won't be accidentally logged.
Error logging with slog
Error logging in slog requires thoughtful consideration of what information will be most valuable during debugging. Unlike simple string-based logging, structured error logging allows you to capture rich context alongside the error itself.
The most straightforward approach uses slog.Any() to log error values:

```go
err := errors.New("payment gateway unreachable")
if err != nil {
    logger.Error("Payment processing failed", slog.Any("error", err))
}
```

You'll see the error message accordingly:

```json
{
  "time": "2025-07-17T17:25:05.356666995+01:00",
  "level": "ERROR",
  "msg": "Payment processing failed",
  "error": "payment gateway unreachable"
}
```
If you're using a custom error type, you can implement the LogValuer interface to enrich your error logs:

```go
type PaymentError struct {
    Code    string
    Message string
    Cause   error
}

func (pe PaymentError) Error() string {
    return pe.Message
}

func (pe PaymentError) LogValue() slog.Value {
    return slog.GroupValue(
        slog.String("code", pe.Code),
        slog.String("message", pe.Message),
        slog.String("cause", pe.Cause.Error()),
    )
}

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    causeErr := errors.New("network timeout")
    err := PaymentError{
        Code:    "GATEWAY_UNREACHABLE",
        Message: "Failed to reach payment gateway",
        Cause:   causeErr,
    }

    logger.Error("Payment operation failed", slog.Any("error", err))
}
```

Output:

```json
{
  "time": "2025-07-17T17:25:05.356666995+01:00",
  "level": "ERROR",
  "msg": "Payment operation failed",
  "error": {
    "code": "GATEWAY_UNREACHABLE",
    "message": "Failed to reach payment gateway",
    "cause": "network timeout"
  }
}
```
This approach provides structured error information that’s much more valuable than simple error strings when analyzing failures in production systems.
You can go even further by capturing the structured stack trace of an error in your logs. You'll need to integrate with a third-party package like go-errors or go-xerrors to achieve this:
```go
package main

import (
    "context"
    "log/slog"
    "os"

    xerrors "github.com/mdobak/go-xerrors"
)

func replaceAttr(_ []string, a slog.Attr) slog.Attr {
    if err, ok := a.Value.Any().(error); ok {
        if trace := xerrors.StackTrace(err); len(trace) > 0 {
            errGroup := slog.GroupValue(
                slog.String("msg", err.Error()),
                slog.Any("trace", formatStackTrace(trace)),
            )
            a.Value = errGroup
        }
    }

    return a
}

func formatStackTrace(trace xerrors.Callers) []map[string]any {
    frames := trace.Frames()

    s := make([]map[string]any, len(frames))
    for i, v := range frames {
        s[i] = map[string]any{
            "func":   v.Function,
            "source": v.File,
            "line":   v.Line,
        }
    }

    return s
}

func main() {
    h := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        ReplaceAttr: replaceAttr,
    })

    logger := slog.New(h)

    ctx := context.Background()
    err := xerrors.New("something happened")

    logger.ErrorContext(ctx, "image uploaded", slog.Any("error", err))
}
```

Output:

```json
{
  "time": "2025-07-18T09:16:14.870855023+01:00",
  "level": "ERROR",
  "msg": "image uploaded",
  "error": {
    "msg": "something happened",
    "trace": [
      {
        "func": "main.main",
        "line": 46,
        "source": "/home/ayo/dev/dash0/demo/golang-slog/main.go"
      },
      {
        "func": "runtime.main",
        "line": 283,
        "source": "/home/ayo/.local/share/mise/installs/go/1.24.2/src/runtime/proc.go"
      },
      {
        "func": "runtime.goexit",
        "line": 1700,
        "source": "/home/ayo/.local/share/mise/installs/go/1.24.2/src/runtime/asm_amd64.s"
      }
    ]
  }
}
```
The performance question: is slog good enough?
While slog was designed with performance in mind, it consistently benchmarks slower than some highly optimized third-party libraries such as zerolog and zap.
While absolute numbers vary with benchmark conditions, the relative rankings have been shown to be consistent:
| Package | Time | % Slower | Objects allocated |
| --- | --- | --- | --- |
| zerolog | 380 ns/op | +0% | 1 allocs/op |
| zap | 656 ns/op | +73% | 5 allocs/op |
| zap (sugared) | 935 ns/op | +146% | 10 allocs/op |
| slog (LogAttrs) | 2479 ns/op | +552% | 40 allocs/op |
| slog | 2481 ns/op | +553% | 42 allocs/op |
| logrus | 11654 ns/op | +2967% | 79 allocs/op |
This performance profile is not an accident but a result of deliberate design choices. The Go team’s own analysis revealed that their optimization efforts were focused on the most common logging patterns they observed in open-source projects where calls with five or fewer attributes accounted for over 95% of use cases.
Only you can decide if this performance gap is relevant for your use case. If you need to bridge it for a high-throughput or latency-sensitive workload, you have two practical options:

- Retain slog as the frontend API and wire it to a high-performance third-party logging handler for modest gains.
- Ditch slog entirely and log directly with zerolog or zap to squeeze out every last nanosecond.
As always, run your own benchmarks before committing either way.
Bringing your logs into an observability pipeline
Once your Go application is producing high-quality, structured logs with slog, the next step is to get them off individual servers and into a centralized observability pipeline.
Centralizing your logs transforms them from simple diagnostic records into a powerful, queryable dataset. More importantly, it allows you to correlate slog entries with other critical telemetry signals, like distributed traces and metrics, to get a complete picture of your system’s health.
Modern observability platforms can ingest the structured JSON output from slog's JSONHandler. They provide powerful tools for searching, creating dashboards, and alerting on your log data.
To unlock true correlation, however, your logs must share a common context (like a TraceID) with your traces. The standard way to achieve this is by integrating slog with OpenTelemetry using the otelslog bridge.
A full demonstration is beyond the scope of this guide, but you can consult the official OpenTelemetry documentation to learn how to configure the log bridge accordingly.
Once your OpenTelemetry-enriched log data is fed into an OpenTelemetry-native platform like Dash0, your slog entries will appear alongside traces and metrics in a unified view, giving you end-to-end visibility into every request across your distributed system.
Final thoughts
The introduction of log/slog was a pivotal moment for the Go ecosystem that finally acknowledged the need for robust tooling to support building highly observable systems right out of the box.
Throughout this guide, we've journeyed from the core concepts of Logger, Handler, and Record to patterns for contextual and error logging. While the API has a few rough edges and isn't the most elegant, its establishment reduces the fragmentation of past approaches and provides the Go community with a consistent, shared language for structured logging.
By treating logging not as an afterthought but as a fundamental signal for observability, you’ll transform your services from opaque black boxes into systems that are transparent, diagnosable, and easier to troubleshoot.
Thanks for reading!
