Last updated: June 16, 2025
Mastering Docker Logs: A Comprehensive Tutorial
Your container just crashed. Your application is throwing 500 errors. What’s the first thing you do? Check the logs.
In a containerized environment, however, logging isn’t always straightforward. Logs are ephemeral, dispersed across multiple containers, and can grow unmanageable without the right strategy.
This guide covers everything you need to know about Docker logs. We’ll start with the simplest commands to view logs in real-time and progress to designing a robust, production-grade logging strategy for your entire containerized infrastructure.
Let’s get started!
Quick start: the `docker logs` cheat sheet
For when you need answers now. Here are the most common commands you’ll use every day.
| Action | Command |
|---|---|
| View all logs for a container | `docker logs <container>` |
| Follow logs in real-time (tail) | `docker logs -f <container>` |
| Show the last 100 lines | `docker logs --tail 100 <container>` |
| Show logs from the last 15 minutes | `docker logs --since 15m <container>` |
| View logs for a Docker Compose service | `docker compose logs <service>` |
| Follow logs for all Compose services | `docker compose logs -f` |
Mastering the `docker logs` command
The docker logs command is your primary tool for inspecting container output. To get all logs currently stored for a container, simply provide its name or ID:
```bash
docker logs <container_name_or_id>
```
This dumps the entire log history of the specified container to your terminal, which is probably not what you’re after.
For a container that’s been running for a while, or one that’s particularly noisy, this can mean scrolling through thousands of lines of output.
To pinpoint the information you need, you can use Docker’s built-in filtering flags to narrow the output by time or by the number of lines.
Let’s explore the most useful options next. Note that all options must come before the container name or ID:
```bash
docker logs [<options>] <container_name_or_id>
```
Filtering logs by time (`--since` and `--until`)
For more precise debugging, you can retrieve logs from a specific time frame using the following options:
- `--since`: Shows logs generated after a specified point in time.
- `--until`: Shows logs generated before a specified point in time.
With either flag, you can provide a relative time (like `10m` for 10 minutes or `3h` for 3 hours) or an absolute timestamp (such as `2025-06-13T10:30:00`).
```bash
docker logs --since 30m <container_name_or_id>                   # Show logs from the last 30 minutes
docker logs --until 2025-06-13T10:00:00 <container_name_or_id>   # Show logs from this morning, before 10 AM
```
You can also combine the two:
```bash
docker logs --since 2025-06-13T18:00:00 --until 2025-06-13T18:15:00 <container_name_or_id>
```
Tailing Docker container logs
While filtering helps you analyze past events, the most common task during live debugging is to see what’s happening right now. For this, you need to “tail” the logs, which provides a continuous, real-time stream of a container’s output.
To enable this mode, use the `-f` or `--follow` flag:
```bash
docker logs -f <container_name_or_id>
```
Note that unlike the `tail -f` command often used with log files, `docker logs -f` will first print the container's entire log history before it starts streaming new entries. The standard `tail` command, by contrast, only shows the last 10 lines by default.
For a container with a long history, this initial dump of information can be overwhelming. The most common and effective pattern is to combine `--follow` with `--tail` (or its shorthand `-n`). This gives you the best of both worlds: a small amount of recent history for context, followed by the live stream.
```bash
docker logs -f --tail 100 <container_name_or_id>
```
This command shows the last 100 lines for context and then streams any new logs in real-time. When you're ready to stop following, press `Ctrl+C`.
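When you're tailing several deployments side by side, it also helps to prefix each line with when it was written. The `-t`/`--timestamps` flag does exactly that and combines cleanly with the follow and tail options; a small sketch (the container name is a placeholder):

```bash
# Stream new logs with timestamps, starting from the last 50 lines for context
docker logs -f -t --tail 50 <container_name_or_id>
```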
Searching Docker container logs
The `docker logs` command doesn't have a built-in search feature, but you can easily pipe its output to standard shell utilities like `grep`:
```bash
docker logs <container_name_or_id> | grep -i "ERROR"
```
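One detail worth knowing: `docker logs` replays the container's stdout and stderr on your terminal's corresponding streams, so a plain pipe only filters the stdout half. A few search patterns that assume nothing beyond standard shell tools:

```bash
# Include stderr in the search by redirecting it into the pipe
docker logs <container_name_or_id> 2>&1 | grep -i "error"

# Narrow the search to a recent time window
docker logs --since 1h <container_name_or_id> 2>&1 | grep -i "timeout"

# Follow the stream and filter matches as they arrive
docker logs -f <container_name_or_id> 2>&1 | grep --line-buffered -i "error"
```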
Managing logs in Docker Compose
A huge amount of Docker development happens with Docker Compose. Managing logs here is just as easy. The key is to use `docker compose logs` instead of `docker logs`.
The usage syntax is:
```bash
docker compose logs [options] [service...]
```
Where `[service...]` is an optional list of service names. The key concept to grasp is that a single service can be scaled to run across multiple containers.
When you request logs for a service, Docker Compose automatically aggregates the output from all containers belonging to that service.
Let’s look at a few common usage patterns.
Viewing logs for a single service
To see logs from just one service defined in your Compose file, specify the service name:
```bash
docker compose logs image-provider
```
You can also specify multiple service names:
```bash
docker compose logs image-provider shipping otel-collector
```
Docker Compose will color-code the output by service, making it easy to follow.
Viewing logs for all services
To see an interleaved stream of logs from all services in your stack, run the command without a service name:
```bash
docker compose logs
```
Tailing and filtering
All the tailing and filtering flags you learned for `docker logs` work with `docker compose logs` too:
```bash
docker compose logs -f -n 10 image-provider cart
docker compose logs --since '10m' db
```
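Docker Compose also adds a couple of flags of its own that come in handy when the per-service prefixes get in the way of further processing; a brief sketch (the service name is a placeholder from the examples above):

```bash
# Drop the colored "service-name |" prefix, e.g. before piping to grep
docker compose logs --no-log-prefix image-provider | grep -i "error"
```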
Inspecting Docker logs with a GUI
If you prefer a graphical interface, these tools provide excellent alternatives to the command line.
Docker Desktop
The built-in dashboard in Docker Desktop has a Logs tab for any running container. It provides a simple, real-time view with basic search functionality.
Dozzle
Dozzle is a lightweight, web-based log viewer with a slick interface. It’s incredibly easy to run as a Docker container itself:
```bash
docker run -d --name dozzle \
  -p 8888:8080 \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  amir20/dozzle:latest
```
Navigate to `http://localhost:8888` in your browser to get a real-time view of all your container logs.
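If you already manage your stack with Docker Compose, you could run Dozzle as just another service instead of a separate `docker run` command. A minimal sketch based on the command above (the published port and service name are arbitrary choices):

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    ports:
      - "8888:8080"
    volumes:
      # Read-only access to the Docker socket is enough for viewing logs
      - /var/run/docker.sock:/var/run/docker.sock:ro
```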
Understanding how Docker logging works
Docker is designed to capture the standard output (`stdout`) and standard error (`stderr`) streams from the main process running inside a container. This means any console output from your application is automatically collected as logs.
A logging driver acts as the backend for these logs. It receives the streams from the container and determines what to do with them: store them in a file, forward them to a central service, or discard them.
The default logging driver is `json-file`. It captures the log streams and writes them to a JSON file on the host machine, typically located at `/var/lib/docker/containers/<container-id>/<container-id>-json.log`.
You can find the path to this file for any container:
```bash
docker inspect -f '{{.LogPath}}' <container_name_or_id>
```
This outputs:
```text
/var/lib/docker/containers/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac/612646b55e41d73a3f1a24afa736ef173981ed753506097d1a888e7b9cb7d6ac-json.log
```
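Each line in that file is a JSON object with `log`, `stream`, and `time` fields, so you can inspect it directly on the host if needed; a quick sketch (reading Docker's data directory typically requires root, and `jq` is optional):

```bash
# Peek at the raw JSON log entries for a container
sudo tail -n 5 "$(docker inspect -f '{{.LogPath}}' <container_name_or_id>)"

# Extract just the log messages with jq
sudo cat "$(docker inspect -f '{{.LogPath}}' <container_name_or_id>)" | jq -r '.log' | tail -n 20
```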
Choosing a logging driver
While `json-file` is the default, Docker supports a variety of other logging drivers to suit different needs:
- `none`: Disables logging entirely. Useful when logs are unnecessary or handled externally.
- `local`: Recommended for most use cases. It offers better performance and more efficient disk usage than `json-file`.
- `syslog`: Sends logs to the system's syslog daemon.
- `journald`: Writes log output to the journald logging system.
- `fluentd`, `gelf`, `awslogs`, `gcplogs`, etc.: Forward logs to external logging services or cloud platforms for centralized aggregation and analysis.
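To see which driver your daemon uses by default, and which one a particular container was created with, you can ask Docker directly using its built-in Go template paths:

```bash
# The daemon-wide default logging driver
docker info --format '{{.LoggingDriver}}'

# The driver a specific container is using
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container_name_or_id>
```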
Configuring logging drivers
Configuring Docker logging is done by editing the Docker daemon's configuration file at `/etc/docker/daemon.json`. If the file doesn't exist, create it first.
/etc/docker/daemon.json

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "4",
    "compress": "true"
  }
}
```
The `json-file` driver's most significant drawback is that it does not rotate logs by default. Over time, these log files will grow indefinitely, which can consume all available disk space and crash your server.
This configuration addresses this by telling Docker to:
- Rotate log files when they reach 50MB (`max-size`).
- Keep a maximum of four old log files (`max-file`).
- Compress the rotated log files to save space (`compress`).
For most use cases, the `local` driver is a better choice than `json-file`. It uses a more efficient file format and has sensible rotation defaults built in. You can configure it as follows:
/etc/docker/daemon.json

```json
{
  "log-driver": "local"
}
```
By default, the `local` driver retains 100MB of logs per container (as five 20MB files). You can customize this using the same `log-opts` as the `json-file` driver.
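The daemon settings only define the default. If a single container needs different behavior, you can also override the driver and its options at `docker run` time; a quick sketch with arbitrary size limits and a placeholder image:

```bash
docker run -d \
  --log-driver local \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  <image_name>
```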
To configure other drivers like `fluentd`, `syslog`, or `journald`, consult the Docker logging documentation for their unique set of options.
After editing `daemon.json`, you must restart the Docker daemon for the changes to take effect for newly created containers. Existing containers need to be recreated to adopt the updated configuration.
```bash
sudo systemctl restart docker
```
You can override the global logging configuration for specific services directly in your Compose file. This is useful for services that require special log handling:
docker-compose.yml

```yaml
<service_name>:
  image: <image_name>
  logging:
    driver: "local"
    options:
      max-file: "4"
      max-size: "50m"
      compress: "true"
```
To avoid repetition, you can use YAML anchors to define a logging configuration once and reuse it across multiple services.
docker-compose.yml

```yaml
x-default-logging: &logging
  driver: "local"
  options:
    max-size: "50m"
    max-file: "4"

services:
  <service_a>:
    logging: *logging

  <service_b>:
    logging: *logging
```
Understanding Docker’s log delivery mode
When your application generates a log, it faces a fundamental choice: should it pause to ensure the log is safely delivered, or should it hand the log off quickly and continue its work? This is the core trade-off managed by Docker’s log delivery mode, a crucial setting that lets you tune your logging for either maximum reliability or maximum performance.
Docker supports two modes for delivering logs from your container to the configured logging driver.
1. Blocking mode
In the default blocking mode, log delivery is synchronous. When your application emits a log, it must wait for the Docker logging driver to process and accept that message before it can continue executing.
This approach is best for scenarios where every log message is critical and you are using a fast, local logging driver like `local` or `json-file`.
With slower drivers (those that send logs over a network), blocking mode can introduce significant latency and even stall your application if the remote logging service is slow or unreachable.
2. Non-blocking mode
As an alternative, you can configure a non-blocking delivery mode. In this mode, log delivery is asynchronous. When your application emits a log, the message is immediately placed in an in-memory buffer, and your application continues running without any delay. The logs are then sent to the driver from this buffer in the background.
The trade-off for this mode is a risk of losing logs. If the in-memory buffer fills up faster than the driver can process logs, new incoming messages will be dropped.
To mitigate the risk of losing logs in non-blocking mode, you can increase the size of the in-memory buffer from its 1MB default:
/etc/docker/daemon.json

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "50m"
  }
}
```
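Like the other logging options, the delivery mode can also be set per service in a Compose file rather than daemon-wide. A sketch reusing the `awslogs` driver from the example above; the service name, image, region, and log group are placeholders you'd replace:

```yaml
services:
  <service_name>:
    image: <image_name>
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "<aws_region>"
        awslogs-group: "<log_group_name>"
        mode: "non-blocking"
        max-buffer-size: "50m"
```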
Centralizing Docker logs with OpenTelemetry
While docker logs is fine for development, production environments present a different challenge.
Manually accessing logs across multiple hosts doesn’t scale and provides a fractured, incomplete picture. To gain visibility into such a system, you need a centralized logging strategy.
Modern applications are dynamic and distributed. Containers are ephemeral—they are created, destroyed, and replaced constantly. A centralized system captures their output, ensuring logs persist long after the container that created them is gone.
By consolidating your logs in an observability platform like Dash0, you gain the ability to perform complex searches across your entire infrastructure, build real-time dashboards to visualize trends, and correlate logs with other telemetry signals like metrics or traces.
One way to ship your Docker logs is via the OpenTelemetry Collector, which supports a variety of ways to collect logs from the host machine.
A common and effective approach is to set up journald as your Docker logging driver:
/etc/docker/daemon.json

```json
{
  "log-driver": "journald",
  "log-opts": {
    "tag": "opentelemetry-demo"
  }
}
```
This will send your container logs to the systemd journal and tag them with metadata (such as `CONTAINER_ID`, `CONTAINER_NAME`, `IMAGE_NAME`, etc.) so that you can easily filter relevant container logs.
You can then configure the journald receiver to read the logs and filter only those you’re interested in:
otelcol.yaml

```yaml
receivers:
  journald:
    directory: /var/log/journal
    matches:
      - CONTAINER_TAG: "opentelemetry-demo" # the journald driver maps `tag` to `CONTAINER_TAG`

processors:
  batch:
  resourcedetection/system:
    detectors: [system]
    system:
      hostname_sources: [os]

exporters:
  otlphttp/dash0:
    endpoint: <your_dash0_endpoint>
    headers:
      Authorization: Bearer <your_dash0_token>
      Dash0-Dataset: <your_dash0_dataset>

service:
  pipelines:
    logs:
      receivers: [journald]
      processors: [batch, resourcedetection/system]
      exporters: [otlphttp/dash0]
```
Once you replace the placeholders with your actual account values, you can run the OpenTelemetry Collector through Docker:
```bash
docker run \
  -v $(pwd)/otelcol.yaml:/etc/otelcol-contrib/config.yaml \
  -v /var/log/journal:/var/log/journal:ro \
  otel/opentelemetry-collector-contrib:latest
```
Then you'll start seeing your logs in the Dash0 interface.
Troubleshooting common Docker log issues
Docker logging usually works seamlessly, but there are a couple of common issues you might run into. Here's how to identify and resolve them.
1. `docker logs` shows no output
What's happening: Your application likely isn't writing to `stdout` or `stderr`. It might be logging directly to a file inside the container instead. Since Docker's logging drivers only capture the standard output and error streams, they won't pick up logs written to internal files.
How to fix it: Ideally, update your application's logging configuration to write directly to `stdout` and `stderr`. If modifying the application isn't feasible, you can redirect file-based logs by creating symbolic links to the appropriate output streams in your `Dockerfile`.
Dockerfile

```dockerfile
# Example for an Nginx container
RUN ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log
```
This ensures that even file-based logs are routed through Docker’s logging mechanism.
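If you're not sure whether the main process is actually writing to the standard streams, one way to check is to look at file descriptors 1 and 2 of PID 1 inside the container; if they point at regular files instead of pipes, Docker won't see that output. A small sketch, assuming the image ships a shell with `ls`:

```bash
# Show where the main process sends stdout (fd 1) and stderr (fd 2)
docker exec <container_name_or_id> ls -l /proc/1/fd/1 /proc/1/fd/2
```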
2. Logging driver does not support reading
```text
Error response from daemon: configured logging driver does not support reading
```
What's happening: Remote logging drivers such as `awslogs`, `splunk`, or `gelf` forward logs directly to an external system without storing anything locally. Normally, Docker caches the logs using its dual logging functionality, but if this feature is disabled for the container, the `docker logs` command can't retrieve any output.
How to fix it: You need to ensure `cache-disabled` is `false` in the logging options. This tells Docker to send logs to the remote driver and keep a local copy for `docker logs` to use.
/etc/docker/daemon.json

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "cache-disabled": "false"
  }
}
```
Final thoughts
You've now journeyed from the basic `docker logs` command to understanding the importance of logging drivers, log rotation, and centralized logging strategies.
By mastering these tools and concepts, you’re no longer just guessing when things go wrong. You have the visibility you need to build, debug, and run resilient, production-ready applications.
Whenever possible, structure your application’s logs as JSON. A simple text line is hard to parse, but a JSON object with fields like level, timestamp, and message is instantly machine-readable, making your logs infinitely more powerful in any logging platform.
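As a concrete illustration, compare a plain text line with a structured equivalent; the field names and values below are just a common convention for the sake of example, not a requirement of any particular platform:

```text
# Plain text: hard to parse reliably
2025-06-13 10:30:00 ERROR payment declined for order A-1042 (312ms)

# Structured JSON: every field is individually queryable
{"timestamp":"2025-06-13T10:30:00Z","level":"error","message":"payment declined","order_id":"A-1042","duration_ms":312}
```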
Thanks for reading!
