
Last updated: October 20, 2025

Getting Started with Logging in Caddy

In 2025, logs are not just lines of text. They are part of a larger ecosystem of signals that describe system health, performance, and behavior.

When combined with metrics and traces, they provide the foundation for observability, the ability to understand what your system is doing internally just by looking at its outputs.

In this guide, you'll learn how to configure and customize logging in Caddy, one of the most flexible and modern web servers available. You'll learn not only how to enable request logs, but also how to format, filter, and extend them so that they integrate cleanly into your observability pipeline.

Let's start with the basics.

Understanding Caddy's logging model

Unlike servers such as NGINX, which write plain-text log lines by default, Caddy emits its logs as structured JSON objects. This makes them machine-readable and easy to parse by log processors or observability pipelines.

Caddy uses the Zap logging library under the hood, which is designed for high-performance structured logging. It minimizes memory allocations and is well suited to production workloads with large amounts of traffic.

There are two main kinds of logs in Caddy:

  1. System logs: These describe what Caddy itself is doing, such as loading configuration, renewing TLS certificates, starting listeners, and so on.
  2. Request logs: These describe individual HTTP requests processed by your server.

By default, only system logs are enabled. To gain visibility into HTTP traffic, you'll need to explicitly turn on request logging in your configuration.

Setting up Caddy in Docker

Using Docker makes experimentation easy. Let's start by running a clean Caddy instance inside a container.

Create a working directory for this tutorial:

bash
mkdir caddy-logging && cd caddy-logging

Then create the Caddyfile for your server in the current directory:

text
:80 {
    respond "Hello from Caddy!"
}

Then run the container, mounting the configuration file into it:

bash
docker run -d \
  --name caddy-server \
  -p 80:80 \
  -v ./Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy

When you visit http://localhost, you'll see the "Hello from Caddy!" message. However, checking the logs with docker logs caddy-server reveals that no request logs were created.
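What you will see instead are Caddy's own system logs. A typical startup entry looks roughly like this (trimmed; the exact messages and fields vary by Caddy version):

json
{
  "level": "info",
  "ts": 1760912860.1234567,
  "msg": "serving initial configuration"
}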

By default, Caddy logs internal events but not individual requests unless you configure it to. Let's fix that.

Enabling request logging

Request logs are configured in the Caddyfile using the log directive:

text
:80 {
    log
    respond "Hello from Caddy!"
}

You need to restart your Caddy container for the changes to take effect:

bash
docker restart caddy-server

Now visit http://localhost again, and then check the logs:

bash
docker logs caddy-server

You'll see output similar to this (truncated):

json
{
  "level": "info",
  "ts": 1760912861.8783047,
  "logger": "http.log.access",
  "msg": "handled request",
  "request": {...},
  "bytes_read": 0,
  "user_id": "",
  "duration": 0.000012145,
  "size": 17,
  "status": 200,
  "resp_headers": {...}
}

You've just enabled request logging in Caddy, and every log entry is a structured object with multiple fields that can be indexed, queried, or correlated later.
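Because the output is JSON, you can already slice it with standard tools. For example, assuming jq is installed on the host, the following pulls out just the status and URI of each access log entry (Caddy writes to stderr, hence the redirect):

bash
docker logs caddy-server 2>&1 \
  | jq -r 'select((.logger // "") | startswith("http.log.access")) | "\(.status) \(.request.uri)"'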

Choosing where logs go

Caddy sends logs to stderr by default, but you can change this by configuring an output block inside your log directive:

text
:80 {
    log {
        output stdout
    }
}

Logging to the standard output or standard error is recommended when running Caddy in a containerized environment since orchestrators automatically capture both streams.
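With both streams captured by the container runtime, you can follow the logs live:

bash
docker logs -f caddy-server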

If you need to log to a file, you can use the following configuration:

text
:80 {
    log {
        output file /var/log/caddy/access.log {
            roll_size 5MB
            roll_keep 2
            roll_keep_for 7d
        }
    }
}
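To confirm that the file is being written, you can peek at it from inside the container. This assumes the /var/log/caddy directory exists and is writable there; in practice you may want to mount it as a volume so the logs survive container restarts:

bash
docker exec caddy-server ls -lh /var/log/caddy
docker exec caddy-server tail -n 3 /var/log/caddy/access.log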

Caddy's file output supports automatic rotation: the configuration above keeps at most two rolled files, each capped at 5 MB, and deletes archives older than seven days.

If you want to suppress a site's logs entirely, send them to discard:

text
:80 {
    log {
        output discard
    }
}

You can also forward logs directly over the network, though this is generally not recommended because entries can be dropped if the connection becomes unavailable:

text
output net <address>

This streams each log entry to a remote log processor or collector, such as a log shipper or an OpenTelemetry Collector configured with a TCP log receiver.
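For example, to send entries over TCP to a hypothetical collector reachable as collector:5170 (adjust the address to whatever your log processor actually listens on):

text
:80 {
    log {
        output net collector:5170
    }
}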

Log formats: JSON, console, and beyond

Caddy supports multiple log formats (or encoders). The two most common are:

  • json: Structured and machine-readable; this is the default.
  • console: Human-readable formatted text, useful for local debugging.

Example:

text
:80 {
    log {
        format console
    }
}

This changes output to something like:

text
2025/10/19 12:31:54.123 INFO http.log.access.log0 handled request {"request": {"method": "GET", "host": "localhost", "uri": "/"},"status":200}

Readable, but still structured. For production, JSON remains the best option since it integrates seamlessly with observability tools.

You can customize JSON fields as well:

text
:80 {
    log {
        format json {
            message_key msg
            level_key severity
            time_key timestamp
            time_format "2006-01-02 15:04:05 MST"
            time_local
            duration_format "ms"
            level_format upper
        }
    }
}

Now logs will appear as:

json
{
  "severity": "INFO",
  "timestamp": "2025-10-19 13:22:04 UTC",
  "logger": "http.log.access.log0",
  "msg": "handled request",
  "status": 200,
  "duration": 2.5
}

This level of customization makes it easier to align Caddy's logs with the structure of your organization's telemetry data.

Filtering and transforming logs

In large deployments, logs can quickly become noisy. You rarely need everything, only the parts that are actionable or relevant. Caddy's filter encoder helps you reshape logs before they're written or transmitted.

You can delete, rename, or modify fields using filters.

Remove unnecessary fields

text
:80 {
    log {
        format filter {
            request>headers delete
            wrap json
        }
    }
}

This removes all request headers from logs. You can also delete nested fields:

text
:80 {
    log {
        format filter {
            request>headers>Cookie delete
            resp_headers>Server delete
            wrap json
        }
    }
}

Rename fields

text
:80 {
    log {
        format filter {
            request>uri rename path
            status rename http_status
        }
    }
}

Replace or anonymize values

text
:80 {
    log {
        format filter {
            user_id replace "[REDACTED]"
        }
    }
}

Or hash them for anonymization:

text
:80 {
    log {
        format filter {
            request>client_ip hash
        }
    }
}

Hashing provides deterministic pseudonyms that maintain correlation without exposing personal data.

Mask IP addresses

text
:80 {
    log {
        format filter {
            request>client_ip ip_mask {
                ipv4 24
                ipv6 56
            }
        }
    }
}

This masks the last octets of IPs while still preserving geographic or subnet-level information — ideal for privacy compliance.
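For instance, with the 24-bit IPv4 mask above, a visitor at 203.0.113.45 would show up in the access log as:

text
"client_ip": "203.0.113.0"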

Adding context to logs

Structured logs are powerful, but context is what turns them into observability data. Adding metadata allows you to connect logs with metrics and traces.

You can use the append format to add extra fields:

text
:80 {
    log {
        format append {
            environment {env.SERVER_ENV}
            hostname {system.hostname}
        }
    }
}
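Keep in mind that {env.SERVER_ENV} only resolves if the variable is set in Caddy's environment. When running in Docker, you can pass it while recreating the container from earlier (SERVER_ENV and its value here are just placeholders):

bash
docker rm -f caddy-server
docker run -d \
  --name caddy-server \
  -p 80:80 \
  -e SERVER_ENV=production \
  -v ./Caddyfile:/etc/caddy/Caddyfile:ro \
  caddy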

Caddy's logging integrates beautifully with OpenTelemetry when you have the tracing directive enabled:

text
:80 {
    tracing
}

When enabled, Caddy propagates an existing trace context or initializes a new one, and the standard traceID and spanID fields are added to your access logs.
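With tracing and request logging both turned on, an access log entry gains trace correlation fields along these lines (the IDs below are illustrative values):

json
{
  "level": "info",
  "logger": "http.log.access",
  "msg": "handled request",
  "traceID": "0af7651916cd43dd8448eb211c80319c",
  "spanID": "b7ad6b7169203331",
  "status": 200
}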

Log query parameters and sensitive data

Sometimes query strings include sensitive data such as API keys, emails, or tokens. Caddy can handle these safely using the query filter.

text
:80 {
    log {
        format filter {
            request>uri query {
                delete apikey
                replace session [REDACTED]
                hash email
            }
        }
    }
}

If a request like /api?apikey=12345&email=user@example.com arrives, Caddy transforms it before writing the log:

text
/api?email=f0e4c2f7

This feature keeps your telemetry secure and compliant with data protection regulations without breaking your observability pipeline.
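You can verify this end to end once the filter block above is merged into the Caddyfile from earlier (which still answers every path) and the container is restarted. Assuming jq is installed on the host, the last log line should be the entry for this request:

bash
curl "http://localhost/api?apikey=12345&email=user@example.com"
docker logs caddy-server 2>&1 | tail -n 1 | jq '.request.uri'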

Integrating Caddy logs with OpenTelemetry

The future of observability lies in unifying logs, metrics, and traces. Caddy's structured logs make this straightforward.

While Caddy doesn't directly export in the OTLP (OpenTelemetry Protocol) format yet, you can use collectors like Fluent Bit, Vector, or the OpenTelemetry Collector to forward JSON logs to your telemetry backend.

Here's an example OpenTelemetry Collector configuration that reads the container's log files from the Docker host and exports them over OTLP:

yaml
receivers:
  filelog:
    include: [/var/lib/docker/containers/*/*.log]
    start_at: beginning
    operators:
      # Docker's json-file driver wraps each line in its own JSON object,
      # so parse that wrapper first, then parse Caddy's JSON from the log field.
      - type: json_parser
        id: parse_docker
      - type: json_parser
        id: parse_caddy
        parse_from: attributes.log

exporters:
  otlp:
    endpoint: "otel-collector:4317"
    tls:
      insecure: true

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
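To try this locally, one option is the Collector's contrib distribution, which includes the filelog receiver. The file name otel-config.yaml, the root user (needed here to read Docker's log files), and the mounts below are just one possible setup; point the otlp exporter at your actual backend or gateway collector:

bash
docker run -d \
  --name otelcol \
  --user 0 \
  -v ./otel-config.yaml:/etc/otelcol-contrib/config.yaml:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  otel/opentelemetry-collector-contrib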

Once ingested, the parsed fields can be mapped onto OpenTelemetry's semantic conventions, allowing correlation with other signals.

For example:

  • The duration field corresponds to how long the span for the handled request took.
  • request>host, request>method, and status map onto OpenTelemetry's HTTP attributes.
  • The traceID field (added when the tracing directive is enabled) links the log entry directly to a trace.

You can visualize these relationships in any observability backend that supports OpenTelemetry data such as Dash0.

Final thoughts

Caddy’s logging system is among the most advanced of any web server. It’s built on modern principles — structured data, zero allocation overhead, and deep configurability. Whether you’re managing a single site or operating across a global edge network, it gives you the visibility you need to understand what’s happening beneath the surface.

In this guide, you learned how to enable and customize access logs, adjust their format for readability or structured analysis, filter and protect sensitive fields, and enrich them with metadata for trace correlation. You also saw how these logs can integrate into a broader OpenTelemetry pipeline, forming part of a unified observability strategy.

The next step is to apply these techniques in your own environment. Connect your logs with metrics and traces, observe how they reinforce each other, and watch your understanding of the system deepen. Once you can truly see your system, you can finally begin to understand it.

Author
Ayooluwa Isaiah