Last updated: October 22, 2025

The Top 7 Node.js Logging Libraries Compared

Building reliable Node.js applications requires visibility into what your code is doing at every level. Whether you're tracking API calls, debugging production issues, or analyzing user behavior, logging is how you connect system behavior to real-world outcomes.

While the built-in console object works for simple debugging, it falls short once your application scales. Production logging demands structured data, configurable verbosity, and the ability to integrate with observability systems like OpenTelemetry and centralized log pipelines.

The Node.js ecosystem offers several mature libraries that go far beyond console.log(). In this guide, we'll explore seven of the most widely used and actively maintained options and examine how each fits into a modern observability strategy.

Why logging libraries matter

Modern logging isn't just about printing text to the console. It's about producing data that can be parsed, filtered, correlated, and visualized across distributed systems. To achieve that, your logging setup needs to:

  • Structure data consistently (usually in JSON)
  • Include timestamps, levels, and context
  • Support multiple output targets (files, transports, APIs)
  • Handle asynchronous workloads efficiently
  • Integrate with tracing and metrics systems
  • Avoid performance bottlenecks

Each library we'll explore brings its own philosophy to these challenges. While the Node.js ecosystem has dozens of tools, they aren't all created equal. To help you choose, we've broken down the "Top 7" not as a flat list, but into three distinct categories:

  1. The Production Standards: The two libraries that dominate modern, high-performance, structured logging.
  2. The Legacy Giants: Mature, stable libraries that you will absolutely find in older codebases.
  3. The Specialists: Tools designed for a specific job, like request logging or developer-friendly CLI output.

Let's begin with the one that has become synonymous with performance and production-grade logging in Node.js.

1. Pino

Pino has become the de facto choice for high-performance logging in Node.js. Its design focuses on speed, low overhead, and structured JSON output, making it ideal for microservices and observability pipelines.

You can install Pino from npm:

bash
npm install pino

Then, initialize a logger and log a message:

JavaScript
import pino from "pino";

const logger = pino();
logger.info("Application started");

You'll see structured JSON output like this:

json
{
  "level": 30,
  "time": 1739914456819,
  "pid": 9418,
  "hostname": "falcon",
  "msg": "Application started"
}

Each field serves a clear purpose: the numeric level denotes severity, time records the timestamp, and msg holds your message. Because it's structured JSON, every field can be parsed automatically by log collectors.
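
These numeric levels map to names: 10 is trace, 20 debug, 30 info, 40 warn, 50 error, and 60 fatal. If you would rather log the label itself, Pino's formatters option can rewrite the field. A minimal sketch:

JavaScript
import pino from "pino";

// Emit "level": "info" instead of "level": 30
const logger = pino({
  formatters: {
    level: (label) => ({ level: label }),
  },
});

logger.info("Application started");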

Why developers choose Pino

Pino's biggest selling point is speed. It's written with zero-allocation techniques and avoids expensive string formatting. Benchmarks consistently show it outperforming alternatives by large margins.

It's also deeply configurable. You can adjust timestamps, change log levels, redact sensitive fields, or define multiple output targets without adding much complexity.

JavaScript
const logger = pino({
  level: process.env.LOG_LEVEL || "info",
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: { paths: ["user.password"] },
});

Logging with context

Structured logs become truly valuable when enriched with metadata. Pino supports this via its second parameter, often called the mergingObject:

JavaScript
logger.info({ userId: "abc123", route: "/login" }, "User login successful");

This produces:

json
{
  "level": 30,
  "time": "2025-10-19T19:33:24.249Z",
  "pid": 1209,
  "hostname": "falcon",
  "userId": "abc123",
  "route": "/login",
  "msg": "User login successful"
}

This contextual format makes searching logs in your observability platform far easier. You can filter by userId, query by route, or aggregate counts over time.

Error handling and serializers

Pino automatically serializes Error objects, including the message and stack trace. It also includes built-in serializers for requests and responses, useful when integrating with web frameworks.

JavaScript
logger.error(new Error("Database connection failed"));
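
Those built-in request and response serializers are exposed as pino.stdSerializers, so you can wire them up explicitly. A minimal sketch:

JavaScript
const logger = pino({
  serializers: {
    // Built-in serializers for errors, requests, and responses
    err: pino.stdSerializers.err,
    req: pino.stdSerializers.req,
    res: pino.stdSerializers.res,
  },
});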

You can also define custom serializers for specific objects:

JavaScript
const logger = pino({
  serializers: {
    user: (u) => ({ id: u.id, email: u.email }),
  },
});

logger.info({ user: { id: "u1", email: "test@example.com", token: "secret" } });

Integrations and transports

By default, Pino writes logs to stdout. This is the usual practice in containerized environments, as it allows a separate, dedicated log collector to handle log shipping.

For development, you can pipe this JSON output through a pretty-printer:

bash
node my-app.js | pino-pretty

If you must handle log routing from within your application, pino.transport runs in a separate worker thread to keep your main event loop unblocked. For example, you could write to both a file and stdout:

JavaScript
const transport = pino.transport({
  targets: [
    {
      level: "info",
      target: "pino/file",
      options: { destination: "logs/app.log" },
    },
    // "pino/file" with destination 1 (the stdout file descriptor) writes to stdout
    { level: "info", target: "pino/file", options: { destination: 1 } },
  ],
});

const logger = pino({ level: "info" }, transport);

Pino in frameworks

Pino also integrates seamlessly with popular Node.js frameworks, making it easy to capture structured logs from HTTP requests and responses. In Fastify, it's the default logger: every request is tagged with a unique reqId, and response logs automatically include metadata such as status codes and durations.

In Express, you can use the pino-http middleware for similar behavior—it logs each request and response and attaches a scoped req.log instance to every handler.
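
A minimal Express setup with pino-http might look like the following (the route and messages are illustrative):

JavaScript
import express from "express";
import pinoHttp from "pino-http";

const app = express();
app.use(pinoHttp()); // logs every request/response and attaches req.log

app.get("/", (req, res) => {
  req.log.info("Handling root route"); // carries the request's reqId
  res.send("Hello world");
});

app.listen(3000);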

These integrations make Pino a natural fit for production web services where structured, contextual logging is essential to understanding application behavior and diagnosing issues quickly.

OpenTelemetry alignment

Pino also aligns well with modern observability standards. Using the @opentelemetry/instrumentation-pino package, it can automatically inject trace_id and span_id fields, allowing your logs to be correlated with active traces in OpenTelemetry. This means every log entry can be connected to the broader context of a distributed request.
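
Enabling it typically means registering the instrumentation before your application code imports Pino. A sketch:

JavaScript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { PinoInstrumentation } from "@opentelemetry/instrumentation-pino";

// Register before pino is loaded so log records pick up trace context
registerInstrumentations({
  instrumentations: [new PinoInstrumentation()],
});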

Taken together, Pino's combination of structured output, minimal overhead, and native observability integration makes it the clear choice for production logging in Node.js. If you're starting a new service or modernizing an existing one, Pino should be your default logger.

2. Winston

If Pino is about speed, Winston is about flexibility. It's one of the oldest and most established Node.js loggers, with a modular design built around transports, formats, and custom levels.

Install it from npm with:

bash
npm install winston

Then create a logger instance:

JavaScript
import winston from "winston";

const { combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  level: "info",
  format: combine(timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.info("Application started");

This produces a structured JSON log similar to Pino's, but with customizable fields.

Working with log levels

Winston supports multiple log level schemes (npm, syslog, and cli) and allows defining your own:

JavaScript
const customLevels = {
  levels: {
    fatal: 0,
    error: 1,
    warn: 2,
    info: 3,
    debug: 4,
    trace: 5,
  },
};

const logger = winston.createLogger({
  levels: customLevels.levels,
  transports: [new winston.transports.Console()],
});

This flexibility makes Winston a great fit for large teams with specific severity conventions or OpenTelemetry-aligned naming.

Contextual metadata

You can attach metadata globally using defaultMeta or per log entry via an object parameter:

JavaScript
const logger = winston.createLogger({
  defaultMeta: { service: "payment-service" },
  transports: [new winston.transports.Console()],
});

logger.info("Payment processed", { transactionId: "tx-342" });

Result:

json
{
  "level": "info",
  "service": "payment-service",
  "transactionId": "tx-342",
  "message": "Payment processed",
  "timestamp": "2025-10-19T19:42:02.317Z"
}

Error handling

Winston doesn't automatically serialize Error objects unless you enable the errors({ stack: true }) formatter. This is a common "gotcha" for new users.

JavaScript
const { errors, combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  format: combine(errors({ stack: true }), timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.error(new Error("Database unavailable"));

This ensures you capture both the message and the full stack trace in JSON format.

Transports and routing logs

Winston's transport layer is its core strength. You can route logs to multiple destinations:

JavaScript
import "winston-daily-rotate-file";

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new winston.transports.DailyRotateFile({ filename: "app-%DATE%.log" }),
  ],
});

Beyond console and file transports, there are community transports for HTTP and several observability tools.
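
As one example, the built-in Http transport POSTs log entries to a remote endpoint. The host and path below are placeholders, not a real collector:

JavaScript
const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    // Hypothetical ingestion endpoint; replace with your log collector
    new winston.transports.Http({
      host: "logs.example.com",
      port: 80,
      path: "/ingest",
    }),
  ],
});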

OpenTelemetry integration

The @opentelemetry/instrumentation-winston package automatically maps Winston logs into OpenTelemetry's log model, adding trace context and severity metadata. This makes Winston a good choice if your stack already uses OTel for metrics and tracing.
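
Setup mirrors the Pino instrumentation shown earlier. A sketch:

JavaScript
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { WinstonInstrumentation } from "@opentelemetry/instrumentation-winston";

registerInstrumentations({
  instrumentations: [new WinstonInstrumentation()],
});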

When to choose Winston

If you need advanced routing, multiple formats, or compatibility with older codebases, Winston remains a powerful and mature choice. It's not as fast as Pino, but it excels in flexibility.

3. Bunyan

Bunyan was one of the first Node.js libraries to promote structured JSON logging. Pino is its spiritual successor, but you will still find Bunyan in many established, large-scale applications. Its API remains simple and its design focused.

Getting started

JavaScript
import bunyan from "bunyan";

const logger = bunyan.createLogger({ name: "myapp" });
logger.info("Server started");

The output looks like:

json
{
  "name": "myapp",
  "hostname": "falcon",
  "pid": 3451,
  "level": 30,
  "msg": "Server started",
  "time": "2025-10-19T19:59:11.043Z",
  "v": 0
}

Context and child loggers

Bunyan supports child loggers to automatically include contextual fields in all subsequent logs:

JavaScript
const requestLogger = logger.child({ requestId: "req-182" });
requestLogger.info("Processing request");

This makes it ideal for multi-request servers or long-running background workers.
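
As an illustration, a bare Node.js HTTP server might create one child logger per request. The generated requestId here is our own convention, not something Bunyan provides:

JavaScript
import http from "node:http";
import crypto from "node:crypto";
import bunyan from "bunyan";

const logger = bunyan.createLogger({ name: "myapp" });

http
  .createServer((req, res) => {
    // Every log emitted for this request carries the same requestId
    const log = logger.child({ requestId: crypto.randomUUID() });
    log.info({ url: req.url }, "Request received");
    res.end("ok");
  })
  .listen(3000);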

Error handling

Bunyan automatically serializes errors and includes their stack traces:

JavaScript
logger.error(new Error("Cache unavailable"));

Pretty-printing logs

During development, you can pipe logs through Bunyan's CLI tool for readability:

bash
node app.js | npx bunyan

Or filter only error-level logs:

bash
node app.js | npx bunyan -l error

Limitations

Although Bunyan introduced many best practices, its maintenance has slowed. If you're building something new, Pino offers a faster, more actively maintained alternative.

4. Log4js

If you're coming from a Java or .NET enterprise background, Log4js will feel very familiar. It's designed around the "category" and "appender" model of its namesake, Log4j, and brings that structured model into Node.js.

Basic usage

JavaScript
import log4js from "log4js";

log4js.configure({
  appenders: { out: { type: "stdout" } },
  categories: { default: { appenders: ["out"], level: "info" } },
});

const logger = log4js.getLogger();
logger.info("Application initialized");

This outputs a timestamped, colorized line in the console. For structured logging, you can use a JSON layout:

JavaScript
import jsonLayout from "log4js-json-layout";

log4js.addLayout("json", jsonLayout);
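
Once registered, reference the layout by name in an appender's configuration. A sketch assuming the registration above:

JavaScript
log4js.configure({
  appenders: {
    out: { type: "stdout", layout: { type: "json" } },
  },
  categories: { default: { appenders: ["out"], level: "info" } },
});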

Appenders and categories

Appenders define where logs go — for example, to a file, console, TCP socket, or third-party service.

JavaScript
log4js.configure({
  appenders: {
    console: { type: "stdout" },
    file: { type: "file", filename: "app.log" },
  },
  categories: {
    default: { appenders: ["console"], level: "info" },
    fileLogs: { appenders: ["file"], level: "debug" },
  },
});

Now you can use separate loggers for different components:

JavaScript
const consoleLogger = log4js.getLogger();
const fileLogger = log4js.getLogger("fileLogs");

consoleLogger.info("Info log");
fileLogger.debug("Debug log written to file");

Pros and cons

Pros

  • Mature and stable.
  • Configurable via appenders and categories.
  • Supports dynamic log level changes.

Cons

  • No JSON support by default.
  • Slower than Pino or Winston.
  • Limited OpenTelemetry integration.

Log4js is still relevant for teams needing category-based control over output destinations, especially in large, monolithic systems.


The next set of tools is different: these are not general-purpose loggers. They are designed to solve one problem very well and are often used alongside a primary logger like Pino or Winston.


5. Morgan

Morgan is not a general-purpose application logger. It is a highly specialized Express middleware for one thing: HTTP request (access) logs.

Basic usage

JavaScript
import express from "express";
import morgan from "morgan";

const app = express();
app.use(morgan("combined"));

app.get("/", (req, res) => res.send("Hello world"));
app.listen(3000);

Morgan automatically logs each request in a predefined format (like Apache's combined log format):

text
::1 - - [19/Oct/2025:20:21:10 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/8.5.0"

Limitations and modern alternatives

While Morgan is great for simple text-based access logs, modern observability practice is to use a middleware that integrates with your main structured logger, like pino-http or express-winston.

Why? Because an integrated logger automatically injects the reqId (request ID) from the access log into all your subsequent application logs for that request. This lets you correlate a single GET /api/user/123 log with all the "database query" and "service call" logs that happened inside it. Morgan, running in isolation, cannot do this.
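
That said, if you keep Morgan, you can at least route its output through your main structured logger instead of writing raw text to stdout, using Morgan's stream option. A sketch assuming an existing Winston (or similar) instance named logger:

JavaScript
app.use(
  morgan("combined", {
    // Each access-log line is forwarded to the structured logger
    stream: { write: (line) => logger.info(line.trim()) },
  })
);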

6. Roarr

Roarr is a structured logger designed for both Node.js and browser environments. Its main selling point is its deep integration with Node.js's AsyncLocalStorage (ALS).

This allows it to "adopt" a context (like a userId or traceId) and automatically apply it to all logs made within that asynchronous call stack, without you having to manually pass the logger object down through every function.

Basic example

JavaScript
import { Roarr } from "roarr";

const logger = Roarr.child({ service: "inventory" });
logger.info({ userId: 42 }, "Fetching user inventory");

Roarr only outputs logs if the ROARR_LOG environment variable is set to true:

bash
ROARR_LOG=true node app.js

Output:

json
{
  "context": { "logLevel": 30, "service": "inventory", "userId": 42 },
  "message": "Fetching user inventory",
  "time": 1739920142817,
  "version": "2.0.0"
}

Context propagation

This is incredibly powerful for complex, nested async code:

JavaScript
const base = Roarr.child({ app: "checkout" });

// Note: adopt() takes the routine first and the context second
base.adopt(() => {
  // Any logger call made in here, even in deeply nested
  // async functions, automatically carries "requestId": "r-99"
  base.info("Request started");
}, { requestId: "r-99" });

Limitations

Roarr doesn't implement its own transport system. Instead, it expects logs to be written to stdout for a separate log shipper to process.
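
For local development, the companion @roarr/cli package can pretty-print that stdout stream:

bash
npm install --global @roarr/cli
ROARR_LOG=true node app.js | roarr pretty-print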

7. Signale

Let's be perfectly clear: Signale is not a production logger, so do not use it for your backend API. It produces colorful, human-readable text, which is the exact opposite of the structured JSON data that observability platforms need.

So why is it on this list? Because it is excellent at its one specific job: making your CLI tools and build scripts look beautiful.

If you're writing a create-my-app bootstrapper, a database migration script, or a webpack plugin, Signale is a perfect choice. It's for logging to a developer's terminal, not to a log collector.

Example

JavaScript
import { Signale } from "signale";

const logger = new Signale({ scope: "setup" });

logger.start("Initializing project");
logger.success("Configuration complete");
logger.warn("Using default environment");
logger.error("Failed to fetch remote data");

This produces colorized, symbol-prefixed logs that are easy to scan in the terminal.

Timed logging

You can measure operation durations using time() and timeEnd():

JavaScript
logger.time("build");
setTimeout(() => logger.timeEnd("build"), 1200);

Pros and cons

Pros

  • Clean, colorful, and readable output.
  • Supports scopes, timers, and filtering.

Cons

  • Not JSON structured.
  • No OpenTelemetry support.
  • Not for server applications.

Choosing the right logger

Each of these libraries serves a different purpose. The choice isn't "which of the 7 is best", but "which category of tool do I need?"

| Library | Category | Ideal Use Case | OTel-Friendly? |
| --- | --- | --- | --- |
| Pino | Production Standard | High-performance microservices | Yes |
| Winston | Production Standard | Apps needing many transports/formats | Yes |
| Bunyan | Legacy | Maintaining older, large codebases | Yes |
| Log4js | Legacy | Enterprise/Java-style monoliths | No |
| Roarr | Specialist (Async) | Complex async logic (needs ALS) | No |
| Morgan | Specialist (HTTP) | Simple Express access logs | No (use pino-http) |
| Signale | Specialist (CLI) | Build scripts, CLI tools (dev-only) | No (not its purpose) |

If you're building a modern backend, the best choice is typically Pino for its performance, structured output, and native OpenTelemetry support. If you need complex routing or legacy compatibility, Winston remains an excellent option. For CLI tools and utilities, Signale is a great option.

Final thoughts

A well-chosen library doesn't just help you debug problems; it gives you a structured view of how your system behaves in production.

Pino and Winston dominate modern Node.js logging for good reason. They combine mature ecosystems with robust integrations, enabling everything from local debugging to full trace correlation in OpenTelemetry-based systems.

Whatever library you choose for your application, aim for structured, contextual, and consistent logs. They are your system's narrative, and the only way to understand what really happened when things go wrong.

Author
Ayooluwa Isaiah