Last updated: October 20, 2025
The Top 7 Node.js Logging Libraries Compared
Building reliable Node.js applications requires visibility into what your code is doing at every level. Whether you’re tracking API calls, debugging production issues, or analyzing user behavior, logging is how you connect system behavior to real-world outcomes.
While the built-in `console` object works for simple debugging, it falls short once your application scales. Production logging demands structured data, configurable verbosity, and the ability to integrate with observability systems like OpenTelemetry and centralized log pipelines.
The Node.js ecosystem offers several mature libraries that go far beyond `console.log()`. In this guide, we’ll explore seven of the most widely used and actively maintained options and examine how each fits into a modern observability strategy.
Why logging libraries matter
Modern logging isn’t just about printing text to the console. It’s about producing data that can be parsed, filtered, correlated, and visualized across distributed systems. To achieve that, your logging setup needs to:
- Structure data consistently (usually in JSON)
- Include timestamps, levels, and context
- Support multiple output targets (files, transports, APIs)
- Handle asynchronous workloads efficiently
- Integrate with tracing and metrics systems
- Avoid performance bottlenecks
Each library we’ll explore brings its own philosophy to these challenges. Some prioritize speed and minimal overhead, while others emphasize flexibility or developer ergonomics.
Let’s begin with the one that has become synonymous with performance and production-grade logging in Node.js.
1. Pino
Pino has become the de facto choice for high-performance logging in Node.js. Its design focuses on speed, low overhead, and structured JSON output, making it ideal for microservices and observability pipelines.
You can install Pino from npm:
```bash
npm install pino
```
Then, initialize a logger and log a message:
```javascript
import pino from "pino";

const logger = pino();
logger.info("Application started");
```
You’ll see structured JSON output like this:
```json
{
  "level": 30,
  "time": 1739914456819,
  "pid": 9418,
  "hostname": "falcon",
  "msg": "Application started"
}
```
Each field serves a clear purpose: the numeric `level` denotes severity, `time` records the timestamp, and `msg` holds your message. Because it’s structured JSON, every field can be parsed automatically by log collectors.
Why developers choose Pino
Pino’s biggest selling point is speed. It’s written with zero-allocation techniques and avoids expensive string formatting. Benchmarks consistently show it outperforming alternatives by large margins.
It’s also deeply configurable. You can adjust timestamps, change log levels, redact sensitive fields, or define multiple output targets without adding much complexity.
```javascript
const logger = pino({
  level: process.env.LOG_LEVEL || "info",
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: { paths: ["user.password"] },
});
```
Logging with context
Structured logs become truly valuable when enriched with metadata. Pino supports this via its second parameter, often called the `mergingObject`:
```javascript
logger.info({ userId: "abc123", route: "/login" }, "User login successful");
```
This produces:
```json
{
  "level": 30,
  "time": "2025-10-19T19:33:24.249Z",
  "pid": 1209,
  "hostname": "falcon",
  "userId": "abc123",
  "route": "/login",
  "msg": "User login successful"
}
```
This contextual format makes searching logs in your observability platform far easier. You can filter by `userId`, query by `route`, or aggregate counts over time.
Error handling and serializers
Pino automatically serializes `Error` objects, including the message and stack trace. It also includes built-in serializers for requests and responses, useful when integrating with web frameworks.
```javascript
logger.error(new Error("Database connection failed"));
```
You can also define custom serializers for specific objects:
```javascript
const logger = pino({
  serializers: {
    user: (u) => ({ id: u.id, email: u.email }),
  },
});

logger.info({ user: { id: "u1", email: "test@example.com", token: "secret" } });
```
Integrations and transports
By default, Pino writes logs to `stdout`. For production, you can configure multiple transports to send logs to files or telemetry endpoints.
```javascript
const transport = pino.transport({
  targets: [
    { target: "pino/file", options: { destination: "logs/app.log" } },
    { target: "pino-pretty", options: { colorize: true } },
  ],
});

const logger = pino({ level: "info" }, transport);
```
Each transport runs in a separate worker thread, keeping your main event loop unblocked.
Pino in frameworks
- Fastify: Pino is Fastify’s default logger. All requests and responses include contextual data like `reqId` and response times automatically.
- Express: You can integrate the `pino-http` middleware to log incoming requests and attach `req.log` to handlers.
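For Express, the wiring is only a few lines. Below is a minimal sketch using the `pino-http` middleware; the route and port are illustrative, not prescribed by either library:

```javascript
import express from "express";
import pinoHttp from "pino-http";

const app = express();

// pino-http logs every request/response pair and attaches
// a request-scoped child logger to req.log
app.use(pinoHttp());

app.get("/users", (req, res) => {
  // req.log automatically carries request context (method, url, request id)
  req.log.info("Fetching users");
  res.json([]);
});

app.listen(3000);
```

Because `req.log` is a child logger, anything you log inside a handler is correlated with that request’s entry and exit logs.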
OpenTelemetry alignment
Pino integrates seamlessly with OpenTelemetry using the `@opentelemetry/instrumentation-pino` package. This automatically injects `trace_id` and `span_id` fields, allowing you to correlate logs with traces and metrics.
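Enabling it typically looks like the sketch below, which registers the instrumentation through the Node SDK before the application creates its logger. This is one common setup pattern, not the only one:

```javascript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { PinoInstrumentation } from "@opentelemetry/instrumentation-pino";

// Register the Pino instrumentation so trace_id and span_id
// are injected into log records emitted inside an active span
const sdk = new NodeSDK({
  instrumentations: [new PinoInstrumentation()],
});

sdk.start();
```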
Pino’s structured output and low overhead make it the most production-ready logger in the Node.js ecosystem today.
2. Winston
If Pino is about speed, Winston is about flexibility. It’s one of the oldest and most established Node.js loggers, with a modular design built around transports, formats, and custom levels.
Setting up Winston
Install it from npm:
```bash
npm install winston
```
Create a logger instance:
```javascript
import winston from "winston";

const { combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  level: "info",
  format: combine(timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.info("Application started");
```
This produces a structured JSON log similar to Pino’s but with customizable fields.
Working with log levels
Winston supports multiple log level schemes (`npm`, `syslog`, and `cli`) and allows defining your own:
```javascript
const customLevels = {
  levels: {
    fatal: 0,
    error: 1,
    warn: 2,
    info: 3,
    debug: 4,
    trace: 5,
  },
};

const logger = winston.createLogger({
  levels: customLevels.levels,
  transports: [new winston.transports.Console()],
});
```
This flexibility makes Winston a great fit for large teams with specific severity conventions or OpenTelemetry-aligned naming.
Contextual metadata
You can attach metadata globally using `defaultMeta` or per log entry via an object parameter:
```javascript
const logger = winston.createLogger({
  defaultMeta: { service: "payment-service" },
  transports: [new winston.transports.Console()],
});

logger.info("Payment processed", { transactionId: "tx-342" });
```
Result:
```json
{
  "level": "info",
  "service": "payment-service",
  "transactionId": "tx-342",
  "message": "Payment processed",
  "timestamp": "2025-10-19T19:42:02.317Z"
}
```
Error handling
Winston doesn’t automatically serialize `Error` objects unless you enable the `errors({ stack: true })` formatter:
```javascript
const { errors, combine, timestamp, json } = winston.format;

const logger = winston.createLogger({
  format: combine(errors({ stack: true }), timestamp(), json()),
  transports: [new winston.transports.Console()],
});

logger.error(new Error("Database unavailable"));
```
This ensures you capture both the message and the full stack trace in JSON format.
Transports and routing logs
Winston’s transport layer is its core strength. You can route logs to multiple destinations:
```javascript
import winston from "winston";
// Importing the package registers winston.transports.DailyRotateFile
import "winston-daily-rotate-file";

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    new winston.transports.DailyRotateFile({ filename: "app-%DATE%.log" }),
  ],
});
```
Beyond console and file transports, there are community transports for HTTP, CloudWatch, Elasticsearch, Datadog, and more.
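Winston also ships an HTTP transport in core. The sketch below sends each log entry to a collector endpoint; the host and path are placeholders for whatever your log pipeline exposes:

```javascript
import winston from "winston";

const logger = winston.createLogger({
  transports: [
    new winston.transports.Console(),
    // Ship each log entry as an HTTP request to a remote collector
    // (host and path below are hypothetical)
    new winston.transports.Http({
      host: "logs.example.com",
      port: 443,
      path: "/ingest",
      ssl: true,
    }),
  ],
});
```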
OpenTelemetry integration
The `@opentelemetry/instrumentation-winston` package automatically maps Winston logs into OpenTelemetry’s log model, adding trace context and severity metadata. This makes Winston a good choice if your stack already uses OTel for metrics and tracing.
When to choose Winston
If you need advanced routing, multiple formats, or compatibility with older codebases, Winston remains a powerful and mature choice. It’s not as fast as Pino, but it excels in flexibility.
3. Log4js
Log4js brings the structured, category-based logging model from the Java world into Node.js. It uses appenders to control where logs go and layouts to control how they look.
Basic usage
```javascript
import log4js from "log4js";

log4js.configure({
  appenders: { out: { type: "stdout" } },
  categories: { default: { appenders: ["out"], level: "info" } },
});

const logger = log4js.getLogger();
logger.info("Application initialized");
```
This outputs a timestamped, colorized line in the console. For structured logging, you can use a JSON layout:
```javascript
import jsonLayout from "log4js-json-layout";

log4js.addLayout("json", jsonLayout);
```
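Once registered, the layout is referenced by name in an appender’s configuration. A self-contained sketch, assuming the `log4js-json-layout` package is installed:

```javascript
import log4js from "log4js";
import jsonLayout from "log4js-json-layout";

// Register the custom layout under the name "json"
log4js.addLayout("json", jsonLayout);

log4js.configure({
  appenders: {
    // Point the stdout appender at the registered layout
    out: { type: "stdout", layout: { type: "json" } },
  },
  categories: { default: { appenders: ["out"], level: "info" } },
});

log4js.getLogger().info("Structured entry");
```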
Appenders and categories
Appenders define where logs go — for example, to a file, console, TCP socket, or third-party service.
```javascript
log4js.configure({
  appenders: {
    console: { type: "stdout" },
    file: { type: "file", filename: "app.log" },
  },
  categories: {
    default: { appenders: ["console"], level: "info" },
    fileLogs: { appenders: ["file"], level: "debug" },
  },
});
```
Now you can use separate loggers for different components:
```javascript
const consoleLogger = log4js.getLogger();
const fileLogger = log4js.getLogger("fileLogs");

consoleLogger.info("Info log");
fileLogger.debug("Debug log written to file");
```
Pros and cons
Pros
- Mature and stable.
- Configurable via appenders and categories.
- Supports dynamic log level changes.
Cons
- No JSON support by default.
- Slower than Pino or Winston.
- Limited OpenTelemetry integration.
Log4js is still relevant for teams coming from the Java ecosystem or needing category-based control over output destinations.
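The dynamic level change listed among the pros is a one-line operation. A sketch, assuming log4js is already configured as above:

```javascript
import log4js from "log4js";

const logger = log4js.getLogger();

// Raise verbosity at runtime, e.g. while diagnosing an incident,
// without restarting the process
logger.level = "debug";

logger.debug("Now visible");
```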
4. Bunyan
Bunyan was one of the first Node.js libraries to promote structured JSON logging. Its API remains simple and its design focused.
Getting started
```javascript
import bunyan from "bunyan";

const logger = bunyan.createLogger({ name: "myapp" });
logger.info("Server started");
```
The output looks like:
```json
{
  "name": "myapp",
  "hostname": "falcon",
  "pid": 3451,
  "level": 30,
  "msg": "Server started",
  "time": "2025-10-19T19:59:11.043Z",
  "v": 0
}
```
Context and child loggers
Bunyan supports child loggers to automatically include contextual fields in all subsequent logs:
```javascript
const requestLogger = logger.child({ requestId: "req-182" });
requestLogger.info("Processing request");
```
This makes it ideal for multi-request servers or long-running background workers.
Error handling
Bunyan automatically serializes errors and includes their stack traces:
```javascript
logger.error(new Error("Cache unavailable"));
```
Pretty-printing logs
During development, you can pipe logs through Bunyan’s CLI tool for readability:
```bash
node app.js | npx bunyan
```
Or filter only error-level logs:
```bash
node app.js | npx bunyan -l error
```
Limitations
Although Bunyan introduced many best practices still used today, its maintenance has slowed in recent years. It remains solid but lacks integration with OpenTelemetry and modern transports.
If you’re building something new, Pino offers a faster, more actively maintained alternative. But Bunyan still works well in legacy systems or smaller projects.
5. Roarr
Roarr is a structured logger designed for both Node.js and browser environments. It focuses on context propagation and library-level compatibility, allowing applications and dependencies to produce logs in a consistent format.
Basic example
```javascript
import { Roarr } from "roarr";

const logger = Roarr.child({ service: "inventory" });
logger.info({ userId: 42 }, "Fetching user inventory");
```
Roarr only outputs logs if the `ROARR_LOG` environment variable is set to `true`:
```bash
ROARR_LOG=true node app.js
```
Output:
```json
{
  "context": { "logLevel": 30, "service": "inventory", "userId": 42 },
  "message": "Fetching user inventory",
  "time": 1739920142817,
  "version": "2.0.0"
}
```
Context propagation
Roarr’s `adopt()` and `child()` methods let you attach contextual information that persists across asynchronous code, something many other loggers don’t handle elegantly.
```javascript
const base = Roarr.child({ app: "checkout" });

base.adopt({ requestId: "r-99" }, () => {
  base.info("Request started");
});
```
This keeps `requestId` present in all logs generated within that context.
Limitations
Roarr doesn’t implement its own transport system. Instead, it expects logs to be piped to a shipper like Fluentd, Vector, or Logstash for processing.
Pros
- Context propagation across async boundaries.
- Lightweight, minimal design.
- Works in both Node.js and browsers.
Cons
- Requires external log shippers.
- No native transport or rotation support.
Roarr is a strong option for library developers or anyone building code that runs in both Node.js and browser contexts.
6. Signale
Signale takes a different approach. Instead of focusing on structured JSON, it’s optimized for human-readable, colorized console output, making it a great fit for CLI tools and developer utilities.
Example
```javascript
import { Signale } from "signale";

const logger = new Signale({ scope: "setup" });

logger.start("Initializing project");
logger.success("Configuration complete");
logger.warn("Using default environment");
logger.error("Failed to fetch remote data");
```
This produces colorized, symbol-prefixed logs that are easy to scan in the terminal. Each scope can have its own settings, output streams, and timers.
Timed logging
You can measure operation durations using `time()` and `timeEnd()`:
```javascript
logger.time("build");
setTimeout(() => logger.timeEnd("build"), 1200);
```
When to use Signale
Signale isn’t designed for production logging or observability pipelines, but it’s perfect for interactive tools and developer CLIs.
Pros
- Clean, colorful, and readable output.
- Supports scopes, timers, and filtering.
- Highly customizable formatting.
Cons
- Not JSON structured.
- No OpenTelemetry support.
- Limited for server applications.
If you’re writing a build tool, migration CLI, or local automation script, Signale keeps your logs visually organized without heavy setup.
7. Morgan
Morgan is an Express middleware for logging HTTP requests. It’s not a full logging framework but remains popular for lightweight web services.
Basic usage
```javascript
import express from "express";
import morgan from "morgan";

const app = express();
app.use(morgan("combined"));

app.get("/", (req, res) => res.send("Hello world"));
app.listen(3000);
```
Morgan automatically logs each request in a predefined format (like Apache’s `combined` log format):
```text
::1 - - [19/Oct/2025:20:21:10 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/8.5.0"
```
Custom tokens
You can define custom tokens to enrich request logs:
```javascript
morgan.token("id", (req) => req.headers["x-request-id"]);
app.use(morgan(":id :method :url :status :response-time ms"));
```
This produces:
```text
42 GET /users 200 8.4 ms
```
Limitations
Morgan excels at simple HTTP request logging, but it’s not intended for application-level or structured logging. For production-grade systems, it’s often combined with Winston or Pino.
Choosing the right logger
Each of these libraries serves a different purpose. Here’s how they compare conceptually:
| Library | Strengths | Ideal Use Case |
|---|---|---|
| Pino | Fast, JSON structured, OTel-friendly | Production services and microservices |
| Winston | Flexible formats, transports | Applications needing multiple outputs |
| Log4js | Configurable appenders and categories | Legacy or enterprise-style setups |
| Bunyan | Simple structured logs | Lightweight projects or legacy apps |
| Roarr | Context propagation, browser support | Libraries and hybrid environments |
| Signale | Colorized CLI logging | Development tools and scripts |
| Morgan | HTTP request logs | Express web servers |
If you’re building a modern backend, the best choice is typically Pino for its performance, structured output, and native OpenTelemetry support. If you need complex routing or legacy compatibility, Winston remains an excellent option. For tools and utilities, Signale and Morgan provide quick wins.
Final thoughts
A well-chosen library doesn’t just help you debug problems; it gives you a structured view of how your system behaves in production.
Pino and Winston dominate modern Node.js logging for good reason. They combine mature ecosystems with robust integrations, enabling everything from local debugging to full trace correlation in OpenTelemetry-based systems.
Whatever library you choose, aim for structured, contextual, and consistent logs. They are your system’s narrative and the only way to understand what really happened when things go wrong.
