Which logging library to use in a Node.js application

Introduction
“Log”, a shortening of “logbook”, refers to the book in which sailors originally recorded their observations, in particular the ship’s speed. At the time, speed was measured with a chip log, which gave the book its name.
In software development, logs are the place where we will keep track of the history of actions done on an application and where error messages and stack traces will be written.
This part should not be neglected: logs can really help you understand bugs, and they provide a lot of important information about what was done on the application.
This article will explore best practices for effectively managing logs within a Node.js application. We'll begin by defining the expectations for a logging system, delve into security considerations related to logging, and finally, explore key libraries commonly employed for logging purposes.
TL;DR
- Understand log expectations and utilize log levels for effective logging.
- Exercise caution when logging data, especially sensitive information.
- Choose suitable storage locations and implement log rotation for efficient log management.
- Optimize log performance to prevent application slowdowns.
- Prioritize log security to protect against unauthorized access.
- Consider decentralized log management systems for enhanced security and monitoring.
- Leverage logs for monitoring, error detection, and debugging in web applications.
- Choose a logging library that best fits your specific logging needs.
What to expect from a logging library
Log Level
In general, we want to log different types of information: errors that occur, the status of a process, actions triggered by a user, and any information useful for debugging.
The things that can be logged are numerous and depend on our use case. However, not all of them require the same attention.
For that reason, we generally want to use different log levels. That’s why most loggers offer the possibility to specify which level of log should be used among: error, warn, info, debug, and trace.
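To illustrate how level filtering works, here is a minimal sketch (MiniLogger is a hypothetical class for illustration only, not a real library; loggers like Pino follow the same principle):

```typescript
// Hypothetical MiniLogger: a message is emitted only if its level
// is at or above the configured minimum level.
const LEVELS = ['trace', 'debug', 'info', 'warn', 'error'] as const
type Level = (typeof LEVELS)[number]

class MiniLogger {
  constructor(private minLevel: Level) {}

  isEnabled(level: Level): boolean {
    // Compare positions in the severity ordering above.
    return LEVELS.indexOf(level) >= LEVELS.indexOf(this.minLevel)
  }

  log(level: Level, msg: string): void {
    if (this.isEnabled(level)) console.log(`[${level}] ${msg}`)
  }
}

const logger = new MiniLogger('info')
logger.log('debug', 'cache miss for session') // suppressed: below 'info'
logger.log('error', 'payment failed')         // emitted
```

In production you would typically set the minimum level to info or warn, and lower it to debug or trace only when investigating an issue.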
What not to log?
While it's possible to log a wide range of information, it's crucial to exercise extreme caution and avoid logging certain sensitive data. For instance, when recording a user's connection to an application, it's imperative to ensure that their password and any other private information are never included in the log files under any circumstances.
So you must be very careful about what data you are actually logging. In some cases, it can also be useful to rely on redaction tools that automatically detect and hide sensitive information before it is written to the log file. Some logging libraries, like Pino.js, allow you to do this.
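To make this concrete, here is a hand-rolled redaction sketch (the key list and redact helper are hypothetical; Pino offers this natively through its redact option, where you list the paths to censor):

```typescript
// Hypothetical redaction helper: masks known sensitive keys
// anywhere in an object tree before it is logged.
const SENSITIVE_KEYS = new Set(['password', 'token', 'creditCard'])

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact)
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.has(k) ? [k, '[REDACTED]'] : [k, redact(v)],
      ),
    )
  }
  return value
}

console.log(redact({ user: 'alice', password: 'hunter2' }))
// → { user: 'alice', password: '[REDACTED]' }
```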
Where do we write logs?
When dealing with scripts or command-line interfaces (CLIs), it's essential to employ a logger to display information in the console, either through the standard output (stdout) or standard error (stderr) streams. This approach allows the user to promptly view any errors or prompts in their terminal and offers the flexibility to redirect the output to a designated file.
You might be familiar with the redirection:
node script.js >> output.log 2>> error.log
Similarly, when dealing with an application, it will be mandatory to direct your logs to a specific log file for later reference.
As a general guideline, it’s not advised to have a single log file to store the entire historical log data from the initial application launch. As your application runs over time, an increasing volume of log data is generated. If all this data were stored in a single file, it could rapidly expand to occupy gigabytes of space on your server. In extreme cases, this accumulation of data could even lead to server crashes due to insufficient disk space.
The solution is rather simple: we implement log rotation. When a log file reaches a specified size, it is compressed and a new log file is created to record the latest entries. This approach ensures that older log files occupy less space once archived. On Linux systems, this can be achieved with logrotate.
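As a sketch, a logrotate rule for a hypothetical application writing under /var/log/myapp/ could look like this (the path and thresholds are examples, not a recommendation):

```
# Hypothetical /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    size 100M       # rotate once the file exceeds 100 MB
    rotate 14       # keep the 14 most recent archives, delete older ones
    compress        # gzip rotated files so archives take less space
    delaycompress   # leave the newest archive uncompressed
    missingok       # do not fail if the log file is absent
    notifempty      # skip rotation when the file is empty
}
```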
Performance considerations
As with any I/O operation, writing logs can be a source of slowness. When writing a log to a resource (like a log file), that resource is locked until the write operation has completed. This means that if your application produces a lot of logs, it can suffer a lot of latency because it is actually waiting for the logs to be written.
In an ideal logging system, any log operation should not have any impact on the performance of your application. So we will want our logging system to use the minimum amount of resources, to avoid having any throttling effect.
We must also consider that Node.js is single-threaded: if log operations are handled in the same process, the event loop could be overloaded with log operations and spend most of its execution time writing logs instead of doing something else.
So the best approach is to have another process or thread handle log writing; in most logging libraries these log processors are called transports.
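To illustrate the idea, here is a simplified, hypothetical sketch of what a transport decouples: the hot path only enqueues, and the disk write happens later, off the current call stack (AsyncFileLogger is made up for this example; real transports such as Pino's go further and run the writing in a separate worker thread or process):

```typescript
import { appendFile } from 'fs/promises'

// Hypothetical sketch: callers push to an in-memory buffer and return
// immediately; the actual disk write is deferred via setImmediate.
class AsyncFileLogger {
  private buffer: string[] = []
  private flushScheduled = false

  constructor(private path: string) {}

  info(msg: string): void {
    // Cheap, synchronous enqueue: the caller never waits on disk I/O.
    this.buffer.push(`${new Date().toISOString()} INFO ${msg}\n`)
    this.scheduleFlush()
  }

  pending(): number {
    return this.buffer.length
  }

  private scheduleFlush(): void {
    if (this.flushScheduled) return
    this.flushScheduled = true
    setImmediate(() => void this.flush())
  }

  private async flush(): Promise<void> {
    const lines = this.buffer.splice(0).join('')
    if (lines) await appendFile(this.path, lines)
    this.flushScheduled = false
  }
}

const logger = new AsyncFileLogger('./app.log')
logger.info('request handled') // returns immediately; the write happens later
```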
Protecting your logs
Logs are typically the initial point of examination in the event of a server or application attack. They encompass a record of all system operations, making them the primary source for forensic analysis. If a malicious actor gains access to the logs, they may attempt to delete them to conceal their actions, which is why it is imperative to ensure the robust protection of logs.
Therefore, when considering log storage, it's essential to choose wisely where to store them. If you're managing a web application hosted on a server, it's generally not advisable to store the logs on the same server where the application is running.
To protect the logs even more, we want to avoid any write operation that could overwrite existing log content.
That’s where a decentralized log management system comes into play. There are a lot of solutions available on the market. If you are a user of Amazon Web Services, you might be aware of CloudWatch. At Jolimoi, we are also using New Relic. Using a decentralized log management system will make sure your logs are safe and easy to monitor.
Monitor logs
If you are overseeing a web application or a service that necessitates monitoring, logs can be handy.
Modern monitoring tools rely on logs from your application to detect and track any occurring errors. With proper configuration, these tools can promptly identify outages and correlate them with the specific error entries in the logs. This capability significantly expedites the process of pinpointing the error's source and aids in swiftly devising solutions to rectify it. In this respect, New Relic offers advanced error tracking features, and Datadog provides similar capabilities.
Basic logging in Javascript
If you know JavaScript, you must know console.log(). That’s the default way to log something in JavaScript.
When running JavaScript code in a browser, console.log() gives you access to the browser console, so the Console API depends on the browser’s implementation.
Currently, there is no standard implementation of the console across browsers: most of them provide a similar API, but implementations can vary.
For example, running the same piece of code in Chrome and in Firefox can produce different results.
So if you are using advanced features of the console, you should check that they are compatible with every browser you target.
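One hypothetical example of such a difference is console.table: Chrome's DevTools renders it as an interactive table widget, Firefox draws it with a slightly different layout, and Node.js falls back to an ASCII table on stdout.

```typescript
// The same call renders differently in Chrome, Firefox, and Node.js.
console.table([
  { library: 'Pino', transports: 'worker threads' },
  { library: 'Winston', transports: 'built-in' },
])
```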
If you are running JavaScript code on Node.js, you will have access to the Node.js console. By default, those logs are sent to the standard output.
Both Browser and Node.js implementations provide console.log(), console.debug(), console.warn(), console.error(), … and it can be interesting to use that:
- In the browser, if you want to help developers find the source of an error that happened on the front end.
- In a Node.js script, if you don’t have specific performance needs and only need to give feedback to the person who launches the script.
As stated in the Node.js Console module documentation, it is designed to "provide a simple debugging console", making it well-suited for debugging purposes. However, given the considerations we've discussed earlier, using the Node.js console API as a production log system for an application is not advisable.
Which logging library to use?
So what logging library should we use that would be acceptable for production?
Among the most-used libraries, we can find small logging libraries that provide just the basic logging features; among them you can find Bunyan, Bole, npmlog, … But those don’t provide transports that would make it easy to send logs somewhere else.
Then you can find Pino and Winston, which provide a lot of transport and make it really easy to send logs to different services. You can see the list of transports bundled with Winston, and the list of known transports for Pino.
At Jolimoi, we are using Winston in our legacy code and Pino in our new application. We made the change because we are using Fastify, and Fastify uses Pino. If the two are compared, Pino actually has better performance than Winston.
What we expect from those libraries:
- Be able to log to standard output
- Be able to write logs as JSON
- Have different log levels
- Send logs to different services (New Relic, CloudWatch, …) using easily configurable transports
- Not affect the performance of our application
- As a bonus, redaction: the possibility of automatically anonymizing some data in the logs
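For reference, a logger meeting these expectations typically emits one JSON object per line on stdout. With Pino's default settings, a line looks roughly like this (the values will obviously differ; level 30 corresponds to info):

```
{"level":30,"time":1699999999999,"pid":1234,"hostname":"web-1","msg":"server started"}
```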
Example with Pino.js
You can find in the Pino.js documentation a list of examples using different frameworks.
We will just give a little example here without any framework, to explain how it works.
We created a custom class to encapsulate the logger, that way we could simply switch to another library by recreating a similar class with the same methods.
First, we create a module development.ts. This module exports the logger options object that lets us configure our logger. We create this module to configure the logger in development; it allows us to set a different configuration than the one we need in production. Here we are using the pino-pretty transport.
import type { LoggerOptions } from 'pino'

const formatTime = 'HH:MM:ss Z'

const development: LoggerOptions = {
  level: 'debug',
  transport: {
    target: 'pino-pretty',
    options: {
      translateTime: formatTime,
      ignore: 'pid,hostname',
    },
  },
}

export default development
Then we create a module loggerOptions.ts. That module’s purpose is to load the correct configuration depending on the environment. By default, if no environment is set in the .env file, it uses the development configuration.
import type { LoggerOptions } from 'pino'

import development from './development' // (1)
import production from './production'

type AvailableLoggerConfig = Record<string, LoggerOptions>

const availableLoggerConfig: AvailableLoggerConfig = { // (2)
  development,
  production,
}

const loggerOptions: LoggerOptions = availableLoggerConfig[process.env.ENV ?? 'development'] // (3)

export default loggerOptions
- First, we import the configuration
- Then we create an object that will contain all the available configuration
- We get the configuration depending on the environment
Finally, we create a module logger.ts that exports the logger instance, so we can use it everywhere.
import loggerOptions from '@./loggerOptions' // (1)
import type { Logger } from 'pino'
import pino from 'pino'

class JolimoiLogger { // (2)
  private log: Logger // (3)

  constructor() {
    this.log = pino(loggerOptions) // (4)
  }

  public debug(obj: object, msg?: string, ...args: any[]) {
    return this.log.debug(obj, msg, ...args)
  }

  public info(obj: object, msg?: string, ...args: any[]) {
    return this.log.info(obj, msg, ...args)
  }

  public warn(obj: object, msg?: string, ...args: any[]) {
    return this.log.warn(obj, msg, ...args)
  }

  public error(obj: unknown, msg?: string, ...args: any[]) {
    return this.log.error(obj, msg, ...args)
  }
}

const logger = new JolimoiLogger() // (5)

export default logger // (6)
- Firstly, we load our configuration to provide the appropriate options for initializing our logger; this options object is loaded based on the environment.
- Next, we establish a custom class responsible for managing our logger instance.
- We create a private property within this class, which will receive the Pino instance.
- Within the constructor, we instantiate the Pino instance using the specified options to configure the transport.
- To streamline the process, we create a public method for each logger level, akin to those found in Pino, to encapsulate the invocation of the original Pino instance method.
- Finally, we create a new instance and export it, ensuring a singleton pattern, which guarantees there will be only one logger instance for our entire application.
Conclusion
Logging plays a crucial role in web applications, impacting various aspects such as performance, security, monitoring, and debugging. When improperly utilized, it can result in significant costs. Therefore, it is imperative to give careful consideration to logging and choose the appropriate logging library based on your specific needs.
I would recommend using Pino or Winston, which offer excellent solutions for sending logs to multiple destinations. With a wide range of available transports, these logging libraries provide flexibility and ease of use. Additionally, if none of the existing options meet your requirements, both libraries allow you to create custom transports tailored to your unique logging needs.