Troubleshooting issues in your applications can be a complicated task requiring visibility into various components. In the worst-case scenario, to understand what is happening and why, you will need JVM metrics, Java logs, and traces combined. Having that information will let you slice and dice the data and get to the root cause efficiently. In this article, we will focus on logs and how to configure logging for your Java applications.
There are multiple ways to log in your Java applications. For example, you can just write data to a file, but there are far better ways to do that, which we explained in our Java logging tutorial. To choose the best option for your application, you should be aware of what's available. That's why I'll introduce you to another Java logging library.
Today we will look into Logback, a logging library that promises to improve on the first version of Log4j, providing up to ten times better performance, graceful recovery from I/O failures, automatic, policy-based removal of old log files, and more.
What Is Logback
Logback is a logging library for Java-based applications. It picks up where the first version of Log4j (check out our Log4j tutorial to learn more about what that library can do) left off and promises improvements over it. For the purpose of this blog post, I created a simple project and shared it in our GitHub account.
Logback uses SLF4J as the logging facade, which you can read more about in our SLF4J tutorial.
To use it, we need to include it as a project dependency – specifically, the logback-core and logback-classic libraries. Our Gradle build file dependency section looks as follows:
```groovy
dependencies {
    implementation 'ch.qos.logback:logback-core:1.2.3'
    implementation 'ch.qos.logback:logback-classic:1.2.3'
    implementation 'org.slf4j:slf4j-api:1.7.30'
}
```
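If you prefer Maven over Gradle, the equivalent dependencies section of the pom.xml would look roughly like this (same coordinates and versions as above):

```xml
<dependencies>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
  </dependency>
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.30</version>
  </dependency>
</dependencies>
```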
I also created a simple class for the purpose of showing how the Logback logging framework works:
```java
package com.sematext.blog.logging;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Logback {
  private static final Logger LOGGER = LoggerFactory.getLogger(Logback.class);

  public static void main(String[] args) {
    LOGGER.info("This is an INFO level log message!");
    LOGGER.error("This is an ERROR level log message!");
  }
}
```
As you can see, we generate two log messages. Before that, though, we create the Logger using the SLF4J LoggerFactory class and its getLogger method. Once we have it, we can start generating log messages using the appropriate Logger methods:
```java
LOGGER.info("This is an INFO level log message");
```
By default, when no configuration is provided the output generated by the execution of the above code will look as follows:
```
12:44:39.162 [main] INFO com.sematext.blog.logging.Logback - This is an INFO level log message!
12:44:39.165 [main] ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
The default behavior is activated when neither logback-test.xml, logback.groovy, nor logback.xml is found in the classpath. The default configuration uses the ConsoleAppender, which prints the log messages to the standard output, together with the PatternLayoutEncoder and the following pattern for formatting the log messages: %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n. The default root Logger is assigned the DEBUG log level, so it can be quite verbose.
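To make the default pattern concrete, here is a stdlib-only sketch (an illustration, not Logback's actual code) of roughly how such a pattern is rendered – note how %-5level left-justifies the level name in a five-character field:

```java
// Stdlib-only sketch of how Logback's default pattern renders a log event.
// This is an illustration, not Logback's actual implementation.
public class PatternSketch {
    static String render(String time, String thread, String level,
                         String logger, String msg) {
        // %-5s left-justifies the level name in a 5-character field,
        // mirroring the %-5level conversion word in the pattern
        return String.format("%s [%s] %-5s %s - %s", time, thread, level, logger, msg);
    }

    public static void main(String[] args) {
        System.out.println(render("12:44:39.162", "main", "INFO",
                "com.sematext.blog.logging.Logback",
                "This is an INFO level log message!"));
    }
}
```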
Logback Configuration
To configure Logback we can use either standard XML files or Groovy ones. We'll use the XML method. Let's create the logback.xml file in the resources folder of our project with the following contents:
```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```
The configuration element is the root element responsible for holding the whole configuration. Inside, we see two sections. The first one is the appender section. We created a single appender named STDOUT that prints the output to the console and uses an Encoder with a given pattern. The Encoder is responsible for formatting the log event. Finally, we defined the root element to use the INFO level as the default log level and send all messages to the appender named STDOUT.
The output of the execution of the above code with the created logback.xml file present in the classpath results in the following output:
```
12:49:47.730 INFO com.sematext.blog.logging.Logback - This is an INFO level log message!
12:49:47.732 ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
Logback Loggers
The Logger is the main entity that our application uses to create the log events we want. To get an instance of a Logger in Logback you just need to use the getLogger method of the LoggerFactory class:
```java
private static final Logger LOGGER = LoggerFactory.getLogger(SematextLogbackTutorial.class);
```
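Besides plain strings, SLF4J loggers also accept parameterized messages such as LOGGER.info("User {} logged in from {}", user, host), which avoids string concatenation when the level is disabled. A minimal stdlib-only sketch of how the {} placeholder substitution conceptually works (an illustration, not SLF4J's actual implementation – it ignores escaping and array formatting):

```java
// Stdlib sketch of SLF4J-style {} placeholder substitution, as used by
// calls like LOGGER.info("User {} logged in from {}", user, host).
public class PlaceholderSketch {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0, from = 0, at;
        // replace each "{}" with the next argument, left to right
        while ((at = pattern.indexOf("{}", from)) >= 0 && argIndex < args.length) {
            sb.append(pattern, from, at).append(args[argIndex++]);
            from = at + 2;
        }
        return sb.append(pattern.substring(from)).toString();
    }

    public static void main(String[] args) {
        System.out.println(format("User {} logged in from {}", "jane", "10.0.0.1"));
    }
}
```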
Logback Levels
If you’re new to the concept of logging levels, you should have a look at our tutorial on log levels, where we dive deeper into what they are and how to use them. In short, a log level or log severity is a piece of information telling how important a given log event is.
You can think of them as a way to separate the critical information about your system state from the purely informative. Log levels can help reduce information noise and alert fatigue.
In Logback, the available log levels are:
- TRACE – the lowest level of information, mostly used for very deep code debugging, usually not included in production logs.
- DEBUG – low level of information used for debugging purposes, usually not included in production logs.
- INFO – a log severity carrying information, like an operation that started or finished.
- WARN – a log level informing about an event that may require our attention, but is not critical and may be expected.
- ERROR – a log level indicating that an error occurred, expected or unexpected, usually meaning that part of the system is not working properly.
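The levels above are ordered – TRACE < DEBUG < INFO < WARN < ERROR – and a logger set to a given level publishes events at that level and above. A minimal sketch of that threshold logic:

```java
// Stdlib sketch of how a logger level acts as a threshold: an event is
// published only if its level is at least as high as the logger's level.
public class LevelSketch {
    enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    static boolean isEnabled(Level loggerLevel, Level eventLevel) {
        // ordinal order matches severity order in the enum declaration
        return eventLevel.ordinal() >= loggerLevel.ordinal();
    }

    public static void main(String[] args) {
        // With the root logger at INFO, DEBUG events are dropped
        System.out.println(isEnabled(Level.INFO, Level.DEBUG)); // false
        System.out.println(isEnabled(Level.INFO, Level.ERROR)); // true
    }
}
```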
Logback Appenders
A Logback appender is the component that Logback uses to write log events. Each appender has a name and a single method that processes the log event.
The logback-core library lays the foundation for the Logback appenders and provides a few classes that are ready to be used. Those are:
- ConsoleAppender – appends the log events to the System.out or System.err
- OutputStreamAppender – appends the log events to a java.io.OutputStream, providing the basic services for other appenders
- FileAppender – appends the log events to a file
- RollingFileAppender – appends the log events to a file with the option of automatic file rollover
The logback-classic library that we included in our example project extends the list of available appenders and provides appenders that are able to send data to external systems:
- SocketAppender – appends the log events to a socket
- SSLSocketAppender – appends the log events to a socket using a secure connection
- SMTPAppender – accumulates data in batches and sends the content of the batch to a user-defined email after a user-specified event occurs
- DBAppender – appends data to database tables
- SyslogAppender – appends data to a Syslog-compatible destination
- SiftingAppender – an appender that is able to separate logging according to a given runtime attribute
- AsyncAppender – appends the log events asynchronously
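As a quick illustration of the last one, the AsyncAppender is typically wrapped around another appender by referencing its name – a minimal sketch (it assumes an appender named file is defined elsewhere in the same configuration):

```xml
<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
  <!-- buffer log events and hand them to the referenced appender
       on a background thread -->
  <queueSize>512</queueSize>
  <appender-ref ref="file" />
</appender>
```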
There are many more appenders present in Logback extensions, but we won't cover them all here. Instead, let's look at some of the most common use cases – using the RollingFileAppender and writing logs both to a console and to a file.
Using Multiple Appenders – Console and File Appenders Together
Let's assume the following use case: we would like our application to log everything to a file, so we can ship the data to an external log management and analysis solution such as Sematext Logs, and we would also like to have the logs from the com.sematext.blog loggers printed to the console.
I’ll use the same example application that looks as follows:
```java
package com.sematext.blog.logging;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Logback {
  private static final Logger LOGGER = LoggerFactory.getLogger(Logback.class);

  public static void main(String[] args) {
    LOGGER.info("This is an INFO level log message!");
    LOGGER.error("This is an ERROR level log message!");
  }
}
```
But this time, the logback.xml will look slightly different:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="file" class="ch.qos.logback.core.FileAppender">
    <file>/tmp/logback.log</file>
    <append>true</append>
    <immediateFlush>true</immediateFlush>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="com.sematext.blog">
    <appender-ref ref="console"/>
  </logger>
  <root level="info">
    <appender-ref ref="file" />
  </root>
</configuration>
```
You can see that I have two appenders defined – one called console that appends the data to System.out or System.err, and a second one called file that uses ch.qos.logback.core.FileAppender to write the log events to a file. I provided a file configuration property to tell the appender where data should be written, specified that I want to append data, and requested an immediate flush, which is safer but lowers logging throughput. I also added an encoder to format my data and included the thread in the log line.
The difference from the earlier examples is also in the loggers definition. We have a root logger that appends the data to our file appender. It does that for all log events with the INFO log level and higher. We also have a second logger defined with the name com.sematext.blog, which appends data using the console appender.
After running the above code we will see the following output on the console:
```
12:54:09.077 INFO com.sematext.blog.logging.Logback - This is an INFO level log message!
12:54:09.078 ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
And the following output in the /tmp/logback.log file:
```
12:54:09.077 [main] INFO com.sematext.blog.logging.Logback - This is an INFO level log message!
12:54:09.078 [main] ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
You can see that the times are exactly the same; the lines are different, though, just as our encoder patterns are.
In the normal production environment, you would probably use the RollingFileAppender to roll over the files daily or when they reach a certain size. Keep that in mind when setting up your own production logging.
The Rolling File Appender
Let's look at one more appender that is commonly used in production environments – the RollingFileAppender. It allows configuring Logback to write log events to a file and rotate the files depending on certain criteria. For the purpose of this blog post, we will set up Logback to rotate the log files based on creation date. We will write the newest data to the /tmp/logback.log file, with a maximum file size of 10MB, and keep the log history for 7 days. The rolled files will also be stored in the /tmp directory and named logback.<DATE>.<COUNT>.log. To achieve that, we use the following configuration:
```xml
<configuration>
  <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>/tmp/logback.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>/tmp/logback.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
        <maxFileSize>10MB</maxFileSize>
      </timeBasedFileNamingAndTriggeringPolicy>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
      <outputPatternAsHeader>true</outputPatternAsHeader>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="file" />
  </root>
</configuration>
```
We also added an encoder, but we will learn about Logback Encoders in the next section, so let's skip discussing it for now.
The RollingFileAppender is far more powerful than what we can see above. There are different rolling policies apart from the TimeBasedRollingPolicy – we also have the SizeAndTimeBasedRollingPolicy and the FixedWindowRollingPolicy, and we can even define a triggeringPolicy to tell Logback when to roll the file. For more information, have a look at the Logback documentation.
Logback Encoders
A Logback encoder is responsible for transforming a log event into a byte array and writing that byte array to an OutputStream.
Right now there are two encoders available in Logback:
- PatternLayoutEncoder – an encoder that takes a pattern and encodes the log event based on that pattern
- LayoutWrappingEncoder – an encoder that closes the gap between the current Logback version and versions prior to 0.9.19, which used Layout instances instead of patterns.
In most cases you will be using a pattern, for example in the following appender definition:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
      <outputPatternAsHeader>true</outputPatternAsHeader>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="console" />
  </root>
</configuration>
```
The execution of such configuration along with our example project would result in the following output:
```
#logback.classic pattern: %d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n
13:16:30.724 INFO com.sematext.blog.logging.Logback - This is an INFO level log message!
13:16:30.726 ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
Logback Layouts
A layout is a component responsible for transforming an incoming event into a String. You can write your own layout and include it in your appender using the ch.qos.logback.core.encoder.LayoutWrappingEncoder:
```xml
<appender name="console" class="ch.qos.logback.core.ConsoleAppender">
  <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
    <layout class="com.sematext.blog.logging.LayoutThatDoesntExist" />
  </encoder>
</appender>
```
Or you can use the PatternLayout, which is flexible and provides us with numerous ways of formatting our log events.
Logback Filters
A filter is a mechanism for accepting or rejecting a log event based on criteria defined by the filter itself.
A very simple filter implementation could look as follows:
```java
package com.sematext.blog.logging.logback;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class SampleFilter extends Filter<ILoggingEvent> {
  @Override
  public FilterReply decide(ILoggingEvent event) {
    if (event.getMessage().contains("ERROR")) {
      return FilterReply.ACCEPT;
    }
    return FilterReply.DENY;
  }
}
```
You can see that we extend the Filter<ILoggingEvent> class and provide an implementation for the decide(ILoggingEvent event) method. In our case, we just check if the message contains a given String value; if it does, we accept the message by returning FilterReply.ACCEPT. Otherwise, we reject the log event by returning FilterReply.DENY.
We can include the filter in our appender by adding the filter tag and providing the class:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <filter class="com.sematext.blog.logging.logback.SampleFilter" />
  </appender>
  <root level="info">
    <appender-ref ref="console" />
  </root>
</configuration>
```
If we were to now execute our example code with the above Logback configuration the output would be as follows:
```
13:52:12.451 ERROR com.sematext.blog.logging.Logback - This is an ERROR level log message!
```
There are also out of the box filters. The ch.qos.logback.classic.filter.LevelFilter allows us to filter events based on exact log level matching. For example, to reject all INFO level logs we could use the following configuration:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
      <level>INFO</level>
      <onMatch>DENY</onMatch>
      <onMismatch>ACCEPT</onMismatch>
    </filter>
  </appender>
  <root level="info">
    <appender-ref ref="console" />
  </root>
</configuration>
```
The ch.qos.logback.classic.filter.ThresholdFilter allows us to filter log events below a specified threshold. For example, to discard all log events with a level lower than WARN – so INFO and below – we could use the following configuration:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>WARN</level>
    </filter>
  </appender>
  <root level="info">
    <appender-ref ref="console" />
  </root>
</configuration>
```
There are also other filter implementations and one additional type of filter called TurboFilters – we suggest looking into the Logback documentation regarding filters to learn more about them.
Mapped Diagnostic Contexts in Logback
MDC, or Mapped Diagnostic Context, is a way for developers to provide additional context information that can be included along with log events. MDC can be used to distinguish log output from different sources – for example, in highly concurrent environments. It is managed on a per-thread basis.
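The per-thread behavior can be sketched with a ThreadLocal map – conceptually how MDC stores its values (a stdlib illustration, not Logback's actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Stdlib sketch of MDC's per-thread storage: each thread sees only
// the context values it has put itself.
public class MdcSketch {
    private static final ThreadLocal<Map<String, String>> CONTEXT =
            ThreadLocal.withInitial(HashMap::new);

    static void put(String key, String value) {
        CONTEXT.get().put(key, value);
    }

    static String get(String key) {
        return CONTEXT.get().get(key);
    }

    public static void main(String[] args) throws InterruptedException {
        put("user", "jane");
        Thread other = new Thread(() -> {
            // a different thread has its own, initially empty context
            System.out.println(get("user")); // prints null
        });
        other.start();
        other.join();
        System.out.println(get("user")); // prints jane
    }
}
```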
With Logback, we use the SLF4J API to work with the MDC. For example, let's look at the following code:
```java
package com.sematext.blog.logging;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class LogbackMDC {
  private static final Logger LOGGER = LoggerFactory.getLogger(LogbackMDC.class);

  public static void main(String[] args) {
    MDC.put("user", "rafal.kuc@sematext.com");
    LOGGER.info("This is the first INFO level log message!");
    MDC.put("executionStep", "one");
    LOGGER.info("This is the second INFO level log message!");
    MDC.put("executionStep", "two");
    LOGGER.info("This is the third INFO level log message!");
  }
}
```
We've used the org.slf4j.MDC class and its put method to provide additional context information. In our case, we provide two properties – user and executionStep.
To be able to display the added context information we need to modify our pattern, for example like this:
```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%X{user}] [%X{executionStep}] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="info">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
```
I added two new elements: [%X{user}] [%X{executionStep}]. By using %X{name_of_the_mdc_property} we can easily include our additional context information.
After executing our code with the above Logback configuration we would get the following output on the console:
```
14:14:10.687 [rafal.kuc@sematext.com] [] INFO com.sematext.blog.logging.LogbackMDC - This is the first INFO level log message!
14:14:10.688 [rafal.kuc@sematext.com] [one] INFO com.sematext.blog.logging.LogbackMDC - This is the second INFO level log message!
14:14:10.688 [rafal.kuc@sematext.com] [two] INFO com.sematext.blog.logging.LogbackMDC - This is the third INFO level log message!
```
You can see that when the context information is available it is written along with the log messages. It will be present until it is changed or cleared.
Of course, this is just a simple example, and Mapped Diagnostic Contexts can be used in advanced scenarios such as distributed client-server architectures. If you are interested in learning more, have a look at the Logback documentation dedicated to MDC. You can also use MDC as a discriminator value for the Sifting appender and route your logs based on that.
Markers in Logback
Markers are very powerful tools that allow us to enrich our log events. They are named objects used to associate additional information with a log event. They can be used for filtering, as we've seen in our Log4j 2 tutorial, but they are not limited to that. For example, we can try sending an email when a log message with an IMPORTANT marker appears.
First, we need to have some code for that:
```java
package com.sematext.blog.logging;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class LogbackMarker {
  private static Logger LOGGER = LoggerFactory.getLogger(LogbackMarker.class);

  private static final Marker IMPORTANT = MarkerFactory.getMarker("IMPORTANT");

  public static void main(String[] args) {
    LOGGER.info("This is a log message that is not important!");
    LOGGER.info(IMPORTANT, "This is a very important log message!");
  }
}
```
We've already seen a similar example. The code is the same, but this time we use SLF4J as the API of our choice and Logback as the logging framework.
Our logback.xml file will look as follows:
```xml
<configuration>
  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="markersift" class="ch.qos.logback.classic.sift.SiftingAppender">
    <discriminator class="com.sematext.blog.logging.logback.MarkerDiscriminator">
      <key>importance</key>
      <defaultValue>not_important</defaultValue>
    </discriminator>
    <sift>
      <appender name="file-${importance}" class="ch.qos.logback.core.FileAppender">
        <file>/tmp/logback-${importance}.log</file>
        <append>false</append>
        <immediateFlush>true</immediateFlush>
        <encoder>
          <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
    </sift>
  </appender>
  <logger name="com.sematext.blog">
    <appender-ref ref="console" />
  </logger>
  <root level="info">
    <appender-ref ref="markersift" />
  </root>
</configuration>
```
This time the configuration is a bit more complicated. We’ve used the SiftingAppender to be able to create two files, depending on the value returned by our MarkerDiscriminator. Our discriminator implementation is simple and returns the name of the marker if the IMPORTANT marker is present in the log message and the defaultValue if it is not present. The code looks as follows:
```java
package com.sematext.blog.logging.logback;

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;
import org.slf4j.Marker;

public class MarkerDiscriminator extends AbstractDiscriminator<ILoggingEvent> {
  private String key;
  private String defaultValue;

  @Override
  public String getDiscriminatingValue(ILoggingEvent iLoggingEvent) {
    Marker marker = iLoggingEvent.getMarker();
    if (marker != null && marker.contains("IMPORTANT")) {
      return marker.getName();
    }
    return defaultValue;
  }

  public String getKey() {
    return key;
  }

  public void setKey(String key) {
    this.key = key;
  }

  public String getDefaultValue() {
    return defaultValue;
  }

  public void setDefaultValue(String defaultValue) {
    this.defaultValue = defaultValue;
  }
}
```
Let’s get back to our Logback configuration though. The sift appender that we have uses the same variable name as the key in our discriminator. That means that for each of the values returned by the discriminator a new file appender will be created. In our case, there will be two files: /tmp/logback-IMPORTANT.log and /tmp/logback-not_important.log.
After the code execution the /tmp/logback-IMPORTANT.log file will contain the following log message:
```
11:08:47.543 [main] INFO c.s.blog.logging.LogbackMarker - This is a very important log message!
```
While the second file, the /tmp/logback-not_important.log will contain the following log message:
```
11:08:47.537 [main] INFO c.s.blog.logging.LogbackMarker - This is a log message that is not important!
```
The output on the console will be as follows:
```
11:08:47.537 INFO c.s.blog.logging.LogbackMarker - This is a log message that is not important!
11:08:47.543 INFO c.s.blog.logging.LogbackMarker - This is a very important log message!
```
As you can see in this simple example, everything works as we wanted.
Java Logging with Log Management Tools: How Can Sematext Logs Help
Even though Logback is not the latest and greatest Java logging framework available it still does an awesome job when it comes to what it was designed for – logging. And now we know how to use and configure it for our Java applications.
But no matter what kind of library we are using in the background to handle our logging, with the growing number of Java applications in our environment you may be overwhelmed by how many logs they will produce. This is especially true for large, distributed applications built out of many microservices.
You may get away with logging to a file and only using the logs when troubleshooting is needed, but working with huge amounts of data quickly becomes unmanageable, so you should end up using a log management tool for log monitoring and centralization. You can either go for an in-house solution based on open-source software or use one of the products available on the market, like Sematext Logs.
A fully-managed log centralization solution such as Sematext Logs will give you the freedom of not needing to manage yet another, usually quite complex, part of your infrastructure. It will allow you to manage a plethora of sources for your logs. If you want to see how Sematext stacks up against similar solutions, we wrote in-depth comparisons to help you understand the options available out there. Read our reviews of the best cloud logging services, log analysis software, and log management tools.
You may want to include logs like JVM garbage collection logs in your managed log solution. After turning them on for your applications and systems working on the JVM you will want to have them in a single place for correlation, analysis, and to help you tune the garbage collection in the JVM instances. Alert on logs, aggregate the data, save and re-run the queries, hook up your favorite incident management software.
Using a log management solution is one of the many tips that can help you get the most of your logs. You can find more of them from our blog post about Java logging best practices.
Wrap up
Logging is invaluable when troubleshooting Java applications. In fact, logging is invaluable in troubleshooting in general, no matter whether that is a Java application or a hardware switch or firewall. Both software and hardware give us a look into how they are working in the form of parameterized logs enriched with contextual information.
Logback makes it possible to sort out logging in your Java application using a modern, fast, and reliable library. We've learned how to include it in our projects, how to configure it, and what options are available to us as developers.
I hope this article gave you an idea of how to deal with Java application logs using Logback and what functionality it provides. Don't forget, the key to efficient application logging is using a good log management tool. Try Sematext Logs to see how much easier things could be. There's a 14-day free trial available for you to explore all its features.
Happy logging!