

Key JVM Metrics to Monitor for Peak Java Application Performance

Monitoring is crucial if you want to see what happens in your system, and JVM-based applications are no different. Some metrics, like memory and garbage collection, require special attention because they play a major role in your application's performance. In this blog post, we will look at the key Java Virtual Machine (JVM) metrics you should monitor if you care about performance and stability: memory, garbage collection, and JVM threads.

How to Monitor JVM Performance: Critical Metrics You Need to Know

You can measure hundreds of metrics from a single Java application, but you don't need all of them when monitoring availability and performance. You'll probably need more when it's time to pinpoint what's wrong with the app, but for monitoring there is a handful of critical metrics you should focus on: memory usage, garbage collection, and thread counts, all of which are available via JMX. And since monitoring and tuning go hand in hand, understanding how these specific metrics work will also help you with JVM performance tuning.

Memory Usage

Memory usage is one of the most important Java resources to monitor when you want to prevent leaks and troubleshoot unexpected issues.

When a JVM-based application starts, it reserves a region of memory called the heap. The JVM uses that space when you create objects. To give you a few examples: reading data from a file requires memory, opening an HTTP connection requires memory, running a loop requires memory. The memory stays occupied as long as the object is referenced from the code. Once the object is no longer needed, it is treated as garbage, and the garbage collector will reclaim it when the time comes.
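
To make that lifecycle a bit more concrete, here is a minimal sketch (the class and method names are made up for illustration) of an object being allocated, used, and then becoming eligible for collection:

    import java.util.ArrayList;
    import java.util.List;

    public class AllocationExample {
        public static void main(String[] args) {
            // Allocated on the heap; 'lines' keeps the list reachable
            List<String> lines = new ArrayList<>();
            lines.add("some data read from a file");

            // Still reachable through 'lines', so the GC cannot reclaim it yet
            process(lines);

            // Dropping the last reference makes the list (and its contents)
            // eligible for garbage collection at the next GC cycle
            lines = null;
        }

        private static void process(List<String> lines) {
            for (String line : lines) {
                System.out.println(line.length()); // the loop itself needs a little memory too
            }
        }
    }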

The heap itself is divided into different spaces. For example, there is a space called the young generation that holds objects that were just created and are short-lived, and the old generation that keeps long-lived objects; there is also a dedicated area of memory, outside the heap, for the compiled code itself; and so on. How the memory is organized depends on the garbage collector algorithm you choose to use.

When dealing with JVM-based applications and memory, the key point to keep in mind is that your code needs memory to work. The more data you process and the more complicated your algorithms are, the more memory you will need. If there is not enough memory to create new objects, the Java Virtual Machine will throw an OutOfMemoryError. Even if the total free memory looks sufficient, there may not be enough contiguous space to allocate a single, large object. Depending on how the application is written, that may mean some functionality stops working or even that the whole application crashes.

The same can happen if memory is leaking. When an application keeps references to objects even though those objects are no longer needed, the garbage collector cannot clear them from memory. Memory usage then grows to the point where the JVM heap is no longer large enough and you'll get an OutOfMemoryError.
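
A classic way such a leak happens is a cache-like structure that only ever grows. The sketch below is a hypothetical example, not taken from any real application, and will eventually end with an OutOfMemoryError:

    import java.util.ArrayList;
    import java.util.List;

    public class LeakExample {
        // A static collection lives as long as the class is loaded,
        // so everything it references can never be garbage collected
        private static final List<byte[]> CACHE = new ArrayList<>();

        public static void handleRequest() {
            // Each call "caches" 1 MB and never removes it;
            // heap usage grows until an OutOfMemoryError is thrown
            CACHE.add(new byte[1024 * 1024]);
        }

        public static void main(String[] args) {
            while (true) {
                handleRequest();
            }
        }
    }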

The OutOfMemoryError usually happens after the JVM garbage collector has done extensive work trying to clean up the memory and failed. That work causes excessive resource usage and slows down your application, or even stops its execution; when the garbage collector pauses the application, we call that a stop-the-world event. Keep that in mind, as memory pressure and excessive garbage collection are among the most common performance problems in JVM-based applications. Read about how to handle Java Lang OutOfMemoryError exceptions.

You may also be tempted to go with a very large heap, just in case. That is not a good idea either. The more heap space your JVM process has, the more work the garbage collector may have to do to clean it up. Also, some systems rely heavily on OS-level caches, such as the I/O cache; Elasticsearch and Apache Solr are two of them. In such cases, leaving more memory for the operating system is beneficial for overall performance.

And, of course, the heap is not everything. A JVM-based application can also use off-heap memory. It works alongside the heap and can be used to reduce the amount of heap memory needed.
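
For example, direct ByteBuffers live outside the heap. A minimal sketch of allocating one and checking the JVM's "direct" buffer pool via JMX might look like this (the 64 MB figure is arbitrary):

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.nio.ByteBuffer;

    public class OffHeapExample {
        public static void main(String[] args) {
            // 64 MB allocated outside the heap; it does not show up in heap usage,
            // but it still consumes memory of the JVM process
            ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            direct.putInt(42);

            // The "direct" and "mapped" buffer pools are exposed via JMX,
            // so monitoring tools can track off-heap buffer usage too
            for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("%s pool: %,d bytes used%n", pool.getName(), pool.getMemoryUsed());
            }
        }
    }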

Your Java monitoring tool should provide you with all the necessary JVM metrics regarding heap and off-heap memory utilization. You should be able to see the utilization of each memory pool, such as the Eden space, survivor space, and old generation. Such metrics let you see whether you are getting close to the heap capacity. Overall operating system memory utilization is also needed to see how much is still free.
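
These per-pool numbers come from the JVM itself via JMX, so you can also read them directly with the standard MemoryPoolMXBean API. A small sketch (pool names depend on the garbage collector in use):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryPoolMetrics {
        public static void main(String[] args) {
            // One MXBean per pool, e.g. "G1 Eden Space", "G1 Survivor Space", "G1 Old Gen"
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                MemoryUsage usage = pool.getUsage();
                long max = usage.getMax(); // -1 means the pool has no defined limit
                System.out.printf("%-25s used=%,d bytes max=%s%n",
                        pool.getName(), usage.getUsed(),
                        max < 0 ? "undefined" : String.format("%,d bytes", max));
            }
        }
    }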

Finally, there is no single value that fits every heap, but monitoring will help you find the right size. A good starting point is to keep peak heap utilization around 70-80%, depending on your garbage collector settings, and observe how your Java application behaves.
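
If you want a quick, in-process way to check how close you are to that range, the heap-wide numbers are available from MemoryMXBean. A minimal sketch (the 80% threshold in the comment is just the rule of thumb from above, not a hard limit):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapUtilization {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            double utilization = 100.0 * heap.getUsed() / heap.getMax();
            System.out.printf("Heap: %,d / %,d bytes (%.1f%% used)%n",
                    heap.getUsed(), heap.getMax(), utilization);
            // If this regularly peaks above ~80%, consider a larger heap or GC tuning
        }
    }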


Garbage Collection

Seeing as heap memory availability is closely related to garbage collection (GC), it's critical to keep an eye on garbage collection overhead as well. The JVM runs a GC whenever it needs to reclaim space for application use by discarding unused objects. In practice, the garbage collector runs frequently, cleaning up both short-lived and long-lived objects. This process can eat up a lot of CPU while deciding which memory to free and can lead to overhead and poor JVM performance.

That's why you need to monitor how often the GC runs and how long each cycle takes. Frequent and long GC cycles, especially in the old generation, are a sign of degraded performance or may point to a Java memory leak. At the same time, different applications need different GC pause times, be they longer or shorter, and monitoring will help you tune the GC accordingly.
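
Both numbers, how often the GC runs and how much time it spends, are exposed through the standard GarbageCollectorMXBean API, which is also where most monitoring agents read them. A small sketch (collector names depend on the GC you run):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcMetrics {
        public static void main(String[] args) {
            long totalGcMillis = 0;
            // Typically one bean per collector, e.g. "G1 Young Generation" and "G1 Old Generation"
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                totalGcMillis += gc.getCollectionTime();
            }
            long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
            // Rough GC overhead: time spent in GC as a share of the JVM's uptime
            System.out.printf("GC overhead: %.2f%%%n", 100.0 * totalGcMillis / uptimeMillis);
        }
    }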

It is also worth mentioning that relying on JVM metrics is one thing, but if you need more detailed information on what is happening inside your Java Virtual Machine, I strongly suggest collecting and analyzing garbage collection logs for fine-grained data. They will tell you about the individual garbage collection stages, how much memory each collection released, and what triggered it.
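
On Java 9 and newer, unified logging makes collecting GC logs a matter of a startup flag. A minimal example (the file name, decorators, and my-app.jar are placeholders) could be:

    java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar my-app.jar

On Java 8, the older -Xloggc and -XX:+PrintGCDetails flags serve the same purpose.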

In most cases, a healthy JVM-based system will not spend more than a few percent of its time on garbage collection on average. If you see the garbage collector working excessively, one of the first things to check is how close your application is to its memory limits. Lack of heap memory is one of the most common causes of excessive garbage collection we encounter during consulting projects.


JVM Threads

Monitoring JVM threads is just as important to ensure optimum application performance.

Threads are how your Java app does its work. They are to the JVM what processes are to the operating system. Just like having too many processes running, too many active threads can cause spikes in CPU utilization and slow down the application, or even the whole server. At the same time, a higher number of active threads means more context switching on the CPUs, which eats up even more processor resources.

Too many active threads can also indicate a poorly responding, or non-responsive, backend. The obvious solution is to limit the number of threads. However, if you expect many concurrent requests, you'll either need more threads or a change of architecture toward a non-blocking approach to keep response times as low as possible.
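
Like the memory and GC metrics above, thread counts are exposed via JMX, so a quick check of how many threads are live, and whether any are deadlocked, can be sketched like this:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadMetrics {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.printf("Live threads: %d (peak %d, daemon %d)%n",
                    threads.getThreadCount(), threads.getPeakThreadCount(),
                    threads.getDaemonThreadCount());

            // A sudden growth in live threads often points at a slow or unresponsive backend
            long[] deadlocked = threads.findDeadlockedThreads(); // null when there is no deadlock
            if (deadlocked != null) {
                System.out.printf("Deadlocked threads detected: %d%n", deadlocked.length);
            }
        }
    }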


Java Application Monitoring with Sematext

Without a doubt, keeping your Java application performance up to speed is easier with the proper tools. Multiple tools can help you monitor your JVM-based application with metrics, logs, and traces. You can use ad-hoc monitoring solutions that show the current state of things, or tools that run constantly and let you do post-mortem analysis. There are open-source and proprietary solutions, each giving you a different approach to JVM observability. We wrote an in-depth comparison of the best Java monitoring tools if you're in the market for one.

One such solution is Sematext’s Java monitoring tool, which provides in-depth insights into critical JVM metrics such as memory usage, GC, and threads to help you fine-tune JVM performance. It helps detect bottlenecks faster by profiling your app in real time and identifying slow database operations with transaction tracing.

Whether you are investigating issues with your JVM application performance, tuning it for optimal setup, or just looking at critical JVM metrics, Sematext’s Java monitoring tool will make your job easier. It provides an overview of how your application is behaving – both when it comes to the JVM metrics themselves and the operating system-level metrics.


Slice and dice the data, select the metrics you are interested in, choose the time period, and see not only the JVM-related metrics but the operating system and container ones as well. That gives you a unique opportunity to get a broader perspective on the potential root cause of an issue. You can even correlate data from multiple Apps and Logs, giving you details about how your application is working.


Our open-sourced JVM agent gathers all the necessary metrics via JMX. You can even expand the set of collected metrics by modifying the YAML configuration files available on GitHub. And if you just want to start monitoring, create a Sematext Cloud account, create a JVM App, and follow the instructions to get started.

Conclusion

JVM metrics and logs are the keys to achieving optimal performance of your applications. The right observability tools will help you both in your everyday routine and during post-mortem analysis. Being able to slice and dice the metrics, connect them with logs and traces, and create alerts on the key JVM metrics lets you rely on the observability platform to notify you as soon as things start getting out of order.

However, even the best tools won't help you if you don't have a basic understanding of what the metrics mean and what to expect from them. Hopefully, this blog post brought you one step closer to understanding the key JVM metrics so that your applications can live a long and healthy life.
