Last updated on Jan 16, 2018.
Elasticsearch is booming. Together with Logstash, a tool for collecting and processing logs, and Kibana, a tool for searching and visualizing data in Elasticsearch (aka the "ELK" stack), adoption of Elasticsearch continues to grow by leaps and bounds. When you actually run Elasticsearch, it generates a lot of metrics. Instead of taking on the formidable task of tackling all-things-metrics in one blog post, we're going to serve up something that we at Sematext have found to be extremely useful in our work as Elasticsearch experts. I should also point out that we make heavy use of Elasticsearch ourselves in a large-scale Log Management SaaS built on top of it, where Elasticsearch monitoring is critical. This article should be especially helpful to readers new to Elasticsearch, but also to experienced users who want a quick start on Elasticsearch performance monitoring.
Side note: we’re @sematext, if you’d like to follow us.
Top Elasticsearch Metrics to Monitor
Here are the Top 10 Elasticsearch metrics:
- Cluster Health – Nodes and Shards
- Node Performance – CPU
- Node Performance – Memory Usage
- Node Performance – Disk I/O
- Java – Heap Usage and Garbage Collection
- Java – JVM Pool Size
- Search Performance – Request Latency and Request Rate
- Search Performance – Filter Cache
- Search Performance – Field Data Cache
- Indexing Performance – Refresh Times and Merge Times
Most of the charts in this article group metrics either by displaying multiple metrics in one chart or organizing them into monitoring dashboards. This is done to provide context for each of the metrics we are exploring.
To start, here’s a dashboard view of the 10 Elasticsearch metrics we’re going to discuss.
Now, let’s dig into each of the top 10 metrics one by one and see how to interpret them.
1. Cluster Health – Nodes and Shards
Like OS metrics for a server, the cluster health status is a basic metric for Elasticsearch. It provides an overview of running nodes and the status of shards distributed to the nodes.
Tracking running nodes by node type
Putting the counters for the shard allocation status together in one graph visualizes how the cluster recovers over time. Especially during upgrade procedures with rolling restarts, it's important to know how long your cluster needs to allocate the shards.
Shard allocation status over time
The process of allocating shards after restarts can take a long time, depending on the specific settings of the cluster. The Cluster API gives you some control over shard allocation.
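If you want to check these numbers yourself, here's a minimal sketch (assuming Elasticsearch is listening on localhost:9200) that reads the cluster health and temporarily disables shard allocation around a rolling restart:

```bash
# Overall cluster status, node count, and shard allocation counters
curl -s 'localhost:9200/_cluster/health?pretty'

# Before restarting a node, disable shard allocation so the cluster
# doesn't start rebalancing shards while the node is down
curl -s -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{ "transient": { "cluster.routing.allocation.enable": "none" } }'

# ...restart the node, then re-enable allocation:
curl -s -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{ "transient": { "cluster.routing.allocation.enable": "all" } }'
```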
2. Node Performance – CPU
As with any other server, Elasticsearch performance depends strongly on the machine it is installed on. CPU, Memory Usage and Disk I/O are basic operating system metrics for each Elasticsearch node.
In the context of Elasticsearch (or any other Java application), it is recommended that you look into JVM metrics when CPU usage spikes. In the following example, the reason for the spike was higher garbage collection activity, which can be recognized in the 'collection count' metric: both the collection time and the collection count increased.
Higher CPU usage (user space where Elasticsearch lives)
Increased garbage collection activity causing increased CPU usage
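To pull these CPU and garbage collection numbers yourself rather than from a monitoring tool, the node stats API exposes them; a minimal sketch, again assuming localhost:9200:

```bash
# Per-node CPU and JVM stats. Under jvm.gc.collectors, collection_count and
# collection_time_in_millis are cumulative, so sample twice and diff the
# values to see how much garbage collection happened in the interval.
# jvm.mem.pools shows per-pool (young, survivor, old) heap usage.
curl -s 'localhost:9200/_nodes/stats/os,jvm?pretty'
```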
3. Node Performance – Memory Usage
The following graph shows a good balance. There is some spare memory and nearly 60% of memory is used, which leaves enough space for cached memory (e.g. file system cache). What you’d see more typically is actually a chart that shows no free memory. People new to looking at memory metrics often panic thinking that having no free memory means the server doesn’t have enough RAM. That is actually not so. It is good not to have free memory. It is good if the server is making use of all the memory. The question is just whether there is any buffered & cached memory (this is a good thing) or if it’s all used. If it’s all used and there is very little or no buffered & cached memory, then indeed the server is low on RAM. Because Elasticsearch runs inside the Java Virtual Machine, JVM memory and garbage collection are the areas to look at for Elasticsearch-specific memory utilization.
A balanced memory usage
Number of nodes (left), Memory Usage, CPU usage (right)
4. Node Performance – Disk I/O
A search engine makes heavy use of storage devices, and watching the disk I/O ensures that this basic need gets fulfilled. As there are so many reasons for reduced disk I/O, it’s considered a key metric and a good indicator for many kinds of problems. It is a good metric to check the effectiveness of indexing and query performance. Distinguishing between read and write operations directly indicates what the system needs most in the specific use case. Typically there are many more reads from queries than writes, although a popular use case for Elasticsearch is log management, which typically has high writes and low reads. When writes are higher than reads, optimizations for indexing are more important than query optimizations. This example shows a logging system with more writes than reads:
Disk I/O – read vs. write operations
The operating system settings for disk I/O are a base for all other optimizations – tuning disk I/O can avoid potential problems. If the disk I/O is still not sufficient, counter-measures such as optimizing the number of shards and their size, throttling merges, replacing slow disks, moving to SSDs, or adding more nodes should be evaluated according to the circumstances causing the I/O bottlenecks. For example: while searching, disks get thrashed if the indices don't fit in the OS cache. This can be solved in a number of different ways: by adding more RAM or data nodes, by reducing the index size (e.g. using time-based indices and aliases), by being smarter about limiting searches to only specific shards or indices instead of searching all of them, by caching, etc.
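As an illustration of the time-based indices and aliases approach, here's a sketch with hypothetical daily log index names; keeping an alias pointed at recent indices lets searches hit only the newest data instead of every shard:

```bash
# Point the 'logs-current' alias at the newest daily index
# and drop the oldest one from it
curl -s -XPOST 'localhost:9200/_aliases' -H 'Content-Type: application/json' -d'
{
  "actions": [
    { "add":    { "index": "logs-2018.01.16", "alias": "logs-current" } },
    { "remove": { "index": "logs-2018.01.09", "alias": "logs-current" } }
  ]
}'
```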
5. Java – Garbage Collection & Heap Usage
Elasticsearch runs in a JVM, so the optimal settings for the JVM and monitoring of the garbage collector and memory usage are critical. There are several things to consider with regard to JVM and operating system memory settings:
- Avoid the JVM process getting swapped to disk. On Unix, Linux, and Mac OS X systems, lock the process address space into RAM by setting "bootstrap.memory_lock: true" in the Elasticsearch configuration file and the environment variable MAX_LOCKED_MEMORY=unlimited (e.g. in /etc/default/elasticsearch). To set swappiness globally on Linux, set "vm.swappiness=1" in /etc/sysctl.conf. On Windows one can simply disable virtual memory.
- Define the heap memory for Elasticsearch by setting the -Xmx java option in the jvm.options config file and follow these rules:
- Choose a reasonable minimum heap size to avoid 'out of memory' errors. The best practice is setting the minimum (-Xms) equal to the maximum heap size (-Xmx), so there is no need to allocate additional memory during runtime. Example: add -Xms16g and -Xmx16g to jvm.options, as in the config sketch after this list.
- As a rule of thumb: set the maximum heap size to 50% of available physical RAM. Typically, one does not want to allocate more than 50-60% of total RAM to the JVM heap. JVM memory tuning is not trivial and requires one to monitor used and cached main memory, as well as JVM memory heap, memory pool utilization, and garbage collection.
- Avoid crossing the 32 GB limit – if you have servers with lots of memory, it is generally better to run more Elasticsearch nodes than to go over the 32 GB limit for the maximum heap size. In short, using -Xmx32g or higher results in the JVM using larger, 64-bit pointers that need more memory. If you don't go over -Xmx31g, the JVM will use smaller, 32-bit pointers by means of compressed Ordinary Object Pointers, aka OOPs. On Linux, going slightly lower, to -Xmx30g, will also make sure that the JVM uses zero-based compressed OOPs, which should save some CPU.
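Pulling those rules together, here's a minimal config sketch; the 16 GB heap is just an example, adjust it to roughly 50% of your machine's RAM:

```bash
# jvm.options – fixed heap size, minimum equal to maximum:
#   -Xms16g
#   -Xmx16g
# elasticsearch.yml – lock the process address space into RAM:
#   bootstrap.memory_lock: true
# /etc/default/elasticsearch – allow the process to lock unlimited memory:
#   MAX_LOCKED_MEMORY=unlimited
# /etc/sysctl.conf – discourage the kernel from swapping:
#   vm.swappiness=1

# After a restart, verify that memory locking actually took effect:
curl -s 'localhost:9200/_nodes?filter_path=**.mlockall&pretty'
```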
The report below should be obvious to all Java developers who know how the JVM manages memory. Here we see the relative sizes of all memory spaces and their total size. If you are troubleshooting JVM performance (which one does with pretty much every Java application), this is one of the key places to check first, in addition to looking at the Garbage Collection and Memory Pool Utilization reports (see the "Pool Utilization" graph). In this graph we see a healthy sawtooth pattern clearly showing when major garbage collection kicked in.
Typical Garbage Collection Sawtooth
When we watch the summary of multiple Elasticsearch nodes, the sawtooth pattern is not as sharp as usual because garbage collection happens at different times on different machines. Nevertheless, the pattern can still be recognized, probably because all nodes in this cluster were started at the same time and are following similar garbage collection cycles.
Aggregate view of multiple Elasticsearch JVMs in the cluster
6. Java – JVM Pool Size
The Memory Pool Utilization graph shows what percentage of each pool is being used over time. When some of these Memory Pools, especially Old Gen or Perm Gen, approach 100% utilization and stay there, it's time to worry. Actually, it's already too late by then. You have alerts set on these metrics, right? When that happens you might also find increased garbage collection times and higher CPU usage, as the JVM keeps trying to free up some space in any pools that are (nearly) full. If there's too much garbage collection activity, it could be due to one of the following causes:
- One particular pool is stressed, and you can get away with tuning pools.
- The JVM needs more memory than has been allocated to it. In this case, you can either lower your requirements or add more heap memory (-Xmx). Lowering the heap utilization in Elasticsearch could theoretically be done by reducing the filter/query cache, but in practice that would likely have a negative impact on query performance. Another optimization is disabling features that aren't needed. For example, if you don't do full-text search on a field, make it "keyword" instead of "text"; if you don't search on it at all, set "index" to false altogether, as in the mapping sketch below.
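Here's what that looks like in a mapping, sketched with a hypothetical index and hypothetical field names:

```bash
# 'tags' is matched exactly but never analyzed for full-text search;
# 'payload' is kept in _source but not indexed, so it can't be searched at all
curl -s -XPUT 'localhost:9200/myindex' -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "doc": {
      "properties": {
        "tags":    { "type": "keyword" },
        "payload": { "type": "keyword", "index": false }
      }
    }
  }
}'
```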
JVM Pool Utilization
A drastic change in memory usage or long garbage collection runs may indicate a critical situation. For example, in a summarized view of JVM Memory over all nodes a drop of several GB in memory might indicate that nodes left the cluster, restarted or got reconfigured for lower heap usage.
JVM Pool Size (left) and Garbage collection (right)
7. Search Performance – Request Latency and Request Rate
When it comes to search applications, the user experience is typically highly correlated with the latency of search requests. For example, the request latency for simple queries is typically below 100 ms. We say "typically" because Elasticsearch is often used for analytical queries, too, and humans seem to tolerate slowness better in such scenarios. There are numerous things that can affect query performance – poorly constructed queries, an improperly configured Elasticsearch cluster, JVM memory and garbage collection issues, disk I/O, and so on. Through our Elasticsearch consulting practice we have seen many cases where request latency is low and then suddenly jumps as a consequence of something else starting to misbehave in the cluster. Needless to say, since query latency is the metric that directly impacts users, make sure you put some alerts on it. Alerts based on query latency anomaly detection will be helpful here. The following charts illustrate just such a case. A spike like the blue 95th percentile query latency spike will trip any anomaly detection-based alerting system worth its salt. A word of caution: the query latency metrics that Elasticsearch exposes are actually per-shard latencies, not latency values for the overall query.
Number of Elasticsearch nodes dropping (left) causing increase in query latency (right)
Putting the request latency together with the request rate into a graph immediately provides an overview of how much the system is used and how it responds to it.
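If you want to derive these two metrics yourself, the node stats API exposes the cumulative counters they are computed from; a minimal sketch, assuming localhost:9200:

```bash
# search.query_total and search.query_time_in_millis only ever grow,
# so sample them twice and diff the values:
curl -s 'localhost:9200/_nodes/stats/indices/search?pretty'
# request_rate      = delta(query_total) / interval_in_seconds
# avg_shard_latency = delta(query_time_in_millis) / delta(query_total)
```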
8. Search Performance – Filter/Query Cache
For some queries, such as date ranges, you don't need scoring, so you can run them in the filter clause of a bool query. That means that after a few executions, Elasticsearch will store the matching document IDs in a bitset (at least for the bigger segments). Subsequent executions of queries with the same filter will reuse the information stored in the bitset, thus making query execution faster by saving I/O operations and CPU cycles. Even though those bitsets are often small, they can take up large portions of the JVM heap if you have a lot of data and numerous different filters. Because of that it is wise to set the "indices.queries.cache.size" property (the name of this cache's size setting in Elasticsearch 2.x and later) to limit the amount of heap used for the filter cache. To find the best value for this property, keep an eye on the filter cache size and filter cache eviction metrics shown in the chart below.
Filter cache size and evictions help optimize filter cache size setting
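A sketch of the relevant knobs; the 10% value is an example to adapt, not a recommendation:

```bash
# elasticsearch.yml – cap the heap used for cached filter results:
#   indices.queries.cache.size: 10%

# Watch cache size, hit/miss counts, and evictions per node:
curl -s 'localhost:9200/_nodes/stats/indices/query_cache?pretty'
```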
9. Search Performance – Field Data Cache
For sorting or aggregations, Elasticsearch needs a different data structure than the inverted index: a columnar store that maps each document ID to the value of a given field. Elasticsearch 2.x and later use doc values for this by default; doc values are computed at index time, use very little JVM memory, and rely on OS caches to be loaded/unloaded when needed.
Before doc values, there was field data. Like doc values, it's a columnar data structure used for sorting and aggregations. But unlike doc values, field data is loaded at query time into the JVM heap, which means that simple requests, run on a large data set, can potentially run your Elasticsearch out of memory. That said, field data is still used in places, mostly if you want to aggregate on text fields (where you'd have to manually enable fielddata=true in the mapping).
Unless you have a small data set, we'd highly recommend avoiding field data. That said, if you end up using it anyway, consider limiting the size of the field data cache by setting the "indices.fielddata.cache.size" property and keeping an eye on it to understand the actual size of the cache.
Field cache size (yellow) and field cache evictions (green)
Another thing worth configuring is circuit breakers, which limit the chance of breaking the cluster with a handful of expensive queries. Circuit breakers estimate the amount of memory a request will use (field data being one of the consumers evaluated), and if this estimate passes the configured threshold, the request fails instead of putting the whole cluster at risk.
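Here's a sketch of both knobs; the percentages are examples to adapt to your workload, not recommendations:

```bash
# elasticsearch.yml – cap the field data cache, which is unbounded by default:
#   indices.fielddata.cache.size: 20%

# Dynamically lower the field data circuit breaker (default: 60% of the heap):
curl -s -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{ "transient": { "indices.breaker.fielddata.limit": "40%" } }'

# Watch field data cache size and evictions per node:
curl -s 'localhost:9200/_nodes/stats/indices/fielddata?pretty'
```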
Request Latency (left), Field Cache (center) and Filter Cache (right)
10. Indexing Performance – Refresh Times & Merge Times
Several different things take place in Elasticsearch during indexing and there are many metrics to monitor its performance. When running indexing benchmarks (see Tuning Elasticsearch Indexing Pipeline for Logs and Elasticsearch Refresh Interval vs Indexing Performance), a fixed number of records is typically used to calculate the indexing rate. In production, though, you'll typically want to keep an eye on the real indexing rate. Elasticsearch doesn't expose the rate itself, but it does expose the number of indexed documents, from which one can compute the rate, as shown here:
Indexing rate (documents/second) and document count
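A minimal sketch of computing that rate from the cumulative document counter, assuming localhost:9200:

```bash
# index_total only ever grows, so sample it twice and diff:
curl -s 'localhost:9200/_stats/indexing?filter_path=_all.primaries.indexing.index_total'
# indexing_rate = delta(index_total) / interval_in_seconds
```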
This is another metric worth considering for alerts and anomaly detection. Sudden spikes and dips in indexing rate could indicate issues with data sources. Refresh time and merge time are closely related to indexing performance, plus they affect overall cluster performance. Refresh time increases with the number of file operations for the Lucene index (shard). The overall cost of refreshing can be reduced by setting the refresh interval to higher values (e.g. from 1 second to 30 seconds). When Elasticsearch (really, Apache Lucene, the indexing/searching library that lives at the core of Elasticsearch) merges many segments, or simply a very large index segment, the merge time increases. This is a good indicator of whether the right merge policy, shard, and segment settings are in place. During merges, disk I/O shows intensive write activity and CPU usage spikes as well, so merges should be as quick as possible. Alternatively, if merges affect the cluster too much, one can limit the merge throughput and increase "indices.memory.index_buffer_size" (to more than 10% on nodes with a small heap) to reduce disk I/O and leave concurrently executing queries more CPU cycles.
Growing summarized merge times over all nodes while indexing
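As a sketch of the tuning options mentioned above (hypothetical index name; the values are examples):

```bash
# Raise the refresh interval on a hot index, e.g. during bulk loading:
curl -s -XPUT 'localhost:9200/logs-2018.01.16/_settings' -H 'Content-Type: application/json' -d'
{ "index": { "refresh_interval": "30s" } }'

# elasticsearch.yml – give indexing a bigger share of the heap (default: 10%):
#   indices.memory.index_buffer_size: 20%
```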
Segment merging is a very important process for index performance, but it is not without side effects: higher indexing performance usually means allowing more segments to be present, which in turn makes queries slightly slower.
Refresh and Merge times (left) compared with Disk I/O (right)
So there you have it — the top Elasticsearch metrics to watch for those of you who find yourselves knee deep — or even deeper! — in charts, graphs, dashboards, etc. Elasticsearch is a high-powered platform that can serve your organization’s search needs extremely well, but, like a blazing fast sports car, you’ve got to know what dials to watch and how to shift gears on the fly to keep things running smoothly. Staying focused on the top 10 metrics and corresponding analysis presented here will get and/or keep you on the road to a successful Elasticsearch experience.
So, those are our top Elasticsearch metrics — what are YOUR top 10 metrics? We’d love to know so we can compare and contrast them with ours in a future post. Please leave a comment, or send them to us via email or hit us on Twitter: @sematext.