Disk-related issues with Elasticsearch can present themselves through various symptoms. It is important to understand their root causes and know how to deal with them when they arise. As an Elasticsearch cluster administrator, you are likely to encounter some of the following cluster symptoms:
- Cannot create or modify an index
- Data is no longer written to some indices
- Shards are not getting allocated
- A node is missing
This article presents some helpful Dev Tools commands for troubleshooting and solving the underlying issues, along with recommendations to minimize the risk of disk-related problems occurring in your environment.
Disk-related Dev Tools Commands
The following commands are useful for troubleshooting disk-related Elasticsearch issues with Dev Tools.
| Command | Purpose |
|---|---|
| GET _cluster/allocation/explain | Reason for unassigned shards |
| GET _cat/allocation?v | Node storage utilization |
| GET _nodes/stats/fs | Filesystem statistics |
| GET _cat/shards?v | Shard size and allocation |
| GET _plugins/_ism/explain/ | Explain ISM (OpenSearch) / ILM (Elastic) index management |
| GET _cluster/settings?include_defaults | Cluster settings |
In the following sections, we will take a closer look at each of these commands and their benefits.
Unassigned shards
An Elasticsearch index can consist of multiple shards. One of the first symptoms of Elastic disk issues is shards not being allocated. After Elasticsearch has failed to allocate a shard, you can see the reason for the failure with this request:
GET _cluster/allocation/explain
If shard allocation failure is related to a disk issue, this request will return a message stating that a disk on a particular node has exceeded one or more of the following disk watermarks:
- low – Shards will no longer be allocated to the disk
- high – Shards will be relocated away from the disk
- flood_stage – All write operations to indices having shards on this node are blocked
The next step is to check the storage for all disks in the cluster to further troubleshoot the issue. To modify disk watermarks, see Cluster settings.
Filesystem storage
To determine the state of the cluster storage, a summary of node storage can be seen using the following request:
GET /_cat/allocation?v
This will return the following information:
shards disk.indices disk.used disk.avail disk.total disk.percent host       ip         node
    22       56.1gb    56.1gb    543.9gb      600gb            9 172.18.0.4 172.18.0.4 node2
    22       50.2gb    50.2gb    549.8gb      600gb            9 172.18.0.2 172.18.0.2 node1
Take note of the following fields:
- shards – Number of shards per node
- disk.indices – Size of index data
- disk.used – Total used disk storage
- disk.avail – Free disk space available to Elasticsearch
- disk.total – Total disk size
- disk.percent – Percentage of disk storage utilization
If you need more detailed disk-related information for troubleshooting, you can use this Elasticsearch command:
GET _nodes/stats/fs
It will return information relating to each Elastic node’s disk:
{ "cluster_name" : "elastic-cluster", "nodes" : { "35_PnZFrRumgTHEn-4kVuA" : { "name" : "node1", "fs" : { "total" : { "total_in_bytes" : 736223174656, "free_in_bytes" : 713487716352, "available_in_bytes" : 676014391296 }, "data" : [ { "path" : "/usr/share/elastic/data/nodes/0", "mount" : "/usr/share/elastic/data (/dev/sda1)", "type" : "ext4", "total_in_bytes" : 736223174656, "free_in_bytes" : 713487716352, "available_in_bytes" : 676014391296 } ...
Take note of the following fields for each node and its disks:
- total_in_bytes – Total disk space
- free_in_bytes – Free disk space
- available_in_bytes – Free disk space available to Elasticsearch
If a node is missing from the cluster, its disk may have already reached 100% capacity. Filling up storage capacity will cause Elasticsearch to stop functioning correctly on that node. You will need to increase node disk storage and possibly restart the service to get this node back in the cluster.
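To confirm which nodes are currently part of the cluster, you can list them with the following request:

GET _cat/nodes?v

Any node missing from the output has dropped out of the cluster, and its disk usage should be checked directly on the host.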
If all disks in the cluster have exceeded one or more of their watermarks, you can consider reducing shard replication and data retention time. If this is not possible, the remaining options are to increase node disk storage or add more nodes to the cluster.
If exceeded watermarks are isolated to only some disks in the cluster, the next step is to check the size of all shards on the disks in question to identify shards that are too large.
Shard size and allocation
To identify any large shards that might be causing exceeded watermarks, shard size and allocation information can be gathered using the following request:
GET _cat/shards?v
The result should look like this:
index         shard prirep state   docs   store  ip         node
movies-000001 0     p      STARTED 999999 90.9gb 172.20.0.3 node2
movies-000001 0     r      STARTED 999999 90.9gb 172.20.0.2 node1
music-000001  1     r      STARTED 200054 10.0gb 172.20.0.3 node2
music-000001  1     p      STARTED 200054 10.0gb 172.20.0.2 node1
music-000001  0     p      STARTED 200054 10.0gb 172.20.0.3 node2
music-000001  0     r      STARTED 200054 10.0gb 172.20.0.2 node1
If an index has become too large, check the rollover conditions of its index state/lifecycle management policy, if one is assigned. This will help determine the cause of the large index size, but you will still need to deal with the oversized index by breaking it up into multiple primary shards.
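To inspect those rollover conditions, you can retrieve the policy itself. Assuming a policy named movies_default (as in the explain output shown later in this article):

| Request | Distribution |
|---|---|
| GET _plugins/_ism/policies/movies_default | OpenSearch |
| GET _ilm/policy/movies_default | Elastic |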
You can break up the index into smaller shards using the reindex operation. This involves reindexing all its documents into a new index with multiple primary shards. You can create such a new index as follows:
PUT movies-000001-reindexed
{
  "aliases": {
    "movies": {
      "is_write_index": false
    }
  },
  "mappings": {},
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 1
  }
}
After you have created the new index, you can reindex the documents from the old index using the following command:
POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "movies-000001"
  },
  "dest": {
    "index": "movies-000001-reindexed"
  }
}
Executing this command with the wait_for_completion=false parameter will return a task ID as follows:

{
  "task" : "35_PnZFrRumgTHEn-4kVuA:10082"
}
This task ID can now be used to monitor the progress of the reindexing task using the following command:
GET _tasks/35_PnZFrRumgTHEn-4kVuA:10082
Once this operation has been completed with no conflicts, it is safe to delete the old index.
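For example, once the document counts in the old and new indices match, the old index can be removed to reclaim its disk space (movies-000001 being the example index from above):

DELETE movies-000001

If you query through an alias, make sure it points at the new index before deleting the old one.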
Index state management
Indices can be managed using the OpenSearch ISM or Elastic ILM plugin. To quickly view information about the state/lifecycle management of a specific index, use one of these requests:
| Request | Distribution |
|---|---|
| GET _plugins/_ism/explain/movies-000001 | OpenSearch |
| GET movies-000001/_ilm/explain | Elastic |
If this request returns only null values, the index is not managed by any index state/lifecycle management policy, and you will need to create a policy and add it to the index.
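As a sketch of what that might look like in OpenSearch, the following creates a minimal ISM policy with a single rollover action and attaches it to the example index; the policy name and rollover threshold are placeholders to adapt to your own requirements (on Elastic, the equivalent is an ILM policy created with PUT _ilm/policy/&lt;name&gt; and assigned via index.lifecycle.name):

PUT _plugins/_ism/policies/movies_default
{
  "policy": {
    "description": "Roll over movies indices before they grow too large",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [
          {
            "rollover": {
              "min_size": "30gb"
            }
          }
        ],
        "transitions": []
      }
    ]
  }
}

POST _plugins/_ism/add/movies-000001
{
  "policy_id": "movies_default"
}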
If the index already has a management policy assigned, the result should look like this:
{ "movies-000001" : { "index.plugins.index_state_management" : { "policy_id" : "movies_default" }, "index" : "movies-000001", "policy_id" : "movies_default", "rolled_over" : false, "state" : { "name" : "hot", }, "action" : { "name" : "rollover", "index" : 0, "failed" : true, "consumed_retries" : 0, "last_retry_time" : 0 }, "step" : { "name" : "attempt_rollover", "step_status" : "failed" }, "retry_info" : { "failed" : true, "consumed_retries" : 0 }, "info" : { "message" : "Missing rollover_alias [index=movies-000001]", "conditions" : { "min_size" : { "condition" : "10gb", "current" : "90.9gb" }, "min_doc_count" : { "condition" : 10000, "current" : 999999 } } }, "enabled" : true }
If an index has failed to roll over, the info.message field shown in the result above indicates why. If the failure is due to "Failed to process cluster event", a rollover can be triggered manually using this request:
POST movies/_rollover
If the failure is due to "Missing rollover_alias", the rollover alias for this index is not configured. You can easily resolve this failure by manually setting the plugins.index_state_management.rollover_alias field as follows:
PUT movies-000001/_settings
{
  "plugins": {
    "index_state_management": {
      "rollover_alias": "movies"
    }
  }
}
Keep in mind that this field name differs for different distributions of Elasticsearch:
| Setting | Distribution |
|---|---|
| plugins.index_state_management.rollover_alias | OpenSearch |
| index.lifecycle.rollover_alias | Elastic |
Make sure the appropriate index template is also configured to apply the correct rollover alias to new indices to prevent the issue from reoccurring.
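As an illustration, a composable index template along these lines would apply the rollover alias (together with shard settings) to every new index matching the pattern; the template name and values here are placeholders:

PUT _index_template/movies-template
{
  "index_patterns": ["movies-*"],
  "template": {
    "settings": {
      "plugins.index_state_management.rollover_alias": "movies",
      "number_of_shards": 4,
      "number_of_replicas": 1
    }
  }
}

On Elastic, use index.lifecycle.rollover_alias in the template settings instead, as shown in the table above.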
If rollover actions occur as expected but the cluster is still running into storage issues, you should consider reconfiguring index state/lifecycle management policies to store data for shorter periods to use less disk space. Remember that any changes made to a policy will not affect indices already managed by a previous version of that policy unless each index is explicitly updated manually.
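In OpenSearch, for example, an already-managed index can be pointed at the updated policy with the change policy API; the index and policy names are the ones from the earlier examples:

POST _plugins/_ism/change_policy/movies-000001
{
  "policy_id": "movies_default"
}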
Cluster settings
If some of the disks in the cluster have exceeded some of their watermarks, ensure that the cluster is properly configured to rebalance itself. The following command can be used to see all current cluster settings:
GET _cluster/settings?include_defaults
This request returns these disk-related cluster settings:
"persistent": {}, "transient": {}, "defaults": { "cluster": { "routing": { "allocation" : { "type" : "balanced", "disk" : { "threshold_enabled" : "true", "watermark" : { "flood_stage" : "95%", "high" : "90%", "low" : "85%", "enable_for_single_data_node" : "false" }, "include_relocations" : "true", "reroute_interval" : "60s" } } } }
When dealing with cluster settings, it is important to note the following setting types:
- defaults – Default Elasticsearch cluster settings
- persistent – Persists after cluster restart (overrides default settings)
- transient – Resets after cluster restart (overrides default and persistent settings)
If a node’s disk exceeds the low watermark, it can no longer have additional shards allocated to it, but it will keep writing documents to its existing shards. Furthermore, the node will only start relocating shards to other available nodes after it has exceeded the high watermark. If a disk exceeds the flood_stage watermark, indices with shards on that node are blocked from all write operations to prevent the disk from reaching full capacity. This is indicated by index.blocks.read_only_allow_delete: true in the index’s settings. Note that this field is automatically set back to false once disk usage drops below the flood_stage watermark. If many nodes have exceeded one or more of these watermarks, you should consider increasing disk storage or adding more nodes to the cluster.
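On older Elasticsearch versions that do not remove the block automatically, you can clear it yourself once disk space has been freed, for example:

PUT movies-000001/_settings
{
  "index.blocks.read_only_allow_delete": null
}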
If the cluster is unbalanced, the root cause is most likely that shards are not evenly sized. When designing an Elasticsearch cluster, it is important to consider the number of shards and their sizes for each index based on the cluster architecture. The simplest way to achieve this would be to have a number of total shards per index that is divisible by the number of nodes in the cluster. This would ensure that indices would use the same amount of disk storage on every node. Assuming that all nodes have equal disk storage, this will ensure that the cluster is balanced at all times.
However, if you are already in the troubleshooting phase, it is likely too late to consider or implement these design practices. In that case, consider making use of watermark levels to automatically rebalance node storage as disks fill up. If needed, the low and high watermarks can be fine-tuned based on cluster storage utilization and architecture to improve overall cluster balance, even in the undesirable case of unevenly sized shards.
Cluster settings can be modified using the following command:
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation" : {
          "type" : "balanced",
          "disk" : {
            "threshold_enabled" : "true",
            "watermark" : {
              "flood_stage" : "95%",
              "high" : "80%",
              "low" : "75%",
              "enable_for_single_data_node" : "false"
            },
            "include_relocations" : "true",
            "reroute_interval" : "60s"
          }
        }
      }
    }
  }
}
If the disk watermarks are already optimally configured, consider reducing replica shards or scaling up the cluster by increasing node disk storage or adding more nodes.
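Reducing replicas is the quickest way to reclaim space if you can tolerate the reduced redundancy; a minimal example for a single index (movies-000001 is just a placeholder):

PUT movies-000001/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}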
Best Practices to Prevent Elasticsearch Disk-Related Issues
To minimize the risk of disk-related issues, keep in mind the following recommendations for utilizing cluster storage optimally.
Size shards appropriately
Managing shard size starts with the rollover conditions you configure for an index. Rollover can be configured to occur when any of the following conditions are met:
- min_size
- min_index_age
- min_doc_count
Carefully consider the velocity of your data when setting these conditions to prevent indices from becoming too large, as oversized shards can cause cluster performance issues. On the other hand, rolling over indices too soon will increase the shard count, which can eventually result in an over-sharded cluster. It is therefore recommended to keep all shards within the same size range; a good rule of thumb is between 5GB and 50GB, and keeping shards toward the smaller end of that range, for example around 10GB, generally results in better cluster performance.
Balance the tradeoffs of primary and replica shards
A single index can consist of multiple primary and replica shards. Each primary shard contains a subset of the index’s documents, which means read and write operations can be distributed across different nodes.
Furthermore, each primary shard can be duplicated on multiple other nodes using replica shards. Replica shards are copies of primary shards, which are very useful for improving search query throughput (i.e. number of parallel requests that still perform well) but come at the cost of additional disk storage.
Fine-tune your storage management
To minimize the costs of disk storage, it is important to make optimal use of all available disks. Here are three things to consider for optimal cluster storage utilization:
| Consideration | Recommendation |
|---|---|
| Data retention | Make sure you are not storing data that is no longer needed by modifying the appropriate index state/lifecycle management policy. |
| Cluster balancing | Carefully consider cluster architecture when configuring index primary and replica shards. If necessary, fine-tune disk watermarks to ensure storage is distributed evenly across nodes. |
| Scaling | If all disks exceed their watermarks, scale up the cluster by increasing node storage or adding more nodes. Similarly, if cluster storage is significantly underutilized, consider removing nodes to reduce wasted storage costs. |
Conclusion
As an Elasticsearch cluster administrator, you will find the commands presented in this article helpful when troubleshooting and solving disk-related issues. You’d also want to track disk- and shard-related metrics over time with a tool like Sematext Cloud. Notice how, in this example, we see relative balance in the number of shards:
But an imbalance in disk space usage, because not all shards are equal:
In this case we’re nowhere near the storage limits, but we can set up alerts so that we stay one step ahead of problems, either by redesigning the way we shard data or by taking some other corrective action that we discussed here.