When a node leaves the cluster for whatever reason, intentional or otherwise, the master reacts by:

- Promoting a replica shard to primary to replace any primaries that were on the node.
- Allocating replica shards to replace the missing replicas (assuming there are enough nodes).
- Rebalancing shards evenly across the remaining nodes.
These actions are intended to protect the cluster against data loss by ensuring that every shard is fully replicated as soon as possible.
Even though we throttle concurrent recoveries both at the node level and at the cluster level, this "shard-shuffle" can still put a lot of extra load on the cluster, which may not be necessary if the missing node is likely to return soon. Imagine this scenario:

- Node 5 loses its network connection.
- The master promotes a replica shard to primary for each primary that was on Node 5.
- The master allocates new replicas to other nodes in the cluster.
- Each new replica makes an entire copy of the primary shard across the network.
- More shards are moved to different nodes to rebalance the cluster.
- Node 5 returns after a few minutes.
- The master rebalances the cluster by allocating shards to Node 5.
If the master had just waited for a few minutes, then the missing shards could have been re-allocated to Node 5 with the minimum of network traffic. This process would be even quicker for idle shards (shards not receiving indexing requests) which have been automatically sync-flushed.
The allocation of replica shards which become unassigned because a node has left can be delayed with the index.unassigned.node_left.delayed_timeout dynamic setting, which defaults to 1m.
This setting can be updated on a live index (or on all indices):
    PUT _all/_settings
    {
      "settings": {
        "index.unassigned.node_left.delayed_timeout": "5m"
      }
    }
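The same update can be scoped to a single index rather than _all; the index name my_index below is only a placeholder:

```
PUT my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
```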
With delayed allocation enabled, the above scenario changes to look like this:

- Node 5 loses its network connection.
- The master promotes a replica shard to primary for each primary that was on Node 5.
- The master logs a message that allocation of unassigned shards has been delayed, and for how long.
- The cluster remains yellow because there are unassigned replica shards.
- Node 5 returns after a few minutes, before the timeout expires.
- The missing replicas are re-allocated to Node 5 (and sync-flushed shards recover almost immediately).
This setting will not affect the promotion of replicas to primaries, nor will it affect the assignment of replicas that have not been assigned previously. In particular, delayed allocation does not come into effect after a full cluster restart. Also, if a master failover occurs, the elapsed delay time is forgotten (i.e. reset to the full initial delay).
If delayed allocation times out, the master assigns the missing shards to another node which will start recovery. If the missing node rejoins the cluster, and its shards still have the same sync-id as the primary, shard relocation will be cancelled and the synced shard will be used for recovery instead.
For this reason, the default timeout is set to just one minute: even if shard relocation begins, cancelling recovery in favour of the synced shard is cheap.
The number of shards whose allocation has been delayed by this timeout setting can be viewed with the cluster health API:
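A sketch of such a request; in the response, the delayed_unassigned_shards field carries the count of shards whose allocation is currently delayed:

```
GET _cluster/health
```

A cluster with delayed shards will report a non-zero value for delayed_unassigned_shards (and will typically be yellow until the replicas are assigned).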
If a node is not going to return and you would like Elasticsearch to allocate the missing shards immediately, just update the timeout to zero:
    PUT _all/_settings
    {
      "settings": {
        "index.unassigned.node_left.delayed_timeout": "0"
      }
    }
You can reset the timeout as soon as the missing shards have started to recover.
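For example, to restore the default of one minute (any valid time value may be used in place of 1m):

```
PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "1m"
  }
}
```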