The elasticsearch-node
command enables you to perform certain unsafe
operations on a node that are only possible while it is shut down. This command
allows you to adjust the role of a node and may be able to
recover some data after a disaster.
bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster [--ordinal <Integer>] [-E <KeyValuePair>] [-h, --help] ([-s, --silent] | [-v, --verbose])
This tool has three modes:
elasticsearch-node repurpose
can be used to delete unwanted data from a
node if it used to be a data node or a
master-eligible node but has been repurposed so that it no longer has one
or both of these roles.
elasticsearch-node unsafe-bootstrap
can be used to perform unsafe cluster
bootstrapping. It forces one of the nodes to form a brand-new cluster on
its own, using its local copy of the cluster metadata.
elasticsearch-node detach-cluster
enables you to move nodes from one
cluster to another. This can be used to move nodes into a new cluster
created with the elasticsearch-node unsafe-bootstrap
command. If unsafe
cluster bootstrapping was not possible, it also enables you to move nodes
into a brand-new cluster.
There may be situations where you want to repurpose a node without following
the proper repurposing processes. The elasticsearch-node
repurpose
tool allows you to delete any excess on-disk data and start a node
after repurposing it.
The intended use is:

1. Stop the node.
2. Update elasticsearch.yml by setting node.master and node.data as desired (for example, as sketched just after this list).
3. Run elasticsearch-node repurpose on the node.
4. Start the node.
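For example, a minimal elasticsearch.yml sketch for repurposing a node into a dedicated master-eligible node might look like the following; adjust the two settings to whichever combination of roles you actually want the node to keep:

# elasticsearch.yml (sketch): keep the master role, drop the data role
node.master: true
node.data: false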
If you run elasticsearch-node repurpose
on a node with node.data: false
and
node.master: true
then it will delete any remaining shard data on that node,
but it will leave the index and cluster metadata alone. If you run
elasticsearch-node repurpose
on a node with node.data: false
and
node.master: false
then it will delete any remaining shard data and index
metadata, but it will leave the cluster metadata alone.
Running this command can lead to data loss for the indices mentioned if the data contained is not available on other nodes in the cluster. Only run this tool if you understand and accept the possible consequences, and only after determining that the node cannot be repurposed cleanly.
The tool provides a summary of the data to be deleted and asks for confirmation
before making any changes. You can get detailed information about the affected
indices and shards by passing the verbose (-v
) option.
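For example, a hedged invocation that previews the affected paths and indices in detail before you confirm anything:

node$ ./bin/elasticsearch-node repurpose -v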
Sometimes Elasticsearch nodes are temporarily stopped, perhaps because of the need to perform some maintenance activity or perhaps because of a hardware failure. After you resolve the temporary condition and restart the node, it will rejoin the cluster and continue normally. Depending on your configuration, your cluster may be able to remain completely available even while one or more of its nodes are stopped.
Sometimes it might not be possible to restart a node after it has stopped. For example, the node’s host may suffer from a hardware problem that cannot be repaired. If the cluster is still available then you can start up a fresh node on another host and Elasticsearch will bring this node into the cluster in place of the failed node.
Each node stores its data in the data directories defined by the
path.data
setting. This means that in a disaster you can
also restart a node by moving its data directories to another host, presuming
that those data directories can be recovered from the faulty host.
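For instance, if you have moved the recovered data directories onto a replacement host, a sketch of the relevant setting in that host's elasticsearch.yml might look as follows; the directory path is only an illustrative assumption:

# elasticsearch.yml (sketch): point the node at the recovered data directories
path.data: /mnt/recovered/elasticsearch-data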
Elasticsearch requires a response from a majority of the master-eligible nodes in order to elect a master and to update the cluster state. This means that if you have three master-eligible nodes then the cluster will remain available even if one of them has failed. However if two of the three master-eligible nodes fail then the cluster will be unavailable until at least one of them is restarted.
In very rare circumstances it may not be possible to restart enough nodes to restore the cluster’s availability. If such a disaster occurs, you should build a new cluster from a recent snapshot and re-import any data that was ingested since that snapshot was taken.
However, if the disaster is serious enough then it may not be possible to
recover from a recent snapshot either. Unfortunately in this case there is no
way forward that does not risk data loss, but it may be possible to use the
elasticsearch-node
tool to construct a new cluster that contains some of the
data from the failed cluster.
If there is at least one remaining master-eligible node, but it is not possible
to restart a majority of them, then the elasticsearch-node unsafe-bootstrap
command will unsafely override the cluster’s voting configuration as if performing another
cluster bootstrapping process.
The target node can then form a new cluster on its own by using
the cluster metadata held locally on the target node.
These steps can lead to arbitrary data loss since the target node may not hold the latest cluster metadata, and this out-of-date metadata may make it impossible to use some or all of the indices in the cluster.
Since unsafe bootstrapping forms a new cluster containing a single node, once
you have run it you must use the elasticsearch-node
detach-cluster
tool to migrate any other surviving nodes from the failed
cluster into this new cluster.
When you run the elasticsearch-node unsafe-bootstrap
tool it will analyse the
state of the node and ask for confirmation before taking any action. Before
asking for confirmation it reports the term and version of the cluster state on
the node on which it runs as follows:
Current node cluster state (term, version) pair is (4, 12)
If you have a choice of nodes on which to run this tool then you should choose
one with a term that is as large as possible. If there is more than one
node with the same term, pick the one with the largest version.
This information identifies the node with the freshest cluster state, which minimizes the
quantity of data that might be lost. For example, if the first node reports
(4, 12)
and a second node reports (5, 3)
, then the second node is preferred
since its term is larger. However if the second node reports (3, 17)
then
the first node is preferred since its term is larger. If the second node
reports (4, 10)
then it has the same term as the first node, but has a
smaller version, so the first node is preferred.
Running this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.
The sequence of operations for using this tool is as follows:

1. Make sure every remaining master-eligible node, including the node on which you will run this tool, is stopped.
2. Run the elasticsearch-node unsafe-bootstrap command as shown below. Verify that the tool reported Master node was successfully bootstrapped.
3. Start the node so that it forms the new cluster.
4. Run the elasticsearch-node detach-cluster tool, described below, on every other node in the cluster so that those nodes can join the new cluster.
When you run the tool it will make sure that the node that is being used to bootstrap the cluster is not running. It is important that all other master-eligible nodes are also stopped while this tool is running, but the tool does not check this.
The message Master node was successfully bootstrapped does not mean that
there has been no data loss; it just means that the tool was able to complete its
job.
It is unsafe for nodes to move between clusters, because different clusters have completely different cluster metadata. There is no way to safely merge the metadata from two clusters together.
To protect against inadvertently joining the wrong cluster, each cluster creates a unique identifier, known as the cluster UUID, when it first starts up. Every node records the UUID of its cluster and refuses to join a cluster with a different UUID.
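If you want to check which cluster a running node currently belongs to, the cluster UUID appears in the response from the root endpoint. A minimal sketch, assuming the node is reachable on localhost:9200:

# the JSON response includes a "cluster_uuid" field identifying the node's cluster
$ curl -s 'http://localhost:9200/'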
However, if a node’s cluster has permanently failed then it may be desirable to
try and move it into a new cluster. The elasticsearch-node detach-cluster
command lets you detach a node from its cluster by resetting its cluster UUID.
It can then join another cluster with a different UUID.
For example, after unsafe cluster bootstrapping you will need to detach all the other surviving nodes from their old cluster so they can join the new, unsafely-bootstrapped cluster.
Unsafe cluster bootstrapping is only possible if there is at least one
surviving master-eligible node. If there are no remaining master-eligible nodes
then the cluster metadata is completely lost. However, the individual data
nodes also contain a copy of the index metadata corresponding with their
shards. This sometimes allows a new cluster to import these shards as
dangling indices. You can sometimes
recover some indices after the loss of all master-eligible nodes in a cluster
by creating a new cluster and then using the elasticsearch-node
detach-cluster
command to move any surviving nodes into this new cluster.
There is a risk of data loss when importing a dangling index because data nodes may not have the most recent copy of the index metadata and do not have any information about which shard copies are in-sync. This means that a stale shard copy may be selected to be the primary, and some of the shards may be incompatible with the imported mapping.
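Depending on your Elasticsearch version, the new cluster may expose a dangling indices API for listing and importing these shards explicitly; a hedged sketch, where the host, port, and index UUID are placeholders and the accept_data_loss flag acknowledges the risks described above:

# list dangling indices found on the data nodes
$ curl -s 'http://localhost:9200/_dangling'
# import one of them, accepting the possibility of data loss
$ curl -s -X POST 'http://localhost:9200/_dangling/<index-uuid>?accept_data_loss=true'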
Execution of this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.
The sequence of operations for using this tool is as follows:

1. Stop the node.
2. Run the elasticsearch-node detach-cluster tool as shown below. Verify that the tool reported Node was successfully detached from the cluster.
3. Start the node so that it can join its new cluster.
The message Node was successfully detached from the cluster does not mean
that there has been no data loss; it just means that the tool was able to complete
its job.
The command takes the following parameters:

repurpose
Delete any excess on-disk data from a node whose roles have been changed, as described above.

unsafe-bootstrap
Unsafely override the cluster's voting configuration so that this node can form a new cluster on its own, using its local copy of the cluster metadata.

detach-cluster
Reset this node's cluster UUID so that it can be moved into a different cluster.

--ordinal <Integer>
If there is more than one node sharing a data path, specifies which node to target. Defaults to 0, meaning to use the first node in the data path.

-E <KeyValuePair>
Configures a setting.

-h, --help
Returns all of the command parameters.

-s, --silent
Shows minimal output.

-v, --verbose
Shows verbose output.
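For instance, a hedged invocation that targets the second node sharing a data path and passes a data path setting on the command line; both the ordinal value and the path are illustrative assumptions:

# --ordinal 1 selects the second node in the shared data path; path.data is an assumed location
node$ ./bin/elasticsearch-node repurpose --ordinal 1 -E path.data=/var/lib/elasticsearch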
In this example, a former data node is repurposed as a dedicated master node.
First update the node’s settings to node.master: true
and node.data: false
in its elasticsearch.yml
config file. Then run the elasticsearch-node
repurpose
command to find and remove excess shard data:
node$ ./bin/elasticsearch-node repurpose

WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 shards in 2 indices to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to master and no-data.
In this example, a node that previously held data is repurposed as a
coordinating-only node. First update the node’s settings to node.master:
false
and node.data: false
in its elasticsearch.yml
config file. Then run
the elasticsearch-node repurpose
command to find and remove excess shard data
and index metadata:
node$ ./bin/elasticsearch-node repurpose

WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 indices (2 shards and 2 index meta data) to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
Suppose your cluster had five master-eligible nodes and you have permanently lost three of them, leaving two nodes remaining.
Run the tool on the first remaining node (node_1 in this example), but answer n at the confirmation step:
node_1$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
Run the tool on the second remaining node (node_2), and again answer n at the confirmation step:
node_2$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n

Since the second node has a greater term than the first node, it holds the fresher cluster state, so it is better to unsafely bootstrap the cluster using that node. Run the tool on node_2 again, this time answering y at the confirmation step:

node_2$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] y
Master node was successfully bootstrapped
After unsafely bootstrapping a new cluster, run the elasticsearch-node
detach-cluster
command to detach all remaining nodes from the failed cluster
so they can join the new cluster:
node_3$ ./bin/elasticsearch-node detach-cluster

WARNING: Elasticsearch MUST be stopped before running this tool.

You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running `elasticsearch-node unsafe-bootstrap` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y
Node was successfully detached from the cluster