
Transport

The transport module is used for internal communication between nodes within the cluster. Each call that goes from one node to another uses the transport module (for example, when an HTTP GET request is processed by one node but should actually be handled by another node that holds the data).

The transport mechanism is completely asynchronous in nature, meaning that no thread blocks waiting for a response. Asynchronous communication solves the C10k problem and is also an ideal fit for scatter (broadcast) / gather operations such as search in Elasticsearch.

Transport Settings

The internal transport communicates over TCP. You can configure it with the following settings:


transport.port

A bind port range. Defaults to 9300-9400.

transport.publish_port

The port that other nodes in the cluster should use when communicating with this node. Useful when a cluster node is behind a proxy or firewall and the transport.port is not directly addressable from the outside. Defaults to the actual port assigned via transport.port.

transport.bind_host

The host address to bind the transport service to. Defaults to transport.host (if set) or network.bind_host.

transport.publish_host

The host address to publish for nodes in the cluster to connect to. Defaults to transport.host (if set) or network.publish_host.

transport.host

Used to set the transport.bind_host and the transport.publish_host. Defaults to network.host.

transport.connect_timeout

The connect timeout for initiating a new connection (in time setting format). Defaults to 30s.

transport.compress

Set to true to enable compression (DEFLATE) between all nodes. Defaults to false.

transport.ping_schedule

Schedule a regular application-level ping message to ensure that transport connections between nodes are kept alive. Defaults to 5s in the transport client and -1 (disabled) elsewhere. It is preferable to correctly configure TCP keep-alives instead of using this feature, because TCP keep-alives apply to all kinds of long-lived connections and not just to transport connections.
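
For instance, a node behind a NAT gateway might combine several of these settings. The following elasticsearch.yml snippet is a minimal sketch; the addresses, ports, and timeout shown are illustrative values, not recommendations:

# Bind to the first available port in this range.
transport.port: 9300-9400
# Advertise the externally reachable port to other nodes (illustrative).
transport.publish_port: 9300
# Give up on connection attempts after 10 seconds (illustrative).
transport.connect_timeout: 10s
# Enable DEFLATE compression for all inter-node traffic.
transport.compress: true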

The transport module also uses the common network settings.

Transport Profiles

Elasticsearch allows you to bind to multiple ports on different interfaces through the use of transport profiles. Consider the following example configuration:

transport.profiles.default.port: 9300-9400
transport.profiles.default.bind_host: 10.0.0.1
transport.profiles.client.port: 9500-9600
transport.profiles.client.bind_host: 192.168.0.1
transport.profiles.dmz.port: 9700-9800
transport.profiles.dmz.bind_host: 172.16.1.2

The default profile is special: it is used as a fallback for any other profile that does not have a specific setting configured, and it is the profile this node uses to connect to other nodes in the cluster.

The following parameters can be configured on each transport profile, as in the example above (a combined sketch follows the list):

  • port: The port to bind to
  • bind_host: The host address to bind to
  • publish_host: The host address which is published in informational APIs
  • tcp.no_delay: Configures the TCP_NODELAY option for this socket
  • tcp.keep_alive: Configures the SO_KEEPALIVE option for this socket
  • tcp.reuse_address: Configures the SO_REUSEADDR option for this socket
  • tcp.send_buffer_size: Configures the send buffer size of the socket
  • tcp.receive_buffer_size: Configures the receive buffer size of the socket
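
As a sketch of the per-profile TCP options, the following elasticsearch.yml lines extend the client profile from the example above; the flags and buffer sizes are illustrative assumptions, not tuned values:

# Disable Nagle's algorithm on client-profile sockets.
transport.profiles.client.tcp.no_delay: true
# Enable TCP keepalives on client-profile sockets.
transport.profiles.client.tcp.keep_alive: true
# Illustrative socket buffer sizes.
transport.profiles.client.tcp.send_buffer_size: 1mb
transport.profiles.client.tcp.receive_buffer_size: 1mb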

Long-lived idle connections

Elasticsearch opens a number of long-lived TCP connections between each pair of nodes in the cluster, and some of these connections may be idle for an extended period of time. Nonetheless, Elasticsearch requires these connections to remain open, and it can disrupt the operation of the cluster if any inter-node connections are closed by an external influence such as a firewall. It is important to configure your network to preserve long-lived idle connections between Elasticsearch nodes. For instance, leave tcp.keep_alive enabled and ensure that the keepalive interval is shorter than any timeout that might close idle connections, or set transport.ping_schedule if keepalives cannot be configured.
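
On Linux, for example, the kernel's keepalive timers can be tightened so that probes are sent well before common firewall idle timeouts. This sysctl sketch uses standard kernel parameters, but the values are assumptions that you should tune to your environment:

# Send the first keepalive probe after 5 minutes of idle time (assumed value).
net.ipv4.tcp_keepalive_time = 300
# Send subsequent probes every 60 seconds (assumed value).
net.ipv4.tcp_keepalive_intvl = 60
# Close the connection after 6 unanswered probes (assumed value).
net.ipv4.tcp_keepalive_probes = 6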

Transport Compression

Request Compression

By default, the transport.compress setting is false and network-level request compression is disabled between nodes in the cluster. This default normally makes sense for local cluster communication as compression has a noticeable CPU cost and local clusters tend to be set up with fast network connections between nodes.

The transport.compress setting always configures local cluster request compression and is the fallback setting for remote cluster request compression. If you want to configure remote request compression differently than local request compression, you can set it on a per-remote cluster basis using the cluster.remote.${cluster_alias}.transport.compress setting.
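
For example, compression could stay disabled inside a fast local cluster while being enabled for a slower cross-datacenter remote; the remote_dc alias below is hypothetical:

# Local cluster traffic stays uncompressed (the default).
transport.compress: false
# Compress traffic to the hypothetical remote cluster "remote_dc".
cluster.remote.remote_dc.transport.compress: true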

Response Compression

The compression settings do not configure compression for responses. Elasticsearch will compress a response if the inbound request was compressed—even when compression is not enabled. Similarly, Elasticsearch will not compress a response if the inbound request was uncompressed—even when compression is enabled.

Transport Tracer

The transport module has a dedicated tracer logger which, when activated, logs incoming and outgoing requests. The log can be dynamically activated by setting the level of the org.elasticsearch.transport.TransportService.tracer logger to TRACE:

PUT _cluster/settings
{
   "transient" : {
      "logger.org.elasticsearch.transport.TransportService.tracer" : "TRACE"
   }
}

You can also control which actions will be traced, using a set of include and exclude wildcard patterns. By default every request will be traced except for fault detection pings:

PUT _cluster/settings
{
   "transient" : {
      "transport.tracer.include" : "*",
      "transport.tracer.exclude" : "internal:coordination/fault_detection/*"
   }
}
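
Once you have finished debugging, the tracer can be returned to its default level by clearing the transient setting, using the usual null-reset pattern for cluster settings:

PUT _cluster/settings
{
   "transient" : {
      "logger.org.elasticsearch.transport.TransportService.tracer" : null
   }
}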