In the Neo4j HA architecture, the cluster is typically fronted by a load balancer. In this section we will explore how to set up HAProxy to perform load balancing across the HA cluster.
For this tutorial we will assume a Linux environment with HAProxy already installed. See http://www.haproxy.org/ for downloads and installation instructions.
Configuring HAProxy for the Bolt Protocol
In a typical HA deployment, HAProxy will be configured with two open ports, one for routing write operations to the master and one for load balancing read operations over slaves. Each application will have two driver instances, one connected to the master port for performing writes and one connected to the slave port for performing reads.
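As an illustration of this two-driver pattern, the sketch below uses the Neo4j Java driver (1.x API). The host name haproxy-host, the ports 7680 and 7681, and the credentials are assumptions matching the configuration built in the rest of this section.

import org.neo4j.driver.v1.AuthTokens;
import org.neo4j.driver.v1.Driver;
import org.neo4j.driver.v1.GraphDatabase;
import org.neo4j.driver.v1.Session;

public class HaProxyExample
{
    public static void main( String[] args )
    {
        // One driver per HAProxy port: writes go to the port routed to the
        // master, reads to the port load balanced over slaves.
        // Host and credentials are placeholders.
        Driver writeDriver = GraphDatabase.driver( "bolt://haproxy-host:7680",
                AuthTokens.basic( "neo4j", "secret" ) );
        Driver readDriver = GraphDatabase.driver( "bolt://haproxy-host:7681",
                AuthTokens.basic( "neo4j", "secret" ) );

        try ( Session writeSession = writeDriver.session() )
        {
            writeSession.run( "CREATE (p:Person {name: 'Alice'})" );
        }
        try ( Session readSession = readDriver.session() )
        {
            readSession.run( "MATCH (p:Person) RETURN count(p)" );
        }

        writeDriver.close();
        readDriver.close();
    }
}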
Let’s first set up the mode and timeouts. The settings below will kill the connection if a server or a client is idle for longer than two hours. Long-running queries may take longer than that, but this can be taken care of by enabling HAProxy’s TCP heartbeat feature, as sketched after the block below.
defaults
    mode tcp
    timeout connect 30s
    timeout client 2h
    timeout server 2h
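To enable the heartbeat, HAProxy’s option tcpka directive switches on TCP keep-alives for both client and server connections. A sketch of the defaults section with it added (the effective keep-alive intervals are governed by operating system settings):

defaults
    mode tcp
    option tcpka
    timeout connect 30s
    timeout client 2h
    timeout server 2h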
Set up where drivers wanting to perform writes will connect:
frontend neo4j-write
    bind *:7680
    default_backend current-master
Now, let’s set up the backend that points to the current master instance.
backend current-master
    option httpchk HEAD /db/manage/server/ha/master HTTP/1.0
    server db01 10.0.1.10:7687 check port 7474
    server db02 10.0.1.11:7687 check port 7474
    server db03 10.0.1.12:7687 check port 7474
In the example above, httpchk is configured as you would if authentication has been disabled for Neo4j. By default, however, authentication is enabled and you will need to pass in an authentication header. This would be along the lines of

option httpchk HEAD /db/manage/server/ha/master HTTP/1.0\r\nAuthorization:\ Basic\ bmVvNGo6bmVvNGo=

where the last part has to be replaced with a base64-encoded value for your username and password. See the section called “Authenticate to access the server” for more information.
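One way to produce the base64-encoded value, assuming the username neo4j and a password of secret, is with the standard base64 tool:

echo -n "neo4j:secret" | base64

The output, bmVvNGo6c2VjcmV0, then replaces the encoded value in the httpchk option above.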
Configure where drivers wanting to perform reads will connect:
frontend neo4j-read
    bind *:7681
    default_backend slaves
Finally, configure a backend that points to slaves in a round-robin fashion:
backend slaves
    balance roundrobin
    option httpchk HEAD /db/manage/server/ha/slave HTTP/1.0
    server db01 10.0.1.10:7687 check port 7474
    server db02 10.0.1.11:7687 check port 7474
    server db03 10.0.1.12:7687 check port 7474
Note that the servers in the slaves backend are configured the same way as in the current-master backend.
By putting all of the above configuration into one file, we get a basic, working HAProxy configuration that performs load balancing for applications using the Bolt Protocol, for example as assembled below.
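One possible assembly is sketched here; the global section (daemon mode and a maxconn of 256) is an assumption borrowed from the REST example later in this section:

global
    daemon
    maxconn 256

defaults
    mode tcp
    timeout connect 30s
    timeout client 2h
    timeout server 2h

frontend neo4j-write
    bind *:7680
    default_backend current-master

frontend neo4j-read
    bind *:7681
    default_backend slaves

backend current-master
    option httpchk HEAD /db/manage/server/ha/master HTTP/1.0
    server db01 10.0.1.10:7687 check port 7474
    server db02 10.0.1.11:7687 check port 7474
    server db03 10.0.1.12:7687 check port 7474

backend slaves
    balance roundrobin
    option httpchk HEAD /db/manage/server/ha/slave HTTP/1.0
    server db01 10.0.1.10:7687 check port 7474
    server db02 10.0.1.11:7687 check port 7474
    server db03 10.0.1.12:7687 check port 7474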
By default, encryption is enabled between servers and drivers. With encryption turned on, the HAProxy configuration constructed above needs no changes: it works directly in a TLS/SSL pass-through layout, where HAProxy forwards encrypted traffic without terminating it. However, depending on the authentication strategy adopted by the driver, some special requirements may apply to the server certificates.
For drivers using the trust-on-first-use authentication strategy, each driver registers the HAProxy port it connects to with the first certificate received from the cluster. For all subsequent connections, the driver will only connect to a server whose certificate matches the one registered. Therefore, in order for a driver to be able to establish connections with all instances in the cluster, this mode requires that all instances in the cluster share the same certificate.
If drivers are configured to run in trusted-certificate mode, then the certificate known to the drivers should be a root certificate for all the certificates installed on the servers in the cluster. Alternatively, for drivers such as the Java driver that support registering multiple certificates as trusted, the drivers also work well with a cluster if all the server certificates used in the cluster are registered as trusted.
To use HAProxy with other encryption layouts, refer to the full HAProxy documentation at http://www.haproxy.org/.
Configuring HAProxy for the REST API
HAProxy can be configured in many ways. The full documentation is available at http://www.haproxy.org/.
For this example, we will configure HAProxy to load balance requests to three HA servers. Simply write the following configuration to /etc/haproxy.cfg:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend neo4j

backend neo4j
    option httpchk GET /db/manage/server/ha/available
    server s1 10.0.1.10:7474 maxconn 32
    server s2 10.0.1.11:7474 maxconn 32
    server s3 10.0.1.12:7474 maxconn 32

listen admin
    bind *:8080
    stats enable
HAProxy can now be started by running:
/usr/sbin/haproxy -f /etc/haproxy.cfg
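The configuration can later be reloaded without dropping existing connections by starting a new HAProxy process with the -sf option, which tells the old process to finish serving its connections and exit. A sketch, assuming pidof is available:

/usr/sbin/haproxy -f /etc/haproxy.cfg -sf $(pidof haproxy)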
You can connect to http://<ha-proxy-ip>:8080/haproxy?stats to view the status dashboard. This dashboard can be moved to run on port 80, and authentication can also be added. See the HAProxy documentation for details on this.
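For example, basic authentication can be added to the dashboard with the stats auth directive. A sketch, where admin and secret are placeholder credentials:

listen admin
    bind *:8080
    stats enable
    stats auth admin:secret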
Optimizing for reads and writes
Neo4j provides a catalogue of health check URLs (see Section 25.7, “REST endpoint for HA status information”) that HAProxy (or any load balancer for that matter) can use to distinguish machines using HTTP response codes.
In the example above we used the /available endpoint, which directs requests to machines that are generally available for transaction processing (they are alive!).
However, it is possible to have requests directed to slaves only, or to the master only.
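You can query a health endpoint directly to see what a given machine reports; a machine that passes the check responds with HTTP 200. A sketch, where the host address is an assumption:

curl -i http://10.0.1.10:7474/db/manage/server/ha/available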
If you are able to distinguish in your application between requests that write, and requests that only read, then you can take advantage of two (logical) load balancers: one that sends all your writes to the master, and one that sends all your read-only requests to a slave.
In HAProxy you build logical load balancers by adding multiple backends.
The trade-off here is that while Neo4j allows slaves to proxy writes for you, this indirection unnecessarily ties up resources on the slave and adds latency to your write requests. Conversely, you don’t particularly want read traffic to tie up resources on the master; Neo4j allows you to scale out for reads, but writes are still constrained to a single instance. If possible, that instance should exclusively do writes to ensure maximum write performance.
The following example excludes the master from the set of machines, using the /slave endpoint.
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend neo4j-slaves

backend neo4j-slaves
    option httpchk GET /db/manage/server/ha/slave
    server s1 10.0.1.10:7474 maxconn 32 check
    server s2 10.0.1.11:7474 maxconn 32 check
    server s3 10.0.1.12:7474 maxconn 32 check

listen admin
    bind *:8080
    stats enable
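For the write side, a second logical load balancer can use the /master endpoint so that only the current master receives write traffic. A sketch, where the write frontend port 8090 is an assumption:

frontend http-write-in
    bind *:8090
    default_backend neo4j-master

backend neo4j-master
    option httpchk GET /db/manage/server/ha/master
    server s1 10.0.1.10:7474 maxconn 32 check
    server s2 10.0.1.11:7474 maxconn 32 check
    server s3 10.0.1.12:7474 maxconn 32 check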
Note
In practice, writing to a slave is uncommon. While writing to slaves has the benefit of ensuring that data is persisted in two places (the slave and the master), it comes at a cost. The cost is that the slave must immediately become consistent with the master by applying any missing transactions, and then synchronously apply the new transaction with the master. This is a more expensive operation than writing to the master and having the master push changes to one or more slaves.
Cache-based sharding with HAProxy
Neo4j HA enables what is called cache-based sharding. If the dataset is too big to fit into the cache of any single machine, then by applying a consistent routing algorithm to requests, the caches on each machine will actually cache different parts of the graph. A typical routing key could be user ID.
In this example, the user ID is a query parameter in the URL being requested. This will route the same user to the same machine for each request.
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend neo4j-slaves

backend neo4j-slaves
    balance url_param user_id
    server s1 10.0.1.10:7474 maxconn 32
    server s2 10.0.1.11:7474 maxconn 32
    server s3 10.0.1.12:7474 maxconn 32

listen admin
    bind *:8080
    stats enable
Naturally the health check and query parameter-based routing can be combined to only route requests to slaves by user ID.
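A sketch of that combination, adding the /slave health check to the url_param backend from the example above:

backend neo4j-slaves
    balance url_param user_id
    option httpchk GET /db/manage/server/ha/slave
    server s1 10.0.1.10:7474 maxconn 32 check
    server s2 10.0.1.11:7474 maxconn 32 check
    server s3 10.0.1.12:7474 maxconn 32 check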
Other load balancing algorithms are also available, such as routing by source IP (source), by URI (uri), or by HTTP header (hdr()).