Upgrade Config Servers to Replica Set
Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set. Using a replica set for the config servers improves consistency across the config servers, since MongoDB can take advantage of the standard replica set read and write protocols for the config data. In addition, using a replica set for config servers allows a sharded cluster to have more than 3 config servers, since a replica set can have up to 50 members. To deploy config servers as a replica set, the config servers must run the WiredTiger storage engine.
The following procedure upgrades three mirrored config servers to a config server replica set without downtime. To use this procedure, all the sharded cluster binaries must be at least version 3.2.4.
During this procedure there will be a period during which the config servers are read-only. During this period, certain catalog operations will fail if attempted. Unavailable operations include adding and dropping shards, creating and dropping databases, creating and dropping sharded collections, and migrating chunks (both manually and via the balancer process). Normal read and write operations to existing collections are not affected.
Prerequisites
- All binaries in the sharded clusters must be at least version 3.2.4. See Upgrade a Sharded Cluster to 3.2 for instructions to upgrade the sharded cluster.
- The existing config servers must be in sync.
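One way to verify that the mirrored config servers are in sync is to compare a hash of each server's config database. The following is a minimal sketch using the dbHash command; run it in a mongo shell connected to each config server in turn and compare the resulting md5 values, which should be identical:
// Compare this value across all three mirrored config servers.
var hash = db.getSiblingDB("config").runCommand( { dbHash: 1 } );
print( hash.md5 );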
Procedure
Note
The procedure refers to the first config server, second config server, and third config server as listed in the configDB setting of the mongos. This means that, for the following example:
mongos --configdb confServer1:port1,confServer2:port2,confServer3:port3
- The first config server refers to confServer1.
- The second config server refers to confServer2.
- The third config server refers to confServer3.
Disable the balancer as described in Disable the Balancer.
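For example, in a mongo shell connected to a mongos (a sketch; sh.stopBalancer() also waits for any in-progress chunk migration to finish):
sh.stopBalancer()
// Verify that the balancer is disabled before proceeding:
sh.getBalancerState()    // should return false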
Connect a mongo shell to the first config server listed in the configDB setting of the mongos and run rs.initiate() to initiate the single-member replica set:
rs.initiate( {
   _id: "csReplSet",
   configsvr: true,
   version: 1,
   members: [ { _id: 0, host: "<host>:<port>" } ]
} )
- _id corresponds to the replica set name for the config servers.
- configsvr must be set to true.
- version must be set to 1.
- members array contains a document that specifies:
- members._id which is a numeric identifier for the member.
- members.host which is a string corresponding to the config server’s hostname and port.
Restart this config server as a single-member replica set with:
- the --replSet option set to the replica set name specified during the rs.initiate(),
- the --configsvrMode option set to the legacy config server mode Sync Cluster Connection Config (sccc),
- the --configsvr option,
- the --storageEngine option set to the storage engine used by this config server. For this upgrade procedure, the existing config server can use either MMAPv1 or WiredTiger,
- the --port option set to the same port as before the restart, and
- the --dbpath option set to the same path as before the restart.
Include additional options specific to your deployment.
Important
The config server must use the same port as before. [1]
mongod --configsvr --replSet csReplSet --configsvrMode=sccc --storageEngine <storageEngine> --port <port> --dbpath <path>
Or if using a configuration file, specify the:
- sharding.clusterRole,
- sharding.configsvrMode,
- replication.replSetName,
- storage.dbPath,
- storage.engine, and
- net.port.
sharding:
   clusterRole: configsvr
   configsvrMode: sccc
replication:
   replSetName: csReplSet
net:
   port: <port>
storage:
   dbPath: <path>
   engine: <storageEngine>
[1] If, before the restart, your config server did not explicitly specify the --configsvr option or the --port option, restarting with --configsvr will result in a change of port, since --configsvr changes the default port to 27019.
To ensure that the port used by the config server does not change, include the --port option or the net.port setting set to the same port as before the restart.
Start the new mongod instances to add to the replica set. These instances must use the WiredTiger storage engine. Starting in 3.2, the default storage engine is WiredTiger for new mongod instances with new data paths.
Important
- Do not add existing config servers to the replica set.
- Use new dbpaths for the new instances.
The number of new mongod instances to add depends on the config server currently in the single-member replica set:
- If the config server is using MMAPv1, start 3 new mongod instances.
- If the config server is using WiredTiger, start 2 new mongod instances.
Note
The example in this procedure assumes that the existing config servers use MMAPv1.
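If you are not sure which storage engine the existing config server uses, you can check from a mongo shell connected to that server; a quick sketch based on the serverStatus output:
// The "name" field reports the storage engine, e.g. "mmapv1" or "wiredTiger".
db.serverStatus().storageEngine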
For each new mongod instance to add, include the --configsvr and the --replSet options:
mongod --configsvr --replSet csReplSet --port <port> --dbpath <path>
Or if using a configuration file:
sharding:
   clusterRole: configsvr
replication:
   replSetName: csReplSet
net:
   port: <port>
storage:
   dbPath: <path>
Using the mongo shell connected to the replica set config server, add the new mongod instances as non-voting, priority 0 members:
rs.add( { host: "<host>:<port>", priority: 0, votes: 0 } )
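For example, assuming three new WiredTiger instances on the hypothetical hosts cfg2.example.net, cfg3.example.net, and cfg4.example.net, all listening on port 27019:
rs.add( { host: "cfg2.example.net:27019", priority: 0, votes: 0 } )
rs.add( { host: "cfg3.example.net:27019", priority: 0, votes: 0 } )
rs.add( { host: "cfg4.example.net:27019", priority: 0, votes: 0 } )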
Once all the new members have been added as non-voting, priority 0 members, ensure that the new nodes have completed the initial sync and have reached SECONDARY state. To check the state of the replica set members, run rs.status() in the mongo shell:
rs.status()
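To spot-check all members in one pass, you can print each member's name and state from the rs.status() output; a minimal sketch:
// Every newly added member should report SECONDARY before you continue.
rs.status().members.forEach( function ( member ) {
   print( member.name + " : " + member.stateStr );
} );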
Shut down one of the other non-replica set config servers, i.e. either the second or the third config server listed in the configDB setting of the mongos. At this point the config servers will go read-only, meaning certain operations - such as creating and dropping databases and sharded collections - will not be available.
Reconfigure the replica set to allow all members to vote and to have the default priority of 1:
var cfg = rs.conf();
cfg.members[0].priority = 1;
cfg.members[1].priority = 1;
cfg.members[2].priority = 1;
cfg.members[3].priority = 1;
cfg.members[0].votes = 1;
cfg.members[1].votes = 1;
cfg.members[2].votes = 1;
cfg.members[3].votes = 1;
rs.reconfig(cfg);
Step down the first config server, i.e. the server started with --configsvrMode=sccc:
rs.stepDown(600)
Shut down the first config server.
Restart the first config server in config server replica set (CSRS) mode; i.e. restart without the --configsvrMode=sccc option:
mongod --configsvr --replSet csReplSet --storageEngine <storageEngine> --port <port> --dbpath <path>
Or if using a configuration file, omit the sharding.configsvrMode setting:
sharding:
   clusterRole: configsvr
replication:
   replSetName: csReplSet
net:
   port: <port>
storage:
   dbPath: <path>
   engine: <storageEngine>
If the first config server uses the MMAPv1 storage engine, the member will transition to "REMOVED" state.
At this point the config server data will return to being writable and all catalog operations - including creating and dropping databases and sharded collections - will once again be possible.
Restart mongos instances with updated --configdb or sharding.configDB setting.
For the updated --configdb or sharding.configDB setting, specify the replica set name for the config servers and the members in the replica set.
mongos --configdb csReplSet/<rsconfigsvr1:port1>,<rsconfigsvr2:port2>,<rsconfigsvr3:port3>
Verify that the restarted mongos instances are aware of the protocol change. Connect a mongo shell to a mongos instance and check the mongos collection in the config database:
use config
db.mongos.find()
The ping value for each mongos instance should indicate a time after the restart.
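For example, the following sketch prints each mongos with its last ping time; the _id and ping fields are part of the documents in the config.mongos collection:
use config
db.mongos.find().forEach( function ( m ) {
   print( m._id + " last pinged at " + m.ping );
} );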
If the first config server uses the MMAPv1 storage engine, remove the member from the replica set. Connect a mongo shell to the current primary and use rs.remove():
Important
Perform this step only if the first config server uses the MMAPv1 storage engine.
rs.remove("<hostname>:<port>")
Shut down the remaining non-replica set config server.
Re-enable the balancer as described in Enable the Balancer.
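For example, in a mongo shell connected to a mongos:
sh.setBalancerState(true)
// Confirm that the balancer is enabled again:
sh.getBalancerState()    // should return true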