Sharding support in Percona ClusterSync for MongoDB (Technical Preview)

Technical Preview

Sharding support is available starting with Percona ClusterSync for MongoDB 0.7.0 and is currently in the technical preview stage. We encourage you to try it out and share your feedback to help us improve the feature in future releases.

Percona ClusterSync for MongoDB supports replication between sharded MongoDB clusters, enabling you to migrate or synchronize data from one sharded deployment to another. You can use this capability to migrate sharded clusters with minimal downtime, maintain disaster recovery setups across sharded environments, and synchronize data between sharded clusters for testing or development purposes.

Overview

The workflow for sharded clusters is similar to that for replica sets. See How Percona ClusterSync for MongoDB works for the complete workflow overview. The key difference is that PCSM connects to mongos instances on both the source and target clusters instead of to individual replica set members.

Because PCSM connects through mongos, the topologies of the two clusters do not need to match. This means the source and target clusters can have different numbers of shards.
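
You can confirm each cluster's shard list from mongosh with a standard command; this is optional and shown only for reference. Run it against each cluster's mongos, keeping in mind that the results do not need to match:

db.adminCommand( { listShards: 1 } )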

Also, PCSM replicates data, not sharding metadata. This means that the chunk distribution, as well as the primary shard for a collection, may differ between the source and target clusters.

Prerequisites

  • Percona ClusterSync for MongoDB version 0.7.0 or later
  • Source and target clusters must be sharded MongoDB deployments
  • Both clusters must be running the same MongoDB version. Check Version requirements for more information about supported versions.
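
To verify the version prerequisite, compare the server version reported through each cluster's mongos. A minimal mongosh check, run against both the source and target routers:

db.version()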

Connection string format

When connecting to sharded clusters, use the standard MongoDB connection string format, but specify the mongos hostname and port instead of replica set members:

mongodb://user:pwd@mongos-host:port/[authdb]?[options]

Since PCSM connects through mongos, you don’t need to specify individual shard members or config servers in the connection string. The mongos router handles routing to the appropriate shards.
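
For example, source and target connection strings pointing at mongos routers might look like the following (the user, password, hostnames, port, and authentication database are placeholders):

mongodb://pcsmUser:secret@source-mongos.example.com:27017/admin?authSource=admin
mongodb://pcsmUser:secret@target-mongos.example.com:27017/admin?authSource=admin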

For detailed information about authentication and connection string configuration, see Configure authentication in MongoDB.

Sharding-specific behavior

Initial sync preparation

Before starting the initial sync, PCSM checks which collections are sharded on the source cluster and creates corresponding sharded collections on the target cluster. The only sharding configuration preserved from the source cluster is the shard key; all other sharding details are handled internally by the target cluster.
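
To illustrate what this preparation amounts to, the following mongosh sketch lists the namespaces and shard keys tracked by the source cluster and shows the equivalent of creating one such collection on the target with only the shard key preserved. The database, collection, and shard key names are placeholders, and PCSM performs this step for you automatically:

// On the source mongos: list tracked namespaces and their shard keys
db.getSiblingDB("config").collections.find( {}, { _id: 1, key: 1 } )

// On the target mongos: the equivalent of what PCSM creates for a sharded collection
sh.enableSharding("app")                             // implicit on recent MongoDB versions
sh.shardCollection("app.orders", { customerId: 1 })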

Balancer operation

Percona ClusterSync for MongoDB connects to source and target clusters via a mongos instance. Therefore, you do not need to disable the balancer on either the source or target cluster before starting replication. The target cluster’s balancer continues to operate normally and manages chunk distribution according to its own sharding configuration and balancer settings.
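
If you want to confirm the balancer state on either cluster, you can check it from mongosh. This is optional and shown only for reference:

// Run against the mongos of either cluster
sh.getBalancerState()      // true if the balancer is enabled
sh.isBalancerRunning()     // reports whether a balancing round is in progress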

Chunk distribution

PCSM does not preserve chunk distribution information from the source cluster. The target cluster manages chunk distribution internally through its balancer. This means that after replication, chunks may be distributed differently on the target cluster compared to the source cluster, which is expected behavior.

Since the target cluster already has information about which collections are sharded, it handles sharding internally. PCSM does not interfere with the target cluster’s sharding configuration or chunk distribution.
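
To see how data ends up distributed across the target cluster's shards after replication, and how that compares to the source, you can use standard mongosh tooling. The namespace below is a placeholder:

// Run against a mongos on either cluster; per-shard figures will typically differ between clusters
db.getSiblingDB("app").orders.getShardDistribution()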

Usage

The commands and API endpoints for sharded cluster replication are the same as for replica sets, and the workflow follows the same stages. See How Percona ClusterSync for MongoDB works for the complete workflow overview and Use Percona ClusterSync for MongoDB for detailed command instructions.

Last update: February 3, 2026
Created: February 3, 2026