
Percona ClusterSync for MongoDB 0.8.0 (2026-04-06)

We’re excited to announce the release of Percona ClusterSync for MongoDB (PCSM) 0.8.0, delivering major performance gains, enhanced reliability, and improved operational clarity. This version introduces parallel replication, enabling clusters to process larger workloads without increasing replication lag—significantly boosting scalability.

With 3× to 18× faster processing speeds compared to previous versions, PCSM 0.8.0 now supports workload capacities ranging from 3,000 to 30,000+ operations per second (OPS), depending on cluster architecture and document size (e.g., 5 KB vs. 100 KB payloads). These improvements ensure smoother replication, higher throughput, and more efficient cluster operations.

Get started with PCSM

Release highlights

Document-level parallel replication

Percona ClusterSync for MongoDB now introduces document-level parallel replication, significantly improving replication performance under heavy workloads. Previously, change stream events were applied sequentially in a single stream, so replication lag grew without bound whenever events arrived faster than they could be written to the target. Now, DML events are distributed among multiple workers based on a hash of each document key. This allows concurrent processing while preserving the order of operations for each individual document.
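The routing idea can be sketched in Go as follows. This is an illustration only, not PCSM's actual code: the `workerFor` function and the FNV hash choice are assumptions used to show how hashing a document key yields stable per-document worker assignment.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// workerFor picks a worker for a change event by hashing its document key.
// Because the hash is deterministic, every operation on the same document
// is routed to the same worker, preserving per-document order.
func workerFor(docKey string, numWorkers int) int {
	h := fnv.New32a()
	h.Write([]byte(docKey))
	return int(h.Sum32() % uint32(numWorkers))
}

func main() {
	const workers = 4
	for _, key := range []string{"orders/1", "orders/2", "users/1"} {
		fmt.Printf("doc %q -> worker %d\n", key, workerFor(key, workers))
	}
}
```

Because the mapping depends only on the key, two updates to the same document can never be processed out of order by different workers, while updates to different documents spread across all workers.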

To ensure data consistency, PCSM maintains ordering for DDL operations using a write barrier. When a DDL event is encountered, all workers flush their pending operations before the DDL is applied, ensuring correct execution order.
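The barrier behavior can be sketched with a `sync.WaitGroup`. The `applier` type and its methods below are illustrative assumptions, not PCSM's implementation: DML operations apply asynchronously, while a DDL operation first waits for every in-flight DML to drain.

```go
package main

import (
	"fmt"
	"sync"
)

// applier is an illustrative sketch (not PCSM's actual code) of the DDL
// write barrier: DML events are applied asynchronously, while a DDL event
// waits for all in-flight DML operations to flush before it runs.
type applier struct {
	mu       sync.Mutex
	inFlight sync.WaitGroup
	applied  []string
}

// applyDML applies a document-level operation asynchronously.
func (a *applier) applyDML(op string) {
	a.inFlight.Add(1)
	go func() {
		defer a.inFlight.Done()
		a.mu.Lock()
		a.applied = append(a.applied, op)
		a.mu.Unlock()
	}()
}

// applyDDL is the write barrier: it blocks until every pending DML has
// been flushed, then applies the schema change.
func (a *applier) applyDDL(op string) {
	a.inFlight.Wait()
	a.mu.Lock()
	a.applied = append(a.applied, op)
	a.mu.Unlock()
}

func main() {
	a := &applier{}
	for i := 0; i < 8; i++ {
		a.applyDML(fmt.Sprintf("update-%d", i))
	}
	a.applyDDL("createIndex")
	// The DDL is guaranteed to land after every earlier DML.
	fmt.Println(a.applied[len(a.applied)-1]) // prints "createIndex"
}
```

The individual DML operations may complete in any order relative to each other, but the barrier guarantees the DDL never overtakes a write that preceded it in the change stream.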

Key benefits

  • Parallelism: Events are processed concurrently by multiple workers, allowing PCSM to handle larger workloads without increasing replication lag and improving throughput under heavy load.
  • Ordering: All operations on the same document are routed to the same worker, ensuring correct per‑document ordering.
  • Consistency: DDL operations are serialized with a write barrier; all workers flush their pending operations before a DDL event is applied.

For more information, see our documentation.

Async bulk write pipeline

Percona ClusterSync for MongoDB now improves replication throughput by preparing the next bulk write while the previous one is still in progress. Previously, event processing stalled until each bulk write completed, causing queues to fill and synchronization to slow. With parallel bulk preparation, replication continues seamlessly, reducing idle time and delivering faster, more efficient performance under heavy workloads.
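The pipelining can be sketched as a two-stage channel pipeline. The structure and names here are assumptions for illustration, not taken from PCSM: one stage accumulates events into the next bulk while a second stage writes the previous one.

```go
package main

import "fmt"

// pipeline is an illustrative sketch (not PCSM's actual code) of an async
// bulk write pipeline: a batching stage keeps filling the next bulk while
// a separate writer stage flushes the previous one, so reading events
// never stalls on an in-flight write.
func pipeline(events []int, batchSize int, write func([]int)) {
	batches := make(chan []int, 1) // buffer of 1: the next batch fills during a write
	done := make(chan struct{})

	go func() { // writer stage
		for b := range batches {
			write(b)
		}
		close(done)
	}()

	var batch []int // batching stage
	for _, ev := range events {
		batch = append(batch, ev)
		if len(batch) == batchSize {
			batches <- batch
			batch = nil
		}
	}
	if len(batch) > 0 {
		batches <- batch // flush the final partial batch
	}
	close(batches)
	<-done // wait for all writes to finish
}

func main() {
	var written int
	pipeline([]int{1, 2, 3, 4, 5}, 2, func(b []int) { written += len(b) })
	fmt.Println(written) // prints 5: all events written across pipelined batches
}
```

The single-slot channel buffer also provides natural backpressure: if writes fall behind, at most one prepared batch waits in the channel before the batching stage blocks.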

For more information, see our documentation.

Changelog

Improvements

  • PCSM-218: Simplified the clone process by separating read and write worker/queue paths, improving maintainability and troubleshooting. Added better logging and benchmark validation to ensure no performance regression.

  • PCSM-219: Aligned environment variable configuration with CLI and HTTP request options for a simpler, more consistent user experience.

  • PCSM-240: Simplified transaction processing by leveraging change stream ordering, reducing complexity and making the replication path easier to troubleshoot and optimize.

  • PCSM-253: Reviewed the configuration options to ensure each delivers clear value and is well documented. Removed unnecessary options to simplify the experience, and explained the remaining ones in greater detail to improve clarity, usability, and user confidence when working with PCSM.

  • PCSM-258: Syncing a sharded cluster to a replica set now works correctly.

  • PCSM-273: PCSM now introduces document-level parallel replication, significantly improving replication performance under heavy workloads.

  • PCSM-274: PCSM workers no longer block while waiting for writes to complete. Bulk operations are now executed asynchronously, allowing workers to continue reading and batching events even while writes are in progress. This reduces bottlenecks, minimizes idle time, and improves overall replication performance under load.

  • PCSM-276: Refactored background workers to propagate parent contexts instead of creating new ones, reducing the risk of leaks and making the overall process lifecycle easier to understand and maintain.

Bugs

  • PCSM-222: Resolved an issue where PCSM could stall during the clone phase when dropping collections on the target before recreating them. Migration now proceeds reliably without getting stuck.

  • PCSM-239: Fixed an issue where replication failed for documents updated with aggregation pipeline operators on arrays (such as $slice and $filter) when nested paths used numeric string field names. Replication now processes these updates correctly, ensuring reliable synchronization.

  • PCSM-251: Resolved an issue where atomicity was not maintained for transactions spanning multiple shards during replication. PCSM now correctly coordinates multi-shard transaction boundaries, ensuring that these operations are applied atomically across the entire cluster.

  • PCSM-254: Fixed an issue where PCSM underutilized system resources during change stream replication. This caused replication lag to increase even on high-capacity machines. PCSM now parallelizes change stream processing more efficiently, improving resource usage and reducing lag.

  • PCSM-257: Fixed an issue where PCSM attempted to recreate a view after encountering transient errors, such as primary stepdowns. In MongoDB 6.0, the view was already created before the error occurred, resulting in a NamespaceExists error during the retry. PCSM now correctly handles this situation, preventing replication failures.

  • PCSM-294: Fixed an issue in Percona ClusterSync for MongoDB that caused crashes during the change replication phase when processing complex document updates, leading to BufBuilder memory exhaustion and invalid $slice argument errors. First observed in customer migrations from Atlas to EKS environments, this problem has been addressed by enhancing memory management and refining $slice validation, ensuring reliable replication of complex document changes.


Last update: April 6, 2026
Created: April 6, 2026