Continuous migration to Cloud SQL for terabyte-scale databases with minimal downtime


When Broadcom completed its acquisition of the Symantec Enterprise Security business in late 2019, the company made a strategic decision to move its Symantec workloads to Google Cloud, including its Symantec Endpoint Security Complete product. This is the cloud-managed SaaS version of Symantec’s endpoint protection, which provides protection, detection, and response capabilities against advanced threats at scale across traditional and mobile endpoints.

To move the workloads without user disruption, Broadcom needed to migrate terabytes of data across multiple databases to Google Cloud. In this blog, we’ll explore several approaches to continuously migrating terabyte-scale data to Cloud SQL, and how Broadcom planned and executed this large migration while keeping downtime minimal.

Broadcom’s data migration requirements

  • Terabyte scale: The primary requirement was to migrate 40+ MySQL databases with a total size of more than 10 TB.

  • Minimal downtime: The database cutover downtime needed to be less than 10 minutes due to SLA requirements.

  • Granular schema selection: There was also a need for replication pipeline filters to selectively include and exclude tables and/or databases.

  • Multi-source and multi-destination: Traditional single-source, single-destination replication didn’t suffice here; Broadcom needed replication topologies with multiple sources and multiple destinations. A sketch after this list illustrates the filtering and multi-source mechanisms involved.
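To make the last two requirements concrete, here is a minimal MySQL sketch of the two mechanisms they rely on: replication filters for granular schema selection, and named replication channels for multi-source topologies. All hostnames, database names, and channel labels below are hypothetical; multi-source channels require MySQL 5.7+, and the per-channel form of CHANGE REPLICATION FILTER requires MySQL 8.0.1+. Note that on a Cloud SQL instance itself, replication from an external server is configured through the Cloud SQL Admin API rather than by running these statements directly, so this only illustrates the underlying MySQL concepts.

```sql
-- Fan-in: replicate from two independent sources into one replica,
-- each over its own named channel (MySQL 5.7+).
CHANGE MASTER TO
  MASTER_HOST = 'source-a.example.internal',  -- hypothetical host
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '********',
  MASTER_AUTO_POSITION = 1                    -- GTID-based positioning
  FOR CHANNEL 'source_a';

CHANGE MASTER TO
  MASTER_HOST = 'source-b.example.internal',  -- hypothetical host
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '********',
  MASTER_AUTO_POSITION = 1
  FOR CHANNEL 'source_b';

-- Granular schema selection: include only the databases being migrated,
-- and exclude a table that should not move (per-channel filters: 8.0.1+).
CHANGE REPLICATION FILTER
  REPLICATE_DO_DB = (app_db, billing_db),
  REPLICATE_IGNORE_TABLE = (app_db.audit_log)
  FOR CHANNEL 'source_a';

START SLAVE FOR CHANNEL 'source_a';
START SLAVE FOR CHANNEL 'source_b';

-- Before cutover: confirm lag is near zero on every channel, then stop
-- writes on the sources and promote the destination.
SHOW SLAVE STATUS FOR CHANNEL 'source_a'\G   -- check Seconds_Behind_Master
```

The Seconds_Behind_Master value reported by SHOW SLAVE STATUS is the usual signal for deciding when the destination has caught up enough to begin a cutover window short enough to fit the sub-10-minute downtime budget described above.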
