Multi-tenant databases: from private to public cloud


In 2002, a gap in the market for a comprehensive project management solution led the founder to create ProWorkflow.
ProWorkflow

CUSTOMER SPOTLIGHT

Platform: On-premises, then Azure
Project Duration: 6 months
Website: proworkflow.com
Project Resources: 1 database architect, 1 data engineer, 1 DevOps engineer

Born from a need to enhance internal operations and communication, ProWorkflow quickly demonstrated its value, transforming its creators' workflow and sparking the realization of its potential for businesses everywhere. Tailored to meet the diverse demands of the professional landscape, ProWorkflow has since been a cornerstone of efficiency and reliability.

Project Challenge

ProWorkflow's multi-tenant platform had a large number of databases to migrate: more than 10,000, spread across three regions of the globe (Asia, Europe, and North America). The estate also churns constantly: every trial period for a new client creates a new database (about 500 per month), and about 200 databases are removed per month, so hundreds of databases would appear and disappear during the migration itself. In the past, the client had to plan upgrades and migrations to new servers a year in advance, with a specific roll-forward procedure that was error prone. This time, however, the requirement was to consolidate and complete the move within two months.

Solution

Phase 1: The Architecture

Internal SLAs required a low RPO and RTO, with disaster recovery in a different geographical zone on Azure. A managed platform was still premature for the client: provisioning new databases for each new client would have been extremely cumbersome, some of the specific server properties the application relies on are not available there, and its cost was too high. Always On availability groups were not a good option either, since replicating this many databases would have led to worker-thread starvation, and the extra replicas did not fit the client's limited budget for infrastructure and licences. Cloud SQL on GCP would have been a good solution, but the client had chosen Azure, which did not provide the right high-availability option at the right price. We therefore went with a failover cluster instance (FCI) on an IaaS server.
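For context, worker-thread pressure is easy to gauge on an instance. The sketch below (a rough illustration, not the client's actual tooling) compares the instance-wide worker pool with the workers currently in use, the pool that per-database availability-group replication would have drained:

```sql
-- Rough worker-thread headroom check. With thousands of databases in
-- availability groups, the per-database HADR worker threads can exhaust
-- this pool, which is the "thread starvation" risk mentioned above.
SELECT max_workers_count          -- total worker threads the instance allows
FROM sys.dm_os_sys_info;

SELECT COUNT(*) AS workers_in_use -- workers currently allocated
FROM sys.dm_os_workers;
```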

Consolidating onto the new server had two goals. The first was to make sure that the tempdb collation would not conflict with the collations of the individual databases. The second was to make sure maintenance windows were honoured according to each client's time zone and business hours. After due consideration, a single instance proved sufficient, and because the environment was self-managed, the required features were easy to install. We also strengthened security by implementing SSL on the cluster, using a trusted certificate with strong encryption and a yearly rotation.
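As a minimal sketch of the first check, a query along these lines lists any tenant database whose collation differs from tempdb's (the client's actual audit tooling is not shown):

```sql
-- Flag user databases whose collation differs from tempdb's; such databases
-- can hit collation-conflict errors when temp tables are joined to user tables.
SELECT d.name,
       d.collation_name AS database_collation,
       t.collation_name AS tempdb_collation
FROM sys.databases AS d
CROSS JOIN (SELECT collation_name
            FROM sys.databases
            WHERE name = 'tempdb') AS t
WHERE d.database_id > 4                        -- skip the system databases
  AND d.collation_name <> t.collation_name;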
We also created a tool to make CI/CD deployments efficient. For disaster recovery, the recovery objectives were within hours, with minimal data loss. We used SQL Server's backup-to-URL feature so that backups landed in geo-redundant containers on Azure Blob Storage, and kept the backup-history metadata in a dedicated, easily restorable database.
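In T-SQL this looks roughly like the following (storage account, container, and database names are placeholders; on SQL Server 2016 and later the credential can hold a shared access signature):

```sql
-- One-time setup: a credential named after the container URL, holding a SAS token.
CREATE CREDENTIAL [https://examplestorage.blob.core.windows.net/sqlbackups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Per-database backup written directly to geo-redundant blob storage.
BACKUP DATABASE [Tenant0001]
TO URL = 'https://examplestorage.blob.core.windows.net/sqlbackups/Tenant0001_FULL.bak'
WITH COMPRESSION, CHECKSUM, FORMAT;
```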

Phase 2: The Migration

We used Azure Blob Storage to transfer all the backups (full, differential, and log) and kept the backup history. With a customized tool we log-shipped the databases onto the new clustered instance, upgrading them from SQL Server 2014 to 2019 in the process. Validating the full migration was then relatively simple: we just had to make sure the cutover was handled gracefully.
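Per database, the log-shipping sequence boils down to a restore chain like the sketch below (names and URLs are illustrative, not the actual tool): everything is restored WITH NORECOVERY so log backups keep flowing until cutover, and bringing the database online on the 2019 instance performs the version upgrade.

```sql
-- Seed the database on the new cluster from the full backup in blob storage.
RESTORE DATABASE [Tenant0001]
FROM URL = 'https://examplestorage.blob.core.windows.net/migration/Tenant0001_FULL.bak'
WITH NORECOVERY, REPLACE;

-- Apply differential and log backups as they arrive, still WITH NORECOVERY.
RESTORE LOG [Tenant0001]
FROM URL = 'https://examplestorage.blob.core.windows.net/migration/Tenant0001_001.trn'
WITH NORECOVERY;

-- Cutover: once the final tail-log backup from the source has been applied,
-- bring the database online; the on-disk upgrade to 2019 happens here.
RESTORE DATABASE [Tenant0001] WITH RECOVERY;
```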

Phase 3: The Optimization

Thanks to the newly upgraded version, CHDS was able to improve the overall performance of the databases through a series of tuning sessions. We also ran a disaster recovery scenario every three months to confirm that all the systems in place still worked correctly, and assisted the client with DBA tasks for several years afterwards to get them to a comfortable place.
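As an illustration of the kind of post-upgrade step involved (the database name is a placeholder; the tuning work itself is not shown), each database's compatibility level can be raised to 150 and Query Store enabled, so plan regressions under the new optimizer become visible and fixable:

```sql
-- Move the database onto SQL Server 2019 optimizer behaviour.
ALTER DATABASE [Tenant0001] SET COMPATIBILITY_LEVEL = 150;

-- Capture query plans and runtime stats to spot regressions after the upgrade.
ALTER DATABASE [Tenant0001] SET QUERY_STORE = ON;
```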