Make database normalization part of cloud migration


Last week I discussed database normalization as a best practice in multicloud architecture. Let’s look at how the concept applies to cloud migration as well.

Don’t confuse database normalization with data normalization. Data normalization is the process of reducing redundancy within a single database by defining a more efficient structure, typically by working through the familiar normal forms. Most of you DBAs are well acquainted with this process; I taught it in college more than 30 years ago.
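As a quick refresher, here’s a minimal sketch of what data normalization looks like in practice, using SQLite and hypothetical table names of my own choosing: a flat orders table that repeats customer details on every row gets split so each customer is stored exactly once.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: customer details repeat on every order row, so a
# customer's city must be updated in many places if it changes.
cur.execute("""
CREATE TABLE orders_flat (
    order_id      INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_city TEXT,
    product       TEXT,
    amount        REAL
)""")

# Normalized: each customer is stored once (a single source of truth),
# and orders reference the customer by key.
cur.execute("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT,
    city        TEXT
)""")
cur.execute("""
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    product     TEXT,
    amount      REAL
)""")

conn.close()
```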

Database normalization, by contrast, is the process of reducing redundancy across the databases themselves to create a set of databases that is better focused on serving the needs of business applications, data scientists, and those performing data analytics.
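There’s no single recipe for normalizing a portfolio of databases, but a sensible first step is simply inventorying what you have. The sketch below is one illustrative approach under assumptions I’m making (SQLite files with hypothetical names such as sales.db); it flags tables that appear in more than one database, which are natural candidates for consolidation into a single source of truth before the migration.

```python
import sqlite3
from collections import defaultdict

# Hypothetical database files slated for migration.
DATABASES = ["sales.db", "marketing.db", "support.db"]

def list_tables(path):
    """Return the user table names in one SQLite database."""
    conn = sqlite3.connect(path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        conn.close()
    return {name for (name,) in rows}

# Map each table name to the databases that define it.
seen = defaultdict(list)
for db in DATABASES:
    for table in list_tables(db):
        seen[table].append(db)

# Tables defined in more than one database are redundancy suspects
# worth consolidating before, not after, the move to the cloud.
for table, dbs in sorted(seen.items()):
    if len(dbs) > 1:
        print(f"{table}: appears in {', '.join(dbs)}")
```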

The challenge is that most databases and data sets slated to move to the cloud are overly complex and full of redundancy, with few single sources of truth. Also, most people moving that data to the cloud just want to replicate the databases as-is to the public cloud destination. That’s a huge mistake, and here’s why.

I understand that most budgets are limited and that the cost of moving and consolidating data into cloud-native and non-cloud-native databases is much higher than simply pushing bad database architectures to the cloud. However, I also understand that you’re better off doing that work as you move to the cloud rather than having to fix it later.

If you don’t, you’ll have to migrate the data twice: first lifting and shifting it to a public cloud or clouds, then looping back to fix things once you discover that the database architecture in the cloud is not optimized because it’s overly complex, redundant, or too expensive.

