Data Migration

Migrations Are Dead, Long Live Migrations


Storage is an industry in perpetual motion. Vendors are constantly innovating and releasing new products to help their customers get a leg up on the competition. While storage upgrades usually deliver new features that IT admins are eager to get into production, few IT professionals look forward to the headache of upgrading their storage infrastructure.


The challenge with storage migrations is that many enterprises have applications that need to run around the clock. This means extensive planning in advance to find a window of time when IT can halt applications, manually copy data to the new storage, then reconfigure and restart the applications. To minimize the obvious business impact, IT must find ways to make this process as fast as possible and to perform the migration over evenings, weekends and holidays. While IT pros know that this is part of the job, it’s still no one’s idea of fun.

Concealing Identities With Aliases

Some enterprises take a Band-Aid approach to the challenge of storage migrations using symbolic links (symlinks). Symlinks decouple data from its location by creating a pointer that redirects applications to the new location. For offline cloning, system configuration, or migrating inactive or cold files, this process is straightforward. However, not all operating systems or applications support symlinks. Worse, in environments with open or active files, applications may continue writing to the original file, and those writes can be lost, resulting in silent data loss or corruption. This risk significantly limits the real-world use of symlinks as a fix for the migration problem at enterprise scale.
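The symlink mechanism described above can be sketched in a few lines of Python. This is a minimal illustration using temporary directories as stand-ins for old and new storage; in a real migration the paths would be mount points on the respective arrays:

```python
import os
import tempfile

# Stand-ins for the old and new storage locations (illustrative only).
old_dir = tempfile.mkdtemp(prefix="old_array_")
new_dir = tempfile.mkdtemp(prefix="new_array_")

# 1. Copy a cold file to the new storage location.
new_path = os.path.join(new_dir, "report.dat")
with open(new_path, "w") as f:
    f.write("archived data")

# 2. Replace the original with a symlink pointing at the new copy.
link_path = os.path.join(old_dir, "report.dat")
os.symlink(new_path, link_path)

# Applications that follow symlinks transparently read the new location...
with open(link_path) as f:
    print(f.read())

# ...while realpath shows the data no longer lives on the old storage.
print(os.path.realpath(link_path).startswith(os.path.realpath(new_dir)))
```

Note that this only works for the read path shown here; as the paragraph above warns, an application holding the original file open during the cutover can keep writing to the old copy, which is exactly the silent-loss scenario that limits this approach.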

Data Virtualization, Intelligence And Automation

Throughout the history of IT, applications have had to stop for storage migrations to take place. Due to the siloed nature of traditional storage architectures, where applications were tethered to a single storage resource, taking systems down ensured no data was lost. Today, this is changing.

Two elements are key to getting past traditional storage migrations and moving to automated data migration. One is data virtualization, where an abstraction layer decouples an application’s view of data from the underlying storage hardware. Once this takes place, a global namespace can be formed to give applications aggregated access to different storage resources. Applications simply see the global pool of resources; no changes are needed on their side to gain that visibility.
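The core idea of a global namespace can be sketched as a simple resolver: the application addresses data by a stable logical path, and a placement table maps that path onto whichever backend currently holds the bytes. The class and method names below are illustrative, not a real product API:

```python
from dataclasses import dataclass, field

@dataclass
class GlobalNamespace:
    # logical path -> (backend name, physical path)
    _placement: dict = field(default_factory=dict)

    def place(self, logical, backend, physical):
        """Record where a logical path currently lives."""
        self._placement[logical] = (backend, physical)

    def resolve(self, logical):
        """What the I/O layer consults on every access."""
        return self._placement[logical]

    def migrate(self, logical, new_backend, new_physical):
        # Data moves between backends, but the logical path --
        # all the application ever sees -- never changes.
        self._placement[logical] = (new_backend, new_physical)

ns = GlobalNamespace()
ns.place("/projects/ml/train.csv", "fast-nvme", "/nvme0/a1b2/train.csv")
print(ns.resolve("/projects/ml/train.csv"))

ns.migrate("/projects/ml/train.csv", "capacity-tier", "/hdd7/a1b2/train.csv")
print(ns.resolve("/projects/ml/train.csv"))  # same logical path, new backend
```

Real implementations must also coordinate in-flight I/O during the move, but the essential decoupling is this one level of indirection between the name an application uses and the hardware behind it.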

The other part of the puzzle is intelligence/automation. Software is getting smart. Machine-learning capabilities make it possible to better understand how applications use data by analyzing its metadata. As software gets smarter, we can leave the traditional storage migration behind and move instead to automated data migration that happens systematically, whenever data would be better served by the capabilities of a different storage system. Metadata engine software can analyze which data is active (hot) and which is inactive (cold), along with a number of other characteristics. With different types of storage available in the global namespace, data can then move to the right system for its needs as defined by IT’s objectives — for example, performance, cost and reliability — automatically and without application interruption.
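A hedged sketch of the hot/cold decision such a metadata engine might make: classify each file from its last-access metadata and map the result to a target tier. The 90-day threshold, the tier names and the record format are assumptions for illustration, not any vendor's actual policy:

```python
import time

COLD_AFTER_DAYS = 90  # assumed IT-defined objective, tunable

def classify(last_access_epoch, now=None):
    """Label data hot or cold by age of last access (seconds since epoch)."""
    now = now if now is not None else time.time()
    age_days = (now - last_access_epoch) / 86400
    return "cold" if age_days > COLD_AFTER_DAYS else "hot"

def target_tier(record, now=None):
    """Map a file's temperature to a storage tier in the global namespace."""
    if classify(record["atime"], now) == "hot":
        return "performance"   # e.g. an NVMe-backed system
    return "capacity"          # e.g. dense HDD or object storage

now = 1_700_000_000  # fixed clock so the example is deterministic
files = [
    {"path": "/data/live.db",          "atime": now - 2 * 86400},
    {"path": "/data/2019_archive.tar", "atime": now - 400 * 86400},
]
for f in files:
    print(f["path"], "->", target_tier(f, now))
# /data/live.db -> performance
# /data/2019_archive.tar -> capacity
```

In practice an engine would weigh many more signals (I/O rate, owner, compliance tags) against multiple objectives, but the shape of the decision — metadata in, placement out — is the same.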

Gain Choice: Stretch Your Budget, Choose Your Vendor, Add A Cloud

Automated data migration opens up a number of new possibilities for enterprises and their IT teams. Budgets can be saved by repurposing storage to serve data that isn’t active but also can’t be deleted yet for a variety of business reasons. This extends the life of a system that’s no longer top-of-the-line but still offers good capabilities.

With the ease of integrating any storage into a global namespace, IT no longer has to stick with a single vendor and instead can make strategic choices about what is truly the best fit for their business’s needs. This includes adding off-premise storage options if necessary.

When it does come time to finally move data from and retire a storage system using a metadata engine, hardware can be decommissioned with a single click. Instead of manually copying data from an old array to a new one, the data can move seamlessly across several devices, accelerating the physical copying process, and doing so transparently to applications.

Storage is going to keep changing, which is good news for businesses facing the current explosion of data. Migrations aren’t going anywhere, but they definitely don’t have to be such a struggle anymore. With automated data mobility, IT can give your enterprise exactly what it needs — whether that is a system with breakthrough performance, extensive capacity, or additional systems to deliver both — while still saving you money.
