Snapshot technologies have been around for a long time. Essentially, they let you quickly create online, disk-based copies of selected data sets that are easy to access. With virtual computing now a mainstream deployment platform, these technologies are taking on heightened importance.
When we refer to “snapshot technology”, we have two objects in mind. Most snapshot technologies let you quickly create immutable, read-only copies of a data set at a particular point in time. If you want a usable copy of that snapshot, you create a clone from it; a clone is essentially a writable snapshot. The term “snapshot technology” covers both, and most vendors’ productized offerings include both.
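The snapshot/clone relationship above can be sketched with a small copy-on-write model. This is a hypothetical illustration, not any vendor's implementation: a dict stands in for on-disk block mappings, a snapshot freezes a point-in-time view, and a clone is a writable volume backed by that snapshot.

```python
# Minimal copy-on-write sketch (illustration only, not a real storage stack).
class Volume:
    def __init__(self, blocks=None, parent=None):
        self.blocks = dict(blocks or {})  # blocks written directly to this volume
        self.parent = parent              # snapshot this volume is backed by

    def read(self, n):
        # Unwritten blocks are served by the backing snapshot chain.
        if n in self.blocks:
            return self.blocks[n]
        return self.parent.read(n) if self.parent else None

    def write(self, n, data):
        self.blocks[n] = data  # copy-on-write: only changed blocks are stored

    def snapshot(self):
        # A snapshot is an immutable, point-in-time view of the volume.
        return Volume(blocks=self.blocks.copy(), parent=self.parent)

def clone(snap):
    # A clone is a writable volume backed by the snapshot; it starts empty
    # and stores only the blocks it overwrites.
    return Volume(parent=snap)

master = Volume({0: "boot", 1: "app-v1"})
snap = master.snapshot()
c = clone(snap)
c.write(1, "app-v2")          # only this block consumes new space
print(c.read(0), c.read(1))   # boot app-v2
print(snap.read(1))           # app-v1 (the snapshot is unchanged)
```

The key property is that the clone diverges block by block: unmodified data stays shared with the read-only snapshot, which is what makes clones fast to create and cheap to store.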
As you consider enterprise storage, core snapshot technology that has been developed specifically for virtual environments is a critical feature area that needs to be carefully assessed. Snapshots are not a standalone technology, and they’ll generally be delivered as a component in a more comprehensive solution (a hypervisor like VMware ESXi, storage array, storage appliance, etc.).
Snapshots in Virtualized Environments: What Matters
In most virtual environments, enterprises are working with thousands of copies. That’s because new virtual machines are most frequently created as copies of another VM, referred to as a VM template. Working from VM templates pre-defines CPU, memory, storage, network and other settings and parameters, allowing new VMs to be created very quickly. Part of the standard workflow is creating these copies and making them available for requestors to use. Disk-based snapshots are most often used for this.
What matters in snapshots is performance, storage capacity consumption, creation speed and scalability. The problem with existing snapshot technologies is that they force trade-offs among these: you can’t quickly create snapshots that yield high performance clones, you can’t make as many snapshots as you want from a single master copy, or you consume too much storage capacity because of how you must create snapshots that meet your performance requirements.
Because these trade-offs have existed for so long, most people no longer question the sub-optimal workflows that have evolved around snapshot use. Many administrators are so used to these workflows that it’s hard for them to imagine how they might use better snapshot technology. Given the improved agility of VM creation and usage in virtual environments, it’s time to reconsider these sub-optimal snapshot workflows.
The Business Value of High Performance Snapshots
If your organization regularly makes hundreds of copies of critical applications every week for backup purposes, you could easily increase your storage capacity consumption by more than 100x. In addition, the standard implementation of snapshots imposes a large performance penalty on production workloads, and copying blocks for each snapshot and clone operation is time consuming. How long does it take to create these copies? Do you have internal resources – developers, customer service representatives, QA personnel, backup administrators, or customers – whose workflows and productivity are impacted by how long that creation takes? If you are limited to only eight copies per master copy (a common limit), what extra work do you have to do to make the copies you need?
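A quick back-of-envelope calculation shows why full-copy backups blow up capacity consumption compared with space-efficient snapshots. The figures below (500 GB data set, two copies a day, 5% daily change rate) are assumed purely for illustration:

```python
# Back-of-envelope capacity math with assumed, illustrative figures.
dataset_gb = 500                 # size of the data set being protected
copies_per_week = 2 * 7          # two full copies a day for a week
full_copy_gb = dataset_gb * copies_per_week   # full copies store everything

daily_change_rate = 0.05         # assume 5% of blocks change each day
snapshot_gb = dataset_gb * daily_change_rate * 7  # snapshots store only changes

print(f"full copies: {full_copy_gb} GB")      # 7000 GB
print(f"snapshots:   {snapshot_gb:.0f} GB")   # 175 GB
```

Even with these rough assumptions, a week of full copies consumes 40x what change-only snapshots would; higher copy frequencies widen the gap further.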
Imagine a virtual desktop environment with 3000 virtual desktops, all of which use a common Windows 7 image. And those images aren’t made just once: each time the desktops need to be patched, the desktop team is required to re-create the master and all the copies. With optimized snapshot and cloning features you could simply re-create the 3000 copies (referred to as “clones”); without them, you may be forced to perform dozens of update operations across all 3000 desktops – an unnecessary operational cost given the alternatives available.
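The economics of that desktop scenario come down to per-clone cost. A hypothetical sketch (names and structure invented for illustration): each clone carries only a reference to the golden image plus a delta of its own writes, so 3000 desktops cost almost nothing until they diverge.

```python
# Illustrative model: 3000 desktop clones sharing one golden image.
golden_image = {0: "windows7-base", 1: "apps"}  # master image (read-only)

# Each clone is just a parent reference plus an empty per-clone delta.
clones = [{"parent": golden_image, "delta": {}} for _ in range(3000)]

def read(vol, n):
    # Serve the clone's own block if written, else fall back to the master.
    return vol["delta"].get(n, vol["parent"].get(n))

def write(vol, n, data):
    vol["delta"][n] = data  # copy-on-write: divergence is per-clone

write(clones[0], 1, "apps+patch")
print(read(clones[0], 1))   # apps+patch
print(read(clones[1], 1))   # apps (still shared with the master)
print(sum(len(c["delta"]) for c in clones))  # 1 changed block across 3000 clones
```

Patching then means updating the master once and re-creating the cheap clones, rather than running the update inside every one of the 3000 desktops.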
Additional storage capacity and administrative time are expensive. These add up to costs that you should not have to incur.
Virtual Computing Demands New Snapshot Technology
The best-of-breed snapshot & clone offerings now include:
• High performance snapshots whose performance does not degrade as you create more of them, and that do not require the purchase of expensive storage technologies (e.g. SSDs) to maintain this performance
• Space-efficient snapshots that recognize data shared across copies and do not create redundancies as more copies are created, and that do not give up performance to achieve this (e.g. “single image management” technology) – there should be only a single logical copy of any shared data
• Snapshots that can be created very quickly and do not give up performance to achieve this
• Scalable snapshots that let you create tens of thousands of clones from a single master copy with a single operation
With virtualization, it is possible now to create a new virtual machine (VM) in seconds – why should it take minutes or hours to create and provision the high performance storage any given VM needs?
One of the interesting ancillary benefits of high performance, space-efficient snapshots that can be created in just seconds is reduced cycle time and increased IT agility. How much is it worth to a software developer to get their latest release out 2-3 months earlier thanks to accelerated test & development cycles? What if DBAs could make high performance, space-efficient copies of their databases of record available to any number of testers on demand? What if you could reduce your exposure to data loss for your most critical applications by making 12 snapshot backups a day instead of only two?
Snapshot technologies have evolved to keep up with the new, higher bar for agility set by server virtualization. New approaches have arrived that deliver per-VM snapshots and clones – such as offerings from Virsto Software and Tintri – and enlightened IT administrators need to start thinking about new ways to use them that are no longer constrained by legacy limitations.