Gartner Recommendations for an Ideal Cloud Backup Solution

December 7, 2012 / Cloud Computing

Cloud computing has enabled numerous services, and cloud backup is one of the best-selling of them today. In fact, the public cloud is most commonly used for backup, archiving, and file sync and share. However, Gartner recommends two key factors to consider when assessing cloud backup as an alternative to a traditional on-premises backup solution for an organization.

Gartner's thoughts on cloud backup solutions:

Organizations face two main challenges: first, whether cloud backup will meet the restore-time needs for their data; and second, how to make a cloud backup solution economical.

To address the first, an organization should ideally assess its network throughput and the sizes of its backup and restore datasets to evaluate whether cloud backup can meet the requirements.

For the second, organizations should analyze and compare the total cost of ownership (TCO) of the different types of backup: local backup, cloud backup, and replication to a remote data center.

Organizations should take their business requirements into consideration to estimate the approximate backup window and restore time for their data. Two elements are vital for an accurate assessment: the backup and restore data size, and the effective throughput of the network or Internet connection from the office to the cloud data center.
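As a back-of-the-envelope illustration of that assessment, the sketch below converts a dataset size and link speed into transfer hours. All figures (data sizes, link speed, protocol efficiency) are assumed examples, not Gartner's numbers:

```python
# Rough feasibility check: how long does moving a dataset over a link take?
# Every figure here is an illustrative assumption.

def transfer_hours(data_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move data_gb over a link_mbps connection.
    `efficiency` discounts protocol overhead and latency (assumed 80%)."""
    effective_mbps = link_mbps * efficiency
    megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / effective_mbps / 3600

# Assumed scenario: a 40GB nightly incremental over a 100 Mbps uplink
backup_hours = transfer_hours(40, 100)
# Assumed restore: a full 500GB server image over the same link
restore_hours = transfer_hours(500, 100)
print(f"backup: {backup_hours:.1f} h, restore: {restore_hours:.1f} h")
```

Even this crude model makes the asymmetry visible: the nightly incremental fits comfortably in a window, while the full-image restore takes most of a working day.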

The other factor is to compare the TCO of a cloud backup solution with that of traditional solutions such as local tape or disk backup and remote replication to a corporate data center.

When it comes to backup, recovery point objective (RPO) and recovery time objective (RTO) are the most important factors. RPO refers to how frequently backups are taken, which in turn determines how many recovery points there are; RTO refers to how long it takes to restore data and recover applications.

If organizations are considering cloud backup for servers, RPO is not a key assessment factor in the initial stage. Rather, it figures in the equation later, when the organization needs to assess a cloud backup provider's capability and efficiency.

To determine whether cloud backup for servers is the right choice, one first needs to find out if the server application data and server image can be backed up to, and restored from, the cloud quickly enough to meet business requirements.

It is common knowledge that cloud backup places severe limitations on backup dataset size because of limited long-distance network bandwidth and throughput. A natural question, then, is how to determine when a backup job is too big for cloud backup and restore. To find an answer, one needs to know the following factors of the IT infrastructure:

  • Daily or nightly cloud backup window — This factor determines the maximum hours to complete all cloud backup jobs. It also determines whether it warrants changes in backup methodologies and practices, such as switching from full backup to “incremental forever” or changing from backup once a day to many times a day using near continuous data protection (CDP) technologies to drastically reduce the backup window.
  • Restore time for different types of data to be backed up to the cloud — This factor determines if a cloud backup strategy needs to be augmented by local disk backup for fast recovery.
  • Corporate WAN or Internet bandwidth and how far away the cloud data center is located — These factors determine how severely backup job performance will be impacted by latency — the network killer — and whether WAN optimization is needed to boost performance.
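The window and bandwidth factors above can be combined into a rough sizing check. The utilization figure below is an assumption standing in for latency and protocol overhead, which often cut effective WAN throughput well below line rate:

```python
def max_backup_gb(window_hours: float, link_mbps: float, utilization: float = 0.5) -> float:
    """Largest dataset (GB) that fits the backup window at a given
    effective link utilization. The 50% default is an assumed figure
    reflecting latency and TCP windowing losses on a long-haul link."""
    effective_mbps = link_mbps * utilization
    megabits = effective_mbps * window_hours * 3600
    return megabits / (8 * 1000)  # megabits -> GB (decimal units)

# Assumed scenario: an 8-hour nightly window on a 100 Mbps link
print(f"max job size: {max_backup_gb(8, 100):.0f} GB")
```

Any backup job larger than this ceiling either needs a longer window, a faster link, WAN optimization, or the data-reduction techniques discussed next.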

It's important to note that restore time is a much more critical factor than the backup window when evaluating cloud backup, because restore time represents a much tougher challenge. Cloud backup providers have developed many techniques to reduce backup data size and backup windows, such as block-level incremental backup, incremental forever (which eliminates traditional weekly and monthly full-backup jobs), and source-side deduplication and compression. When those techniques are used, backup jobs much larger (often 10 times or more) than the theoretical 50GB maximum can fit into a daily or nightly backup window. Moreover, some cloud backup providers offer near CDP, which can trigger backup as frequently as every 15 minutes and back up only the data changed during those intervals, further reducing backup windows.
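The effect of those reduction techniques on the nightly payload can be sketched with simple arithmetic. The change rate and deduplication/compression ratio below are illustrative assumptions, not vendor figures:

```python
# Illustrative effect of backup-reduction techniques on the nightly payload.
full_gb = 500             # full server image size (assumed)
daily_change_rate = 0.05  # 5% of data changes per day (assumed)
dedup_compression = 0.5   # source-side dedup + compression halves it (assumed)

nightly_gb = full_gb * daily_change_rate * dedup_compression
print(f"nightly incremental payload: {nightly_gb:.1f} GB instead of {full_gb} GB")
```

Under these assumptions the nightly job shrinks from 500GB to 12.5GB, which is why incremental-forever jobs far larger than the theoretical ceiling still fit the window.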

However, it is important to note that the initial or baseline backup will still require a substantial amount of data to be moved. To make this first backup practical, removable tape or disk drive seeding is most often used, shipping physical media overnight to the cloud backup facility. This immediately raises concern on the restore side of the process if more than a small amount of data (e.g., a limited number of small to midsize files) must be pulled back from the cloud target to the source server.

The most challenging problem for cloud backup is achieving short restore times. Since the purpose of backup is recovery, it's critical to measure how long a restore from the cloud will take to meet RTOs. Restores are often full-image restores, whose sizes are much larger than a deduplicated backup job, making them impractical to download from the cloud. As a result, many cloud backup service providers offer traditional offline shipping of the full-image restore on a USB drive, which is faster than an online restore. Increasingly, cloud backup providers also offer an on-premises backup target, which stores the most recent backup locally for much faster recovery than from the cloud. This is sometimes called a hybrid approach or disk-to-disk-to-cloud (D2D2C), and it has become increasingly popular for addressing the RTO problem associated with cloud backup, especially for servers.

In conclusion, if an organization determines that cloud backup could technically work for its environment, it should look for a cloud backup provider whose data center is far enough away to avoid the same man-made and natural disaster threats, but not so far as to incur unnecessarily long network latencies. To implement an effective cloud backup solution for servers, the solution should also have the following components:

  • Backup window reduction techniques, such as near CDP, block-level incremental, incremental forever and source-side deduplication/compression capabilities.
  • Faster recovery techniques, such as incremental or delta restore, bare metal restore and integration with a local backup/restore device.
  • WAN optimization techniques, such as reduction of backup protocol chattiness and widening of TCP windows.
  • Performance SLAs that spell out specific backup windows and restore times.

Gartner recommends that all servers with a restore data payload of 50GB or more and an RTO of one day or less have a local disk copy of the backup data for first-level operational restore requirements. This local copy protects only against logical errors, such as accidental deletion and data corruption, and a limited number of physical threats; the cloud copy of the data would be used for larger disaster recovery (DR) remediation.
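That rule of thumb is simple enough to encode directly; the thresholds below come straight from the recommendation above:

```python
def needs_local_copy(restore_gb: float, rto_hours: float) -> bool:
    """Gartner's rule of thumb from this article: servers with 50GB or
    more of restore payload AND an RTO of one day or less should keep
    a local disk copy for first-level operational restores."""
    return restore_gb >= 50 and rto_hours <= 24

# A 200GB server with an 8-hour RTO needs a local copy;
# a 20GB server with a two-day RTO can restore from the cloud alone.
print(needs_local_copy(200, 8), needs_local_copy(20, 48))
```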

TCO Analysis:

Cloud backup is primarily used today to replace on-site tape backup and tape vaulting for small or midsize businesses (SMBs) and branch offices. Therefore, it is important to have a thorough understanding of the existing tape backup infrastructure and its associated costs, including acquisition and maintenance costs of hardware, software and services, as well as operational costs, such as daily tape administration and any costs associated with a tape vaulting service.

Cloud backup TCO could be straightforward or somewhat complex depending on which service provider is chosen. The most established cloud backup providers today are online backup providers that have existed for many years. Some on-premises-focused backup software providers have also created cloud services. These providers usually own cloud data centers or rent co-location facilities to control the end-to-end experience. The pricing model is usually a monthly or annual license fee based on capacity consumed at the target in the cloud, although a few charge by server without capacity considerations. Some prices are all-inclusive with no separate charges for agents and bandwidth, while others may charge for server agents separately and charge extra for optional bandwidth upgrades. Other optional services, such as replication to another cloud data center and offline shipping, may incur additional charges. Cloud backup providers using inexpensive public cloud storage infrastructure, such as Amazon S3 and Microsoft Azure, are mostly targeting consumer or PC backup today, with very limited server backup focus. They may also have more complex charging mechanisms with multiple cost components and multiple billing accounts.

While SMBs typically don’t own a secondary site for disaster recovery and want to leverage cloud backup for the DR purpose as well, large enterprises do have a different option other than cloud backup — replicating their branch-office server data to their data center for consolidated central backup. TCO analysis of replicating server data from a branch office to a corporate data center usually includes the following:

  • Replication software
  • Replication target hardware
  • Maintenance of replication software and hardware
  • Additional backup cost (software, hardware and maintenance) at the data center due to increased backup data volume
  • Extra data management time/cost incurred by data center IT staff
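A TCO comparison across the three approaches can be laid out as a simple spreadsheet-style calculation over a fixed horizon. Every cost figure below is a placeholder assumption, to be replaced with real quotes and internal cost data:

```python
# Sketch of a 3-year TCO comparison; all figures are assumed placeholders.
YEARS = 3

tape_local = {"hardware": 8000, "software": 3000, "maintenance_per_year": 1500,
              "admin_per_year": 6000, "vaulting_per_year": 2400}
cloud = {"per_gb_month": 0.15, "protected_gb": 400, "seeding": 500, "agents": 1200}
replication = {"software": 5000, "target_hardware": 7000, "maintenance_per_year": 1800,
               "dc_backup_per_year": 3000, "staff_time_per_year": 2000}

# Local tape: up-front acquisition plus recurring maintenance, admin, vaulting
tco_tape = (tape_local["hardware"] + tape_local["software"]
            + YEARS * (tape_local["maintenance_per_year"]
                       + tape_local["admin_per_year"]
                       + tape_local["vaulting_per_year"]))
# Cloud: one-time seeding and agents plus capacity-based monthly fees
tco_cloud = (cloud["seeding"] + cloud["agents"]
             + YEARS * 12 * cloud["per_gb_month"] * cloud["protected_gb"])
# Branch-to-data-center replication: software, target hardware, recurring costs
tco_repl = (replication["software"] + replication["target_hardware"]
            + YEARS * (replication["maintenance_per_year"]
                       + replication["dc_backup_per_year"]
                       + replication["staff_time_per_year"]))
print(f"tape: {tco_tape}, cloud: {tco_cloud:.0f}, replication: {tco_repl}")
```

The structure matters more than the placeholder numbers: each approach mixes one-time and recurring costs differently, which is exactly why the comparison must be run over a multi-year horizon.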

Factors other than TCO

If the TCO analyses do not reveal distinctive cost differences among the various approaches, the decision will rest on less tangible considerations, such as whether internal employees or IT staff could spend the same time tackling more important tasks, or whether offloading the backup job to the cloud will increase their happiness and loyalty to the company.

When factoring in the costs associated with cloud-based backup solutions, it is important to look at the cost at scale. The cost premium of some cloud implementations can look attractive when a smaller amount of data is being protected, but if the larger organization's needs (e.g., more servers and/or applications per server) or fast future growth are factored in, the economics of cloud as opposed to an on-premises solution may diminish.
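A toy model makes the scale effect concrete: cloud spend grows linearly with protected capacity, while on-premises spend is dominated by a fixed outlay. All figures below are hypothetical:

```python
# Why cloud economics can diminish at scale (hypothetical cost model).

def cloud_cost_3yr(gb: float, per_gb_month: float = 0.15) -> float:
    """Three-year cloud spend: purely capacity-based (assumed rate)."""
    return gb * per_gb_month * 12 * 3

def onprem_cost_3yr(gb: float, fixed: float = 15000, per_gb: float = 2) -> float:
    """Three-year on-premises spend: large fixed outlay, small per-GB cost
    (both assumed)."""
    return fixed + gb * per_gb

for gb in (500, 5000, 10000):
    print(f"{gb:>6} GB  cloud: {cloud_cost_3yr(gb):>8.0f}  "
          f"on-prem: {onprem_cost_3yr(gb):>8.0f}")
```

Under these assumptions cloud wins comfortably at 500GB but loses well before 10TB, illustrating why the crossover point should be checked against projected growth, not just today's footprint.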

In the years to come, Gartner expects the hybrid approach of combining a local disk backup target and cloud backup to become commonplace. It could take the form of an online backup service with an on-premises backup/restore appliance, or of on-premises backup software writing data to public or private backup clouds. The tipping point for choosing a backup-as-a-service (BaaS) approach versus expanding a traditional on-premises backup solution will come down to:

  • Restore time SLAs
  • TCO of the solution alternatives at scale
  • Data security concerns
  • Maturity of cloud backup solutions
