Decision Maker

Backup 2.0: The Way Forward

Old habits can die hard, but is the current way you're doing backup really the way it should be done?

In a recently completed book, "Definitive Guide to Windows Application and Server Backup 2.0" (Realtime Publishers, 2010), I postulated a "mission statement" for backup and recovery: Backups should prevent us from losing any data or losing any work, and ensure that we always have access to our data with as little downtime as possible.

But here's the truth: Traditional backup and recovery products don't typically do a very good job of meeting this simple statement.

Traditional backup and recovery has essentially relied on snapshots: grabbing the data at a certain point in time and dumping it to tape as fast as possible, so that we can protect as much data as possible within a limited backup window. Sometimes, our backup windows are so small and the data so large that we have to rely on differential and incremental backups, which grab the data faster but take even longer when it's time to perform a recovery. In the book, I coined the term "Backup 1.0" for this old-school style of backup, which has remained basically unchanged since the 1960s.
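To see why those faster backups cost you at recovery time, here's a rough sketch -- purely illustrative Python, not from any product -- of the restore chain each scheme demands, assuming a full backup on day zero and one backup per day after that:

def restore_chain(day, scheme):
    """Return the backup sets needed, in order, to recover day N."""
    if scheme == "full":
        # one set to restore, but every nightly backup is huge
        return ["full(day %d)" % day]
    if scheme == "differential":
        # a differential captures everything changed since the last full,
        # so recovery always needs exactly two sets
        return ["full(day 0)"] if day == 0 else ["full(day 0)", "diff(day %d)" % day]
    if scheme == "incremental":
        # an incremental captures only what changed since the previous backup,
        # so the recovery chain grows every day until the next full
        return ["full(day 0)"] + ["incr(day %d)" % d for d in range(1, day + 1)]
    raise ValueError(scheme)

for scheme in ("full", "differential", "incremental"):
    print(scheme, "->", restore_chain(5, scheme))

Run it for day five and the incremental scheme needs the full plus five incrementals restored in order -- exactly the "even longer to perform a recovery" problem.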

We Can Do Better
I began using the term "Backup 2.0" to refer to a new way of thinking about backups. Backup 2.0 is fundamentally the concept of continuous data protection, where our servers and applications are backed up in real time or near-real time, so we never really have any at-risk data. A Backup 2.0 solution provides a way to reconstruct anything up to and including an entire disk volume to a very specific point in time, so that we can "roll back" a server to that point in time, or just access particular files or objects from that point in time without actually restoring the data anywhere.

The way this works technically is typically through a file system "shim," something supported in Windows Server, and the same technology used to implement third-party disk quota systems. The shim is just a sort of file system driver that gets notified of every disk change at the block level. The shim can grab each disk block as it changes, and transmit that information -- along with a timestamp -- to a central backup server. The backup server can do fancy stuff like de-duplication and compression, if necessary, so that the backups are smaller (potentially much smaller) than the source data.
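Here's what that journal might look like in the abstract -- a minimal Python sketch of my own, not any vendor's actual engine, assuming the shim reports each changed block to a backup server that de-duplicates by content hash:

import hashlib
import time
import zlib

class BlockJournal:
    def __init__(self):
        self.store = {}    # content hash -> compressed block payload
        self.journal = []  # (timestamp, volume, block offset, content hash)

    def record_change(self, volume, offset, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.store:                  # de-duplication: keep each unique block once
            self.store[digest] = zlib.compress(data)  # compression shrinks it further
        self.journal.append((time.time(), volume, offset, digest))

j = BlockJournal()
j.record_change("C:", 4096, b"same contents" * 100)
j.record_change("C:", 8192, b"same contents" * 100)  # duplicate data: journaled twice, stored once
print(len(j.journal), "changes recorded,", len(j.store), "unique blocks stored")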

Most importantly, the backup server can reconstruct disk volumes to a specific point in time by simply assembling the disk blocks leading up to that point in time. With the right tools, you could mount a backup image and browse it using Windows Explorer. If the solution had the right knowledge of database structures for products like SharePoint, Exchange or SQL Server, you could restore anything from an individual message or document up to an entire data store, all to a specific point in time -- and all much more rapidly than streaming that same information from tape (you'd likely still make copies of the backup data to tape for off-site storage, but those tapes wouldn't be your first line of defense).
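Continuing that sketch, point-in-time reconstruction is conceptually just a walk through the journal, keeping the newest version of each block written at or before the moment you care about (again, an illustration of the idea, not a real recovery engine):

import zlib

def reconstruct(backup, volume, point_in_time):
    """Return {block offset: raw block bytes} for the volume as of point_in_time."""
    latest = {}  # block offset -> (timestamp, content hash)
    for ts, vol, offset, digest in backup.journal:  # expects the BlockJournal sketched above
        if vol == volume and ts <= point_in_time:
            if offset not in latest or ts > latest[offset][0]:
                latest[offset] = (ts, digest)
    return {offset: zlib.decompress(backup.store[digest])
            for offset, (ts, digest) in latest.items()}

Mount-and-browse, single-object restores and application awareness are all layers on top of that same idea: the blocks for any point in time are already sitting on the backup server, waiting to be assembled.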

Habits Are Horrible
I guess the real lesson here is that old habits -- like the backup techniques we've relied on for more than 40 years -- can die hard. But can you honestly say that you're satisfied with your old-school backup techniques? That you yearn to dig through tape indexes and wait for data to stream off disk? That you've never been let down by a corrupted tape, or a missing tape, or data that was lost in between backups? We should be constantly questioning the shortcomings of our technologies and processes, constantly defining our "pie in the sky" wishes for how they should work, and constantly pressuring vendors to deliver newer and better techniques and technologies.

About the Author

Don Jones is a 12-year industry veteran, author of more than 45 technology books and an in-demand speaker at industry events worldwide. His broad technological background, combined with his years of managerial-level business experience, make him a sought-after consultant by companies that want to better align their technology resources to their business direction. Jones is a contributor to TechNet Magazine and Redmond, and writes a blog at ConcentratedTech.com.

Reader Comments:

Wed, Jan 12, 2011 phil cali

You can get pretty close with Windows Server's versioning of files for network shares; as for the server itself, maybe a hot standby or a VM running from a SAN if hardware is the culprit. E-mail is easy with an archive product like GFI and Exchange deletion retention policies. I don't worry too much about desktops, but I do snapshot important ones using Retrospect, with 8 TB of desktop hard drives and a NAS device for longer-term retention. Shared desktops store info on network drives, which are actively backed up during the day. Acronis is good for snaps of live servers, but then again you're talking about one-time snaps. I still don't have a good failover printer setup, though, besides two distinct servers with the same printer names. For databases there are snaps and log shipping for hot standbys and master-slave setups. I'm not sure how economically feasible true second-by-second backups are, as constantly filing block-level changes would chew up disk space and processor time -- and isn't that what RAID is for? Server config changes and driver updates are always worrisome, but I usually do these after a snapshot anyway, in case I can't roll them back myself. Backup solutions that can produce a VM from a backup should greatly shorten downtime, but I haven't forked over the money for these yet, as most of my servers are virtual now anyway.
