
To execute a disaster recovery plan in a timely manner, data growth must be considered within the overall scope of the plan. A variety of technologies are specifically engineered to control data growth for businesses of all sizes. Implementing them can help cut down on recovery times and reduce the high costs associated with maintaining a disaster recovery plan.

Data Deduplication
A major contributing factor to data growth is the continual duplication of stored data. Current data deduplication technology works at the data block level: it identifies identical blocks on a disk, keeps a single master copy of each, and replaces the duplicates with much smaller pointers that refer back to that master block. To improve results further, deduplication can be combined with more traditional file compression techniques.
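As a rough illustration, the sketch below shows how block-level deduplication can work in principle. The class and method names are invented for this example and do not correspond to any particular product; real systems add persistence, variable-length chunking, and reference counting.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size for this sketch


class DedupStore:
    """Minimal block-level deduplication store: each unique block is kept once,
    and every file is recorded as an ordered list of pointers (hashes) to blocks."""

    def __init__(self):
        self.blocks = {}  # block hash -> block bytes (the master copy)
        self.files = {}   # file name  -> list of block hashes

    def write_file(self, name, data):
        pointers = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if an identical one is not already present.
            self.blocks.setdefault(digest, block)
            pointers.append(digest)
        self.files[name] = pointers

    def read_file(self, name):
        # Reassemble the file by following its pointers back to the master blocks.
        return b"".join(self.blocks[h] for h in self.files[name])

    def stored_bytes(self):
        # Physical space used: only the unique blocks count.
        return sum(len(b) for b in self.blocks.values())
```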

It is worth noting that data deduplication is significantly more effective on unstructured data, such as employee email and file shares. For example, when an important internal email is sent out, most employees keep a copy in their inboxes. Instead of storing the same message over and over, deduplication stores it only once. This substantially reduces the volume of data that IT personnel must back up and restore when executing the disaster recovery plan.
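Continuing the hypothetical sketch above, a short usage example shows how many copies of the same email collapse to roughly one stored instance (file names and sizes are made up).

```python
# Ten mailboxes each keep a copy of the same ~12 KB internal email.
store = DedupStore()
email = b"Subject: Policy update\n" + b"x" * 12_000

for i in range(10):
    store.write_file(f"inbox_{i}/policy_update.eml", email)

logical = 10 * len(email)        # what users appear to have stored
physical = store.stored_bytes()  # what the store actually holds
print(logical, physical)         # physical is roughly the size of a single copy

# Any mailbox can still read its copy back, reconstructed from the master blocks.
assert store.read_file("inbox_3/policy_update.eml") == email
```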

Data Management Policies
An easy way to minimize data growth is to develop data management policies, including data archiving and deletion policies. These policies are aimed primarily at older data that is rarely, if ever, used in current operations. Archiving or deleting that data lessens the load IT staff must address during the disaster recovery process. Typically, old data is archived on an offsite set of servers or on offline media. It is vital that these policies take the importance of the data into consideration. Financial records, for example, may need to be kept indefinitely, but they usually do not need to remain in active storage for more than three to five years.
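A minimal sketch of an age-based retention pass is shown below. The retention thresholds, directory layout, and the ".fin" marker for records that are never deleted are all assumptions made for illustration, not recommended policy values.

```python
from datetime import datetime, timedelta
from pathlib import Path
import shutil

# Illustrative policy values only; real retention periods depend on the business
# and on regulatory requirements.
ARCHIVE_AFTER = timedelta(days=3 * 365)  # move out of active storage after ~3 years
DELETE_AFTER = timedelta(days=7 * 365)   # purge from the archive after ~7 years
NEVER_DELETE = {".fin"}                  # e.g. financial records kept indefinitely


def apply_retention_policy(active_dir, archive_dir, now=None):
    now = now or datetime.now()

    # Old data leaves active storage so it no longer slows recovery.
    for path in Path(active_dir).rglob("*"):
        if not path.is_file():
            continue
        age = now - datetime.fromtimestamp(path.stat().st_mtime)
        if age > ARCHIVE_AFTER:
            shutil.move(str(path), Path(archive_dir) / path.name)

    # Archived data past its retention period is deleted, unless flagged to keep.
    for path in Path(archive_dir).rglob("*"):
        if not path.is_file() or path.suffix in NEVER_DELETE:
            continue
        age = now - datetime.fromtimestamp(path.stat().st_mtime)
        if age > DELETE_AFTER:
            path.unlink()
```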

Storage Tiering
Storage tiering is similar to creating data management policies, except that the focus is on the importance of the data rather than its age. During the disaster recovery phase, mission-critical data must be available first, so with storage tiering the most important data always sits in the first storage tier. The second tier holds less important data, which is addressed immediately after the mission-critical data has been recovered; typically this is data that is required for regulatory purposes but is accessed only twice a year or less.
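The sketch below illustrates how a simple tier label can drive recovery order. The dataset names and tier assignments are hypothetical and exist only to show the ordering idea.

```python
from dataclasses import dataclass


@dataclass
class Dataset:
    name: str
    tier: int  # 1 = mission-critical, 2 = less important (e.g. regulatory archive)


def recovery_order(datasets):
    """Restore tier 1 first, then tier 2, and so on."""
    return sorted(datasets, key=lambda d: d.tier)


catalog = [
    Dataset("regulatory-archive", 2),
    Dataset("customer-orders-db", 1),
    Dataset("internal-wiki", 2),
]

for ds in recovery_order(catalog):
    print(f"restoring tier {ds.tier}: {ds.name}")
# The mission-critical customer-orders-db is restored before the tier 2 datasets.
```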

Data growth is a driving force behind how quickly the disaster recovery process can be completed. Limiting data growth speeds up recovery times and reduces the expense of storing and protecting large amounts of data. New technologies continue to emerge, but data deduplication, data management policies, and storage tiering have already proved effective when used properly.

Frank Lobb believes that companies in all industries should consider disaster recovery services. These services are critical for keeping businesses running after system failures, and data centers offering them provide the redundancy that makes rapid recovery possible.