Accelerating content-defined-chunking based data deduplication by exploiting parallelism

Wen Xia, Dan Feng, Hong Jiang, Yucheng Zhang, Victor Chang, Xiangyu Zou

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)
283 Downloads (Pure)

Abstract

Data deduplication, a data reduction technique that efficiently detects and eliminates redundant data chunks and files, has been widely applied in large-scale storage systems. Most existing deduplication-based storage systems employ content-defined chunking (CDC) and secure-hash-based fingerprinting (e.g., SHA1) to remove redundant data at the chunk level (e.g., 4 KB/8 KB chunks), both of which are extremely compute-intensive and thus time-consuming for storage systems. Therefore, we present P-Dedupe, a pipelined and parallelized data deduplication system that accelerates the deduplication process by dividing it into four stages (i.e., chunking, fingerprinting, indexing, and writing), pipelining these four stages at the granularity of chunks and files (the data units processed by deduplication), and then parallelizing the CDC and secure-hash-based fingerprinting stages to further alleviate the computation bottleneck. More importantly, to efficiently parallelize CDC while respecting both the maximum and minimum chunk-size requirements, we take inspiration from the MapReduce model: we first split the data stream into several segments (i.e., “Map”), where each segment is chunked by CDC in parallel with an independent thread, and then re-chunk and join the boundaries of these segments (i.e., “Reduce”) to preserve the chunking effectiveness of the parallelized CDC. Experimental results of P-Dedupe with eight datasets on a quad-core Intel i7 processor suggest that P-Dedupe is able to accelerate deduplication throughput nearly linearly by exploiting parallelism in the CDC-based deduplication process, at the cost of only a 0.02% decrease in the deduplication ratio. Our work contributes to big data science by ensuring that all files pass through the deduplication process quickly and thoroughly, so that identical data is processed and analyzed only once rather than multiple times.
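
The segment-split/boundary-re-chunk idea can be illustrated with a short sketch. The Python below is not the authors' implementation: the Gear-style rolling hash, the chunk-size parameters, and the seam handling are simplified assumptions made purely for illustration. The "Map" step chunks each segment in its own thread; the "Reduce" step re-chunks the data spanning each segment boundary so the final cut points do not depend on where the stream was split.

```python
# Minimal sketch of MapReduce-style parallel content-defined chunking (CDC).
# Not the P-Dedupe code; parameters and hash are illustrative assumptions.
import os
import random
from concurrent.futures import ThreadPoolExecutor

random.seed(1)
GEAR = [random.getrandbits(32) for _ in range(256)]          # Gear-style lookup table
MIN_SIZE, AVG_MASK, MAX_SIZE = 2048, 0x1FFF, 16384            # hypothetical chunk-size limits

def cdc(data, base=0):
    """Toy content-defined chunker; returns cut points as absolute offsets."""
    cuts, last, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF                 # rolling hash over recent bytes
        size = i - last + 1
        if size >= MIN_SIZE and ((h & AVG_MASK) == 0 or size >= MAX_SIZE):
            cuts.append(base + i + 1)
            last, h = i + 1, 0
    return cuts

def parallel_cdc(data, workers=4):
    n = len(data)
    seg = max(1, n // workers)
    starts = list(range(0, n, seg))
    # "Map": chunk each segment independently, one thread per segment.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        per_seg = list(pool.map(lambda s: cdc(data[s:s + seg], s), starts))
    cuts = per_seg[0]
    for i in range(1, len(per_seg)):
        prev_last = cuts[-1] if cuts else 0                   # last cut before the seam
        nxt = per_seg[i]
        nxt_first = nxt[0] if nxt else min(starts[i] + seg, n)
        # "Reduce": re-chunk the region spanning the segment boundary so the
        # final cut points no longer depend on where the stream was split.
        seam = cdc(data[prev_last:nxt_first], prev_last)
        cuts = cuts + [c for c in seam if c < nxt_first] + nxt
    return cuts

if __name__ == "__main__":
    blob = os.urandom(1 << 20)                                # 1 MiB of random test data
    print(len(cdc(blob)), "chunks (serial) vs", len(parallel_cdc(blob)), "chunks (parallel)")
```

This sketch covers only the chunking stage; in the system described by the abstract, secure-hash fingerprinting is parallelized as well, and the four stages (chunking, fingerprinting, indexing, writing) are pipelined over chunks and files.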

Original language: English
Pages (from-to): 406-418
Number of pages: 13
Journal: Future Generation Computer Systems
Volume: 98
DOIs
Publication status: Published - 29 Mar 2019
