Characterizing and subsetting big data workloads
Paper in proceedings, 2014

© 2014 IEEE. Big data benchmark suites must include a diversity of data and workloads to be useful in fairly evaluating big data systems and architectures. However, using truly comprehensive benchmarks poses great challenges for the architecture community. First, we need to thoroughly understand the behaviors of a variety of workloads. Second, our usual simulation-based research methods become prohibitively expensive for big data. As big data is an emerging field, more and more software stacks are being proposed to facilitate the development of big data applications, which aggravates these challenges. In this paper, we first use Principal Component Analysis (PCA) to identify the most important characteristics from 45 metrics to characterize big data workloads from BigDataBench, a comprehensive big data benchmark suite. Second, we apply a clustering technique to the principal components obtained from the PCA to investigate the similarity among big data workloads, and we verify the importance of including different software stacks for big data benchmarking. Third, we select seven representative big data workloads by removing redundant ones and release the BigDataBench simulation version, which is publicly available from http://prof.ict.ac.cn/BigDataBench/simulatorversion/.
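The subsetting methodology summarized in the abstract (standardize the per-workload metrics, reduce them with PCA, cluster in the reduced space, and keep one representative workload per cluster) can be sketched roughly as below. This is an illustrative sketch, not the authors' released tooling: the synthetic metric matrix, the workload names, the 90% variance threshold, and the use of K-means with k=7 are assumptions made only for the example (the choice of clustering technique and cluster count would normally be driven by the data, e.g. a BIC-style criterion).

```python
# Illustrative sketch (assumptions, not the paper's code): subset workloads by
# running PCA on per-workload metrics and clustering in the reduced space,
# then keeping the workload nearest each cluster centroid as representative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: 20 workloads x 45 metrics (IPC, cache miss ratios, ...).
workloads = [f"workload_{i}" for i in range(20)]
metrics = rng.random((20, 45))

# Standardize so no single metric dominates, then keep the principal
# components that explain most of the variance (threshold is illustrative).
X = StandardScaler().fit_transform(metrics)
pca = PCA(n_components=0.90, svd_solver="full")
X_pca = pca.fit_transform(X)

# Cluster the workloads in PC space; k=7 mirrors the seven representative
# workloads reported in the paper, but is an assumption for this sketch.
k = 7
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_pca)

# Per cluster, pick the workload closest to the centroid as the representative.
representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(X_pca[members] - km.cluster_centers_[c], axis=1)
    representatives.append(workloads[members[np.argmin(dists)]])

print("Representative workloads:", representatives)
```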

Authors

Z. Jia

Chinese Academy of Sciences

J. Zhan

Chinese Academy of Sciences

L. Wang

Chinese Academy of Sciences

R. Han

Chinese Academy of Sciences

Sally A McKee

Chalmers, Computer Science and Engineering, Computer Engineering

Q. Yang

Chinese Academy of Sciences

C. Luo

Chinese Academy of Sciences

J. Li

Chinese Academy of Sciences

IISWC 2014 - IEEE International Symposium on Workload Characterization

191-201 (pages)
9781479964536 (ISBN)

Subject categories

Computer and Information Science

DOI

10.1109/IISWC.2014.6983058

More information

Created

2017-10-07