Thomas Lux, Tyler Chang, Layne T. Watson
Performance variability is an important concern in high-performance computing (HPC) systems. HPC performance variability is often complex because its sources interact and are distributed throughout the system stack. For example, the performance variability of I/O throughput can be affected by factors such as CPU frequency, the number of I/O threads, file size, and record size. In this paper, we focus on the I/O throughput variability across multiple executions of a benchmark program. For a given system configuration, the distribution of throughputs from run to run is of interest. We conduct large-scale experiments and collect a massive amount of data to study the distribution of I/O throughput under tens of thousands of system configurations. Despite normality often being assumed in the literature, our statistical analysis reveals that the performance variability is not normally distributed under most system configurations. Instead, multimodal distributions are common for many system configurations. We propose the use of mixture distributions to describe the multimodal behavior. Various underlying parametric distributions, such as the normal, gamma, and Weibull, are considered. We apply an expectation–maximization (EM) algorithm for parameter estimation and use the Bayesian information criterion (BIC) for parametric model selection. We also illustrate how to use the estimated mixture distribution to calculate the number of runs needed for future experiments on variability analysis. The paper provides a useful tool set for studying the behavior of performance variability.
- Date of publication: January 27, 2020
- Journal: Journal of Parallel and Distributed Computing
- Page number(s): 87-98
Li Xu, Yueyao Wang, Thomas Lux, Tyler H. Chang, Jon Bernard, Bo Li, Yili Hong, Kirk W. Cameron, Layne T. Watson: Modeling I/O performance variability in high-performance computing systems using mixture distributions. J. Parallel Distributed Comput. 139: 87-98 (2020)