Experimental analysis of the root causes of performance evaluation results: A backfilling case study

Dror G. Feitelson*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The complexity of modern computer systems may enable minor variations in performance evaluation procedures to actually determine the outcome. Our case study concerns the comparison of two parallel job schedulers, using different workloads and metrics. It shows that metrics may be sensitive to different job classes, and not measure the performance of the whole workload in an impartial manner. Workload models may implicitly assume that some workload attribute is unimportant and does not warrant modeling; this too can turn out to be wrong. As such effects are hard to predict, a careful experimental methodology is needed in order to find and verify them.
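The following is a minimal, hypothetical sketch (not taken from the paper) of the kind of metric sensitivity the abstract describes: two common scheduling metrics, mean response time and mean bounded slowdown, computed over the same made-up workload of short and long jobs. The job values, the threshold TAU, and the choice of these two particular metrics are illustrative assumptions.

```python
# Hypothetical illustration: two scheduling metrics over one synthetic workload,
# each dominated by a different job class. All numbers are made up.

# Each job is (runtime, wait_time) in seconds.
short_jobs = [(10, 50), (20, 40), (15, 60)]     # many short jobs
long_jobs = [(36000, 600), (72000, 1200)]       # a few long jobs
workload = short_jobs + long_jobs

TAU = 10.0  # bounding threshold for slowdown, a commonly used value

def response_time(runtime, wait):
    return wait + runtime

def bounded_slowdown(runtime, wait):
    # The bounded denominator keeps very short jobs from producing
    # astronomically large slowdown values.
    return max((wait + runtime) / max(runtime, TAU), 1.0)

mean_rt = sum(response_time(r, w) for r, w in workload) / len(workload)
mean_bsld = sum(bounded_slowdown(r, w) for r, w in workload) / len(workload)

print(f"mean response time   : {mean_rt:.1f} s")   # dominated by the long jobs
print(f"mean bounded slowdown: {mean_bsld:.2f}")   # dominated by the short jobs
```

Under these assumptions, the response-time average is driven almost entirely by the two long jobs, while the bounded-slowdown average is driven by the short jobs, so a scheduler change that helps one class can move the two metrics in opposite directions.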

Original language: English
Pages (from-to): 175-182
Number of pages: 8
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 16
Issue number: 2
DOIs
State: Published - Feb 2005

Bibliographical note

Funding Information:
This research was supported in part by the Israel Science Foundation (grant nos. 219/99 and 167/03). Many thanks are due to the people who deposited their workload logs and models in the Parallel Workloads Archive, specifically, the Cornell Theory Center, the HPC Systems group of the San Diego Supercomputer Center, and Joefon Jann.

Keywords

  • Backfilling
  • Experimental verification
  • Parallel job scheduling
  • Performance evaluation
  • Sensitivity of results
  • Simulation

