The performance of a computer system depends on the characteristics of the workload it must serve: for example, if work is evenly distributed, performance will be better than if it arrives in unpredictable bursts that lead to congestion. Performance evaluations therefore require representative workloads in order to produce dependable results. This can be achieved by collecting data about real workloads and creating statistical models that capture their salient features. This survey covers methodologies for doing so. Emphasis is placed on problematic issues such as dealing with correlations between workload parameters, heavy-tailed distributions, and rare events. These considerations lead to the notion of structural modeling, in which the general statistical model of the workload is replaced by a model of the process that generates the workload.
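The abstract's point about heavy tails can be made concrete: in a heavy-tailed workload a small number of extreme items dominate the total, which is exactly what makes bursty workloads hard to summarize with simple statistics. The following is a minimal sketch (not taken from the survey; the distribution parameters are illustrative) that compares a light-tailed exponential sample with a heavy-tailed Pareto sample and measures how much of the total mass the ten largest samples contribute:

```python
import random

random.seed(0)
n = 100_000

# Light-tailed case: exponential samples with mean 1.
exp_samples = [random.expovariate(1.0) for _ in range(n)]

# Heavy-tailed case: Pareto samples with shape a = 1.5 (illustrative choice);
# for a <= 2 the variance is infinite, so extreme values dominate.
par_samples = [random.paretovariate(1.5) for _ in range(n)]

def tail_mass(samples, k=10):
    """Fraction of the total sum contributed by the k largest samples."""
    return sum(sorted(samples)[-k:]) / sum(samples)

print(f"exponential: top-10 share = {tail_mass(exp_samples):.4f}")
print(f"pareto:      top-10 share = {tail_mass(par_samples):.4f}")
```

In the exponential case the ten largest of 100,000 samples contribute a negligible share of the total; in the Pareto case they contribute orders of magnitude more, illustrating why rare events cannot be ignored when modeling such workloads.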
Title of host publication: Performance Evaluation of Complex Systems
Subtitle of host publication: Techniques and Tools - Performance 2002 Tutorial Lectures
Editors: Maria Carla Calzarossa, Salvatore Tucci
Publication status: Published - 2002
Event: IFIP WG 7.3 International Symposium on Computer Modeling, Measurement, and Evaluation, Performance 2002 - Rome, Italy
Duration: 23 Sep 2002 → 27 Sep 2002
Publication series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note: Publisher Copyright © Springer-Verlag Berlin Heidelberg 2002.