Distributions Mean More than Means: Simple Statistics in Benchmark Analysis

October 3, 2007
Dr. John Mashey

Abstract:
For decades, starting in the mid-1980s, computer benchmarkers fought a War of Means, arguing over the proper uses of different means. One would think this basic issue of computer performance analysis would have been resolved long ago, but it has really only started to get fixed recently, as in Hennessy & Patterson's Computer Architecture: A Quantitative Approach, 4th Edition. The argument has persisted only because it was wrongly framed years ago, and people's mindsets long stayed within that framing, which tended to use algebra, but not statistics. It is common practice to say "Performance is not a single number," and then give a single number anyway. Long ago, we chose the Geometric Mean for the SPEC CPU benchmarks, correctly, but did not really understand the strongest reason for that choice. No matter how badly people want a single number for performance, performance is better described by a statistical distribution than by a mean alone, and, as usual, distributions yield more insight for computer architecture, as they do in other areas of science. This talk offers ideas for better performance and power analysis techniques, applying simple statistics, with emphasis on the lognormal distribution.
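As a rough illustration of the abstract's point (not part of the talk itself), the sketch below uses hypothetical SPEC-style speed ratios to contrast the arithmetic mean with the geometric mean, and then reports a lognormal-style summary: a geometric mean as the center plus a multiplicative standard deviation as the spread. The ratios are invented for illustration only.

```python
import math
import statistics

# Hypothetical per-benchmark speed ratios ("times faster than the reference
# machine") -- illustrative values only, not real SPEC results.
ratios = [12.1, 8.4, 15.7, 9.9, 30.2, 7.3, 11.0, 18.5]

# Arithmetic mean of ratios: dominated by a few large ratios and sensitive
# to the choice of reference machine.
arith = statistics.mean(ratios)

# Geometric mean: the SPEC choice; equivalent to averaging in log space,
# which is the natural summary if ratios are roughly lognormally distributed.
logs = [math.log(r) for r in ratios]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)
geo = math.exp(mu)

print(f"arithmetic mean    = {arith:.2f}")
print(f"geometric mean     = {geo:.2f}")
# Lognormal-style summary: report spread as well as center, rather than
# collapsing performance to a single number.
print(f"multiplicative std = {math.exp(sigma):.2f} "
      f"(typical range roughly {geo / math.exp(sigma):.1f} to "
      f"{geo * math.exp(sigma):.1f})")
```

The point of the summary line is the abstract's thesis: two machines can share the same geometric mean yet have very different spreads across benchmarks, and only the distribution reveals that.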


Bio:
Dr. Mashey is a consultant for venture capitalists and technology companies. He is "an ancient UNIX person," having started work on it at Bell Labs in 1973 and continuing there for 10 years. He has been in Silicon Valley since 1983, at Convergent Technologies, MIPS Computer Systems, and Silicon Graphics, where he ended as VP and Chief Scientist. He was one of the founders of the SPEC benchmarking group, served as Hot Chips Program Co-Chair, and has given more than 500 public talks on software engineering, RISC and systems architecture, and performance analysis. He is also a Trustee of the Computer History Museum.