Graph expansion and communication costs of fast matrix multiplication

Grey Ballard, James Demmel, Olga Holtz, Oded Schwartz*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

62 Scopus citations

Abstract

The communication cost of algorithms (also known as I/O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen's and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. In the sequential case, where the processor has a fast memory of size M, too small to store three n-by-n matrices, the lower bound on the number of words moved between fast and slow memory is, for a large class of matrix multiplication algorithms, Ω((n/√M)^ω₀ · M), where ω₀ is the exponent in the arithmetic count (e.g., ω₀ = lg 7 for Strassen's algorithm, and ω₀ = 3 for conventional matrix multiplication). With p parallel processors, each with fast memory of size M, the lower bound is asymptotically lower by a factor of p. These bounds are attainable both for sequential and for parallel algorithms, and hence are optimal.
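As a reading aid, the two bounds stated in the abstract can be written out in asymptotic notation as below. This is only a restatement of the abstract's claims, with IO and IO_p introduced here as labels for the sequential and per-processor parallel communication costs; they are not notation from the paper itself.

% Restatement (not from the paper) of the abstract's lower bounds.
% IO(n, M)   : words moved between fast and slow memory, sequential case
% IO_p(n, M) : words communicated per processor when p processors are used
% \omega_0   : exponent of the arithmetic count (lg 7 for Strassen, 3 for classical)
\[
  IO(n, M) = \Omega\!\left( \left( \frac{n}{\sqrt{M}} \right)^{\omega_0} \cdot M \right),
  \qquad
  IO_p(n, M) = \Omega\!\left( \left( \frac{n}{\sqrt{M}} \right)^{\omega_0} \cdot \frac{M}{p} \right).
\]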

Original language: English
Article number: 32
Journal: Journal of the ACM
Volume: 59
Issue number: 6
DOIs
State: Published - Dec 2012
Externally published: Yes

Keywords

  • Communication-avoiding algorithms
  • Fast matrix multiplication
  • I/O-complexity
