TY - JOUR
T1 - Hypergraph partitioning for sparse matrix-matrix multiplication
AU - Ballard, Grey
AU - Druinsky, Alex
AU - Knight, Nicholas
AU - Schwartz, Oded
N1 - Publisher Copyright:
© 2016 ACM.
PY - 2016/12
Y1 - 2016/12
N2 - We propose a fine-grained hypergraph model for sparse matrix-matrix multiplication (SpGEMM), a key computational kernel in scientific computing and data analysis whose performance is often communication bound. This model correctly describes both the interprocessor communication volume along a critical path in a parallel computation and also the volume of data moving through the memory hierarchy in a sequential computation. We show that identifying a communication-optimal algorithm for particular input matrices is equivalent to solving a hypergraph partitioning problem. Our approach is nonzero structure dependent, meaning that we seek the best algorithm for the given input matrices. In addition to our three-dimensional fine-grained model, we also propose coarse-grained one-dimensional and two-dimensional models that correspond to simpler SpGEMM algorithms. We explore the relations between our models theoretically, and we study their performance experimentally in the context of three applications that use SpGEMM as a key computation. For each application, we find that at least one coarse-grained model is as communication efficient as the fine-grained model. We also observe that different applications have affinities for different algorithms. Our results demonstrate that hypergraphs are an accurate model for reasoning about the communication costs of SpGEMM as well as a practical tool for exploring the SpGEMM algorithm design space.
KW - Hypergraph partitioning
KW - Sparse matrix-matrix multiplication
UR - http://www.scopus.com/inward/record.url?scp=85041192786&partnerID=8YFLogxK
U2 - 10.1145/3015144
DO - 10.1145/3015144
M3 - Article
AN - SCOPUS:85041192786
SN - 2329-4949
VL - 3
SP - 1
EP - 34
JO - ACM Transactions on Parallel Computing
JF - ACM Transactions on Parallel Computing
IS - 3
ER -