Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication

Title: Communication-Optimal Parallel Recursive Rectangular Matrix Multiplication
Publication Type: Conference Paper
Year of Publication: 2013
Authors: Demmel, J., Eliahu, D., Fox, A., Kamil, S., Lipshitz, B., Schwartz, O., & Spillinger, O.
Conference Name: 27th IEEE International Parallel & Distributed Processing Symposium (IPDPS 2013)
Conference Location: Hyatt Regency Cambridge, Boston, Massachusetts, USA
Abstract

Communication-optimal algorithms are known for square matrix multiplication. Here, we obtain the first communication-optimal algorithm for all dimensions of rectangular matrices. Combining the dimension-splitting technique of Frigo, Leiserson, Prokop and Ramachandran (1999) with the recursive BFS/DFS approach of Ballard, Demmel, Holtz, Lipshitz and Schwartz (2012) allows for a communication-optimal as well as cache- and network-oblivious algorithm. Moreover, the implementation is simple: approximately 50 lines of code for the shared-memory version. Since the new algorithm minimizes communication across the network, between NUMA domains, and between levels of cache, it performs well in practice on both shared- and distributed-memory machines. We show significant speedups over existing parallel linear algebra libraries both on a 32-core shared-memory machine and on a distributed-memory supercomputer.

URL: http://www.ipdps.org/
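
The abstract describes a recursion that repeatedly splits the largest of the three matrix dimensions (the technique of Frigo et al.), combined with a BFS/DFS parallel schedule. The sketch below illustrates only the dimension-splitting recursion for the shared-memory case; it is not the authors' roughly 50-line implementation, and the function name matmul_rec, the row-major layout, and the BASE cutoff are assumptions made here for illustration. The BFS/DFS parallel scheduling and communication analysis from the paper are omitted.

/*
 * Illustrative sketch (not the authors' code): cache-oblivious
 * rectangular matrix multiplication that recursively splits the
 * largest of the three dimensions m, n, k.
 * A is m x k, B is k x n, C is m x n, all row-major with leading
 * dimensions lda, ldb, ldc. C is accumulated into (C += A*B).
 */
#include <stdio.h>

#define BASE 64  /* assumed cutoff: switch to a plain triple loop below this size */

static void matmul_rec(int m, int n, int k,
                       const double *A, int lda,
                       const double *B, int ldb,
                       double *C, int ldc)
{
    if (m <= BASE && n <= BASE && k <= BASE) {
        /* base case: straightforward triple loop */
        for (int i = 0; i < m; i++)
            for (int p = 0; p < k; p++)
                for (int j = 0; j < n; j++)
                    C[i*ldc + j] += A[i*lda + p] * B[p*ldb + j];
        return;
    }
    if (m >= n && m >= k) {            /* m largest: split rows of A and C */
        int m1 = m / 2;
        matmul_rec(m1,     n, k, A,          lda, B, ldb, C,          ldc);
        matmul_rec(m - m1, n, k, A + m1*lda, lda, B, ldb, C + m1*ldc, ldc);
    } else if (n >= k) {               /* n largest: split columns of B and C */
        int n1 = n / 2;
        matmul_rec(m, n1,     k, A, lda, B,      ldb, C,      ldc);
        matmul_rec(m, n - n1, k, A, lda, B + n1, ldb, C + n1, ldc);
    } else {                           /* k largest: split the shared dimension */
        int k1 = k / 2;
        matmul_rec(m, n, k1,     A,      lda, B,          ldb, C, ldc);
        matmul_rec(m, n, k - k1, A + k1, lda, B + k1*ldb, ldb, C, ldc);
    }
}

int main(void)
{
    /* tiny example: C (2x3) = A (2x2) * B (2x3) */
    double A[4] = {1, 2, 3, 4};
    double B[6] = {1, 0, 1, 0, 1, 1};
    double C[6] = {0};
    matmul_rec(2, 3, 2, A, 2, B, 3, C, 3);
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 3; j++) printf("%g ", C[i*3 + j]);
        printf("\n");
    }
    return 0;
}

In the paper's parallel setting, the two halves of an m- or n-split are independent and can run concurrently (a BFS step), while the two halves of a k-split update the same block of C and require either sequential execution (a DFS step) or a reduction; the serial sketch above does not model that choice.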