Matrix Product on Heterogeneous Master-Worker Platforms

Cited by: 8
Authors
Dongarra, Jack [1 ]
Pineau, Jean-Francois [1 ]
Robert, Yves [1 ]
Vivien, Frederic [1 ]
Affiliations
[1] Univ Tennessee, Knoxville, TN 37996 USA
Source
PPOPP'08: PROCEEDINGS OF THE 2008 ACM SIGPLAN SYMPOSIUM ON PRINCIPLES AND PRACTICE OF PARALLEL PROGRAMMING | 2008
Keywords
Matrix product; limited memory; communication; multiplication
DOI
10.1145/1345206.1345217
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
This paper focuses on designing efficient parallel matrix-product algorithms for heterogeneous master-worker platforms. While the matrix product is well understood for homogeneous 2D arrays of processors (e.g., Cannon's algorithm and the ScaLAPACK outer-product algorithm), three key hypotheses make our work original and innovative:
- Centralized data. We assume that all matrix files originate from, and must be returned to, the master. The master distributes data and computations to the workers (whereas in ScaLAPACK, the input and output matrices are assumed to be equally distributed among the participating resources beforehand). Typically, our approach is useful for speeding up MATLAB or SCILAB clients running on a server (which acts as the master and initial repository of files).
- Heterogeneous star-shaped platforms. We target fully heterogeneous platforms, where computational resources have different computing powers. Also, the workers are connected to the master by links of different capacities. This framework is realistic when deploying the application from the server, which is responsible for enrolling authorized resources.
- Limited memory. Since we investigate the parallelization of large problems, we cannot assume that full matrix column blocks can be stored in worker memory and reused for subsequent updates (as in ScaLAPACK).
We have devised efficient algorithms for resource selection (deciding which workers to enroll) and communication ordering (for both input and result messages), and we report a set of numerical experiments on a platform at our site. The experiments show that our matrix-product algorithm has smaller execution times than existing ones while also using fewer resources.
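To make the centralized, limited-memory setting concrete, the following is a toy sequential simulation of a master-worker blocked matrix product: the master owns all blocks of A, B, and C, streams slices to a chosen worker, and collects each finished C block immediately, so a worker never holds more than one A slice, one B slice, and one C block at a time. The function name, the round-robin worker choice, and the parameters (`n_workers`, `bs`) are illustrative assumptions, not the paper's actual scheduling algorithm, which weights assignments by worker speed and link bandwidth.

```python
import numpy as np

def master_worker_matmul(A, B, n_workers=2, bs=2):
    """Simulate a centralized master-worker blocked matrix product.

    Illustrative sketch only: worker assignment is round-robin, whereas
    the paper selects workers based on computing power and link capacity.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2 and m % bs == 0 and n % bs == 0 and k % bs == 0
    C = np.zeros((m, n))
    # One task per C block; each task is handed to a (simulated) worker.
    tasks = [(i, j) for i in range(0, m, bs) for j in range(0, n, bs)]
    for t, (i, j) in enumerate(tasks):
        w = t % n_workers  # worker that would receive this task (unused
        #                    here beyond bookkeeping; a real scheduler
        #                    would pick w from speed/bandwidth estimates)
        # The master streams the k dimension in slices, so the worker's
        # resident set stays within its limited memory.
        c_block = np.zeros((bs, bs))
        for p in range(0, k, bs):
            a = A[i:i+bs, p:p+bs]   # sent master -> worker w
            b = B[p:p+bs, j:j+bs]   # sent master -> worker w
            c_block += a @ b        # local update on worker w
        C[i:i+bs, j:j+bs] = c_block  # result returned to the master
    return C
```

Because every block round-trips through the master, the communication ordering of these input and result messages is exactly what the paper's algorithms optimize on heterogeneous links.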
Pages: 53-62
Page count: 10