Compiler blockability of dense matrix factorizations

Cited by: 8
Authors
Carr, S [1]
Lehoucq, RB [2]
Institutions
[1] Michigan Technol Univ, Dept Comp Sci, Houghton, MI 49931 USA
[2] Argonne Natl Lab, Div Math & Comp Sci, Argonne, IL USA
Source
ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE | 1997, Vol. 23, No. 3
Keywords
BLAS; cache optimization; Cholesky decomposition; LAPACK; LU decomposition; QR decomposition
DOI
10.1145/275323.275325
Chinese Library Classification
TP31 [Computer Software]
Subject Classification
081202; 0835
Abstract
The goal of the LAPACK project is to provide efficient and portable software for dense numerical linear algebra computations. By recasting many of the fundamental dense matrix computations in terms of calls to an efficient implementation of the BLAS (Basic Linear Algebra Subprograms), the LAPACK project has, in large part, achieved its goal. Unfortunately, an efficient implementation of the BLAS often results in machine-specific code that is not portable across multiple architectures without a significant loss in performance or a significant reoptimization effort. This article examines whether most of the hand optimizations performed on matrix factorization codes are unnecessary because they can (and should) be performed by the compiler. We believe that it is better for the programmer to express algorithms in a machine-independent form and allow the compiler to handle the machine-dependent details. This gives the algorithms portability across architectures and removes the error-prone, expensive, and tedious process of hand optimization. Although there currently exist no production compilers that can perform all the loop transformations discussed in this article, a description of current research in compiler technology is provided that will prove beneficial to the numerical linear algebra community. We show that the Cholesky and LU factorizations may be optimized automatically by a compiler to be as efficient as the hand-optimized versions found in LAPACK. We also show that the QR factorization may be optimized by the compiler to perform comparably with the hand-optimized LAPACK version on modest matrix sizes. Our approach allows us to conclude that with the advent of the compiler optimizations discussed in this article, matrix factorizations may be efficiently implemented in a BLAS-less form.
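The loop transformations the abstract alludes to can be illustrated on a simple kernel. The following C sketch is not from the paper; it shows, under the assumption of a matrix order divisible by the tile size, how strip-mining plus loop interchange (cache blocking) rewrites a naive loop nest into a blocked one, which is the kind of rewrite a blocking compiler would perform automatically on a machine-independent source.

```c
#include <math.h>

#define N  64   /* matrix order; illustrative only */
#define NB 16   /* tile size; in practice chosen so a tile fits in cache */

/* Naive triple loop computing C += A*B (row-major): elements of B are
   re-fetched from memory on every pass, so the loop is memory-bound. */
static void matmul_naive(const double *A, const double *B, double *C) {
    for (int i = 0; i < N; i++)
        for (int k = 0; k < N; k++)
            for (int j = 0; j < N; j++)
                C[i*N + j] += A[i*N + k] * B[k*N + j];
}

/* The same computation after strip-mining and interchange: each
   NB x NB tile of B is reused while it is cache-resident. */
static void matmul_blocked(const double *A, const double *B, double *C) {
    for (int ii = 0; ii < N; ii += NB)
        for (int kk = 0; kk < N; kk += NB)
            for (int jj = 0; jj < N; jj += NB)
                for (int i = ii; i < ii + NB; i++)
                    for (int k = kk; k < kk + NB; k++) {
                        double a = A[i*N + k];
                        for (int j = jj; j < jj + NB; j++)
                            C[i*N + j] += a * B[k*N + j];
                    }
}

/* Run both versions on the same inputs and return the largest
   absolute difference between their results. */
static double max_abs_diff(void) {
    static double A[N*N], B[N*N], C1[N*N], C2[N*N];
    for (int i = 0; i < N*N; i++) {
        A[i] = (double)(i % 7) - 3.0;
        B[i] = (double)(i % 5) - 2.0;
        C1[i] = C2[i] = 0.0;
    }
    matmul_naive(A, B, C1);
    matmul_blocked(A, B, C2);
    double d = 0.0;
    for (int i = 0; i < N*N; i++) {
        double e = fabs(C1[i] - C2[i]);
        if (e > d) d = e;
    }
    return d;
}
```

Both loop nests perform the identical floating-point operations in the same accumulation order; only the iteration order over tiles changes, which is why the transformation is legal for a compiler to apply without affecting results.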
Pages: 336-361 (26 pages)