We review the method of Parallel Factor Analysis, which simultaneously fits multiple two-way arrays or 'slices' of a three-way array in terms of a common set of factors with differing relative weights in each slice. Mathematically, it is a straightforward generalization of the bilinear model of factor (or component) analysis ($x_{ij} = \sum_{r=1}^{R} a_{ir} b_{jr}$) to a trilinear model ($x_{ijk} = \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr}$). Despite this simplicity, the trilinear model has an important property the two-way model lacks: if the latent factors show adequately distinct patterns of three-way variation, the model is fully identified; the orientation of the factors is uniquely determined by minimizing residual error, eliminating the need for a separate 'rotation' phase of analysis. The model can be used in several ways. It can be fit directly to a three-way array of observations with (possibly incomplete) factorial structure, or fit indirectly to the original observations via a set of covariance matrices computed from them, each matrix corresponding to a two-way subset of the data. More generally still, one can simultaneously analyze covariance matrices computed from different samples, perhaps corresponding to different treatment groups, different kinds of cases, data from different studies, and so on. To demonstrate the method, we analyze data from an experiment on right vs. left cerebral hemispheric control of the hands during various tasks. The factors found appear to correspond to the causal influences manipulated in the experiment, revealing their patterns of influence in all three ways of the data. Several generalizations of the parallel factor analysis model are currently under development, including ones that combine parallel factors with Tucker-like factor 'interactions'. Of key importance is the need to increase the method's robustness against nonstationary factor structures and qualitative (nonproportional) factor change.
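As a concrete illustration of the trilinear model $x_{ijk} = \sum_{r=1}^{R} a_{ir} b_{jr} c_{kr}$, the sketch below fits it to a three-way array by alternating least squares, a standard fitting strategy for this model. This is a minimal NumPy sketch under stated assumptions, not the paper's own algorithm; the function name `parafac_als` and its interface are illustrative.

```python
import numpy as np

def parafac_als(X, R, n_iter=500, seed=0):
    """Fit the trilinear model x_ijk ~ sum_r a_ir * b_jr * c_kr to a
    three-way array X by alternating least squares (illustrative sketch)."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))

    def khatri_rao(U, V):
        # Column-wise Kronecker product: rows indexed by (u, v) pairs.
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    # Unfold X along each mode; column ordering matches khatri_rao above.
    X0 = X.reshape(I, -1)                      # I x (J*K)
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # J x (I*K)
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # K x (I*J)

    for _ in range(n_iter):
        # Each update solves a linear least-squares problem for one
        # factor matrix while holding the other two fixed.
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

Because the trilinear model is identified up to trivial scaling and permutation of columns, the recovered factor matrices from a noiseless rank-$R$ array should reproduce the data essentially exactly, with no rotation step needed.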