In this paper we present benchmark results from the parallel implementation of the three-dimensional Navier-Stokes solver Prism on several parallel platforms of current interest: the IBM SP2 (all three types of processors), the SGI Power Challenge XL, and the Cray C90. The numerical method is based on mixed spectral element-Fourier expansions in the (x, y)-plane and the z-direction, respectively. Each Fourier mode (or group of modes) can be computed on a separate processor, since the linear contributions to the Navier-Stokes equations (the Helmholtz solvers) are completely uncoupled. Coupling enters through the non-linear contributions (the advection terms) and requires a global transpose of the data followed by multiple one-dimensional FFTs. We first analyze the computational complexity of Prism, identifying its basic computational kernels and communication bottlenecks. These kernels are benchmarked separately on several computer architectures. More specifically, we obtain timings for BLAS routines on all of the aforementioned processors, as well as on the IBM SP1, the Cray J90, the Intel Paragon, and the DEC AlphaServer 8400 5/300. A two-dimensional version of Prism is also benchmarked on most of these processors, testing both direct and iterative solution methods. Next, we define two three-dimensional benchmark flow problems in prototype complex geometries, and measure parallel scalability and performance using different message-passing libraries. Special emphasis is placed on runtime data processing, e.g., turbulence statistics computed during the simulation. Our results provide a measure of the sustained MFlop/s performance of individual processors, and indicate the limitations of current communications software for typical problems in computational mechanics based on spectral or finite element discretizations.
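To make the coupling step concrete, the following is a minimal sketch, not taken from the Prism source, of the communication pattern described above: each MPI rank advances its own Fourier mode independently, and the non-linear terms are formed only after a global transpose (here MPI_Alltoall) that gathers all z-modes for a subset of (x, y) points onto each rank, ready for local one-dimensional FFTs in z. All sizes and names (nxy, nz, chunk) are illustrative assumptions, and the Helmholtz solve and FFT calls are elided.

    /* Sketch of mode-parallel coupling via a global transpose.
       Assumes one Fourier mode per rank (nz == P) and nxy
       divisible by P; both are illustrative simplifications. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int P, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &P);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int nz = P;                   /* total Fourier modes (one per rank) */
        int nxy = 1024;               /* spectral-element points per plane */
        int chunk = nxy / P;          /* (x,y) points per rank after transpose */

        /* This rank's mode at all (x,y) points, and the transposed
           layout: all nz modes at this rank's chunk of (x,y) points. */
        double *plane = malloc((size_t)nxy * sizeof *plane);
        double *modes = malloc((size_t)nz * chunk * sizeof *modes);

        /* ... the linear (Helmholtz) step fills 'plane' for the local
           mode; it needs no communication, which is why the modes can
           live on separate processors ... */
        for (int i = 0; i < nxy; i++) plane[i] = 0.0;

        /* Global transpose: every rank sends one chunk of its plane
           to every other rank, so each rank ends up holding all nz
           modes for its chunk of points. */
        MPI_Alltoall(plane, chunk, MPI_DOUBLE,
                     modes, chunk, MPI_DOUBLE, MPI_COMM_WORLD);

        /* ... multiple 1-D FFTs along z over 'modes', form the
           advection products in physical space, transpose back ... */

        free(plane);
        free(modes);
        MPI_Finalize();
        return 0;
    }

The all-to-all exchange is typically the communication bottleneck referred to above: its message volume scales with the full data set, while the per-processor compute scales only with the local slice.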