Traditional subspace methods for face recognition compute a measure of similarity between images after projecting them onto a fixed linear subspace that is spanned by some principal component vectors (a.k.a. "eigenfaces") of a training set of images. By supposing a parametric Gaussian distribution over the subspace and a symmetric Gaussian noise model for the image given a point in the subspace, we can endow this framework with a probabilistic interpretation so that Bayes-optimal decisions can be made. However, we expect that different image clusters (corresponding, say, to different poses and expressions) will be best represented by different subspaces. In this paper, we study the recognition performance of a mixture of local linear subspaces model that can be fit to training data using the expectation-maximization algorithm. The mixture model outperforms a nearest-neighbor classifier that operates in a PCA subspace.
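To make the probabilistic interpretation concrete, the following is a minimal Python sketch (an illustration, not the authors' implementation; all function and variable names are hypothetical). It assumes the standard probabilistic PCA formulation, in which an image x is modeled as x = Wz + mu + eps with z ~ N(0, I) and eps ~ N(0, sigma^2 I), so that marginally x ~ N(mu, WW^T + sigma^2 I); a mixture of K such models assigns an image to the component with the highest posterior responsibility, which is exactly the E-step quantity in EM.

```python
import numpy as np

def ppca_log_density(X, mu, W, sigma2):
    """Log N(x | mu, W W^T + sigma2 * I) for each row of X (n_samples, d).

    Note: for realistically sized images d is large, so in practice one
    would apply the matrix inversion lemma to work in the q-dimensional
    subspace instead of forming the d x d covariance explicitly.
    """
    d = X.shape[1]
    C = W @ W.T + sigma2 * np.eye(d)        # marginal image covariance
    diff = X - mu
    _, logdet = np.linalg.slogdet(C)
    sol = np.linalg.solve(C, diff.T)        # C^{-1} (x - mu) per sample
    mahal = np.sum(diff.T * sol, axis=0)    # quadratic form per sample
    return -0.5 * (d * np.log(2 * np.pi) + logdet + mahal)

def responsibilities(X, pis, mus, Ws, sigma2s):
    """E-step: posterior probability of each mixture component per image."""
    log_p = np.stack([np.log(pi) + ppca_log_density(X, mu, W, s2)
                      for pi, mu, W, s2 in zip(pis, mus, Ws, sigma2s)],
                     axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)  # stabilize before exp
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

# Toy usage on tiny synthetic 16-dimensional "images" with two components:
rng = np.random.default_rng(0)
d, q, n = 16, 3, 5
X = rng.normal(size=(n, d))
params = [(0.5, rng.normal(size=d), rng.normal(size=(d, q)), 0.1)
          for _ in range(2)]
pis, mus, Ws, s2s = zip(*params)
print(responsibilities(X, pis, mus, Ws, s2s))  # each row sums to 1
```

In the full EM procedure, these responsibilities would weight the M-step updates of each component's mean, subspace basis, and noise variance; the sketch above shows only the inference step that the mixture-of-subspaces decision rule relies on.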