Model-based visual recognition systems often match groups of image features to groups of model features to form initial hypotheses, which are then verified. In order to accelerate recognition considerably, the model groups can be arranged in an index space (hashed) offline such that feasible matches are found by indexing into this space. For the case of 2-D images and 3-D models consisting of point features, we demonstrate bounds on the space required for indexing and on the speedup that such indexing can achieve. Specifically, we prove that even in the absence of image error, each model must be represented by a 2-D surface in the index space. This places an unexpected lower bound on the space required to implement indexing and proves that no quantity is invariant for all projections of a model into the image. We also determine theoretical bounds on the speedup achieved by indexing in the presence of image error and present an implementation of indexing to measure this speedup empirically. We find that indexing can produce only a minimal speedup on its own. However, when accompanied by a grouping operation, indexing can provide significant speedups that grow exponentially with the number of features in the groups.
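The offline hashing and online lookup that the abstract refers to can be pictured with a minimal sketch (Python here, for concreteness only). The key function, table layout, resolution parameter, and toy data below are illustrative assumptions, not the paper's implementation; in particular, since the paper proves that no quantity is invariant over all projections of a 3-D point model into a 2-D image, a faithful index would have to enter each model group over a 2-D surface of cells rather than the single cell used here.

```python
from collections import defaultdict

def index_key(coords, resolution=0.05):
    """Quantize a tuple of real-valued index coordinates into a hash cell.
    Illustrative only: with 2-D images of 3-D point models there is no
    single invariant key, so each model group would in practice occupy a
    2-D surface of such cells."""
    return tuple(round(v / resolution) for v in coords)

def build_index(model_groups):
    """Offline stage: hash every (model_id, coordinates) feature group
    into the index space."""
    table = defaultdict(list)
    for model_id, coords in model_groups:
        table[index_key(coords)].append(model_id)
    return table

def lookup(table, image_coords):
    """Online stage: a group of image features indexes candidate model
    groups, which the recognition system would then verify."""
    return table.get(index_key(image_coords), [])

# Toy usage with made-up coordinates.
index = build_index([("mug", (0.10, 0.20)), ("phone", (0.31, 0.07))])
print(lookup(index, (0.11, 0.21)))   # -> ['mug'] (falls in the same cell)
```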