This paper reviews knowledge representation approaches to the sensor fusion problem, as encountered whenever images, signals, and text must be combined to provide the input to a controller or to an inference procedure. The basic steps involved in the derivation of the knowledge representation scheme are: (A) locate a representation, based on exogenous context information; (B) compare two representations to determine whether they refer to the same object or entity; (C) merge sensor-based features from the various representations of the same object into a new set of features or attributes; (D) aggregate the representations into a joint fused representation, usually more abstract than each of the sensor-related representations. The importance of sensor fusion stems first from the fact that improvements in control-law simplicity and robustness, as well as better classification results, can generally be achieved by combining diverse information sources. Second, spatially distributed sensing, or otherwise diverse sensing, itself requires fusion. © 1988 Kluwer Academic Publishers.
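
Since the abstract describes steps (A)-(D) only in prose, a minimal sketch may help fix ideas. The following Python fragment is not from the paper; all names (`Representation`, `locate`, `same_object`, `merge`, `aggregate`) and the toy matching and merging rules are hypothetical illustrations of one possible reading of the pipeline:

```python
# Hypothetical sketch of the four-step fusion pipeline (A)-(D); each sensor
# report is assumed to be a dict of named features in a common frame.
from dataclasses import dataclass, field

@dataclass
class Representation:
    source: str                              # originating sensor (image, signal, text, ...)
    features: dict = field(default_factory=dict)

def locate(report: dict, context: dict) -> Representation:
    """(A) Build a representation for a raw sensor report, using exogenous
    context (e.g. a known sensor position) to place it in a common frame."""
    feats = dict(report)
    feats["position"] = context.get("sensor_position")
    return Representation(source=context.get("sensor_id", "unknown"), features=feats)

def same_object(a: Representation, b: Representation, tol: float = 1.0) -> bool:
    """(B) Decide whether two representations refer to the same object/entity;
    a simple positional gate stands in here for a full matching test."""
    pa, pb = a.features.get("position"), b.features.get("position")
    return pa is not None and pb is not None and abs(pa - pb) <= tol

def merge(a: Representation, b: Representation) -> Representation:
    """(C) Merge sensor-based features of the same object into one attribute
    set; in this toy rule, the second source overwrites on conflict."""
    return Representation(source=f"{a.source}+{b.source}",
                          features={**a.features, **b.features})

def aggregate(reps: list) -> dict:
    """(D) Aggregate object-level representations into a joint, more abstract
    fused representation suitable for a controller or inference procedure."""
    return {"objects": [r.features for r in reps], "n_objects": len(reps)}
```

For example, two reports located via `locate`, found by `same_object` to refer to the same entity, would be combined with `merge` before `aggregate` produces the joint representation handed to the controller or inference procedure.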