For each $N$, and each fixed time $T$, a signal $X^{(N)}$ and a "noisy" observation $Y^{(N)}$ are defined by the pair of stochastic difference equations

$$\Delta X^{(N)}(n\Delta) := X^{(N)}((n+1)\Delta) - X^{(N)}(n\Delta) = f(n\Delta, X^{(N)}(n\Delta))\,\Delta + \Delta V^{(N)}(n\Delta),$$

$$\Delta Y^{(N)}(n\Delta) = g(n\Delta, X^{(N)}(n\Delta))\,\Delta + \Delta W^{(N)}(n\Delta),$$

where $\Delta = T/N$ and $n = 0, 1, \ldots, N-1$. The noise increments $\Delta V^{(N)}(n\Delta)$ and $\Delta W^{(N)}(n\Delta)$ are i.i.d., and scaled so that $(V^{(N)}, W^{(N)}) \Rightarrow (V, W)$, where $V$ and $W$ are Brownian motions. Then $(X^{(N)}, Y^{(N)})$ converges in distribution to $(X, Y)$, where

$$dX(t) = f(t, X(t))\,dt + dV(t), \qquad dY(t) = g(t, X(t))\,dt + dW(t).$$

Conditions are sought under which convergence in distribution of the conditional expectations $E\{F(X^{(N)}) \mid Y^{(N)}\}$ to $E\{F(X) \mid Y\}$ follows for every bounded continuous function $F$. It is assumed that $\Delta W^{(N)}(n\Delta) = \sqrt{T/N}\,\xi(n)$, where the $\xi(n)$ are i.i.d. with a smooth density $h$, and it is shown that the required convergence of the conditional expectations holds if and only if $h$ is Gaussian. When $h$ is not Gaussian, the conditional expectations still converge, but the limit is not $E\{F(X) \mid Y\}$. When $f$ and $g$ are linear functions of $X$, an examination of this limit leads to a Kalman-Bucy-type estimate of $X^{(N)}$ that is asymptotically optimal: it has the same limit as $E\{X^{(N)} \mid Y^{(N)}\}$ as $N \to \infty$, and hence improves on the usual Kalman-Bucy estimate.
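As a concrete illustration, the following is a minimal simulation sketch of the discretized signal/observation pair. The scalar dimension, the drift functions $f$ and $g$, the horizon, and the noise law are illustrative assumptions, not specifications from the text; here $\xi(n)$ is taken Gaussian, the case in which the conditional expectations converge to $E\{F(X) \mid Y\}$.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 1.0, 1000          # fixed horizon T and number of steps N (assumed values)
delta = T / N             # Delta = T/N

def f(t, x):              # hypothetical signal drift (not specified in the text)
    return -x

def g(t, x):              # hypothetical observation drift (not specified in the text)
    return x

X = np.zeros(N + 1)       # X^(N)(n Delta)
Y = np.zeros(N + 1)       # Y^(N)(n Delta)
for n in range(N):
    t = n * delta
    # Signal noise increment Delta V^(N)(n Delta), scaled to converge to Brownian motion.
    dV = np.sqrt(delta) * rng.standard_normal()
    # Observation noise increment sqrt(T/N) * xi(n); xi(n) is Gaussian here.
    dW = np.sqrt(delta) * rng.standard_normal()
    X[n + 1] = X[n] + f(t, X[n]) * delta + dV
    Y[n + 1] = Y[n] + g(t, X[n]) * delta + dW
```

Replacing `standard_normal` for `dW` with any other zero-mean, unit-variance draw whose density is smooth (e.g. a suitably scaled logistic) gives a non-Gaussian $h$, the regime in which the limit of the conditional expectations differs from $E\{F(X) \mid Y\}$.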
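For the linear case, the usual discrete Kalman recursion applied to the difference equations looks as follows. This is a sketch under assumed parameters ($f(t,x) = ax$, $g(t,x) = cx$, and an assumed Gaussian prior for $X(0)$); it is the standard baseline estimate, not the modified asymptotically optimal Kalman-Bucy-type estimate described above, which is derived from the non-Gaussian limit and is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

T, N = 1.0, 1000
delta = T / N
a, c = -1.0, 1.0          # assumed linear drifts f(t,x) = a*x, g(t,x) = c*x

# Simulate the linear difference equations with Gaussian noise.
X = np.zeros(N + 1)
dY = np.zeros(N)
for n in range(N):
    X[n + 1] = X[n] + a * X[n] * delta + np.sqrt(delta) * rng.standard_normal()
    dY[n] = c * X[n] * delta + np.sqrt(delta) * rng.standard_normal()

# Standard Kalman recursion on the increments: update on dY[n], then predict.
xhat, P = 0.0, 1.0        # assumed prior mean and variance for X(0)
H, R = c * delta, delta   # per-step observation map and noise variance
est = np.zeros(N + 1)
for n in range(N):
    K = P * H / (H * P * H + R)           # Kalman gain
    xhat = xhat + K * (dY[n] - H * xhat)  # measurement update
    P = (1.0 - K * H) * P
    xhat = (1.0 + a * delta) * xhat       # time update (predict)
    P = (1.0 + a * delta) ** 2 * P + delta
    est[n + 1] = xhat
```

With Gaussian $\xi(n)$ this recursion tracks $E\{X^{(N)} \mid Y^{(N)}\}$ in the limit; the point above is that for non-Gaussian $\xi(n)$ it does not, and the limit of $E\{X^{(N)} \mid Y^{(N)}\}$ is matched instead by the improved Kalman-Bucy-type estimate.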