A learning attractor neural network (LANN) with a double dynamics, of neural activities and of synaptic efficacies, operating on two different timescales, is studied by simulations in preparation for an electronic implementation. The present network includes several quasi-realistic features: neurons are represented by their afferent currents and output spike rates; excitatory and inhibitory neurons are separated; attractor spike rates, as well as the coding levels of arriving stimuli, are low; learning takes place only between excitatory units. The synaptic dynamics is an unsupervised, analogue Hebbian process, but long-term memory in the absence of neural activity is maintained by a refresh mechanism which, on long timescales, discretizes the synaptic values, converting learning into an asynchronous stochastic process induced by the stimuli on the synaptic efficacies. The network is intended to learn a set of attractors from the statistics of freely arriving stimuli, which are represented by external synaptic inputs injected into the excitatory neurons. In the simulations, different types of sequences of many thousands of stimuli are presented to the network, without distinguishing a learning phase from a retrieval phase in the dynamics. Stimulus sequences differ in their pre-assigned global statistics (including time-dependent statistics); in the order of presentation of individual stimuli within a given statistics; in the duration of each presentation; and in the intervals separating one stimulus from the next. We find that the network effectively learns a set of attractors representing the statistics of the stimuli, and is able to modify its attractors when the input statistics change. Moreover, as the global input statistics change, the network can also forget attractors related to stimulus classes that are no longer presented. Forgetting takes place only through the arrival of new stimuli. The performance of the network and the statistics of the attractors are studied as functions of the input statistics. Most of the large-scale characteristics of the learning dynamics can be captured theoretically. This model extends a previous implementation of a LANN composed of discrete neurons to a network of more realistic neurons. The different elements have been designed to facilitate their implementation in silicon.
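As a toy illustration of the two-timescale mechanism described above, the sketch below simulates a single analogue efficacy J in [0, 1]: each arriving stimulus applies an unsupervised Hebbian step, while a slow refresh drift pulls J toward the nearest of two stable values, so that long-term memory is binary and transitions between the two states occur stochastically, driven only by the stimuli. This is a minimal sketch under assumed dynamics; all names and values (THETA, ALPHA, ETA_UP, ETA_DN, F) are illustrative choices, not the equations or constants of the model studied in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative parameters, not taken from the paper.
    THETA, ALPHA = 0.5, 0.01     # refresh threshold and slow drift rate
    ETA_UP, ETA_DN = 0.10, 0.08  # Hebbian potentiation / depression steps
    F = 0.1                      # low coding level of arriving stimuli

    def step(J, pre, post):
        """One stimulus presentation: analogue Hebbian update followed by
        the slow refresh drift toward the nearest stable value (0 or 1)."""
        if pre and post:
            J += ETA_UP        # potentiate on coincident activity
        elif pre != post:
            J -= ETA_DN        # depress when only one side is active
        J += ALPHA if J > THETA else -ALPHA   # refresh discretizes memory
        return min(max(J, 0.0), 1.0)

    J = 0.0
    # Phase 1: a stimulus class co-activating both units arrives often,
    # so coincidences drive J across the refresh threshold (learning).
    for _ in range(500):
        if rng.random() < 0.5:   # the repeated class arrives
            pre = post = True
        else:                    # an unrelated low-coding-level stimulus
            pre, post = rng.random() < F, rng.random() < F
        J = step(J, pre, post)
    print(f"after learning phase: J = {J:.2f}, state {int(J > THETA)}")

    # Phase 2: the class is no longer presented; depression by other
    # stimuli eventually pushes J back below threshold, i.e. forgetting
    # occurs only through the arrival of new stimuli.
    for _ in range(2000):
        J = step(J, rng.random() < F, rng.random() < F)
    print(f"after changed statistics: J = {J:.2f}, state {int(J > THETA)}")

Run as a script, the first phase drives the efficacy across the refresh threshold while the co-activating stimulus class is presented, and the second phase lets it drift back once the input statistics change, mirroring the stimulus-driven learning and forgetting described in the abstract.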