Superefficiency in blind source separation

Cited by: 33
Authors
Amari, S [1]
Affiliation
[1] RIKEN Brain Sci Inst, Saitama, Japan
Keywords
blind source separation; error analysis; estimating function; independent component analysis; on-line learning; superefficiency
DOI
10.1109/78.752592
Chinese Library Classification
TM (Electrical Engineering); TN (Electronics and Communication Technology)
Discipline Codes
0808; 0809
Abstract
Blind source separation is the problem of extracting independent signals from their mixtures without knowing the mixing coefficients or the probability distributions of the source signals; it may be applied to EEG and MEG imaging of the brain. It is already known that certain algorithms work well for the extraction of independent components. The present paper is concerned with the superefficiency of these algorithms, based on statistical and dynamical analysis. In statistical estimation using t examples, the covariance of any two extracted independent signals converges to 0 at the order of 1/t. On-line dynamics shows that the covariance is of the order of eta when the learning rate eta is fixed to a small constant. In contrast with these general properties, a surprising superefficiency holds in blind source separation under certain conditions, where superefficiency implies that the covariance decreases at the order of 1/t^2 or of eta^2. The present paper uses the natural gradient learning algorithm and the method of estimating functions to obtain superefficient procedures for both batch estimation and on-line learning. A standardized estimating function is introduced to this end. Superefficiency does not imply that the error variances of the extracted signals decrease at the order of 1/t^2 or eta^2, but implies that their covariances (and hence independence) do.
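The abstract refers to the natural gradient on-line learning algorithm for blind source separation. A minimal sketch of that on-line rule, W <- W + eta (I - phi(y) y^T) W with a fixed small learning rate eta, is given below; the choice of mixing matrix, cubic nonlinearity phi(y) = y^3 (a common choice for sub-Gaussian sources), and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
t_max = 20000

# Two independent, sub-Gaussian (uniform) source signals.
s = rng.uniform(-1.0, 1.0, size=(2, t_max))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # unknown mixing matrix (assumed for the demo)
x = A @ s                       # observed mixtures

W = np.eye(2)                   # estimate of the separating matrix
eta = 0.01                      # small fixed learning rate

for k in range(t_max):
    y = W @ x[:, k]
    # Natural-gradient on-line update: W += eta * (I - phi(y) y^T) W.
    # phi(y) = y**3 is an assumed nonlinearity suited to sub-Gaussian sources.
    phi = y ** 3
    W += eta * (np.eye(2) - np.outer(phi, y)) @ W

# After learning, the separated signals should be nearly uncorrelated:
# the off-diagonal covariance is far smaller than that of the raw mixtures.
y_all = W @ x
off_diag = abs(np.cov(y_all)[0, 1])
print(f"off-diagonal covariance of separated signals: {off_diag:.4f}")
```

With a fixed eta the residual covariance settles at a small noise floor (of order eta, or eta^2 in the superefficient case analyzed in the paper), rather than decaying to zero as it would with an annealed learning rate.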
Pages: 936-944
Page count: 9