Independent component analysis using an extended infomax algorithm for mixed subgaussian and supergaussian sources

Cited by: 1497
Authors
Lee, TW [1 ]
Girolami, M
Sejnowski, TJ
Affiliations
[1] Salk Inst, Howard Hughes Med Inst, Computat Neurobiol Lab, La Jolla, CA 92037 USA
[2] Tech Univ Berlin, Inst Elect, D-1000 Berlin, Germany
[3] Univ Paisley, Dept Comp & Informat Syst, Paisley PA1 2BE, Renfrew, Scotland
[4] Univ Calif San Diego, Dept Biol, La Jolla, CA 92093 USA
Keywords
DOI
10.1162/089976699300016719
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that can blindly separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm easily separates 20 sources with a variety of source distributions. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
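The learning rule summarized in the abstract can be sketched compactly. The following is a minimal NumPy illustration, not the authors' implementation: it assumes pre-whitened, zero-mean mixtures, uses a tanh nonlinearity, and flips each unit between the sub- and supergaussian regimes with a moment-based switching statistic of the kind motivated by the stability analysis of Cardoso and Laheld (1996); the function name extended_infomax and all parameter defaults are placeholders chosen for the example.

    import numpy as np

    def extended_infomax(x, lr=1e-3, n_iter=500):
        """Sketch of an extended-infomax unmixing update (natural gradient, tanh nonlinearity).

        x : (n_sources, n_samples) array of pre-whitened, zero-mean mixtures.
        Returns an unmixing matrix W such that u = W @ x approximates the sources.
        """
        n, T = x.shape
        W = np.eye(n)
        I = np.eye(n)
        for _ in range(n_iter):
            u = W @ x                 # current source estimates
            tu = np.tanh(u)
            # Switching statistic (sign of a kurtosis-like moment):
            #   k_i = sign( E[sech^2(u_i)] E[u_i^2] - E[u_i tanh(u_i)] )
            # +1 -> treat unit i as supergaussian, -1 -> subgaussian.
            k = np.sign((1.0 - tu**2).mean(axis=1) * (u**2).mean(axis=1)
                        - (tu * u).mean(axis=1))
            # Natural-gradient extended-infomax update:
            #   dW ∝ (I - K tanh(u) u^T / T - u u^T / T) W,  K = diag(k)
            grad = I - ((k[:, None] * tu) @ u.T) / T - (u @ u.T) / T
            W += lr * grad @ W
        return W

As a usage sketch, mixing one subgaussian source (e.g., uniform noise) and one supergaussian source (e.g., a Laplacian signal) through a random matrix and passing the mixtures to this routine should recover both sources up to permutation and scaling, which is the mixed-kurtosis setting the switching rule is designed to handle.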
Pages: 417 - 441
Number of pages: 25