Enhanced Gradient for Training Restricted Boltzmann Machines

Cited by: 44
Authors
Cho, KyungHyun [1]
Raiko, Tapani [1]
Ilin, Alexander [1]
Affiliations
[1] Aalto Univ, Sch Sci, Dept Informat & Comp Sci, Espoo 02150, Uusimaa, Finland
Keywords
DOI
10.1162/NECO_a_00397
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Restricted Boltzmann machines (RBMs) are often used as building blocks in greedy learning of deep networks. However, training this simple model can be laborious. Traditional learning algorithms often converge only with the right choice of metaparameters that specify, for example, the learning rate schedule and the scale of the initial weights. They are also sensitive to the specific data representation: an equivalent RBM can be obtained by flipping some bits and changing the weights and biases accordingly, but traditional learning rules are not invariant to such transformations. Without careful tuning of these training settings, traditional algorithms can easily get stuck or even diverge. In this letter, we present an enhanced gradient that is derived to be invariant to bit-flipping transformations. We experimentally show that the enhanced gradient yields more stable training of RBMs with both a fixed learning rate and an adaptive one.
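The bit-flipping equivalence the abstract refers to can be verified numerically. The following sketch (not code from the paper; parameter names and sizes are illustrative) flips one visible unit of a binary RBM, negates the corresponding weights and bias, and absorbs the resulting shift into the hidden biases. The energies of corresponding states then differ only by an additive constant, so the model distribution is unchanged even though the parameters are not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 4, 3
W = rng.normal(size=(n_vis, n_hid))   # weights
b = rng.normal(size=n_vis)            # visible biases
c = rng.normal(size=n_hid)            # hidden biases

def energy(v, h, W, b, c):
    # Standard binary RBM energy: E(v, h) = -b'v - c'h - v'Wh
    return -b @ v - c @ h - v @ W @ h

# Flip visible unit i (v_i -> 1 - v_i) and compensate the parameters.
i = 2
W2, b2, c2 = W.copy(), b.copy(), c.copy()
W2[i, :] = -W[i, :]          # negate the flipped unit's weights
b2[i] = -b[i]                # negate its bias
c2 += W[i, :]                # absorb the constant shift into hidden biases

# Enumerate all (v, h) states and compare energies under both models.
diffs = []
for v_bits in range(2 ** n_vis):
    v = np.array([(v_bits >> k) & 1 for k in range(n_vis)], float)
    v_flip = v.copy()
    v_flip[i] = 1 - v_flip[i]
    for h_bits in range(2 ** n_hid):
        h = np.array([(h_bits >> k) & 1 for k in range(n_hid)], float)
        diffs.append(energy(v, h, W, b, c) - energy(v_flip, h, W2, b2, c2))

# A constant energy offset cancels in the Boltzmann distribution,
# so the two RBMs define exactly the same probabilities.
assert np.allclose(diffs, diffs[0])
```

Because a gradient step in one parameterization does not correspond to a gradient step in the flipped one, the traditional update depends on this arbitrary choice of representation; the enhanced gradient is constructed so that it does not.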
Pages: 805-831
Page count: 27