54 references in total
- [1] Bertsekas DP (1997) A new class of incremental gradient methods for least squares problems. SIAM J. Optim. 7: 913-926
- [2] Blatt D (2007) A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18: 29-51
- [3] Bordes A (2009) SGD-QN: Careful quasi-Newton stochastic gradient descent. J. Mach. Learn. Res. 10: 1737-1754
- [4] Caruana R (2004) KDD-Cup 2004: Results and analysis. ACM SIGKDD Explor. Newsl. 6: 95-108
- [5] Cauchy A (1847) Méthode générale pour la résolution des systèmes d’équations simultanées. Comptes rendus des séances de l’Académie des sciences de Paris 25: 536-538
- [6] Collins M (2008) Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. J. Mach. Learn. Res. 9: 1775-1822
- [7] Delyon B (1993) Accelerated stochastic approximation. SIAM J. Optim. 3: 868-881
- [8] Duchi J (2011) Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12: 2121-2159
- [9] Friedlander MP (2012) Hybrid deterministic-stochastic methods for data fitting. SIAM J. Sci. Comput. 34: A1351-A1379
- [10] Keerthi SS (2005) A modified finite Newton method for fast solution of large scale linear SVMs. J. Mach. Learn. Res. 6: 341-361