Convergence Analysis of a Regression Learning Algorithm Based on Neural Networks (in English)

Cited: 3
Authors
Zhang Yongquan
Cao Feilong
Dai Tenghui
Affiliation
[1] Department of Mathematics and Information Sciences, China Jiliang University
Keywords
regression; neural network; covering number; convergence rate
DOI
Not available
CLC number
O241.5 [numerical approximation]
Abstract
This paper studies the regression problem in learning theory by means of least squares theory. The aim is to analyze the error of a regression learning algorithm using probability inequalities together with the approximation properties of neural networks. The results show that when the regression function satisfies certain smoothness conditions, a fairly tight upper bound on the error is obtained, and this bound is independent of the dimension of the input space.
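The least-squares principle described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's exact algorithm: the sigmoidal units are fixed (centers chosen at random), and only the output weights are fit by ordinary least squares; the function name `fit_ls_network` and all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_ls_network(x, y, n_hidden=20, slope=10.0):
    """Fit output weights c minimizing ||H c - y||^2,
    where H[i, j] = sigmoid(slope * (x[i] - center[j])).
    Hypothetical sketch: hidden units are fixed at random,
    only the linear output layer is solved by least squares."""
    centers = rng.uniform(0.0, 1.0, size=n_hidden)
    H = sigmoid(slope * (x[:, None] - centers))
    c, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda t: sigmoid(slope * (np.atleast_1d(t)[:, None] - centers)) @ c

# Smooth target regression function on [0, 1]
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)

f = fit_ls_network(x, y)
mse = np.mean((f(x) - y) ** 2)
print(f"empirical MSE: {mse:.2e}")
```

Because the target is smooth, the empirical mean squared error is small, mirroring the abstract's point that smoothness of the regression function yields tight error bounds.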
Pages: 493-498
Page count: 6
Related papers
9 in total
[1] Xu Zongben, Zhang Yongquan, Cao Feilong. Estimation of convergence rate for multi-regression learning algorithm. Science China Information Sciences, 2012, (03).
[2] Kohler M, Mehnert J. Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors. Neural Networks, 2011, 24(3): 273-279.
[3] Cao Feilong, Zhang Yongquan, Xu Zongben. Lower estimation of approximation rate for neural networks. Science in China Series F: Information Sciences, 2009, 52(8): 1321-1327.
[4] Caponnetto A, De Vito E. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 2007, 7(3): 331-368.
[5] Wu Qiang, Ying Yiming, Zhou Ding-Xuan. Learning rates of least-square regularized regression. Foundations of Computational Mathematics, 2006, 6(2): 171-192.
[6] Hamers M, Kohler M. Nonasymptotic bounds on the L2 error of neural network regression estimates. Annals of the Institute of Statistical Mathematics, 2006, 58(1): 131-151.
[7] Xu Zongben, Cao Feilong. Simultaneous Lp-approximation order for neural networks. Neural Networks, 2005, 18(7): 914-923.
[8] Cucker F, Smale S. On the mathematical foundations of learning. Bulletin of the American Mathematical Society, 2001, (1).
[9] Cybenko G. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 1989, (4).