Latent Factor-Based Recommenders Relying on Extended Stochastic Gradient Descent Algorithms

Cited by: 115
Authors
Luo, Xin [1 ]
Wang, Dexian [2 ,3 ]
Zhou, MengChu [4 ,5 ]
Yuan, Huaqiang [1 ]
Affiliations
[1] Dongguan Univ Technol, Sch Comp Sci & Technol, Dongguan 523808, Peoples R China
[2] Chinese Acad Sci, Chongqing Engn Res Ctr Big Data Applicat Smart Ci, Chongqing 400714, Peoples R China
[3] Chinese Acad Sci, Chongqing Key Lab Big Data & Intelligent Comp, Chongqing Inst Green & Intelligent Technol, Chongqing 400714, Peoples R China
[4] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
[5] Macau Univ Sci & Technol, Inst Syst Engn, Macau 999078, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 2021, Vol. 51, No. 2
Funding
National Natural Science Foundation of China
Keywords
Big data; bi-linear; collaborative filtering (CF); high-dimensional and sparse (HiDS) matrix; industry; latent factor (LF) analysis; missing data; recommender system; NONNEGATIVE MATRIX-FACTORIZATION; CONVERGENCE; TERM;
DOI
10.1109/TSMC.2018.2884191
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
High-dimensional and sparse (HiDS) matrices generated by recommender systems contain rich knowledge regarding various desired patterns, such as users' potential preferences and community tendencies. Latent factor (LF) analysis proves to be highly efficient in extracting such knowledge from an HiDS matrix. Stochastic gradient descent (SGD) is an efficient algorithm for building an LF model; however, current LF models mostly adopt the standard SGD algorithm. Can SGD be extended in various ways to improve the resultant models' convergence rate and prediction accuracy for missing data? Are such SGD extensions compatible with an LF model? To answer these questions, this paper carefully investigates eight extended SGD algorithms and proposes eight novel LF models. Experimental results on two HiDS matrices generated by real recommender systems show that, compared with an LF model relying on the standard SGD algorithm, an LF model with an extended one achieves: 1) higher prediction accuracy for missing data; 2) a faster convergence rate; and 3) model diversity.
Pages: 916-926
Number of pages: 11
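
The abstract describes latent factor (LF) models trained by stochastic gradient descent (SGD) on the known entries of an HiDS matrix. Below is a minimal sketch of that standard SGD baseline, not the paper's eight extended variants; the function name train_lf_sgd and all hyperparameter values (rank, learning rate, regularization coefficient, number of epochs) are illustrative assumptions rather than settings taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code): a standard SGD-trained
# latent factor (LF) model over the observed entries of a high-dimensional
# and sparse (HiDS) rating matrix. All hyperparameters are illustrative.
import numpy as np

def train_lf_sgd(triples, num_users, num_items, rank=20,
                 lr=0.01, reg=0.05, epochs=50, seed=0):
    """triples: list of (user, item, rating) tuples for the known entries only."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((num_users, rank))   # user latent factors
    Q = 0.1 * rng.standard_normal((num_items, rank))   # item latent factors
    for _ in range(epochs):
        rng.shuffle(triples)                            # one stochastic pass over known entries
        for u, i, r in triples:
            err = r - P[u] @ Q[i]                       # prediction error on a single entry
            p_u = P[u].copy()                           # keep the old row for Q's update
            # SGD step on the L2-regularized squared loss of this entry
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q
```

A missing entry (u, i) is then estimated as the inner product P[u] @ Q[i]; for instance, with triples = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0)], num_users=2, and num_items=3, the unknown rating of user 1 on item 2 is predicted as P[1] @ Q[2].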