Distributed additive encryption and quantization for privacy preserving federated deep learning

Cited by: 41
Authors
Zhu, Hangyu [1 ]
Wang, Rui [2 ]
Jin, Yaochu [1 ]
Liang, Kaitai [2 ]
Ning, Jianting [3 ,4 ]
Affiliations
[1] Univ Surrey, Dept Comp Sci, Guildford GU2 7XH, Surrey, England
[2] Delft Univ Technol, Dept Intelligent Syst, NL-2628 XE Delft, Netherlands
[3] Fujian Normal Univ, Coll Math & Informat, Fujian Prov Key Lab Network Secur & Cryptol, Fuzhou 350007, Peoples R China
[4] Singapore Management Univ, Sch Informat Syst, Singapore 178902, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Deep learning; Homomorphic encryption; Distributed key generation; Quantization; SECURE; ALGORITHM;
DOI
10.1016/j.neucom.2021.08.062
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Homomorphic encryption is a widely used gradient-protection technique in privacy-preserving federated learning. However, existing encrypted federated learning systems require a trusted third party to generate and distribute key pairs to the connected participants, which conflicts with the decentralized setting of federated learning and introduces security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning in which the key pairs are collaboratively generated without the help of a trusted third party. By quantizing the model parameters on the clients and performing an approximated aggregation on the server, the proposed method avoids encrypting and decrypting the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party holds the global private key for decryption, while the aggregated ciphertexts can still be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces communication costs and computational complexity compared to existing encrypted federated learning approaches, without compromising performance or security. (c) 2021 Elsevier B.V. All rights reserved.
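The threshold decryption described in the abstract rests on (t, n) secret sharing: a secret (here, a share of the global private key) is split among n clients so that any t of them can reconstruct it even if the remaining clients are offline. The sketch below is a minimal illustration of that primitive using Shamir's scheme over a prime field; it is not the paper's actual protocol, and the field size, function names, and parameters are illustrative assumptions.

```python
import random

# Illustrative field modulus (a Mersenne prime); a real deployment
# would use a modulus matched to the encryption scheme's key space.
PRIME = 2**61 - 1

def make_shares(secret, threshold, n_shares, prime=PRIME):
    """Split `secret` into n_shares points on a random polynomial of
    degree threshold-1 whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]

    def eval_poly(x):
        # Horner's rule, evaluated mod prime.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % prime
        return acc

    return [(x, eval_poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term,
    i.e. the secret, from any `threshold` distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        # pow(den, -1, prime) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

# Any 3 of 5 shares suffice; 2 of 5 clients may drop out.
shares = make_shares(123456789, threshold=3, n_shares=5)
recovered = reconstruct([shares[0], shares[2], shares[4]])
```

With `threshold=3`, every 3-subset of the 5 shares reconstructs the same secret, which mirrors the paper's requirement that aggregated ciphertexts remain decryptable by a threshold number of clients despite dropouts.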
Pages: 309-327
Page count: 19