ISSN 0253-2778

CN 34-1054/N

Open Access JUSTC Information Science and Technology 08 June 2023

Efficient secure aggregation for privacy-preserving federated learning based on secret sharing

Cite this: https://doi.org/10.52396/JUSTC-2022-0116
  • Author Bio:

    Xuan Jin received his B.E. degree from Hohai University in 2022. He is currently a master’s student at the University of Science and Technology of China. His research interests include deep learning and applied cryptography.

    Yuanzhi Yao is currently an Associate Professor at Hefei University of Technology. He received his Ph.D. degree from the University of Science and Technology of China in 2017. His research interests include deep learning and multimedia security.

  • Corresponding author: E-mail: yaoyz@hfut.edu.cn
  • Received Date: 25 August 2022
  • Accepted Date: 12 January 2023
  • Available Online: 08 June 2023
  • Federated learning allows multiple mobile participants to jointly train a global model without revealing their local private data. Communication-computation cost and privacy preservation are key fundamental issues in federated learning. Existing secret sharing-based secure aggregation mechanisms for federated learning still suffer from significant additional costs, insufficient privacy preservation, and vulnerability to participant dropouts. In this paper, we aim to solve these issues by introducing flexible and effective secret sharing mechanisms into federated learning. We propose two novel privacy-preserving federated learning schemes: federated learning based on one-way secret sharing (FLOSS) and federated learning based on multi-shot secret sharing (FLMSS). Compared with state-of-the-art works, FLOSS enables high privacy preservation while significantly reducing the communication cost by dynamically designing the secretly shared content and objects. Meanwhile, FLMSS further reduces the additional cost and can efficiently enhance robustness against participant dropouts in federated learning. Most importantly, FLMSS achieves a satisfactory tradeoff between privacy preservation and communication-computation cost. Security analysis and performance evaluations on real datasets demonstrate the superiority of our proposed schemes in terms of model accuracy, privacy preservation, and cost reduction.
    In our proposed privacy-preserving federated learning schemes, participants' local training data can be strongly protected at low cost (an illustrative secret-sharing sketch is given after the highlights below).
    • The privacy-preserving federated learning scheme based on one-way secret sharing (FLOSS) is proposed to enable high privacy preservation while significantly reducing the communication cost by dynamically designing secretly shared content and objects.
    • The privacy-preserving federated learning scheme based on multi-shot secret sharing (FLMSS) is proposed to further reduce the additional communication-computation cost and enhance robustness against participant dropouts.
    • Extensive security analysis and performance evaluations demonstrate the superiority of our proposed schemes in terms of model accuracy, privacy preservation, and cost reduction.
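
    A minimal Python sketch of the additive secret-sharing idea underlying secure aggregation is given below. It is an illustration under simplifying assumptions (integer-encoded updates, every participant shares with every other participant, no dropouts), not the paper's actual FLOSS or FLMSS protocols; the names PRIME, make_shares, and simulate_round are introduced here purely for illustration.

        # Toy secure aggregation via additive secret sharing (illustrative only).
        # Each participant splits its local update into random shares that sum to
        # the update, distributes one share to each peer, and uploads only the sum
        # of the shares it holds. The server recovers the global sum of updates
        # without ever seeing any individual update.
        import random

        PRIME = 2**31 - 1  # all arithmetic is done modulo a public prime

        def make_shares(update, n_shares):
            """Split one integer-encoded update into n_shares additive shares."""
            shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
            shares.append((update - sum(shares)) % PRIME)
            return shares

        def simulate_round(local_updates):
            """Simulate one aggregation round among len(local_updates) participants."""
            n = len(local_updates)
            received = [[] for _ in range(n)]  # shares held by each participant
            for update in local_updates:
                for j, share in enumerate(make_shares(update, n)):
                    received[j].append(share)
            uploads = [sum(r) % PRIME for r in received]  # masked per-participant uploads
            return sum(uploads) % PRIME  # server-side aggregation

        updates = [5, 17, 42]  # toy integer-encoded local updates
        assert simulate_round(updates) == sum(updates) % PRIME

    In a real deployment, model parameters would be quantized into the finite field and shares would be exchanged over pairwise secure channels; FLOSS and FLMSS further optimize which shares are sent to whom and how dropouts are handled, as summarized in the highlights above.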


Figures

    Figure  1.  The basic framework of federated learning.

    Figure  2.  System model of FLOSS (the formulas shown are simplified versions).

    Figure  3.  System model of FLMSS for the case of $d = 1$. The omitted secret share values can be found in Fig. 2. The aggregation process after handling the first round of dropped participants is not shown in this figure. The red dashed arrow represents the dropout state of the participant.

    Figure  4.  Surface graphs of the security of FLMSS as a function of environmental parameters when the number of shots $d$ is fixed.

    Figure  5.  Curves of the security of FLMSS as a function of the number of curious participants $x$ when the total number of participants $N$ is fixed.

    Figure  6.  Comparison of model accuracy for different learning methods.

    Figure  7.  Model performance under the influence of different parameter settings.

    Figure  8.  (a) The communication cost used for privacy protection under different numbers of participants. (b) The communication cost used for privacy protection under different numbers of model parameters.

    Figure  9.  Computation-cost comparison with varying fractions of selected participants per round.

    Figure  10.  Computation-cost comparison with varying numbers of model parameters per round.

    Figure  11.  Computation cost for handling dropout with varying dropout rates.


    Article Metrics

    Article views: 766  PDF downloads: 2411