A note up front: the techniques in this paper lean heavily on cryptography, so parts of the core material were hard for me to follow. Take these notes as reference only~ Abstract—In this paper, we address the problem of privacy-preserving training and evaluation of neural networks in an N-party, federated learning setting.
1. Paper info: How Asynchronous can Federated Learning Be? 2022 IEEE/ACM 30th International Symposium on Quality of Service (IWQoS). IEEE, 2022; not on the CCF recommended list. 2. Introduction 2.1 Background: the heuristics designed in the existing asynchronous-FL literature each cover only part of the design space
Contents: Preface; I. Federated learning: FedAVG and distributed SGD; II. SGD in federated learning; III. Gradient compression and error compensation; IV. The two new algorithms proposed in the paper; Summary. Preface: what follows is only my own subjective understanding, recorded for later review; please forgive any mistakes. I. Federated learning: FedAVG and distributed SGD. Reference: Federated Learning of Deep Networks using Model Averaging. In
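To make the model-averaging idea behind FedAVG concrete, here is a minimal toy sketch (my own illustration, not the paper's code): each client runs a few local SGD epochs on a scalar model under squared loss, then the server averages the client models weighted by local dataset size. All names and the toy data are hypothetical.

```python
# Toy FedAVG sketch (hypothetical illustration, not the paper's implementation).
# Model: a single scalar w fit to each client's data under squared loss (w - x)^2.

def local_sgd(w, data, lr=0.1, epochs=5):
    """Run full-batch SGD locally; pulls w toward the client's local mean."""
    for _ in range(epochs):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

def fedavg_round(w_global, client_datasets):
    """One communication round: broadcast w, train locally, average
    the client models weighted by their dataset sizes."""
    total = sum(len(d) for d in client_datasets)
    return sum(len(d) * local_sgd(w_global, d) for d in client_datasets) / total

# Three clients with heterogeneous dataset sizes.
clients = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
# w converges to the size-weighted global mean of all client data, 3.5
```

Because the loss is quadratic, each local update contracts toward the client's local optimum, and the size-weighted average makes the fixed point the global optimum; with non-convex models and heterogeneous data this equivalence breaks, which is exactly where the FedAVG-vs-distributed-SGD discussion in the notes becomes relevant.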
Towards Personalized Federated Learning 标题:Towards Personalized Federated Learning 收录于:IEEE Transactions on Neural Networks and Learning Systems (Mar 28, 2022) 作者单位:NTU,Alibaba Group
Other summaries: [link], [link]. I did not fully understand the part discussing the optimization algorithm. 4.1 Actors, Threat Models, and Privacy in Depth. Various threat models for different adversarial actors (malicious / honest-but-curious): 4.2 Too