In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally-partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process. In horizontally-partitioned federated learning, a common approach is to ask each participant to train a model locally on their own data and then submit the trained model (or its updated parameters) to an aggregator, which combines the participants' models into a single global model, for example by averaging their parameters.
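To make the train-then-aggregate loop concrete, here is a minimal sketch of a few rounds of parameter averaging. It assumes logistic regression as the local model purely for illustration; the function names, data, and hyperparameters are hypothetical, not from any particular library. Note that this plain version provides no input privacy: the aggregator sees every participant's model update directly, which is exactly the problem the techniques in this post address.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One participant's local training: a few epochs of gradient
    descent for logistic regression on that participant's own rows."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))    # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)   # logistic-loss gradient
        w -= lr * grad
    return w

def aggregate(local_weights, sizes):
    """Aggregator step: average the local models, weighting each
    participant by the size of its local dataset."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Horizontal partitioning: each participant holds different rows
# of data with the same features (synthetic data for illustration).
rng = np.random.default_rng(0)
participants = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100))
                for _ in range(3)]

global_w = np.zeros(5)
for _ in range(10):  # each round: local training, then aggregation
    updates = [local_update(global_w, X, y) for X, y in participants]
    global_w = aggregate(updates, [len(y) for _, y in participants])
```

In this sketch the raw data never leaves each participant, but the individual model updates do, and those updates can leak information about the training data, which is why input privacy requires protecting the aggregation step itself.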