In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally-partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process. In horizontally-partitioned federated learning, a common approach is to ask each participant to
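A typical instance of this approach is federated averaging: each participant computes a local update to the shared model using only its own rows of data, and the server combines the updates, weighted by each participant's data size. The following is a minimal, hypothetical sketch of that flow; the function names and the toy "training" step are illustrative assumptions, not the actual protocol described in this series:

```python
# Hypothetical sketch of horizontally-partitioned federated learning with
# FedAvg-style aggregation. The toy local "training" step is illustrative only.

def local_update(weights, data, lr=0.1):
    """Each participant nudges the shared weights using its own rows."""
    # Toy gradient step: move each weight toward the mean of the
    # corresponding feature column in the participant's local data.
    n = len(data)
    return [
        w - lr * (w - sum(row[i] for row in data) / n)
        for i, w in enumerate(weights)
    ]

def aggregate(updates, sizes):
    """Server averages the updates, weighted by each participant's data size."""
    total = sum(sizes)
    return [
        sum(u[i] * s for u, s in zip(updates, sizes)) / total
        for i in range(len(updates[0]))
    ]

# Two participants hold different rows over the same feature space
# (horizontal partitioning); neither shares raw data with the server.
global_weights = [0.0, 0.0]
parties = [
    [[1.0, 2.0], [3.0, 4.0]],  # participant A's rows
    [[5.0, 6.0]],              # participant B's rows
]
updates = [local_update(global_weights, d) for d in parties]
global_weights = aggregate(updates, [len(d) for d in parties])
```

Note that in this plain form the server sees each participant's individual update, which is exactly the input-privacy gap that the techniques discussed in this post aim to close.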