In our second post, we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems in the horizontally-partitioned setting.

## Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process. In horizontally-partitioned federated learning, a common approach is to ask each participant to
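As a sketch of the horizontal setting described above, the common round structure is: each participant trains locally on its own rows of data, and a server averages the resulting models. The toy model, function names, and weighting scheme below are illustrative assumptions (a FedAvg-style average on a linear model), not a specific system's API:

```python
# Minimal sketch of horizontally-partitioned federated training.
# The linear model, function names, and weighted average are
# illustrative assumptions, not a particular library's interface.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each participant trains locally on its own horizontal shard (rows)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def aggregate(updates, sizes):
    """Server averages the local models, weighted by shard size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy run: two participants hold different rows of the same feature space.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X1, X2 = rng.normal(size=(50, 2)), rng.normal(size=(30, 2))
y1, y2 = X1 @ true_w, X2 @ true_w

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    updates = [local_update(global_w, X1, y1),
               local_update(global_w, X2, y2)]
    global_w = aggregate(updates, sizes=[len(y1), len(y2)])
```

Note that in this plain form the server sees each participant's model update directly, which is exactly the input-privacy gap that the techniques in this post aim to close.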