In our second post we described attacks on models and the concepts of input privacy and output privacy. In our last post, we described horizontal and vertical partitioning of data in privacy-preserving federated learning (PPFL) systems. In this post, we explore the problem of providing input privacy in PPFL systems for the horizontally-partitioned setting.

Models, training, and aggregation

To explore techniques for input privacy in PPFL, we first have to be more precise about the training process. In horizontally-partitioned federated learning, a common approach is to ask each participant to train the model locally on their own data and send the resulting model update to an aggregator, which combines the updates into a new global model.
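As a concrete illustration, the round structure above can be sketched as simple federated averaging. This is a minimal sketch, not any specific system's implementation: the participants, the linear model, and the function names (`local_update`, `federated_round`) are all illustrative assumptions.

```python
# Minimal sketch of one style of horizontally partitioned federated
# learning (federated averaging). Each participant holds its own rows
# of the data; the aggregator averages the locally updated weights.
# All names and the linear model are illustrative, not a real API.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local gradient step of linear regression on a participant's rows."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, partitions):
    """Aggregator averages the updated weights from all participants."""
    updates = [local_update(weights, X, y) for X, y in partitions]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
partitions = []
for _ in range(3):  # three participants, each with their own rows
    X = rng.normal(size=(50, 2))
    partitions.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, partitions)
```

Note that in this plain form the aggregator sees every participant's model update directly; the input-privacy techniques discussed in this post aim to prevent exactly that.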