Privacy-Preserving and Byzantine-Robust Distributed Machine Learning

In machine learning and deep learning, distributed training systems are often employed so that each participant can retain its training data locally, thereby preserving privacy. The central aggregation server only receives the model weights or gradients uploaded by participants and never directly accesses their raw data. However, such privacy-preserving mechanisms are vulnerable to malicious participants who deliberately upload incorrect or falsified gradients. This form of threat is known as a Byzantine attack. Such attacks can degrade model performance, disrupt the training process, and even lead to data or model leakage, posing significant security risks. Enhancing Byzantine robustness while simultaneously preserving privacy remains an open research problem.
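To make the threat concrete, here is a minimal sketch (plain NumPy, with made-up gradient values) contrasting naive mean aggregation with the coordinate-wise median, one well-known Byzantine-robust aggregation rule. A single falsified upload can drag the mean arbitrarily far from the honest gradients, while the median stays close to them. All function names and numbers below are illustrative, not taken from any particular system.

```python
import numpy as np

def mean_aggregate(gradients):
    """Naive federated aggregation: element-wise average of uploads."""
    return np.mean(gradients, axis=0)

def median_aggregate(gradients):
    """Coordinate-wise median, a classic Byzantine-robust rule."""
    return np.median(gradients, axis=0)

# Hypothetical training round with 5 participants; one is Byzantine.
rng = np.random.default_rng(0)
honest = [rng.normal(loc=1.0, scale=0.1, size=4) for _ in range(4)]
byzantine = [np.full(4, -100.0)]  # deliberately falsified gradient
uploads = np.stack(honest + byzantine)

print(mean_aggregate(uploads))    # pulled far below ~1.0 by the attacker
print(median_aggregate(uploads))  # stays near the honest gradients
```

Robust rules like the coordinate-wise median (or trimmed mean, Krum, and similar estimators) need to compare or inspect individual uploads, which is exactly what strong privacy mechanisms such as secure aggregation try to hide; this tension is the core of the open problem described above.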
