Federated learning is a machine learning technique that lets multiple devices collaboratively train a shared model without exchanging their raw data. Each device trains a local model on its own data, and the devices (typically coordinated by a central server) then aggregate the local models into a global model.
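The aggregation step described above can be sketched as weighted model averaging, in the style of federated averaging (FedAvg). This is a minimal illustration, not a production protocol: the linear model, one-step local update, and synthetic data are all assumptions for the example.

```python
import numpy as np

def local_train(weights, data, labels, lr=0.1):
    # Hypothetical one-step local update for a linear model:
    # a single gradient step on mean-squared error over the
    # client's own (private) data.
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(client_weights, client_sizes):
    # Aggregate: weight each client's model by its dataset size,
    # then average into a new global model.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(3)
# Four simulated clients, each holding its own local dataset.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(5):  # five federated training rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Note that only model weights cross the device boundary here; the `(X, y)` data never leaves each simulated client, which is the core privacy property federated learning aims for.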
While federated learning has many advantages, it also introduces security vulnerabilities that attackers can exploit to infer sensitive data or to disrupt the training process.
Some of the most common threats and vulnerabilities in federated learning include:
Data poisoning: In data poisoning, an attacker intentionally corrupts the training data on a compromised device (for example, by flipping labels), so that the local update it contributes degrades the global model.
Model poisoning: In model poisoning, an attacker skips the data and directly crafts malicious model updates, submitting them for aggregation in order to steer the global model's behavior (for example, to plant a backdoor).
Sybil attacks: In Sybil attacks, an attacker registers many fake devices (identities), giving its malicious updates disproportionate influence over the aggregated model.
Coordination attacks: In coordination attacks, multiple attackers collude, spreading a malicious contribution across their updates so that each one individually looks benign while their combined effect compromises the global model.
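A toy numerical sketch makes the poisoning and Sybil threats above concrete. The setup is assumed for illustration: honest clients report one-dimensional weights near 1.0, and the aggregator naively averages all submitted updates with no robustness checks.

```python
import numpy as np

def federated_average(updates):
    # Naive aggregation: plain unweighted mean of all client updates.
    return np.mean(updates, axis=0)

# Three honest clients whose local models agree closely.
honest = [np.array([1.0]), np.array([1.1]), np.array([0.9])]

clean = federated_average(honest)

# Model poisoning: one attacker submits a single, heavily scaled update.
poisoned = federated_average(honest + [np.array([100.0])])

# Sybil attack: the same malicious update, submitted from ten fake devices,
# pulls the average even further from the honest consensus.
sybil = federated_average(honest + [np.array([100.0])] * 10)

print(clean, poisoned, sybil)  # the average drifts further in each case
```

Because a plain mean has no bounded influence per client, a single outlier update shifts it arbitrarily, and Sybil replication amplifies the shift; this is why robust aggregation rules (e.g., coordinate-wise median or trimmed mean) are commonly proposed as defenses.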