Poisoning Attacks
Poisoning attacks vary with the attacker's objective: the attacker may launch a random attack or a targeted one. In a random attack, the aim is simply to degrade the FL model's accuracy. In a targeted attack, the attacker influences the model to output wrong labels, i.e., labels of the attacker's choosing, in pursuit of a specific goal. Poisoning can happen in two ways: data poisoning (during local data collection) and model poisoning (during the model training process).
- Data Poisoning: here the attacker corrupts the training data, typically by changing labels, and may also modify individual features or small portions of the training samples. This attack is generally carried out by FL clients/participants, and its impact depends on how many participants are compromised and how much of their data is poisoned.
- Model Poisoning: here the attacker poisons the local model updates before sending them to the server, or tries to insert a backdoor into the global model to corrupt it. Model poisoning has a greater impact than data poisoning, since tweaking the model directly changes its behavior and causes it to misclassify data. These attacks are even more effective when the attacker evades detection, for example by using an alternating minimization strategy that alternately optimizes the training loss and the adversarial objective.
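To make the data poisoning case concrete, here is a minimal sketch of label flipping, the most common form of this attack. All names (`flip_labels`, the toy dataset) are illustrative assumptions, and the dataset is simplified to a list of `(features, label)` pairs rather than a real training set:

```python
import random

def flip_labels(dataset, source_label, target_label, flip_fraction=0.5, seed=0):
    """Label-flipping data poisoning: a malicious FL client relabels a
    fraction of its samples with `source_label` as `target_label`
    before local training begins."""
    rng = random.Random(seed)
    poisoned = []
    for features, label in dataset:
        if label == source_label and rng.random() < flip_fraction:
            label = target_label  # attacker-chosen wrong label
        poisoned.append((features, label))
    return poisoned

# Toy client dataset: dummy feature vectors with labels 0 and 1.
clean = [([0.1, 0.2], 0), ([0.3, 0.4], 0), ([0.5, 0.6], 1), ([0.7, 0.8], 1)]

# Flip every label-1 sample to label 0 (a targeted attack on class 1).
dirty = flip_labels(clean, source_label=1, target_label=0, flip_fraction=1.0)
```

The `flip_fraction` parameter reflects the point made above: the attack's impact scales with how much of the participants' data the attacker can poison.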
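For model poisoning, one well-known tactic is explicit boosting: the attacker scales its malicious update so that server-side averaging does not wash it out. The sketch below is a simplified illustration under assumed conventions (weights as plain lists of floats, one attacker among `num_clients`, honest clients returning the global weights unchanged); the function name `boost_update` is hypothetical:

```python
def boost_update(global_weights, malicious_weights, num_clients):
    """Model poisoning via explicit boosting: scale the attacker's
    update so that, after the server averages updates from
    `num_clients` participants, the global model lands (approximately)
    on the attacker's target weights."""
    return [g + num_clients * (m - g)
            for g, m in zip(global_weights, malicious_weights)]

# Toy example: one attacker among 10 clients.
global_w = [0.0, 0.0]
target_w = [1.0, -1.0]   # weights the attacker wants the global model to adopt
boosted = boost_update(global_w, target_w, num_clients=10)

# If the other 9 clients return the unchanged global weights,
# simple averaging recovers exactly the attacker's target.
averaged = [(9 * g + b) / 10 for g, b in zip(global_w, boosted)]
```

This illustrates why a single poisoned update can dominate an entire aggregation round, and why detection (which the alternating minimization strategy mentioned above is designed to evade) matters so much on the server side.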
Threats and vulnerabilities in Federated Learning
Prerequisites: Collaborative Learning – Federated Learning, Google Cloud Platform – Understanding Federated Learning on Cloud
In this article, we will review what federated learning is and its advantages over conventional machine learning algorithms. In the later part, we will try to understand the threats and vulnerabilities in the federated learning architecture in simple terms.