Code for the SECON 2023 attack paper "VagueGAN: A GAN-Based Data Poisoning Attack Against Federated Learning Systems" and the companion defense paper "A GAN-Based Data Poisoning Attack Against Federated Learning Systems and Its Countermeasure": data poisoning attacks against federated learning systems and a corresponding countermeasure.
- Create a virtual environment (Python 3.7).
- Install the dependencies inside the virtual environment with `pip install -r requirements.pip` (a setup sketch follows this list).
- If you plan to use the PCA defense, you will also need to install `matplotlib`. It is not required for running the experiments and is not included in the requirements file.
- We retain the interface for label-flipping attacks. If you plan to reproduce a label-flipping attack, you will need to modify `VagueGAN_attack.py`.
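A minimal environment setup sketch, assuming a Unix-like shell and that Python 3.7 is available as `python3.7` on the PATH (the interpreter name, the use of the built-in `venv` module, and the `venv` directory name are our choices, not requirements of this repository):

```bash
# Create and activate a Python 3.7 virtual environment
python3.7 -m venv venv
source venv/bin/activate

# Install the pinned dependencies
pip install -r requirements.pip

# Optional: only needed if you plan to run the PCA defense
pip install matplotlib
```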
We outline the steps required to execute the different experiments below.

First, generate the data distribution:

```
python generate_data_distribution.py
```

This downloads the datasets and generates a static distribution of the training and test data, so that experiments are consistent with one another.

Then generate the default models:

```
python generate_default_models.py
```

This generates an instance of all of the models used in the paper and saves them to disk.
- Federated learning system hyperparameters can be set in the `federated_learning/arguments.py` file.
- VagueGAN hyperparameters can be set in the `federated_learning/utils/main.py` file.
- Most specific experiment settings are located in the respective experiment files (see the following sections).
Running the VagueGAN attack (see the end-to-end sketch below): `python VagueGAN_attack.py`

Experiment settings for the attack and the defenses are set in `federated_learning/utils/main.py`, `sever.py`, and `pca_defense.py`.

Running the PCA defense: `python pca_defense.py`

Running the MCD defense: `python MCD_detection_metrics.py`
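A sketch of a full run, assuming the environment has been set up as above. We also assume the two defense scripts analyze the outputs saved by the attack run; the scripts' actual inputs are configured in the files listed above.

```bash
# One-time preparation: download the datasets, fix the data distribution, save the default models
python generate_data_distribution.py
python generate_default_models.py

# Run the VagueGAN poisoning attack
python VagueGAN_attack.py

# Analyze the run with the two defenses
python pca_defense.py            # PCA defense (needs matplotlib)
python MCD_detection_metrics.py  # MCD detection metrics
```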
If you have any questions or insights regarding this project, please contact us at 21120398@bjtu.edu.cn
If you find this code helpful in your research, please cite our attack paper or our defense paper:
```bibtex
@INPROCEEDINGS{10287523,
  author={Sun, Wei and Gao, Bo and Xiong, Ke and Lu, Yang and Wang, Yuwei},
  booktitle={2023 20th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)},
  title={VagueGAN: A GAN-Based Data Poisoning Attack Against Federated Learning Systems},
  year={2023},
  pages={321-329},
  keywords={Computer aided instruction;Federated learning;Distance learning;Training data;Generative adversarial networks;Data models;Sensors;Federated learning (FL);Security and Privacy;Generative Adversarial Networks (GAN)},
  doi={10.1109/SECON58729.2023.10287523}
}
```
or
```bibtex
@misc{sun2024ganbased,
  title={A GAN-Based Data Poisoning Attack Against Federated Learning Systems and Its Countermeasure},
  author={Wei Sun and Bo Gao and Ke Xiong and Yuwei Wang},
  year={2024},
  eprint={2405.11440},
  archivePrefix={arXiv},
  primaryClass={cs.CR}
}
```