Witches' brew: Industrial scale data poisoning via gradient matching. J Geiping, L Fowl, WR Huang, W Czaja, G Taylor, M Moeller, T Goldstein. arXiv preprint arXiv:2009.02276, 2020. Cited by 211.
Adversarially robust distillation. M Goldblum, L Fowl, S Feizi, T Goldstein. Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 3996-4003, 2020. Cited by 209.
MetaPoison: Practical general-purpose clean-label data poisoning. WR Huang, J Geiping, L Fowl, G Taylor, T Goldstein. Advances in Neural Information Processing Systems 33, 12080-12091, 2020. Cited by 200.
Deep k-NN defense against clean-label data poisoning attacks. N Peri, N Gupta, WR Huang, L Fowl, C Zhu, S Feizi, T Goldstein, ... Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020 …, 2020. Cited by 126.
Strong data augmentation sanitizes poisoning and backdoor attacks without an accuracy tradeoff. E Borgnia, V Cherepanova, L Fowl, A Ghiasi, J Geiping, M Goldblum, ... ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021. Cited by 125.
Robbing the fed: Directly obtaining private data in federated learning with modified models. L Fowl, J Geiping, W Czaja, M Goldblum, T Goldstein. arXiv preprint arXiv:2110.13057, 2021. Cited by 122.
Adversarial examples make strong poisons. L Fowl, M Goldblum, P Chiang, J Geiping, W Czaja, T Goldstein. Advances in Neural Information Processing Systems 34, 30339-30351, 2021. Cited by 112.
Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch. H Souri, L Fowl, R Chellappa, M Goldblum, T Goldstein. Advances in Neural Information Processing Systems 35, 19165-19178, 2022. Cited by 102.
Unraveling meta-learning: Understanding feature representations for few-shot tasks. M Goldblum, S Reich, L Fowl, R Ni, V Cherepanova, T Goldstein. International Conference on Machine Learning, 3607-3616, 2020. Cited by 84.
Adversarially robust few-shot learning: A meta-learning approach. M Goldblum, L Fowl, T Goldstein. Advances in Neural Information Processing Systems 33, 17886-17895, 2020. Cited by 84.
What doesn't kill you makes you robust(er): How to adversarially train against data poisoning. J Geiping, L Fowl, G Somepalli, M Goldblum, M Moeller, T Goldstein. arXiv preprint arXiv:2102.13624, 2021. Cited by 76.
Fishing for user data in large-batch federated learning via gradient magnification. Y Wen, J Geiping, L Fowl, M Goldblum, T Goldstein. arXiv preprint arXiv:2202.00580, 2022. Cited by 73.
Understanding generalization through visualizations. WR Huang, Z Emam, M Goldblum, L Fowl, JK Terry, F Huang, T Goldstein. PMLR, 2020. Cited by 69.
Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective. G Somepalli, L Fowl, A Bansal, P Yeh-Chiang, Y Dar, R Baraniuk, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 63.
DP-InstaHide: Provably defusing poisoning and backdoor attacks with differentially private data augmentations. E Borgnia, J Geiping, V Cherepanova, L Fowl, A Gupta, A Ghiasi, ... arXiv preprint arXiv:2103.02079, 2021. Cited by 44.
Decepticons: Corrupted transformers breach privacy in federated learning for language models. L Fowl, J Geiping, S Reich, Y Wen, W Czaja, M Goldblum, T Goldstein. arXiv preprint arXiv:2201.12675, 2022. Cited by 42.
Preventing unauthorized use of proprietary data: Poisoning for secure dataset release. L Fowl, P Chiang, M Goldblum, J Geiping, A Bansal, W Czaja, T Goldstein. arXiv preprint arXiv:2103.02683, 2021. Cited by 40.
Robust few-shot learning with adversarially queried meta-learners. M Goldblum, L Fowl, T Goldstein. arXiv preprint arXiv:1910.00982, 2019. Cited by 14.
Poisons that are learned faster are more effective. P Sandoval-Segura, V Singla, L Fowl, J Geiping, M Goldblum, D Jacobs, ... Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 13.
Strong baseline defenses against clean-label poisoning attacks. N Gupta, WR Huang, L Fowl, C Zhu, S Feizi, T Goldstein, J Dickerson. 2019. Cited by 11.