Matthew Jagielski
Title · Cited by · Year
Manipulating machine learning: Poisoning attacks and countermeasures for regression learning
M Jagielski, A Oprea, B Biggio, C Liu, C Nita-Rotaru, B Li
2018 IEEE Symposium on Security and Privacy (SP), 19-35, 2018
Cited by 684 · 2018
Extracting Training Data from Large Language Models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
USENIX Security Symposium, 2021
Cited by 593 · 2021
Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
A Demontis, M Melis, M Pintor, M Jagielski, B Biggio, A Oprea, ...
28th USENIX Security Symposium (USENIX Security 19), 321-338, 2019
Cited by 283 · 2019
High accuracy and high fidelity extraction of neural networks
M Jagielski, N Carlini, D Berthelot, A Kurakin, N Papernot
Proceedings of the 29th USENIX Conference on Security Symposium, 1345-1362, 2020
Cited by 258* · 2020
Differentially private fair learning
M Jagielski, M Kearns, J Mao, A Oprea, A Roth, S Sharifi-Malvajerdi, ...
International Conference on Machine Learning, 3000-3008, 2019
Cited by 120 · 2019
Auditing differentially private machine learning: How private is private sgd?
M Jagielski, J Ullman, A Oprea
Advances in Neural Information Processing Systems 33, 22205-22216, 2020
Cited by 112 · 2020
Quantifying Memorization Across Neural Language Models
N Carlini, D Ippolito, M Jagielski, K Lee, F Tramer, C Zhang
arXiv preprint arXiv:2202.07646, 2022
Cited by 105 · 2022
Cryptanalytic extraction of neural network models
N Carlini, M Jagielski, I Mironov
Advances in Cryptology–CRYPTO 2020: 40th Annual International Cryptology …, 2020
Cited by 97 · 2020
Subpopulation data poisoning attacks
M Jagielski, G Severi, N Pousette Harger, A Oprea
Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications …, 2021
Cited by 54 · 2021
Extracting training data from diffusion models
N Carlini, J Hayes, M Nasr, M Jagielski, V Sehwag, F Tramèr, B Balle, ...
arXiv preprint arXiv:2301.13188, 2023
Cited by 49 · 2023
Threat Detection for Collaborative Adaptive Cruise Control in Connected Cars
M Jagielski, N Jones, CW Lin, C Nita-Rotaru, S Shiraishi
Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and …, 2018
Cited by 43 · 2018
Counterfactual Memorization in Neural Language Models
C Zhang, D Ippolito, K Lee, M Jagielski, F Tramèr, N Carlini
arXiv preprint arXiv:2112.12938, 2021
Cited by 30 · 2021
Secure communication channel establishment: TLS 1.3 (over TCP fast open) vs. QUIC
S Chen, S Jero, M Jagielski, A Boldyreva, C Nita-Rotaru
Computer Security–ESORICS 2019: 24th European Symposium on Research in …, 2019
Cited by 30* · 2019
Truth serum: Poisoning machine learning models to reveal their secrets
F Tramèr, R Shokri, A San Joaquin, H Le, M Jagielski, S Hong, N Carlini
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications …, 2022
Cited by 26 · 2022
Measuring Forgetting of Memorized Training Examples
M Jagielski, O Thakkar, F Tramèr, D Ippolito, K Lee, N Carlini, E Wallace, ...
arXiv preprint arXiv:2207.00099, 2022
Cited by 17 · 2022
Network and system level security in connected vehicle applications
H Liang, M Jagielski, B Zheng, CW Lin, E Kang, S Shiraishi, C Nita-Rotaru, ...
2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), 1-7, 2018
Cited by 17 · 2018
Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy
D Ippolito, F Tramèr, M Nasr, C Zhang, M Jagielski, K Lee, ...
arXiv preprint arXiv:2210.17546, 2022
Cited by 13 · 2022
PaLM 2 Technical Report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
Cited by 11 · 2023
Poisoning Web-Scale Training Datasets is Practical
N Carlini, M Jagielski, CA Choquette-Choo, D Paleka, W Pearce, ...
arXiv preprint arXiv:2302.10149, 2023
Cited by 11 · 2023
Debugging Differential Privacy: A Case Study for Privacy Auditing
F Tramer, A Terzis, T Steinke, S Song, M Jagielski, N Carlini
arXiv preprint arXiv:2202.12219, 2022
Cited by 10 · 2022
Articles 1–20