Berfin Simsek
Faculty Fellow, NYU CDS and Flatiron CCM
Title / Cited by / Year
Implicit regularization of random feature models
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
International Conference on Machine Learning, 4631-4640, 2020
Cited by 88 · 2020
Geometry of the loss landscape in overparameterized neural networks: Symmetries and invariances
B Simsek, F Ged, A Jacot, F Spadaro, C Hongler, W Gerstner, J Brea
International Conference on Machine Learning, 9722-9732, 2021
Cited by 67 · 2021
Kernel alignment risk estimator: Risk prediction from training data
A Jacot, B Simsek, F Spadaro, C Hongler, F Gabriel
Advances in neural information processing systems 33, 15568-15578, 2020
Cited by 53 · 2020
Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape
J Brea, B Simsek, B Illing, W Gerstner
arXiv preprint arXiv:1907.02911, 2019
Cited by 48 · 2019
Saddle-to-saddle dynamics in deep linear networks: Small initialization training, symmetry, and sparsity
A Jacot, F Ged, B Şimşek, C Hongler, F Gabriel
arXiv preprint arXiv:2106.15933, 2021
Cited by 44* · 2021
Understanding out-of-distribution accuracies through quantifying difficulty of test samples
B Simsek, M Hall, L Sagun
arXiv preprint arXiv:2203.15100, 2022
Cited by 5 · 2022
MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
J Brea, F Martinelli, B Şimşek, W Gerstner
arXiv preprint arXiv:2301.10638, 2023
Cited by 4 · 2023
Expand-and-cluster: exact parameter recovery of neural networks
F Martinelli, B Simsek, J Brea, W Gerstner
arXiv preprint arXiv:2304.12794, 2023
Cited by 3 · 2023
Online bounded component analysis: A simple recurrent neural network with local update rule for unsupervised separation of dependent and independent sources
B Simsek, AT Erdogan
2019 53rd Asilomar Conference on Signals, Systems, and Computers, 1639-1643, 2019
Cited by 3 · 2019
Should Under-parameterized Student Networks Copy or Average Teacher Weights?
B Simsek, A Bendjeddou, W Gerstner, J Brea
Advances in Neural Information Processing Systems 36, 2024
Cited by 1 · 2024
Learning Associative Memories with Gradient Descent
V Cabannes, B Simsek, A Bietti
arXiv preprint arXiv:2402.18724, 2024
2024
The Loss Landscape of Shallow ReLU-like Neural Networks: Stationary Points, Saddle Escaping, and Network Embedding
Z Wu, B Simsek, F Ged
arXiv preprint arXiv:2402.05626, 2024
2024
Statistical physics, Bayesian inference and neural information processing
E Grant, S Nestler, B Şimşek, S Solla
arXiv preprint arXiv:2309.17006, 2023
2023
A Theory of Finite-Width Neural Networks: Generalization, Scaling Laws, and the Loss Landscape
B Simsek
EPFL, 2023
2023
CSFT
JR Fageot, LS Field, FR Gabriel, FG Ged, E Golikov, LPA Hardiman, ...