Colin Wei
Verified email at stanford.edu
Title · Cited by · Year
Learning imbalanced datasets with label-distribution-aware margin loss
K Cao, C Wei, A Gaidon, N Arechiga, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 607 · 2019
Regularization matters: Generalization and optimization of neural nets vs their induced kernel
C Wei, JD Lee, Q Liu, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 192* · 2019
Towards explaining the regularization effect of initial large learning rate in training neural networks
Y Li, C Wei, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 171 · 2019
Generic 3D representation via pose estimation and matching
AR Zamir, T Wekel, P Agrawal, C Wei, J Malik, S Savarese
European Conference on Computer Vision, 535-553, 2016
Cited by 93 · 2016
Theoretical analysis of self-training with deep networks on unlabeled data
C Wei, K Shen, Y Chen, T Ma
arXiv preprint arXiv:2010.03622, 2020
Cited by 88 · 2020
Data-dependent sample complexity of deep neural networks via Lipschitz augmentation
C Wei, T Ma
Advances in Neural Information Processing Systems 32, 2019
Cited by 62 · 2019
The implicit and explicit regularization effects of dropout
C Wei, S Kakade, T Ma
International Conference on Machine Learning, 10181-10192, 2020
Cited by 56 · 2020
Provable guarantees for self-supervised deep learning with spectral contrastive loss
JZ HaoChen, C Wei, A Gaidon, T Ma
Advances in Neural Information Processing Systems 34, 5000-5011, 2021
Cited by 52 · 2021
Improved sample complexities for deep networks and robust classification via an all-layer margin
C Wei, T Ma
arXiv preprint arXiv:1910.04284, 2019
Cited by 52* · 2019
Shape matters: Understanding the implicit bias of the noise covariance
JZ HaoChen, C Wei, J Lee, T Ma
Conference on Learning Theory, 2315-2357, 2021
Cited by 38 · 2021
Self-training avoids using spurious features under domain shift
Y Chen, C Wei, A Kumar, T Ma
Advances in Neural Information Processing Systems 33, 21061-21071, 2020
Cited by 34 · 2020
Why do pretrained language models help in downstream tasks? An analysis of head and prompt tuning
C Wei, SM Xie, T Ma
Advances in Neural Information Processing Systems 34, 16158-16170, 2021
Cited by 16 · 2021
Markov chain truncation for doubly-intractable inference
C Wei, I Murray
Artificial Intelligence and Statistics, 776-784, 2017
Cited by 12 · 2017
Beyond separability: Analyzing the linear transferability of contrastive representations to related subpopulations
JZ HaoChen, C Wei, A Kumar, T Ma
arXiv preprint arXiv:2204.02683, 2022
Cited by 4 · 2022
Certified robustness for deep equilibrium models via interval bound propagation
C Wei, JZ Kolter
International Conference on Learning Representations, 2021
Cited by 4 · 2021
General bounds on satisfiability thresholds for random CSPs via Fourier analysis
C Wei, S Ermon
Proceedings of the AAAI Conference on Artificial Intelligence 31 (1), 2017
Cited by 3 · 2017
Statistically meaningful approximation: a case study on approximating Turing machines with transformers
C Wei, Y Chen, T Ma
arXiv preprint arXiv:2107.13163, 2021
Cited by 2 · 2021
Meta-learning transferable representations with a single target domain
H Liu, JZ HaoChen, C Wei, T Ma
arXiv preprint arXiv:2011.01418, 2020
Cited by 2 · 2020
Application of neural networks in the semantic parsing re-ranking problem
R Long, C Wei
Technical report, Stanford University, 2015
Cited by 1 · 2015
Max-Margin Works while Large Margin Fails: Generalization without Uniform Convergence
M Glasgow, C Wei, M Wootters, T Ma
arXiv preprint arXiv:2206.07892, 2022
2022