Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. X. Yue, Z. Zheng, S. Zhang, Y. Gao, T. Darrell, K. Keutzer, A. Sangiovanni-Vincentelli. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. Cited by 85.
Prompt vision transformer for domain generalization. Z. Zheng, X. Yue, K. Wang, Y. You. arXiv preprint arXiv:2208.08914, 2022. Cited by 12.
Sparse-MLP: A fully-MLP architecture with conditional computation. Y. Lou, F. Xue, Z. Zheng, Y. You. arXiv preprint arXiv:2109.02008, 2021. Cited by 12.
Deeper vs wider: A revisit of transformer configuration. F. Xue, J. Chen, A. Sun, X. Ren, Z. Zheng, X. He, X. Jiang, Y. You. arXiv preprint arXiv:2205.10505, 2022. Cited by 5.
Multi-source few-shot domain adaptation. X. Yue, Z. Zheng, H. P. Das, K. Keutzer, A. Sangiovanni-Vincentelli. arXiv preprint arXiv:2109.12391, 2021. Cited by 5.
Cross-token Modeling with Conditional Computation. Y. Lou, F. Xue, Z. Zheng, Y. You. arXiv preprint arXiv:2109.02008, 2021. Cited by 4.
Scene-aware learning network for radar object detection. Z. Zheng, X. Yue, K. Keutzer, A. Sangiovanni-Vincentelli. Proceedings of the 2021 International Conference on Multimedia Retrieval (ICMR), 2021. Cited by 4.
Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models. Z. Zheng, M. Ma, K. Wang, Z. Qin, X. Yue, Y. You. arXiv preprint arXiv:2303.06628, 2023. Cited by 1.
To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis. F. Xue, Y. Fu, W. Zhou, Z. Zheng, Y. You. arXiv preprint arXiv:2305.13230, 2023.
Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline. Z. Zheng, X. Ren, F. Xue, Y. Luo, X. Jiang, Y. You. arXiv preprint arXiv:2305.13144, 2023.
InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning. Z. Qin, K. Wang, Z. Zheng, J. Gu, X. Peng, D. Zhou, Y. You. arXiv preprint arXiv:2303.04947, 2023.
CowClip: Reducing CTR Prediction Model Training Time from 12 hours to 10 minutes on 1 GPU. Z. Zheng, P. Xu, X. Zou, D. Tang, Z. Li, C. Xi, P. Wu, L. Zou, Y. Zhu, M. Chen, et al. arXiv preprint arXiv:2204.06240, 2022.