News
[07/2023] One first-authored paper is accepted by ICCV 2023.
[03/2023] One first-authored paper is accepted by CVPR 2023 (Highlight; 2.5% acceptance rate).
|
Selected Publications
Papers are sorted by recency; * denotes equal contribution.
|
|
Compress & Align: Curating Image-Text Data with Human Knowledge
Lei Zhang,
Fangxun Shu,
Sucheng Ren,
Bingchen Zhao,
Hao Jiang,
Cihang Xie
Preprint, 2023
ArXiv
/
BibTeX
Introduce a human-knowledge-based algorithm to address the alignment and efficiency of large-scale image-text data. It preserves model performance while compressing image-text datasets by up to ~90%.
|
|
Audio-Visual LLM for Video Understanding
Fangxun Shu*,
Lei Zhang*,
Hao Jiang,
Cihang Xie
Preprint, 2023
ArXiv
/
BibTeX
Develop an Audio-Visual LLM that takes both visual and audio inputs for holistic video understanding and reasoning. Two key designs: modality-augmented training and GPT-4-assisted instruction generation. It attains strong performance on understanding and reasoning tasks.
|
|
Towards Fairness-aware Adversarial Network Pruning
Lei Zhang, Zhibo Wang, Xiaowei Dong, Yunhe Feng, Xiaoyi Pang, Zhifei Zhang, Kui Ren
ICCV, 2023
ArXiv /
BibTeX
Propose an adversarial fairness-aware network pruning method that jointly optimizes the pruning and debiasing tasks via adversarial training. It improves fairness by around 50% compared to traditional pruning methods.
|
|
Accelerate Dataset Distillation via Model Augmentation
Lei Zhang,
Jie Zhang,
Bowen Lei,
Subhabrata Mukherjee,
Xiang Pan,
Bo Zhao,
Caiwen Ding,
Yao Li,
Dongkuan Xu
CVPR, 2023 (Highlight)
ArXiv /
Supplementary Material /
Code /
BibTeX
Propose two model augmentation techniques, i.e., using early-stage models and weight perturbation, to learn an informative synthetic set at significantly reduced training cost. Extensive experiments demonstrate that our method achieves up to 20× speedup with performance on par with state-of-the-art baselines.
|
|
Towards Efficient Data Free Black-Box Adversarial Attack
Jie Zhang,
Bo Li,
Jianghe Xu,
Shuang Wu,
Shouhong Ding,
Lei Zhang,
Chao Wu
CVPR, 2022
ArXiv /
Code /
BibTeX
By rethinking the collaborative relationship between the generator and the substitute model, we design a novel black-box attack framework. The proposed method efficiently imitates the target model with a small number of queries and achieves a high attack success rate.
|
|
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning
Jie Zhang*,
Lei Zhang*,
Gang Li,
Chao Wu
ICIP, 2022
ArXiv /
BibTeX
Provide a new perspective on handling imbalanced data: adjust the biased decision boundary by training with Guiding Adversarial Examples (GAEs).
|
Experiences
|
Professional Services
Conference Reviewer: ICML 2024, ECCV 2024, CVPR 2024, NeurIPS 2024, CVPR 2023
Journal Reviewer: IEEE TPAMI
|
|