Hi! This is Shilong Liu, 刘世隆 in Chinese. I’m a
ceil(now() - 2020.9)th-year Ph.D. candidate at the Department of Computer Science and Technology, Tsinghua University, under the supervision of Prof. Lei Zhang, Prof. Hang Su, and Prof. Jun Zhu. I received my bachelor's degree from the Department of Industrial Engineering, Tsinghua University, in 2020.
I was a summer intern at Microsoft Research, Redmond, from June to September 2023, under the supervision of Dr. Chunyuan Li and Dr. Hao Cheng. I also collaborate closely with Dr. Jianwei Yang.
My research interests include computer vision, object detection, and multi-modal learning.
Contact me by email: slongliu86 AT gmail.com or liusl20 AT mails.tsinghua.edu.cn
|Nov 5, 2023
|I was named a CCF-CV Rising Star Scholar 2023 (CCF-CV 学术新锐学者; three recipients per year)! Thanks to the China Computer Federation.
|Mar 13, 2023
|We release Grounding DINO, a strong open-set object detection model that achieves state-of-the-art results on open-set detection tasks: 52.5 zero-shot AP on COCO detection without any COCO training data, and 63.0 AP on COCO after fine-tuning. Code and checkpoints will be available here.
|Sep 22, 2022
|We release detrex, a toolbox providing state-of-the-art Transformer-based detection algorithms, including an improved implementation of DINO. Welcome to use it!
- Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection. arXiv preprint arXiv:2303.05499, 2023. SOTA open-set object detector; 52.5 AP on COCO without COCO training data!
- DINO: DETR with Improved DeNoising Anchor Boxes for End-to-End Object Detection. 2023. The first DETR-based object detector to reach 1st place on the COCO detection leaderboard.
- DAB-DETR: Dynamic Anchor Boxes are Better Queries for DETR. In International Conference on Learning Representations, 2022. A deep analysis of DETR's queries, formulating them as anchor boxes.
- Query2Label: A Simple Transformer Way to Multi-Label Classification. 2021. A novel Transformer-based multi-label classification model achieving SOTA on four benchmarks.