Introduction
Our workshop hosts three challenges on different video segmentation tasks: semi-supervised video object segmentation, video instance segmentation, and referring video object segmentation. The first two challenges are very similar to our previous challenges but use improved and augmented datasets (details). The third challenge aims to segment an object referred to by a given language expression in a video, and requires algorithms to understand video and language jointly.
Announcement
- Please check out our workshop schedule and challenge leaderboard
Dates
- May 20th: The final competition results will be announced and top teams will be invited to give oral/poster presentations at our CVPR 2021 workshop.
- May 5th - 14th: Test data is released and the submission of test results opens.
- Feb 15th: CodaLab websites open for registration. Training and validation data are released.
Tasks
- Track 1: Video Object Segmentation
- Track 2: Video Instance Segmentation
- Track 3: Referring Video Object Segmentation
Submission
- Track 1: Video Object Segmentation
- Track 2: Video Instance Segmentation
- Track 3: Referring Video Object Segmentation
Organizers
- Ning Xu, Adobe Research
- Linjie Yang, ByteDance AI Lab
- Yuchen Fan, UIUC
- Yang Fu, UIUC
- Weiyao Lin, SJTU, China
- Jianchao Yang, ByteDance AI Lab
- Humphrey Shi, UIUC
- Joon-Young Lee, Adobe Research
- Seonguk Seo, SNU, Korea
Contact
For dataset-related questions, please feel free to contact ytbvos@gmail.com. For challenge-related questions, you can also use the CodaLab forums.
Sponsors
Adobe | ByteDance | UIUC