Bilkent University
Department of Computer Engineering
S E M I N A R


Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval


Asst. Prof. Dr. Gül Varol
École des Ponts ParisTech

Our objective in this work is video-text retrieval: in particular, a joint embedding that enables efficient text-to-video retrieval. The challenges in this area include the design of the visual architecture and the nature of the training data, in that the available large-scale video-text training datasets are noisy, and hence competitive performance is achieved only at scale through large amounts of compute. We address both of these challenges. We propose an end-to-end trainable model that is designed to take advantage of both large-scale image and video captioning datasets. Our model is an adaptation and extension of the recent ViT and TimeSformer architectures, and consists of attention in both space and time. The model is flexible and can be trained on both image-text and video-text datasets, either independently or in conjunction. It is trained with a curriculum learning schedule that begins by treating images as 'frozen' snapshots of video, and then gradually learns to attend to an increasing temporal context when trained on video datasets. We also provide a new video-text pretraining dataset, WebVid-2M, comprising over two million videos with weak captions scraped from the internet. Despite training on datasets that are an order of magnitude smaller, we show that this approach yields state-of-the-art results on standard downstream video-retrieval benchmarks including MSR-VTT, MSVD, DiDeMo and LSMDC. The paper can be accessed at https://arxiv.org/abs/2104.00650.
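The abstract mentions two mechanisms: attention factored over space and time (in the spirit of TimeSformer) and a curriculum that widens the temporal context, starting from a single 'frozen' frame. As a rough illustration only, below is a minimal PyTorch sketch of both ideas; the module names, tensor shapes, and the frame-doubling schedule are assumptions made for exposition, not the authors' implementation.

# Minimal sketch (illustrative, not the paper's code): divided space-time
# attention over frame patches, plus an assumed curriculum that starts from
# one frame (an image as a "frozen" snapshot) and doubles temporal context.
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    """One transformer block: temporal attention, then spatial attention."""
    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, frames, patches, dim)
        b, t, p, d = x.shape
        # Temporal attention: each spatial patch attends to itself across frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * p, t, d)
        n = self.norm1(xt)
        xt = xt + self.time_attn(n, n, n)[0]
        x = xt.reshape(b, p, t, d).permute(0, 2, 1, 3)
        # Spatial attention: each frame attends over its own patches.
        xs = x.reshape(b * t, p, d)
        n = self.norm2(xs)
        xs = xs + self.space_attn(n, n, n)[0]
        return xs.reshape(b, t, p, d)

def curriculum_num_frames(step: int, max_frames: int, steps_per_stage: int) -> int:
    """Assumed schedule: begin with 1 frame and double the temporal
    context at fixed step intervals, capped at max_frames."""
    return min(max_frames, 2 ** (step // steps_per_stage))

if __name__ == "__main__":
    block = DividedSpaceTimeBlock(dim=64, heads=4)
    for step in (0, 1000, 2000, 3000):
        t = curriculum_num_frames(step, max_frames=8, steps_per_stage=1000)
        video = torch.randn(2, t, 16, 64)  # (batch, frames, patches, dim)
        print(step, t, tuple(block(video).shape))

Running the snippet shows the same block processing 1, 2, 4, and then 8 frames as training progresses, mirroring the idea that an image is simply a one-frame video and that temporal attention is grown gradually.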

Bio: Gül is an Assistant Professor in the IMAGINE team at École des Ponts ParisTech. Previously, she was a postdoctoral researcher at the University of Oxford (VGG), working with Andrew Zisserman. She obtained her PhD from the WILLOW team of Inria Paris and École Normale Supérieure. Her thesis, co-advised by Ivan Laptev and Cordelia Schmid, received the ELLIS PhD award. Her research focuses on human understanding in videos, specifically action recognition, body shape and motion analysis, and sign language.


DATE: 15 April 2021, Thursday @ 11:00
PLACE: Zoom