Adaptive Pooling in Multi-Instance Learning for Web Video Annotation

  • Yizhou Zhou,
  • Xiaoyan Sun,
  • Dong Liu,
  • Zhengjun Zha,
  • Wenjun Zeng

IEEE International Conference on Computer Vision, Workshop on Web-scale Vision and Social Media

Published by IEEE – Institute of Electrical and Electronics Engineers

Web videos are usually weakly annotated, i.e., a tag is associated with a video as long as the corresponding concept appears in some frame of that video, without indicating when or where it occurs. Such weak annotations pose significant challenges for many Web video applications, e.g., search and recommendation. In this paper, we present a new Web video annotation approach based on multi-instance learning (MIL) with learnable pooling. By formulating Web video annotation as a MIL problem, we present an end-to-end deep network framework in which frame (instance) level annotations are estimated from tags given at the video (bag of instances) level via a convolutional neural network (CNN). A learnable pooling function is proposed to adaptively fuse the outputs of the CNN into tags at the video level. We further propose a new loss function that combines both bag-level and instance-level losses, which makes the penalty term aware of the internal state of the network rather than merely an overall loss, so that the pooling function is learned better and faster. Experimental results demonstrate that our proposed framework not only improves the accuracy of Web video annotation, outperforming state-of-the-art Web video annotation methods on the large-scale video benchmark FCVID, but also helps indicate the most relevant frames in Web videos.
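To make the idea concrete, below is a minimal PyTorch sketch of a learnable pooling layer over per-frame CNN outputs together with a combined bag-level/instance-level loss. This is an illustrative stand-in under stated assumptions, not the authors' exact formulation: the softmax-attention form of the pooling, the `alpha` weighting between the two loss terms, and the reuse of video tags as weak per-frame targets are all assumptions made for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnablePooling(nn.Module):
    """Attention-style learnable pooling over frame (instance) scores.

    Illustrative stand-in for an adaptive MIL pooling: a linear head
    scores each frame per tag, and a second head produces per-frame
    fusion weights that are learned end-to-end with the rest of the net.
    """
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.instance_head = nn.Linear(feat_dim, num_classes)  # per-frame tag logits
        self.attn = nn.Linear(feat_dim, 1)                     # per-frame fusion weight

    def forward(self, frame_feats):
        # frame_feats: (T, feat_dim) CNN features for one video (bag of T frames)
        inst_logits = self.instance_head(frame_feats)          # (T, C) instance level
        weights = F.softmax(self.attn(frame_feats), dim=0)     # (T, 1) learned weights
        bag_logits = (weights * inst_logits).sum(dim=0)        # (C,) video level
        return bag_logits, inst_logits, weights

def combined_loss(bag_logits, inst_logits, video_tags, alpha=0.5):
    """Bag-level BCE plus an instance-level term.

    Assumption: the instance term reuses the video-level tags as weak
    per-frame targets; `alpha` balancing is also an assumed choice.
    """
    bag_loss = F.binary_cross_entropy_with_logits(bag_logits, video_tags)
    inst_targets = video_tags.expand_as(inst_logits)           # (T, C) weak targets
    inst_loss = F.binary_cross_entropy_with_logits(inst_logits, inst_targets)
    return bag_loss + alpha * inst_loss

# Toy usage: one video of 8 frames, 5 candidate tags
feats = torch.randn(8, 512)
tags = torch.zeros(5)
tags[2] = 1.0
model = LearnablePooling(512, 5)
bag_logits, inst_logits, weights = model(feats)
loss = combined_loss(bag_logits, inst_logits, tags)
loss.backward()
```

A pooling of this kind also yields frame relevance for free: the learned per-frame weights can be read off directly to rank frames, which is one way such a model can localize the frames responsible for a video-level tag.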