Conference Paper


Predicting action tubes

Abstract

In this work, we present a method to predict an entire ‘action tube’ (a set of temporally linked bounding boxes) in a trimmed video by observing only a smaller subset of it. Predicting where an action is going to take place in the near future is essential to many computer vision-based applications, such as autonomous driving or surgical robotics. Importantly, it has to be done in real time and in an online fashion. We propose a Tube Prediction network (TPnet) which jointly predicts the past, present and future bounding boxes along with their action classification scores. At test time, TPnet is used in a (temporal) sliding-window setting, and its predictions are fed into a tube estimation framework to construct/predict video-long action tubes, not only for the observed part of the video but also for the unobserved part. Additionally, the proposed action tube predictor helps in completing action tubes for unobserved segments of the video. We quantitatively demonstrate the latter ability, and the fact that TPnet improves state-of-the-art detection performance, on one of the standard action detection benchmarks, the J-HMDB-21 dataset.
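The sliding-window prediction-and-linking idea described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the paper's implementation: `tpnet_stub` stands in for the actual TPnet CNN (which is not included in this record), and `predict_action_tube` is an invented helper showing how per-window present and future boxes could be accumulated into a tube that extends past the observed frames.

```python
import numpy as np

def tpnet_stub(window_frames, future_steps=5):
    """Hypothetical stand-in for TPnet. Given a temporal window of frames,
    return a present bounding box, a list of future bounding boxes, and an
    action classification score. Here we emit dummy boxes so the
    sliding-window logic is runnable end to end."""
    boxes = {
        "present": np.array([10.0, 10.0, 50.0, 50.0]),  # [x1, y1, x2, y2]
        "future": [np.array([10.0 + 2 * k, 10.0, 50.0 + 2 * k, 50.0])
                   for k in range(1, future_steps + 1)],
    }
    score = 0.9  # action classification confidence
    return boxes, score

def predict_action_tube(frames, window_size=8, future_steps=5):
    """Slide a temporal window over the observed frames and accumulate
    per-frame boxes into a single action tube; the future boxes extend
    the tube beyond the observed portion of the video. Keeping the
    highest-scoring box per frame is a crude stand-in for the tube
    estimation framework referred to in the abstract."""
    tube = {}  # frame index -> (box, score)
    n = len(frames)
    for start in range(0, n - window_size + 1):
        window = frames[start:start + window_size]
        boxes, score = tpnet_stub(window, future_steps)
        present_idx = start + window_size - 1
        if present_idx not in tube or score > tube[present_idx][1]:
            tube[present_idx] = (boxes["present"], score)
        for k, fbox in enumerate(boxes["future"], start=1):
            fidx = present_idx + k
            if fidx not in tube or score > tube[fidx][1]:
                tube[fidx] = (fbox, score)
    return tube

if __name__ == "__main__":
    frames = [np.zeros((112, 112, 3)) for _ in range(20)]  # 20 observed frames
    tube = predict_action_tube(frames)
    print(f"tube spans frames {min(tube)}..{max(tube)} "
          f"({max(tube) - (len(frames) - 1)} frames beyond the observed video)")
```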

Authors

Singh, Gurkirt
Saha, Suman
Cuzzolin, Fabio

Oxford Brookes departments

Faculty of Technology, Design and Environment / School of Engineering, Computing and Mathematics

Dates

Year of publication: 2019
Date of RADAR deposit: 2019-04-16


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Related resources

This RADAR resource is the Accepted Manuscript of ‘Predicting action tubes’.

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live
  • Views (since Sept 2022): 553