Conference Paper


Deep learning for detecting multiple space-time action tubes in videos

Abstract

In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. In the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap; in the second pass, these paths are temporally trimmed by enforcing label consistency across their constituent detection boxes. We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time.
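The first-pass linking step described above lends itself to a Viterbi-style dynamic program. The Python sketch below is purely illustrative and is not the authors' implementation: it links one detection box per frame into the path whose energy is the sum of class-specific box scores plus a weighted intersection-over-union term between boxes in consecutive frames. The function names (iou, link_action_path) and the overlap weight lam are assumptions made for this example.

import numpy as np


def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def link_action_path(boxes_per_frame, scores_per_frame, lam=1.0):
    """Link one box per frame into the maximum-energy action path.

    boxes_per_frame  : list over frames of (N_t, 4) arrays of boxes
    scores_per_frame : list over frames of (N_t,) arrays of class-specific scores
    lam              : weight of the pairwise spatial-overlap term (assumed)
    Returns a list of selected box indices, one per frame.
    """
    T = len(boxes_per_frame)
    # energy[t][i] = best cumulative energy of a path ending at box i of frame t
    energy = [np.asarray(scores_per_frame[0], dtype=float)]
    backptr = [np.zeros(len(boxes_per_frame[0]), dtype=int)]

    for t in range(1, T):
        prev_boxes, cur_boxes = boxes_per_frame[t - 1], boxes_per_frame[t]
        cur_scores = np.asarray(scores_per_frame[t], dtype=float)
        e_t = np.empty(len(cur_boxes))
        b_t = np.empty(len(cur_boxes), dtype=int)
        for i, box in enumerate(cur_boxes):
            # unary class score plus the best transition from the previous frame
            trans = np.array([energy[-1][j] + lam * iou(pb, box)
                              for j, pb in enumerate(prev_boxes)])
            b_t[i] = int(np.argmax(trans))
            e_t[i] = cur_scores[i] + trans[b_t[i]]
        energy.append(e_t)
        backptr.append(b_t)

    # backtrack from the highest-energy box in the last frame
    path = [int(np.argmax(energy[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]

The second pass mentioned in the abstract, temporal trimming via label consistency, would operate on the resulting path as a further dynamic program and is not shown here.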

Authors

Saha, Suman
Singh, Gurkirt
Sapienza, Michael
Torr, Philip
Cuzzolin, Fabio

Oxford Brookes departments

Faculty of Technology, Design and Environment / School of Engineering, Computing and Mathematics

Dates

Year of publication: 2016
Date of RADAR deposit: 2018-09-21



© 2016. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.


Related resources

This RADAR resource is the Version of Record of Deep learning for detecting multiple space-time action tubes in videos

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live