Conference Paper


Self-supervised pretraining for object detection in autonomous driving

Abstract

The detection of road agents, such as vehicles and pedestrians, is central to autonomous driving. Self-Supervised Learning (SSL) has proven to be an effective technique for learning discriminative feature representations for image classification, alleviating the need for labels, a remarkable advancement considering how time-consuming and expensive labeling can be in autonomous driving. In this paper, we investigate the effectiveness of contrastive SSL techniques such as BYOL and MOCO on the object (agent) detection task using the ROad event Awareness Dataset (ROAD) and BDD100K benchmarks. Our experiments show that self-supervised pretraining yields improvements of 3.96 and 0.78 percentage points on the AP50 metric for object detection on the ROAD and BDD100K benchmarks, respectively, compared to supervised pretraining. Extensive comparisons and evaluations of current state-of-the-art SSL methods (namely MOCO, BYOL, SCRL) are conducted and reported for the object detection task.
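To give a concrete idea of the contrastive pretraining objective the abstract refers to, the sketch below shows a MoCo-style InfoNCE loss: each encoded image ("query") is pulled towards the encoding of an augmented view of the same image ("positive key") and pushed away from a queue of encodings of other images ("negative keys"). This is a minimal PyTorch illustration with illustrative tensor shapes and a commonly used default temperature; it is not the authors' implementation.

    # Minimal sketch of an InfoNCE-style contrastive objective (MoCo-like).
    # Shapes, queue size and temperature are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def info_nce_loss(query, positive_key, negative_keys, temperature=0.07):
        """Pull `query` towards `positive_key`, push it away from `negative_keys`."""
        query = F.normalize(query, dim=1)                # (N, D)
        positive_key = F.normalize(positive_key, dim=1)  # (N, D)
        negative_keys = F.normalize(negative_keys, dim=1)  # (K, D)

        # Positive logits: similarity between each query and its own positive key.
        l_pos = torch.sum(query * positive_key, dim=1, keepdim=True)  # (N, 1)
        # Negative logits: similarity between each query and all negative keys.
        l_neg = query @ negative_keys.T                               # (N, K)

        logits = torch.cat([l_pos, l_neg], dim=1) / temperature       # (N, 1 + K)
        # The positive key sits at index 0 of each row of logits.
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
        return F.cross_entropy(logits, labels)

    # Random features standing in for query/key encoder outputs.
    q = torch.randn(8, 128)         # query encoder features
    k_pos = torch.randn(8, 128)     # key encoder features (augmented views)
    k_neg = torch.randn(4096, 128)  # queue of negative keys
    loss = info_nce_loss(q, k_pos, k_neg)

A backbone pretrained with such an objective can then be used to initialize the object detector that is fine-tuned on ROAD or BDD100K, which is the setting compared against supervised pretraining in the abstract.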

Authors

Kanacı, Aytaç
Teeti, Izzedin
Bradley, Andrew
Cuzzolin, Fabio

Oxford Brookes departments

School of Engineering, Computing and Mathematics

Dates

Year of publication: 2022
Date of RADAR deposit: 2022-10-19


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Related resources

This RADAR resource is identical to Self-supervised pretraining for object detection in autonomous driving

Details

  • Owner: Joseph Ripp
  • Collection: Outputs
  • Version: 1
  • Status: Live