In an autonomous driving system, perception - the identification of features and objects in the environment - is crucial. Autonomous racing, in particular, features high speeds and small margins that demand rapid and accurate perception. During a race, the weather can change abruptly, causing significant degradation in perception and resulting in ineffective manoeuvres. To improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions - the collection of which is a protracted and costly process. However, recent developments in CycleGAN architectures allow the synthesis of highly realistic scenes under multiple weather conditions. To this end, we introduce an approach that uses CycleGAN-synthesised adverse-condition datasets in autonomous racing to improve the performance of four out of five state-of-the-art detectors by an average of 42.7 and 4.4 mean average precision (mAP) percentage points under night-time conditions and in the presence of droplets, respectively. Furthermore, we present a comparative analysis of the five object detectors, identifying the optimal pairing of detector and training data for use during autonomous racing in challenging conditions.
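The abstract relies on CycleGAN-style unpaired image-to-image translation between clear and adverse weather domains. A minimal sketch of the cycle-consistency objective at the heart of such training is shown below; the `G` and `F` functions here are hypothetical toy stand-ins (simple scalings) for the clear-to-adverse and adverse-to-clear generator networks, not the networks used in the paper.

```python
import numpy as np

# Hypothetical toy "generators". In the CycleGAN setting these would be
# convolutional networks mapping clear-weather images to adverse weather
# (e.g. night, droplets) and back; here they are trivial scalings chosen
# so that F exactly inverts G.
def G(x):
    """Toy stand-in for the clear -> adverse generator."""
    return 0.5 * x

def F(y):
    """Toy stand-in for the adverse -> clear generator."""
    return 2.0 * y

def cycle_consistency_loss(x, y):
    """L1 cycle-consistency term used when training CycleGANs:
    an image translated to the other domain and back should
    reconstruct the original image in both directions."""
    forward = np.abs(F(G(x)) - x).mean()   # clear -> adverse -> clear
    backward = np.abs(G(F(y)) - y).mean()  # adverse -> clear -> adverse
    return forward + backward

rng = np.random.default_rng(0)
x = rng.random((4, 8, 8, 3))  # toy batch of "clear" images (NHWC)
y = rng.random((4, 8, 8, 3))  # toy batch of "adverse" images
loss = cycle_consistency_loss(x, y)
print(loss)  # 0.0 here, since F inverts G exactly
```

In practice this term is combined with adversarial losses for each domain; minimising the cycle term is what lets the synthesised adverse-weather images retain the scene content (cars, track boundaries) needed to supervise the detectors.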
Izzeddin Teeti, Valentina Musat, Salman Khan, Alexander Rast, Fabio Cuzzolin, Andrew Bradley
School of Engineering, Computing and Mathematics
Year of publication: 2022
Date of RADAR deposit: 2022-10-19