In this work, we aim to increase the effectiveness of surgical assistant robots. We intend to make assistant robots safer by making them aware of the surgeon's actions, so that they can take appropriate assisting actions. In other words, we aim to solve the problem of surgeon action detection in endoscopic videos. To this end, we introduce a challenging dataset for surgeon action detection in real-world endoscopic videos. Action classes are chosen based on feedback from surgeons and annotated by medical professionals. Given a video frame, we draw a bounding box around the surgical tool performing an action and label it with the corresponding action label. Finally, we present a frame-level action detection baseline model based on recent advances in object detection. Results on our new dataset show that it provides enough interesting challenges for future methods and can serve as a strong benchmark for research in surgeon action detection in endoscopic videos.
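The abstract describes the baseline only at a high level. As a rough illustration, the sketch below shows one way such a frame-level action detection baseline could be assembled from an off-the-shelf object detector (Faster R-CNN in torchvision), where the classification head predicts an action label for each detected tool bounding box. The choice of detector, the number of action classes, and the input resolution are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a frame-level action detection baseline built on a
# generic object detector. The detector (Faster R-CNN) and the class count
# are assumptions; the paper does not specify them here.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_ACTION_CLASSES = 21 + 1  # hypothetical number of action classes + background


def build_baseline(num_classes: int = NUM_ACTION_CLASSES):
    # Start from a COCO-pretrained detector and replace its classification
    # head so each detected tool box is labelled with an action class.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model


if __name__ == "__main__":
    model = build_baseline().eval()
    frame = torch.rand(3, 480, 640)        # stand-in for one endoscopic video frame
    with torch.no_grad():
        detections = model([frame])[0]     # dict with boxes, action labels, scores
    print(detections["boxes"].shape, detections["labels"].shape)
```

In this setup, training such a model amounts to standard object-detection fine-tuning, with the dataset's per-frame tool boxes and action labels supplied as targets.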
Permanent link to this resource: https://doi.org/10.48550/arXiv.2006.07164
Singh Bawa, Vivek; Singh, Gurkirt; Kaping'A, Francis; Skarga-Bandurova, Inna; Leporini, Alice; Landolfo, Carmela; Stabile, Armando; Setti, Francesco; Muradore, Riccardo; Oleari, Elettra; Cuzzolin, Fabio
School of Engineering, Computing and Mathematics
Year of publication: 2022
Date of RADAR deposit: 2022-07-08
http://arxiv.org/licenses/nonexclusive-distrib/1.0/