Drone-vs-Bird Detection Challenge

in conjunction with the “3rd International workshop on small-drone surveillance, detection and counteraction techniques” (WOSDETC) of IEEE AVSS 2020, September 22nd-25th, Washington DC, USA.

Motivation and description

Small drones are a rising threat due to their possible misuse for illegal activities, such as drug smuggling, as well as for terrorist attacks using explosives or chemical weapons. Several surveillance and detection technologies are currently under investigation, with different trade-offs in complexity, range, and capabilities. The “International Workshop on Small-Drone Surveillance, Detection and Counteraction Techniques” (WOSDETC) aims to bring together researchers from academia and industry to share recent advances in this field. In conjunction with the workshop, the Drone-vs-Bird Detection Challenge is proposed. Given their characteristics, drones can easily be confused with birds, which makes surveillance tasks even more challenging, especially in maritime areas where bird populations may be massive. Video analytics can address the issue, but effective algorithms are needed that can operate under unfavorable conditions such as weak contrast, long range, and reduced visibility. Furthermore, practical systems require drones to be recognized at far distances, in order to allow time for reaction. Thus, very small objects must be detected and differentiated against structured background and other challenging image content.

The challenge aims to attract research efforts toward the problem outlined above, i.e., discrimination between birds and drones at far distance, by providing a video dataset that would otherwise be difficult to obtain (drone flights require special conditions and permissions, and shore areas are needed for the considered scenario). The goal of the challenge is to detect a drone appearing at some point in a short video sequence in which birds are also present: the algorithm should raise an alarm and provide a position estimate only when a drone is present, while not issuing alarms on birds. The dataset is continually extended over consecutive installments of the challenge and made available to the community afterwards.

Participation and joint workshop

The challenge is organized in conjunction with the WOSDETC workshop, which will guarantee publication of the papers associated with the best proposals. All participants in the challenge must submit score files with their results, as explained below, together with an abstract describing the applied methodology.

The best teams will extend their abstract to a full paper, which will be published in the conference proceedings and may also be presented in the workshop day session.

Publication of a paper in the main conference summarizing the overall challenge results, with all challenge participants included as authors, will be considered.

Participation in the workshop is of course possible independently of the challenge, following the standard submission and peer-review process.

Challenge organization and evaluation

Dataset

For this challenge an extended version of last year’s dataset will be made available, upon request and after signing a data user agreement. The dataset comprises a collection of videos where one or more drones enter the scene at some point. Annotation is provided in separate files in terms of frame number and bounding box of the target ([top_x top_y width height]) for those frames where drones are present.
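The exact file layout is provided with the dataset; as a minimal sketch, assuming each annotation line holds a frame number followed by the [top_x top_y width height] box as whitespace-separated integers, the files could be loaded as follows (the function name and line format are illustrative assumptions, not the official specification):

```python
def parse_annotations(path):
    """Parse a hypothetical annotation file with one
    'frame top_x top_y width height' line per drone box."""
    boxes = {}  # frame number -> list of [top_x, top_y, width, height] boxes
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue  # skip blank or malformed lines
            frame = int(parts[0])
            boxes.setdefault(frame, []).append([int(v) for v in parts[1:5]])
    return boxes
```

Frames without any annotation line simply do not appear in the returned dictionary, matching the convention that drones are annotated only in frames where they are present.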

Please send your request to wosdetc@googlegroups.com

Submission and evaluation procedure

Training data consisting of videos and annotations will be released at the beginning of the challenge to support the development of methods. Three days before the challenge deadline, a set of video sequences without annotations will be provided for testing. By the deadline, teams should submit one file for each test video, in a format similar to the annotation files. Submission files will provide the frame numbers and estimated drone bounding boxes ([top_x top_y width height]) together with detection confidence scores. For frames not reported in the files, no detection is assumed.
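A result file could be generated along these lines; this is a sketch under the assumption that each line carries the frame number, the box, and the confidence score as whitespace-separated fields (the helper name and exact field order are illustrative, not mandated by the challenge):

```python
def write_results(path, detections):
    """Write one 'frame top_x top_y width height score' line per detection.

    detections: iterable of (frame, [top_x, top_y, width, height], score)
    tuples. Frames with no detections are simply omitted, which the
    evaluation interprets as 'no drone present'.
    """
    with open(path, "w") as f:
        for frame, (x, y, w, h), score in sorted(detections):
            f.write(f"{frame} {x} {y} {w} {h} {score:.4f}\n")
```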

The challenge consists of two separate tracks. Track 1 will focus solely on accuracy, regardless of detection speed; in this track, teams only need to submit result files. Track 2 will take detection speed into account: a minimum speed on reference hardware will be defined (e.g., 10 FPS on an NVIDIA 1080 Ti GPU), and only submissions achieving this speed will be ranked by their accuracy. For submission, a Docker container will be provided; teams will deploy their inference code and model in the container and submit the whole container. The organizers will then run all containers on the reference hardware. Teams are free to submit to either or both tracks.

Developed algorithms should aim to localize drones accurately and generate bounding boxes as close as possible to the targets. For evaluation, the mean Average Precision (mAP) metric will be employed. The metric is well established in the field of object detection and well known from the COCO object detection challenge. It is based on the Intersection over Union (IoU) criterion for matching ground truth and detected object boxes.


Typically, a detection is counted as correct when its IoU with a ground truth box is above 0.5. The mAP summarizes a whole precision-recall curve into a single metric; it thus encompasses the various precision-recall trade-offs of a detector. While the final ranking will be based on overall mAP, a more detailed analysis of mAP for various object sizes will be carried out in the challenge summary paper, in order to identify the different strengths and weaknesses of the submitted approaches.
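The IoU criterion above can be computed directly from the [top_x top_y width height] box format used by the annotations; a minimal sketch (the function name is illustrative):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as
    [top_x, top_y, width, height]."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah  # bottom-right corner of box_a
    bx2, by2 = bx1 + bw, by1 + bh  # bottom-right corner of box_b
    # Overlap extent along each axis (zero if the boxes are disjoint)
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

A detection matching a ground truth box with `iou(...) > 0.5` would count as a true positive under the criterion described above; for small, distant drones this threshold is demanding, since even a few pixels of localization error sharply reduce the overlap.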

Each participating team will have to submit a summary description of their method together with their results. The winning teams will be invited to extend their summary into a full paper; submissions with interesting methodologies may be considered similarly. The summary will have to contain references to any public codebases used, a detailed specification of the applied model, and the training parameters. The use of additional training data is permitted; however, the amount and nature of such data will have to be described in detail. Furthermore, teams relying on added data will be asked to submit an additional result obtained using only the provided training data. Nevertheless, the overall best achieved score will count towards the final challenge ranking.

Result and paper submission

Results must be submitted through the CMT web site activated for the workshop. See the submission page.

Important dates

Dataset release to participants: March 5th, 2020

Submission deadline: July 9th, 2020

Challenge results notification: July 15th, 2020

Best teams camera-ready: July 30th, 2020

Main organizers

Angelo Coluccia, University of Salento, Lecce, Italy

Alessio Fascista, University of Salento, Lecce, Italy

Arne Schumann, Fraunhofer Institute, Karlsruhe, Germany

Lars Sommer, Fraunhofer Institute, Karlsruhe, Germany

Anastasios Dimou, CERTH, Greece

Dimitrios Zarpalas, CERTH, Greece

Advisory Committee

Geert De Cubbert, Royal Military Academy, Belgium

Tomas Piatrik, Queen Mary University, London, UK

Marian Ghenescu, UTI Grup, Romania

Stamatis Samaras, Greece
