Real-time Event Detection Using 360 Cameras at the Edge

People

Timeline

  • Fall 2019-present

Project Description

The battlefield involves many events, e.g., enemy movement, the injury of a soldier, or an attack with some kind of weapon. It is important to detect these events and react in a timely manner in order to reduce losses and gain an advantage. There is already a body of work on detecting events in 2D videos. However, a 2D video captures events in only one direction, so several 2D cameras are needed to cover all possible directions. Even then, the setup is not optimal: the videos captured by different 2D cameras overlap, and the detection algorithm spends unnecessary computation on those overlapping regions. To capture events on the battlefield without wasting computation, it is better to perform event detection on 360 videos.

However, event detection in 360 videos poses its own challenges. First, objects in an equirectangular frame, the format in which most 360 videos are captured, are distorted depending on their position in the frame (see the projection sketch below), which makes it harder for a detection algorithm to learn the invariant features of those objects. Second, there is currently no dataset for event detection in 360 videos, so training a model for this task is non-trivial.

In this work, we developed a real-time event detection system for 360 videos. There are two main contributions. First, we find that SSD combined with an FPN at an input size of 1024×512 yields the best mAP. Second, we create a synthetic 360 video dataset to train the event detection model.
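
To make the distortion concrete: under the equirectangular projection, a viewing direction with longitude λ ∈ [−π, π] and latitude φ ∈ [−π/2, π/2] maps to pixel coordinates u = (λ/2π + 1/2)·W and v = (1/2 − φ/π)·H. Every row of the frame spans the full 360° of longitude, so an object of fixed angular size occupies roughly 1/cos φ times more horizontal pixels at latitude φ than it does at the equator. The following minimal Python sketch (the function names are ours, for illustration only) shows the mapping and the stretch factor:

    import numpy as np

    def sphere_to_equirect(lon, lat, width, height):
        """Map a viewing direction (longitude, latitude, in radians) to
        pixel coordinates in a width x height equirectangular frame."""
        u = (lon / (2 * np.pi) + 0.5) * width   # longitude -> column
        v = (0.5 - lat / np.pi) * height        # latitude  -> row
        return u, v

    def horizontal_stretch(lat):
        """Horizontal stretch at latitude `lat`: an object of fixed
        angular width covers 1/cos(lat) times more columns than it
        would at the equator."""
        return 1.0 / np.cos(lat)

    # The same object appears about 2x wider at 60° latitude than at
    # the equator; the stretch grows without bound toward the poles.
    for deg in (0, 30, 60, 80):
        print(f"{deg:2d}°: {horizontal_stretch(np.radians(deg)):.2f}x")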

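In the absence of labeled real 360 footage, one way to synthesize equirectangular training data is to take ordinary perspective images, for which labels are easy to obtain, and warp them onto an equirectangular canvas. The sketch below is a minimal illustration of that idea, not necessarily the exact pipeline used in this project: for every output pixel it computes the corresponding ray on the unit sphere, rotates the ray into a pinhole camera frame placed at a given yaw and pitch, projects it onto the image plane, and samples the source image with OpenCV's remap. The function name and parameter conventions are our own assumptions for this example.

    import numpy as np
    import cv2

    def perspective_to_equirect(persp, W, H, fov_deg=90.0,
                                yaw_deg=0.0, pitch_deg=0.0):
        """Warp a pinhole-camera image onto a W x H equirectangular
        canvas; pixels whose rays fall outside the camera frustum
        are left black."""
        h, w = persp.shape[:2]
        f = 0.5 * w / np.tan(np.radians(fov_deg) / 2)  # focal length (px)

        # Longitude/latitude of every equirectangular output pixel.
        lon = (np.arange(W) / W - 0.5) * 2 * np.pi
        lat = (0.5 - np.arange(H) / H) * np.pi
        lon, lat = np.meshgrid(lon, lat)

        # Unit rays in world coordinates (x right, y up, z forward).
        rays = np.stack([np.cos(lat) * np.sin(lon),
                         np.sin(lat),
                         np.cos(lat) * np.cos(lon)], axis=-1)

        # Rotate world rays into the camera frame (yaw about y,
        # then pitch about the camera's local x axis).
        q, p = np.radians(yaw_deg), np.radians(pitch_deg)
        Ry = np.array([[np.cos(q), 0, -np.sin(q)],
                       [0,         1,  0        ],
                       [np.sin(q), 0,  np.cos(q)]])
        Rx = np.array([[1,  0,          0         ],
                       [0,  np.cos(p), -np.sin(p)],
                       [0,  np.sin(p),  np.cos(p)]])
        cam = rays @ (Rx @ Ry).T

        # Pinhole projection; only rays in front of the camera count.
        valid = cam[..., 2] > 1e-6
        z = np.where(valid, cam[..., 2], 1.0)
        u = f * cam[..., 0] / z + w / 2
        v = -f * cam[..., 1] / z + h / 2
        valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)

        map_x = np.where(valid, u, -1).astype(np.float32)
        map_y = np.where(valid, v, -1).astype(np.float32)
        return cv2.remap(persp, map_x, map_y, cv2.INTER_LINEAR,
                         borderMode=cv2.BORDER_CONSTANT)

    # Example: place a (hypothetical) labeled perspective frame near the
    # top of a 1024×512 panorama, where distortion is strongest.
    # frame = cv2.imread("labeled_frame.png")  # hypothetical input
    # canvas = perspective_to_equirect(frame, W=1024, H=512,
    #                                  fov_deg=90, yaw_deg=45, pitch_deg=50)

Ground-truth bounding boxes can be passed through the same mapping, so the synthetic labels exhibit the same position-dependent distortion the detector must learn.
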
Publications

  • Bo Chen and Klara Nahrstedt, "EScALation: A Framework for Efficient and Scalable Spatio-temporal Action Localization," ACM Multimedia Systems Conference, 2021

Funding Agencies

This project is supported by funding from the Army Research Lab.