Convolutional-Feature Analysis and Control for Mobile Visual Scene Perception


N00014-17-1-2175



February 1, 2017 – January 31, 2021

$1,710,380


ONR

Mathematics, Computer and Information Sciences Division

PI: S. Ferrari

Co-PIs: M. Campbell, K. Weinberger


Project Description

This project develops a deep-learning Bayesian optimization framework, built on sparse convolutional features, for mobile cooperative scene perception. Because the majority of frames in a video are redundant and only a subset of pixels in each frame is informative, this research will develop a convolutional feature extraction technique that isolates task-relevant data via relevance backpropagation. Models of task-relevant objects and scene attributes will be learned from available physics-based models and generative data-driven models. Obtaining scene models in addition to classifications will make it possible to infer intent and relationships, predict future actions, and provide a semantic scene interpretation to an operator in the loop. Furthermore, these models will be necessary for developing information-driven strategies by which multiple platforms collaboratively and actively acquire additional video.
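To illustrate the relevance-backpropagation idea mentioned above, the sketch below applies an epsilon-rule relevance propagation (in the style of layer-wise relevance propagation) to a tiny two-layer ReLU network. This is a minimal, hypothetical example, not the project's actual implementation: the network, weights, and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network standing in for a CNN's dense head;
# weights and dimensions are illustrative, not from the project.
W1 = rng.normal(size=(8, 4))   # input dim 8 -> hidden dim 4
W2 = rng.normal(size=(4, 2))   # hidden dim 4 -> output dim 2

def lrp_epsilon(a_in, W, R_out, eps=1e-6):
    """Epsilon-rule relevance backpropagation through one linear layer."""
    z = a_in @ W                                   # pre-activations
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilized denominator
    s = R_out / denom                              # relevance ratios
    return a_in * (s @ W.T)                        # relevance of the inputs

x = rng.normal(size=8)          # stand-in for a frame's pixel features
a1 = np.maximum(0.0, x @ W1)    # hidden ReLU activations
z2 = a1 @ W2                    # class scores

# Seed relevance with the score of the winning class only.
R2 = np.zeros_like(z2)
k = int(np.argmax(z2))
R2[k] = z2[k]

R1 = lrp_epsilon(a1, W2, R2)    # relevance at the hidden layer
R0 = lrp_epsilon(x, W1, R1)     # relevance at the input "pixels"

# Inputs with large |R0| are the task-relevant ones; relevance is
# (approximately) conserved: R0.sum() is close to R2.sum() = z2[k].
```

In this scheme, inputs with near-zero relevance could be discarded before transmission or fusion, which is one way the "only a subset of pixels is informative" observation can be operationalized.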

Research Goals

  • Extract mission-relevant data from videos with little or no prior knowledge of the scene.
  • Fuse spatio-temporal data with different viewpoints and changes in appearance, scale, illumination, and focus.
  • Extract and share compact models and classifications autonomously, with little manually labeled data.
  • Operate robustly under dynamic and, possibly, disconnected communication topologies.

Peer-Reviewed Publications

  1. C. Liu and S. Ferrari, “Vision-guided Planning and Control for Autonomous Taxiing via Convolutional Neural Networks,” AIAA Guidance, Navigation, and Control (GNC) / Intelligent Systems (IS) Conference, January 2019. [PDF]
  2. J. Gemerek, S. Ferrari, B. H. Wang, and M. E. Campbell, “Video-guided Camera Control for Target Tracking and Following,” IFAC Conference on Cyber-Physical and Human Systems (CPHS), December 2018. [PDF]

Presentations

  1. “Vision-guided Planning and Control for Autonomous Taxiing via Convolutional Neural Networks,” AIAA Guidance, Navigation, and Control (GNC) / Intelligent Systems (IS) Conference, San Diego, CA, January 2019. [PDF]
  2. “Video-guided Camera Control for Target Tracking and Following,” IFAC Conference on Cyber-Physical and Human Systems (CPHS), Miami, FL, December 2018. [PDF]
  3. “Mobile Scene Perception via Convolutional Neural Networks,” ONR Science of Autonomy Program Review, Arlington, VA, August 2018. [PDF]