Human Perceptual Models

Leveraging Models of Human Perception and Sensor Properties to Effectively Portray Information and Optimally Position Small Unmanned Aerial Systems (SUAS) in Complex Urban Environments

In training, tactical, and advanced automation contexts, there is a need to better understand how to optimally position and maneuver Small Unmanned Aerial Systems (SUAS) to effectively conduct Intelligence, Surveillance, and Reconnaissance (ISR); provide overwatch of obstacles, engagement areas, or avenues of approach; or provide “around the corner” look-ahead support within the effective range of onboard sensors, all while minimizing the chances of detection and disruption (e.g., operating within sensor range but outside of human detection or counter-measure range). The goal of this project is to integrate models of human audio and visual perception, given variations in urban contextual parameters (e.g., ambient noise, structures, and lighting), with models of SUAS specifications (visual and auditory signatures) and onboard sensors (IR, camera, audio) to portray effective visualizations of dynamic optimal employment zones across platforms of use (virtual reality, augmented reality, 2D). The models and visualizations will be empirically tested for utility and usability and tailored to support training and tactical applications. They will also be evaluated for use in advanced SUAS control automation.
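The core positioning criterion above ("within sensor range but outside of human detection or counter-measure range") can be illustrated as a simple geometric test. The function, the noise-adjustment rule, and all parameter values below are illustrative assumptions for exposition only, not part of the project's actual models:

```python
import math

def in_employment_zone(suas_pos, target_pos, sensor_range_m,
                       base_detect_range_m, ambient_noise_db,
                       ref_noise_db=40.0):
    """Illustrative sketch: True if the SUAS is close enough for its
    onboard sensors to cover the target, yet far enough to stay beyond
    the (noise-adjusted) human detection range.

    Toy assumption: louder ambient noise masks the SUAS's acoustic
    signature, halving the detection range for every +6 dB of ambient
    noise above a reference level.
    """
    d = math.dist(suas_pos, target_pos)  # straight-line distance (m)
    detect_range = base_detect_range_m * 2 ** ((ref_noise_db - ambient_noise_db) / 6.0)
    return detect_range < d <= sensor_range_m

# A noisy urban street (52 dB) shrinks a 120 m acoustic detection
# range to 30 m, opening an employment zone at 100 m standoff:
print(in_employment_zone((0, 0), (100, 0), 150.0, 120.0, 52.0))  # True
# In a quiet environment (40 dB) the same standoff is detectable:
print(in_employment_zone((0, 0), (100, 0), 150.0, 120.0, 40.0))  # False
```

In the actual project such a test would be evaluated over a grid of candidate positions, with perception and sensor models far richer than this two-threshold annulus, to produce the dynamic employment-zone visualizations described above.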

Student Team
  • Jonathan Aguirre
  • Lloyd Castro
  • Jean Espinosa
  • Peter Han
  • George Hernandez
  • Hugo Izquierdo
  • Raymond Martinez
  • Bruck Negash
  • Jesus Perez Arias
Project Sponsor
Army Research Lab
Project Liaisons
Faculty Advisors
  • Elaine Kang
  • David Krum