Small Unit Drone Optimization (SUDO)

This project aims to digitize human auditory and visual perception models developed through decades of study at the Army Research Lab, implementing them in a dynamic, easy-to-use, modern programming language. Our program accounts for factors such as environmental noise, a listener's hearing threshold, atmospheric conditions, ground composition, temperature, and 1/3-octave band levels across multiple frequencies (among others), and tracks the effects they have on auditory propagation loss. The resulting output determines a person's auditory detection range for any inputted target hardware.
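The core idea above can be sketched as code. The following is a minimal, illustrative model only: the band levels, absorption coefficients, and thresholds are hypothetical placeholder values, not ARL data, and the criterion used (spherical spreading plus a per-band atmospheric absorption term, compared against a hearing threshold and ambient noise floor) is a deliberately crude stand-in for the project's actual perception models.

```python
import math

# Hypothetical 1/3-octave band center frequencies (Hz) and example
# source levels (dB SPL at 1 m) for a small drone -- illustrative values only.
BANDS_HZ = [250, 500, 1000, 2000, 4000]
SOURCE_DB = [62.0, 60.0, 58.0, 55.0, 50.0]

# Assumed atmospheric absorption per band (dB per 100 m); real coefficients
# depend on temperature, humidity, and pressure (cf. ISO 9613-1).
ABSORPTION_DB_PER_100M = [0.1, 0.3, 0.5, 1.0, 2.5]

def received_level_db(source_db, absorption_db_per_100m, distance_m):
    """Source level minus spherical spreading loss and atmospheric absorption."""
    spreading = 20.0 * math.log10(distance_m)  # 6 dB per doubling of distance
    absorption = absorption_db_per_100m * distance_m / 100.0
    return source_db - spreading - absorption

def detectable(distance_m, hearing_threshold_db=20.0, ambient_db=30.0):
    """Audible if any band exceeds both the listener's hearing threshold
    and the ambient noise floor (a simplified detection criterion)."""
    for src, absorb in zip(SOURCE_DB, ABSORPTION_DB_PER_100M):
        level = received_level_db(src, absorb, distance_m)
        if level > max(hearing_threshold_db, ambient_db):
            return True
    return False

def detection_range_m(step_m=1.0, max_m=5000.0):
    """Step outward until the drone is no longer detectable."""
    d = step_m
    while d < max_m and detectable(d):
        d += step_m
    return d
```

With these placeholder values, the drone stops being audible a few tens of meters out; swapping in measured band levels and condition-dependent absorption would change the range substantially.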

This newly developed program will then be used to create a testing simulation in a 3D environment built with the Unity Engine, where the effects of visual obstacles on human perception can be simulated. The combined models will determine the statistical probability of a Small Unmanned Aerial System (SUAS, i.e., a drone) being detected by enemy soldiers at various distances. The program then warns the drone operator when detection by an enemy soldier is likely.
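The warning logic described above might be sketched as follows. The logistic curve, its parameters (`r50_m`, the hypothetical 50%-detection range, and `slope`), and the warning threshold are all illustrative assumptions, not the project's actual combined audio/visual model.

```python
import math

def detection_probability(distance_m, r50_m=300.0, slope=0.02):
    """Hypothetical logistic model of an observer detecting the drone:
    probability falls off with distance, crossing 0.5 at r50_m."""
    return 1.0 / (1.0 + math.exp(slope * (distance_m - r50_m)))

def warn_operator(distance_m, threshold=0.5):
    """Return a warning message when detection is likely (p >= threshold)."""
    p = detection_probability(distance_m)
    if p >= threshold:
        return f"WARNING: detection likely (p={p:.2f}) at {distance_m:.0f} m"
    return f"clear (p={p:.2f})"
```

In the actual system, the probability would come from the combined auditory and visual models rather than a single distance-based curve.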

In the future, we believe this work can be extended by exploring the capabilities of cameras and sensors onboard actual SUAS platforms, in order to dynamically locate targets and determine their distance, allowing the aircraft to direct itself along the flight path that best avoids detection.

Student Team
  • Jonathan Aguirre
  • Lloyd Castro
  • Jean Espinosa
  • Peter Han
  • George Hernandez
  • Hugo Izquierdo
  • Raymond Martinez
  • Bruck Negash
  • Jesus Perez Arias
Project Sponsor
Army Research Lab
Project Liaisons
  • Paul Fedele
  • Eric Holder
Faculty Advisors
  • Elaine Kang
  • David Krum