MathWorks (Sensor Fusion for Autonomous Systems)

The goal of this project is to develop a sensor fusion algorithm for vehicle pose estimation using AI-based techniques. Vehicle pose refers to a vehicle's position and orientation. Vehicles typically employ a variety of on-board sensors, such as cameras and LiDAR, to gather environmental information and estimate vehicle pose. Traditionally, pose estimation has been performed with classical filtering techniques such as the Kalman filter; we are now exploring Machine Learning (ML) and Artificial Intelligence (AI) based methods for this estimation. In this project, we will design a software application that fuses data from multiple vehicle sensors and uses ML to analyze that data and estimate the vehicle's pose; a minimal illustrative sketch of such a pipeline appears after the list below. General components of the work to be done include:

  1. Identifying a limited set of sensor types and data that we consider most relevant to vehicle pose estimation.
  2. Identifying an appropriate ML method through a survey of recent literature and industry practices.
  3. Training and testing the chosen ML algorithm.
  4. Revising the software application with other appropriate sensor data, to determine whether additional sensor data adds value to our vehicle pose estimation method.
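
As an illustration only, the following Python sketch shows what a minimal learned fusion baseline could look like: hypothetical camera- and LiDAR-derived features are concatenated and a small neural network regresses a synthetic pose (x, y, heading). The feature dimensions, synthetic data, and MLP model here are assumptions for illustration, not the project's chosen sensors, features, or method.

```python
# Minimal sketch of a learned sensor-fusion baseline.
# Assumptions: feature sizes, synthetic data, and the MLP model are
# illustrative only and do not reflect the project's actual design.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-in per-frame features from two sensors (e.g., camera and LiDAR).
n_frames = 2000
camera_feats = rng.normal(size=(n_frames, 64))
lidar_feats = rng.normal(size=(n_frames, 32))

# "Fusion" here is simple feature concatenation before the learned model.
fused = np.hstack([camera_feats, lidar_feats])  # shape (n_frames, 96)

# Synthetic pose targets (x, y, heading) generated from the features so the
# model has a learnable relationship; real targets would come from ground truth.
true_weights = rng.normal(size=(fused.shape[1], 3))
pose = fused @ true_weights + 0.1 * rng.normal(size=(n_frames, 3))

X_train, X_test, y_train, y_test = train_test_split(
    fused, pose, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000,
                     random_state=0)
model.fit(X_train, y_train)
print("Held-out R^2:", model.score(X_test, y_test))
```

In the spirit of step 4 above, the same script could be rerun with features from an additional sensor appended to the fused vector to check whether the held-out error improves.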

The description of this project from MathWorks can be found here.

The group is divided into two teams:

Group 1: Computer Vision Team
Angel Tinajero
Gerardo Ibarra
Hagop Arabian
Jonathan Santos
Deepanker Seth

Group 2: Object Detection Team
Daniel Gallegos
Roberto Garcia
David Neilsen
Xiao Hang Wang
Patrick Emmanuel Sangalang

Team Leader: David Neilsen

Team Communication Leader: Hagop Arabian

Student Team
  • Hagop Arabian
  • Daniel Gallegos
  • Roberto Garcia
  • Gerardo Ibarra
  • David Neilsen
  • Patrick Sangalang
  • Jonathan Santos
  • Deepanker Seth
  • Angel Tinajero
  • Xiao Wang
Project Sponsor
MathWorks
Project Liaisons
  • Sumit Tandon
Faculty Advisors
  • Manveen Kaur