# JPL Lunar Detection Pipeline
## Project Description
NASA's Lunar Reconnaissance Orbiter Camera (LROC) captures high-resolution imagery of the lunar surface, generating vast quantities of data that require analysis to identify scientifically significant geological features. Manual review of this imagery is time-intensive and limits the pace at which researchers can survey the Moon for features of interest such as craters, boulders, slopes, and other formations critical to understanding lunar geology and planning future exploration missions. Automated feature detection addresses this bottleneck by enabling rapid, systematic analysis of LROC imagery, allowing scientists to focus their expertise on interpreting results rather than initial identification tasks.
## Project Goals
The project's goal is to develop an automated machine learning pipeline that detects and classifies lunar surface features from LROC imagery with high accuracy and efficiency. The system leverages two state-of-the-art computer vision architectures: YOLOv11 for real-time object detection and Detectron2 for instance segmentation tasks. Both models are trained on annotated datasets of lunar features and evaluated through comprehensive performance metrics including precision, recall, and processing speed. Comparative analysis between the two architectures determines optimal use cases for each model, providing JPL with flexible tools suited to different operational requirements.
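The precision and recall metrics mentioned above can be sketched as follows. This is a minimal illustration with made-up counts, not the project's actual evaluation code; the function name and example numbers are assumptions.

```python
# Hedged sketch: precision and recall from hypothetical counts of true
# positives (tp), false positives (fp), and false negatives (fn) for a
# single feature class such as craters.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall); guards against zero denominators."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts: 80 correct crater detections, 20 spurious, 10 missed.
p, r = precision_recall(tp=80, fp=20, fn=10)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.80 recall=0.89
```

Precision measures how many detections are real features; recall measures how many real features the model finds. Comparing these per class for YOLOv11 and Detectron2 is what drives the comparative analysis described above.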
## Technical Implementation
The machine learning pipeline processes raw LROC imagery through preprocessing stages that normalize image data and prepare it for model inference. YOLOv11 and Detectron2 models execute feature detection, generating bounding boxes, classification labels, and confidence scores for identified objects. The pipeline handles NASA's specialized image formats and accommodates the unique challenges of lunar imagery such as varying illumination angles, shadowing effects, and complex terrain topography. Post-processing stages filter results based on confidence thresholds and eliminate duplicate detections. The output includes annotated imagery with detected features highlighted, CSV files containing feature metadata (coordinates, dimensions, classifications), and performance metrics for model evaluation. This automated workflow significantly accelerates analysis compared to manual review, enabling systematic surveys of large lunar regions and rapid identification of areas warranting further investigation for mission planning or scientific study.
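The post-processing stage described above can be sketched as a confidence filter followed by greedy IoU-based non-maximum suppression to drop duplicate detections. The box format, threshold values, and function names here are illustrative assumptions, not the pipeline's actual API.

```python
# Hedged sketch of post-processing: drop detections below a confidence
# threshold, then suppress duplicates of the same class with greedy
# non-maximum suppression. All names and thresholds are assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def postprocess(dets, conf_thresh=0.5, iou_thresh=0.5):
    """dets: list of (box, score, label). Returns filtered, deduplicated list."""
    kept = []
    # Visit detections in descending confidence so the strongest survives.
    for det in sorted((d for d in dets if d[1] >= conf_thresh),
                      key=lambda d: d[1], reverse=True):
        if all(det[2] != k[2] or iou(det[0], k[0]) < iou_thresh for k in kept):
            kept.append(det)
    return kept

# Two overlapping "crater" boxes plus one low-confidence detection:
dets = [((10, 10, 50, 50), 0.9, "crater"),
        ((12, 12, 52, 52), 0.8, "crater"),      # duplicate of the first
        ((100, 100, 140, 140), 0.3, "boulder")]  # below threshold
print(len(postprocess(dets)))  # 1
```

The surviving detections would then feed the CSV metadata export (coordinates, dimensions, classifications) described above.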
## Tech Stack
| Component | Technology |
|-----------|------------|
| Object Detection | YOLOv11 |
| Instance Segmentation | Detectron2 (Mask R-CNN) |
| Image Processing | Python (OpenCV, Pillow) |
| Deep Learning Framework | PyTorch |
| Data Format Handling | NASA PDS Standards |
## Team
- Norma Argueta
- Garen Artsrounian
- Tailsy Bobadilla
- Diego De La Fuente
- Edward Garcia-Cuevas
- Yamilena Hernandez
- Rodrigo Martell
- Erick Nava
- Anthony Sanchez-Espindola
- Andy Su
- Kevin Truong
- Andrew Wun
