Principal Investigators
Inclusive Dates 
10/29/2023 to 02/29/2024

Background

Cartographic control is a critical component of making images of a target useful: each pixel must be mapped to the correct location on a surface, or the data are less useful than they could be. In planetary missions, this means images taken by spacecraft must be projected to the proper location on a surface. Placement can be based on Earth-based trajectory reconstruction, but uncertainties in such a reconstruction often lead to offsets of the same feature in different images. These offsets can be corrected by building a cartographic control network ("control net") and solving that network (a large matrix minimization problem) to produce a controlled image dataset. To create a control net, the location of the same feature is identified on ≥2 images, and software solves for where the spacecraft and instruments must have been for the feature to project to the same location in each image.
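The idea can be illustrated with a deliberately minimal sketch (this is not our production code, and the numbers and units are hypothetical): the same feature is measured in several images, each with an unknown pointing offset, and a least-squares solve recovers offsets that make the feature project to a single, consistent ground location.

```python
# Toy illustration of the control-network concept: one surface feature
# measured in three images, each with an unknown pointing offset.
# A least-squares solve recovers offsets so the feature projects to a
# single consistent ground location. All values are hypothetical.
import numpy as np
from scipy.optimize import least_squares

true_ground = np.array([100.0, 250.0])  # "true" feature location (arbitrary units)
true_offsets = np.array([[2.0, -1.0],   # per-image pointing errors
                         [-3.0, 0.5],
                         [1.0, 4.0]])

# Each image's measured projection of the feature: true location
# shifted by that image's pointing error.
measurements = true_ground + true_offsets

def residuals(params):
    # params = [ground_x, ground_y, off1_x, off1_y, off2_x, off2_y];
    # image 0's offset is held fixed at zero to anchor the network.
    ground = params[:2]
    offsets = np.vstack([[0.0, 0.0], params[2:].reshape(-1, 2)])
    predicted = ground + offsets
    return (predicted - measurements).ravel()

sol = least_squares(residuals, x0=np.zeros(6))
ground_est = sol.x[:2]  # solved ground location, consistent across images
```

A real control solve involves camera models, millions of tie points, and sparse matrix techniques, but the structure is the same: minimize the disagreement between where each image says a feature is and a single solved-for location.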

In previous IR&Ds, we developed efficient, unique, mostly automated methods to create control nets for large amounts of data that had not been controllable before: tens of terabytes of data, comprising >100,000 images, each up to 1 GB in size. Only through those previous IR&Ds were we able to create optimized code that has, so far, won less than $1 million in external funding, with more external proposals actively being written to make use of that tool. However, the tool was optimized for the specific case of sparse image coverage, where few images (≲10) overlap at any given location and the images tend to look similar (i.e., similar lighting and pixel scale). Such imaging campaigns are common for orbiter missions, such as the Mars Reconnaissance Orbiter, whose data our first awarded external grants use.

In this follow-up IR&D, we proposed to explore different methods to optimize our code to solve the opposite case: many images (up to hundreds), with potentially vastly different viewing and illumination geometries, overlapping at a single location. This is also a common problem in planetary imaging: for example, the MESSENGER mission to Mercury returned a quarter-million images with pixel scales from tens of meters to tens of kilometers, spanning lighting from dawn to noon to dusk. Hundreds of images can cover any given location, and our code before this IR&D would break down in that regime. Adapting our code for this alternative regime has the potential to open new avenues of external funding, both for analysis of archived data and for active and planned missions.

Approach

This work was divided into two primary components. The first was to investigate and implement a method to create and register tie points efficiently in the above-described imaging regime. The second was to investigate methods to create lists of images that should, a priori, be matched together in an automated way. An additional goal was to modernize some of the Python code in our codebase that no longer works with modern versions of Python (>3.7). The approach to all of these goals was to experiment with a few different ideas and gauge what did or did not work: what resulted in code that ran instead of crashing, ran in a reasonable amount of time, and produced a good-quality control net.

Accomplishments

The primary metric of success – that formerly intractable problems of dense imaging were controllable with the revised code – was met. The previous version of the code had several built-in optimizations for sparse imaging that, while they greatly improved its speed, broke down in cases of dense imaging. Those assumptions were removed, and different optimizations were created so that the code can operate in cases of dense imaging.

The second goal – investigating ways to pre-screen data that might best be controlled together, such as images with similar lighting or pixel scales – had not been completed by the time the IR&D expired, but work continues on this problem.
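One direction of the sort being explored can be sketched as simple binning: group images by coarse metadata so that only broadly similar images are considered for matching. The metadata fields, bin widths, and image records below are illustrative assumptions, not the actual tool.

```python
# Hypothetical pre-screening sketch: bin images by solar incidence angle
# and by decade of pixel scale, so only broadly similar images are
# candidates to be controlled together. All fields/values are illustrative.
import math
from collections import defaultdict

images = [
    {"id": "img_a", "incidence_deg": 35.0, "pixel_scale_m": 40.0},
    {"id": "img_b", "incidence_deg": 38.0, "pixel_scale_m": 55.0},
    {"id": "img_c", "incidence_deg": 80.0, "pixel_scale_m": 5000.0},
]

def bin_key(img, inc_bin_deg=15.0, scale_decades=1.0):
    """Coarse key: images sharing a key are candidates to match together."""
    return (int(img["incidence_deg"] // inc_bin_deg),
            int(math.log10(img["pixel_scale_m"]) // scale_decades))

groups = defaultdict(list)
for img in images:
    groups[bin_key(img)].append(img["id"])
# img_a and img_b land in the same bin (similar lighting, similar scale);
# img_c (high incidence, km-scale pixels) lands in its own bin.
```

In practice, bin boundaries, which metadata to use, and how to handle images near bin edges are exactly the open questions this goal was investigating.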

The third, ancillary goal of modernizing the code to work with new versions of Python also saw significant progress. This included "cleaning" the code to remove components that are no longer needed and consolidating frequently used components into separate functions and programs that can be called, and edited, in one place. As of this report, the code runs in modern versions of Python (3.12) without errors due to Python versioning.