A SEMI-RIGOROUS SENSOR MODEL FOR PRECISION GEOMETRIC PROCESSING OF MINI-RF BISTATIC RADAR IMAGES OF THE MOON

The spaceborne synthetic aperture radar (SAR) instruments known as Mini-RF were designed to image shadowed areas of the lunar poles and assay the presence of ice deposits by quantitative polarimetry. We have developed radargrammetric processing techniques to enhance the value of these observations by removing spacecraft ephemeris errors and distortions caused by topographic parallax so the polarimetry can be compared with other data sets. Here we report on the extension of this capability from monostatic imaging (signal transmitted and received on the same spacecraft) to bistatic (transmission from Earth and reception on the spacecraft) which provides a unique opportunity to measure radar scattering at nonzero phase angles. In either case our radargrammetric sensor models first reconstruct the observed range and Doppler frequency from recorded image coordinates, then determine the ground location with a corrected trajectory on a more detailed topographic surface. The essential difference for bistatic radar is that range and Doppler shift depend on the transmitter as well as receiver trajectory. Incidental differences include the preparation of the images in a different (map projected) coordinate system and use of “squint” (i.e., imaging at nonzero rather than zero Doppler shift) to achieve the desired phase angle. Our approach to the problem is to reconstruct the time-of-observation, range, and Doppler shift of the image pixel by pixel in terms of rigorous geometric optics, then fit these functions with low-order polynomials accurate to a small fraction of a pixel. Range and Doppler estimated by using these polynomials can then be georeferenced rigorously on a new surface with an updated trajectory. This “semi-rigorous” approach (based on rigorous physics but involving fitting functions) speeds the calculation and avoids the need to manage both the original and adjusted trajectory data. 
We demonstrate the improvement in registration of the bistatic images for Cabeus crater, where the LCROSS spacecraft impacted in 2009, and describe plans to precision-register the entire Mini-RF bistatic data collection.

Introduction: This abstract is one in a series [1-4] describing our development of techniques for radargrammetry (analogous to photogrammetry but taking account of the principles by which radar images are formed) and their application to mapping the Moon with Mini-RF images. Our overall goals are to use radar stereopairs to produce digital topographic models (DTMs) of medium resolution and broad coverage, and to control and orthorectify (project onto an existing DTM) images to produce image maps and mosaics with greatly improved positional accuracy. The previous abstracts in the series describe the general principles of radargrammetry and focus on the analysis of "standard" (monostatic) observations, which are obtained by transmitting and receiving the radar signal from the Mini-RF instrument in lunar orbit.
The present abstract focuses on bistatic observations, for which the transmitter is located on Earth and the receiver on the spacecraft. These observations are of tremendous scientific interest as part of the overall Mini-RF program of searching for ice deposits at the lunar poles [5,6] because the variation of signal strength with the phase angle between transmitter and receiver may distinguish between coherent backscatter in ice [7] and diffuse scattering by blocky surfaces [8]. Controlling the bistatic observations so they are geometrically registered to topographic data and then orthorectifying them to remove parallax distortions are essential steps toward detailed and quantitative analysis. Only by these means can the bistatic polarimetry measurements be corrected for slope effects and correlated with monostatic radar images and other remote sensing data such as optical and thermal images and altimetry on a pixel-by-pixel basis.
Source Data: NASA's Mini-RF investigation consists of two synthetic aperture radar (SAR) imagers for lunar remote sensing: the "Forerunner" Mini-SAR on ISRO's Chandrayaan-1 [5], which operated monostatically from 2008 to 2009, and the Mini-RF on the NASA Lunar Reconnaissance Orbiter (LRO) [6], which carried out monostatic observations from 2009 until its transmitter failed in December 2010. Both are designed to record the full polarization state of the received signal: 4 parameters, which can be treated as a 4-band image and combined in various ways to yield quantities of interest such as the total backscattered power (S1) and circular polarization ratio (CPR). Our previously described software and techniques are applicable to monostatic data from either instrument, but the bistatic observations analysed here have been obtained exclusively by LRO, receiving S-band (12.6 cm wavelength) signals transmitted from the Arecibo Observatory [9]. Following a low-power demonstration of concept in April 2011, approximately 32 bistatic observations have been obtained to date, covering both polar and (as a baseline for possible detection of polar ice) nonpolar targets, notably Cabeus and Kepler craters. The current LRO operations plan includes approximately two bistatic observations per month.
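As an illustrative sketch of how the Stokes parameters are combined into such quantities, the following uses the hybrid-polarimetric (circular-transmit) convention published for Mini-RF, in which CPR is the ratio of same-sense to opposite-sense received circular power; sign conventions for S4 vary between processors, so this is a hedged example rather than the exact pipeline formula:

```python
def circular_polarization_ratio(s1, s4):
    """CPR for a circular-transmit (hybrid-polarimetric) radar.
    With the convention SC = (S1 - S4)/2 and OC = (S1 + S4)/2,
    CPR = SC/OC = (S1 - S4)/(S1 + S4).  Illustrative only: the
    sign convention for S4 depends on the processor."""
    return (s1 - s4) / (s1 + s4)
```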
Monostatic Mini-RF observations were processed by Vexcel Corporation into Level 1 (range-azimuth, where azimuth refers to distance along the flight track) and Level 2 (map projected) formats [10]. The bistatic observations are being reduced by team members at Sandia National Laboratories [11]. Initial products (now available through the NASA Planetary Data System) include raster images of the radar polarization components (Stokes vectors S1…S4) on a grid that is referenced to the spacecraft trajectory in a rather complex way. A companion file records the pixel-by-pixel variation of ancillary values including latitude, longitude, range, and phase angle on a raster matching the images. The typical grid spacing is 100 m.
Unfortunately, these products are generated by assuming that the spacecraft ephemeris used in processing is error-free and that all features lie on a zero-elevation surface. In cartographic terms, they are both uncontrolled and unrectified. As a result they are both misaligned with other lunar datasets and contain internal distortions due to topographic parallax. Correcting these errors would be difficult because the (nominal) latitude and longitude of pixels cannot be calculated directly but must be interpolated from the companion files. To facilitate precision processing, we are working to regenerate the bistatic observations in a map projection equivalent to that used in the monostatic Level 2 products.
Technical Approach and Methodology: Precision geometric processing is based on a sensor model, a mathematical and software model capable of computing ground coordinates for a given image line-sample and vice versa, which can be used in multiple ways. For processing Mini-RF monostatic images, we developed a sensor model for Level 1 images in the USGS planetary cartography system ISIS [12], enabling us to orthorectify images by projecting them onto a DTM. By also computing how the ground position varies as a result of small shifts in the spacecraft trajectory (i.e., partial derivatives or "partials" of the coordinates with respect to trajectory adjustments), we can control the images by measuring image-to-ground correspondences and (for mosaics of more than one image) image-to-image ties, then using the bundle adjustment program jigsaw [13]. Using LOLA [14] as both ground truth and DTM thus allows us to produce mosaics in which pixels are accurately located regardless of topography and viewing geometry [4]. In addition, we created a sensor model for the commercial SOCET SET stereomapping package [15], allowing us to produce DTMs at a useful resolution for regional mapping [2,3]. Bistatic images provide little or no new stereo coverage; instead, their science value lies in the comparison of scattered power and polarization values with their monostatic (backscatter) equivalents. Our current work therefore focuses on the development of an ISIS sensor model for bistatic images to enable control by bundle adjustment and orthorectification.
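The partials mentioned above can be obtained numerically when analytic derivatives are inconvenient. The following minimal sketch uses central differences; `ground_fn` is a hypothetical callable (supplied in practice by the sensor model) that maps a vector of trajectory adjustments to a ground position:

```python
def trajectory_partials(ground_fn, traj_offsets, eps=1.0):
    """Central-difference partials of ground coordinates with respect
    to small trajectory adjustments (e.g., along-track, cross-track,
    radial shifts).  `ground_fn` is a hypothetical interface: it maps
    an offset vector to ground (x, y, z)."""
    partials = []
    for k in range(len(traj_offsets)):
        plus, minus = list(traj_offsets), list(traj_offsets)
        plus[k] += eps
        minus[k] -= eps
        gp, gm = ground_fn(plus), ground_fn(minus)
        partials.append(tuple((a - b) / (2.0 * eps) for a, b in zip(gp, gm)))
    return partials  # one (dx, dy, dz) tuple per trajectory parameter
```

A bundle adjustment such as jigsaw assembles these partials into the design matrix relating trajectory corrections to measured image-to-ground residuals.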
2548.pdf 45th Lunar and Planetary Science Conference (2014)

Sensor Model Differences: Bistatic and monostatic radar images are similar to one another (and distinct from optical images) in that the fundamental observable coordinates of the images are the time of minimum range (zero Doppler shift) and the three-dimensional range ("slant range") at that time [16]. Bistatic and monostatic images differ fundamentally in the geometric definition of range, however, and the Mini-RF files also differ in incidentals of how the images are presented. A prospective sensor model has to deal with both types of differences. Achieving this capability has required substantial work that will be of interest to photogrammetrists (and especially radargrammetrists), but the details are not crucial to most users because in the end the workflow will be identical to that for monostatic images.
The geometry of monostatic imaging is relatively simple. Both the transmitter and receiver are located on the spacecraft, so a surface of given range is a sphere centered on the antenna, and the locus of zero Doppler is a plane perpendicular to the trajectory through the spacecraft position at the given time. The solution for the ground coordinates of a feature lies at the intersection of the sphere and plane with one another (forming a ring) and with the surface of the target.
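This intersection can be written in closed form for a spherical target. The sketch below (an illustration, not the ISIS implementation) parameterizes the range ring in the zero-Doppler plane and solves for where it pierces a sphere of the Moon's mean radius; the two roots are the usual left/right-looking ambiguity:

```python
import math

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

def cross(u, w):
    return (u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0])

def normalize(u):
    n = math.sqrt(dot(u, u))
    return tuple(c / n for c in u)

def solve_monostatic(sc_pos, sc_vel, slant_range, moon_radius):
    """Intersect the range sphere (radius slant_range about sc_pos),
    the zero-Doppler plane (normal sc_vel through sc_pos), and a
    spherical Moon centered on the origin.  Returns the two candidate
    ground points (left/right ambiguity)."""
    s, R = sc_pos, slant_range
    v = normalize(sc_vel)
    ref = (1.0, 0.0, 0.0) if abs(v[0]) < 0.9 else (0.0, 1.0, 0.0)
    e1 = normalize(cross(v, ref))     # orthonormal basis spanning the
    e2 = cross(v, e1)                 # zero-Doppler plane
    # Range ring: p(t) = s + R (cos t e1 + sin t e2).
    # Imposing |p(t)| = moon_radius gives A cos t + B sin t = C.
    A = 2.0 * R * dot(s, e1)
    B = 2.0 * R * dot(s, e2)
    C = moon_radius**2 - dot(s, s) - R**2
    amp, phase = math.hypot(A, B), math.atan2(B, A)
    t0 = math.acos(C / amp)           # ValueError if ring misses the Moon
    return [tuple(s[i] + R * (math.cos(t) * e1[i] + math.sin(t) * e2[i])
                  for i in range(3))
            for t in (phase + t0, phase - t0)]
```

In practice the ambiguity is resolved by the known look direction, and the target surface is a DTM rather than a sphere, so the final intersection is found numerically.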
For bistatic images, the range is a sum of distances from the transmitter on Earth to the surface and from the surface to the receiver on the spacecraft, so the constant-range surface is an ellipsoid with one focus at the transmitter and the other at the receiver. Given the large distance between the Earth and Moon, this ellipsoid is well approximated by a paraboloid with focus at the receiver and axis pointing to Earth. The Doppler shift contains terms from the motion of both Earth and Moon relative to the spacecraft, so the zero-Doppler surface is not a plane but a cone with the spacecraft at its tip. Thus, the surfaces of constraint that intersect the Moon at the feature of interest are distorted but topologically equivalent to those for the monostatic case.
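The two bistatic observables can be sketched directly from the geometry: the range is the two-leg path length, and (for a ground point fixed in the body frame) the Doppler shift is proportional to the negative time derivative of that path, i.e., each platform's velocity projected onto its line of sight. A minimal illustration:

```python
import math

def _unit(u):
    n = math.sqrt(sum(c * c for c in u))
    return tuple(c / n for c in u)

def bistatic_range(tx, rx, g):
    """Bistatic range: transmitter-to-ground plus ground-to-receiver."""
    return math.dist(tx, g) + math.dist(g, rx)

def bistatic_doppler(tx, tx_vel, rx, rx_vel, g, wavelength):
    """Doppler shift of the echo from a fixed ground point g:
    f = -(1/wavelength) * d/dt (|tx - g| + |rx - g|).
    Each range rate is the platform velocity projected onto the
    unit vector from the ground point to that platform."""
    u_tx = _unit(tuple(a - b for a, b in zip(tx, g)))
    u_rx = _unit(tuple(a - b for a, b in zip(rx, g)))
    rate = (sum(u * v for u, v in zip(u_tx, tx_vel))
            + sum(u * v for u, v in zip(u_rx, rx_vel)))
    return -rate / wavelength
```

Setting `tx = rx` and `tx_vel = rx_vel` recovers the familiar monostatic two-way Doppler, which is one way to sanity-check the bistatic form.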
Mini-RF monostatic Level 1 images are in "range-azimuth" coordinates, in which line number relates linearly to time of zero Doppler (and thus to distance along track, referred to as "azimuth") and sample number relates to slant range via a polynomial chosen to minimize the distortion of the images if the surface were smooth. The initial bistatic images have been gridded in a way that also provides a fairly undistorted image but is difficult to compute with. The latitude and longitude coordinates of each pixel are therefore provided as accompanying image rasters. To avoid the need to use (and interpolate) these coordinate rasters, we have designed a Level 2 bistatic product that is in a map projection identical to that used for monostatic Level 2 images. This Oblique Cylindrical projection is similar to the Simple Cylindrical (latitude-longitude) projection but in a rotated coordinate system that is tied to the spacecraft ground track. The result therefore gives a fairly undistorted image in which the line and sample directions approximate azimuth and range. More importantly, nominal latitude and longitude can be computed for any pixel, and vice versa, by simple map projection equations [10].
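The essence of an oblique projection is a rotation of the spherical coordinate system so that the "pole" lies on the axis appropriate to the ground track. The sketch below illustrates the rotation only; the choice of rotated prime meridian (here, through the standard north pole) and the tie to the Mini-RF ground track are assumptions for illustration, not the mission's actual convention:

```python
import math

def _latlon_to_xyz(lat, lon):
    clat = math.cos(math.radians(lat))
    return (clat * math.cos(math.radians(lon)),
            clat * math.sin(math.radians(lon)),
            math.sin(math.radians(lat)))

def oblique_latlon(lat, lon, pole_lat, pole_lon):
    """Latitude/longitude of a point in a rotated system whose pole
    lies at (pole_lat, pole_lon) of the standard system.  The rotated
    prime meridian is taken through the standard north pole (an
    illustrative choice; fails if the two poles coincide)."""
    p = _latlon_to_xyz(lat, lon)
    z = _latlon_to_xyz(pole_lat, pole_lon)          # new pole axis
    north = (0.0, 0.0, 1.0)
    d = sum(a * b for a, b in zip(north, z))
    x = tuple(n - d * c for n, c in zip(north, z))  # component of old
    nx = math.sqrt(sum(c * c for c in x))           # north orthogonal to z
    x = tuple(c / nx for c in x)
    y = (z[1] * x[2] - z[2] * x[1],                 # y = z cross x
         z[2] * x[0] - z[0] * x[2],
         z[0] * x[1] - z[1] * x[0])
    px = sum(a * b for a, b in zip(p, x))
    py = sum(a * b for a, b in zip(p, y))
    pz = sum(a * b for a, b in zip(p, z))
    return math.degrees(math.asin(pz)), math.degrees(math.atan2(py, px))
```

With the rotated latitude and longitude in hand, the Simple Cylindrical mapping to line and sample is just a linear scaling.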
The sensor model image-to-ground calculation then consists of reconstructing the closest approach time and range from the nominal trajectories of the Earth and spacecraft, nominal elevation (zero), and nominal latitude and longitude. This is followed by a calculation to determine where the time-range coordinates would intersect the Moon for given (generally nonzero) elevation and slightly adjusted trajectory. This approach of first undoing a nominal map projection and then doing a corrected projection onto topography is the same as that used by us for Magellan and Cassini radargrammetry [17].
To speed the calculation (and also to avoid having to deal with trajectory data for the Earth and spacecraft simultaneously, which would be difficult in the current ISIS design) we precalculate time and range values for selected (equally spaced) pixels in the image and fit interpolating polynomials to these quantities. The polynomials are then used during processing to perform the first step of image-to-ground processing, and the second step is performed numerically. The ground-to-image transformation is analogous: closest approach time and range for a point can be calculated straightforwardly from the ground coordinates (including elevation), and the interpolating polynomials are used to find the corresponding pixel location.
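The fitting step above can be sketched as a least-squares fit of a low-order bivariate polynomial over the coarse pixel grid, then cheap evaluation at arbitrary pixel coordinates. This is a generic illustration of the technique (degree, grid spacing, and interfaces are assumptions, not the ISIS implementation):

```python
import numpy as np

def fit_pixel_polynomials(lines, samples, values, deg=3):
    """Least-squares fit of a low-order 2-D polynomial in (line, sample)
    to values (e.g., closest-approach time or range) precomputed on a
    coarse, equally spaced pixel grid."""
    L, S = np.meshgrid(lines, samples, indexing="ij")
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([(L**i * S**j).ravel() for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values).ravel(), rcond=None)
    return terms, coeffs

def eval_poly(terms, coeffs, line, sample):
    """Evaluate the fitted polynomial at an arbitrary (line, sample)."""
    return sum(c * line**i * sample**j for (i, j), c in zip(terms, coeffs))
```

Because the underlying time and range vary smoothly across the image, a cubic or lower fit is typically accurate to a small fraction of a pixel, which is the requirement stated above.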
Status and Future Work: To date, we have demonstrated the capability to reprocess the bistatic observations on an Oblique Cylindrical map grid. We have designed PDS labels for such "Level 2" bistatic images and developed ISIS software to ingest image files in this format. We have also developed the software to fit the interpolating polynomials used for calculating time and range from pixel coordinates, and the sensor model transformations (ground to image and image to ground) making use of these polynomials.
The immediate next step is to create PDS Level 2 labels for a sample bistatic image in Oblique Cylindrical projection, ingest it into ISIS, and verify that the nominal ground coordinates (i.e., based on zero elevation and the nominal trajectory) match the coordinates calculated during image formation at Sandia. Once the sensor model has been validated by this step we will be able to orthorectify bistatic images in ISIS, but the results will only be useful in cases where the errors in the a priori trajectory are smaller than the image resolution. Otherwise the image will be projected onto the wrong part of the DTM and distortions will be increased rather than decreased. To address this problem, we are developing additional software to calculate the partials of the sensor model with respect to trajectory adjustments. Once this is done we will be able to use jigsaw to improve the alignment of the images with the LOLA DTM, based on measured point correspondences between the image and DTM. The result will be bistatic images (including all polarization information) that are aligned to sub-pixel accuracy to the topographic data and to the many monostatic observations that we are controlling separately [4]. With these precision-geometry products, lunar scientists will be able to remove slope-related effects and compare the radar scattering behavior of the lunar surface on a pixel-by-pixel basis.