DATA_SET_DESCRIPTION |
Data Set Overview
=================
This data set contains derived data products for the MSL Hazard Avoidance
cameras to support rover traverse planning, post-traverse assessment,
rover localization, operation of the robotic arm, and the selection of
science targets. The proximity of the Hazcams to the Martian surface
allows close-up views of the fine-grained texture of materials in the
immediate vicinity of the rover. Hazcam image data provides views of the
surface not viewable by the cameras mounted on the Remote Sensing Mast.
This data set is similar to the reduced data sets for the Hazard
Avoidance cameras on MER [MAKIETAL2003].
Detailed descriptions of all Reduced Data Record (RDR) products are
available in the MSL_CAMERA_SIS.PDF, located in the DOCUMENT directory of
this volume.
Processing
==========
This data set uses the Committee on Data Management and Computation
(CODMAC) data level numbering system. The MSL Hazcam RDRs are considered
CODMAC Level 3 (calibrated data equivalent to NASA Level 1A), CODMAC
Level 4 (resampled data equivalent to NASA Level 1B), or CODMAC Level 5
(derived data equivalent to NASA Level 3). The RDRs are derived from the
Hazcam EDR data set and include radiometrically corrected and/or camera
model corrected and/or geometrically altered versions of the raw camera
data, in single-frame form. All of the RDR data products in this
data set have detached PDS labels.
Data
====
There are dozens of types of RDR products, described in detail in
Section 5.1 of the MSL_CAMERA_SIS.PDF. Below are descriptions of the
most commonly used RDRs.
1) Geometrically Corrected (Linearized) Images
EDRs and single-frame RDRs are described by a camera model. This
model, represented by a set of vectors and numbers, permits a point
in space to be traced into the image plane, and vice versa.
EDR camera models are derived by acquiring images of a calibration
target with known geometry at a fixed azimuth/elevation. The vectors
representing the model are derived from analysis of this imagery.
These vectors are then translated and rotated based on the actual
pointing of the camera to represent the conditions of each specific
image. The result is the camera model for the EDR.
The Navcams use a CAHVOR model, while the Hazcams use a
more general CAHVORE model. Neither model is linear, and both
involve complex calculations to transform line/sample points in the
image plane to XYZ positions in the scene. To simplify this, the images
are warped, or reprojected, such that they can be described by a
linear CAHV model. This linearization process has several benefits:
a) It removes geometric distortions inherent in the camera
instruments, with the result that straight lines in the scene are
straight in the image.
b) It aligns the images for stereo viewing. Matching points are on
the same image line in both left and right images, and both left
and right models point in the same direction.
c) It facilitates correlation, allowing the use of 1-D correlators.
d) It simplifies the math involved in using the camera model.
The transformation introduces some artifacts, such as scale changes
and/or omitted data (see the references). The linearized CAHV camera
model is derived from the EDR's camera model by considering both the
left and right eye models and constructing a pair of matched linear
CAHV models that conform to the above criteria.
The image is then projected, or warped, from the CAHVOR/CAHVORE
model to the CAHV model. This involves projecting each pixel through
the EDR camera model into space, intersecting it with a surface
(which matters only for Hazcams and is a sphere centered on the
camera), and projecting the pixel back through the CAHV model into
the output image.
See GEOMETRIC_CM.TXT for additional detail.
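One payoff of linearization is that the forward projection reduces to two
dot-product ratios. A minimal sketch (not mission software) is shown below;
the C, A, H, V vectors would normally be read from the PDS label, and the
values here are hypothetical:

```python
# Minimal sketch of the linear CAHV forward projection (not mission
# software). C, A, H, V would normally be read from the PDS label; the
# values below are hypothetical.
def cahv_project(P, C, A, H, V):
    """Project 3-D point P (meters) to image (line, sample)."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    d = [p - c for p, c in zip(P, C)]   # ray from camera center to P
    denom = dot(d, A)                   # distance along the optical axis
    return dot(d, V) / denom, dot(d, H) / denom   # (line, sample)

# Hypothetical model: camera at origin looking down +X, 1000-pixel focal
# length, optical center at pixel (512, 512).
C = (0.0, 0.0, 0.0)
A = (1.0, 0.0, 0.0)
H = (512.0, 1000.0, 0.0)   # H = f*h' + sample0*A
V = (512.0, 0.0, 1000.0)   # V = f*v' + line0*A
line, sample = cahv_project((10.0, 0.0, 0.0), C, A, H, V)
# A point on the optical axis projects to the optical center (512, 512).
```

Inverting the same two ratios recovers the viewing ray for a pixel, which
is what makes the linearized products convenient for stereo work.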
2) Inverse Lookup Table Scaled Products
If the Hazcam EDR is in 8-bit format as a result of onboard 12 to
8-bit scaling using a Lookup Table (LUT), then an Inverse LUT is
applied to rescale the 8-bit values back into the 12 lowest bits of
a 16-bit signed integer.
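The inversion step can be illustrated with a toy companding table. The
actual MSL onboard LUTs are documented in MSL_CAMERA_SIS.PDF; the
square-root forward LUT below is only a hypothetical stand-in:

```python
# Hedged illustration of inverse-LUT scaling. The real onboard 12->8 bit
# LUTs are documented in MSL_CAMERA_SIS.PDF; the square-root companding
# table below is a hypothetical stand-in, not the flight LUT.
import math
from collections import defaultdict

def forward_lut(dn12):
    """Hypothetical onboard LUT: 12-bit DN (0..4095) -> 8-bit code."""
    return round(math.sqrt(dn12 * 255.0 * 255.0 / 4095.0))

# Inverse LUT: map each 8-bit code back to a representative 12-bit DN
# (here, the midpoint of the DNs that compress to that code).
groups = defaultdict(list)
for dn in range(4096):
    groups[forward_lut(dn)].append(dn)
inv_lut = [groups[c][len(groups[c]) // 2] if c in groups else 0
           for c in range(256)]

image8 = [0, 128, 255]                    # sample 8-bit EDR pixels
image16 = [inv_lut[px] for px in image8]  # 12-bit values, fit in int16
```

Because the forward LUT is many-to-one, the inverse can only restore a
representative DN, which is why these products remain CODMAC Level 3.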
3) Radiometrically Corrected Products
There are three types of radiometrically corrected products, and
multiple methods of performing radiometric correction. For more
information, see the MSL_CAMERA_SIS.PDF in the DOCUMENT directory
of this volume.
a) RA products have been corrected to absolute radiance units of
W/m^2/nm/steradian.
b) RI products have been corrected for instrument effects only,
and are in units of DN.
c) IO products are radiance factor (I/F) and are dimensionless.
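As an illustration of how the RA and IO products relate, one common
definition of radiance factor divides absolute radiance by the solar
irradiance scaled to the target's heliocentric distance. The numbers
below are hypothetical, not MSL calibration values:

```python
# Hedged sketch relating RA and IO products: one common definition of
# radiance factor divides radiance by the solar irradiance scaled to the
# target's heliocentric distance. All numbers here are hypothetical, not
# MSL calibration values.
import math

def radiance_to_iof(radiance, solar_irradiance_1au, heliocentric_dist_au):
    # I/F = pi * L / (E_sun(1 AU) / d^2)
    return math.pi * radiance / (solar_irradiance_1au
                                 / heliocentric_dist_au ** 2)

iof = radiance_to_iof(radiance=10.0, solar_irradiance_1au=500.0,
                      heliocentric_dist_au=1.5)
```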
4) Disparity Files
A Disparity file contains 2 bands of 32-bit floating point numbers
in the Band Sequential order (line, sample). Alternatively, line and
sample may be stored in separate single-band files. The parallax, or
difference measured in pixels, between an object location in two
individual images (typically the left and right images of a stereo
pair) is also called the disparity. Disparity files contain these
disparity values in both the line and sample dimension for each
pixel in the reference image. This reference image is traditionally
the left image of a stereo pair, but could be the right image for
special products. The geometry of the Disparity image is the same as
the geometry of the reference image. This means that for any pixel
in the reference image the disparity of the viewed point can be
obtained from the same pixel location in the Disparity image.
The values in a Disparity image are the 1-based coordinates of the
corresponding point in the non-reference image. Thus, the coordinates
in the reference image are the same as the coordinates in the
Disparity image, and the matching coordinates in the stereo partner
image are the values in the Disparity image. Disparity values of 0.0
indicate no valid disparity exists, for example due to lack of
overlap or correlation failure. This value is reflected in the
MISSING_CONSTANT keyword.
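The lookup convention described above can be sketched as follows, using
tiny hypothetical arrays in place of the real 32-bit bands:

```python
# Hedged sketch of reading a 2-band Disparity product. The two bands hold,
# for each reference (left) pixel, the 1-based (line, sample) coordinates
# of the matching point in the stereo partner image; 0.0 in both bands is
# the MISSING_CONSTANT. The tiny arrays below are hypothetical stand-ins.
MISSING = 0.0

disp_line   = [[5.0, MISSING],
               [6.2, 6.9]]
disp_sample = [[3.5, MISSING],
               [2.1, 2.8]]

def match_point(line, sample):
    """Right-image (line, sample) matching a left-image pixel, or None."""
    dl, ds = disp_line[line][sample], disp_sample[line][sample]
    if dl == MISSING and ds == MISSING:
        return None          # no overlap or correlation failure
    return (dl, ds)

pt = match_point(1, 0)       # -> (6.2, 2.1) in the stereo partner image
```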
5) XYZ Products
An XYZ file contains 3 bands of 32-bit floating point numbers in
the Band Sequential order. Alternatively, X, Y and Z may be
stored in separate single-band files as an X Component RDR, Y
Component RDR and Z Component RDR, respectively. The single
component RDRs are implicitly the same as the XYZ file, which is
described below. XYZ locations in all coordinate frames for MSL
are expressed in meters unless otherwise noted.
The pixels in an XYZ image are coordinates in 3-D space of the
corresponding pixel in the reference image. This reference image
is traditionally the left image of a stereo pair, but could be
the right image for special products. The geometry of the XYZ
image is the same as the geometry of the reference image. This
means that for any pixel in the reference image the 3-D position
of the viewed point can be obtained from the same pixel location
in the XYZ image. The 3-D points can be referenced to any of the
MSL coordinate systems (specified by DERIVED_IMAGE_PARAMS Group
in the PDS label).
Most XYZ images will contain holes, or pixels for which no XYZ
value exists. These are caused by many factors such as
differences in overlap and correlation failures. Holes are
indicated by X, Y, and Z all having the same specific value.
This value is defined by the MISSING_CONSTANT keyword in the
IMAGE object. For the XYZ RDR, this value is (0.0,0.0,0.0),
meaning that all three bands must be zero (if only one or two
bands are zero, that does not indicate missing data).
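The hole convention can be stated compactly in code (a sketch, not
mission software):

```python
# Sketch of the XYZ hole convention (not mission software): a pixel is a
# hole only when X, Y and Z all equal the MISSING_CONSTANT value 0.0.
MISSING = (0.0, 0.0, 0.0)

def is_hole(xyz):
    return tuple(xyz) == MISSING

# A single zero component is still valid data:
# is_hole((1.2, 0.0, 0.0)) -> False
```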
6) Range (Distance) Files
A Range (distance) file contains 1 band of 32-bit floating point
numbers. The pixels in a Range image represent Cartesian distances
from a reference point (defined by the RANGE_ORIGIN_VECTOR keyword
in the PDS label) to the XYZ position of each pixel. This reference
point is normally the camera position as defined by the C point of
the camera model. A Range image is derived from an XYZ image and
shares the same pixel geometry and XYZ coordinate system. As with
XYZ images, range images can contain holes, defined by
MISSING_CONSTANT. For MSL, this value is 0.0.
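Deriving a range pixel from an XYZ pixel is then a single distance
computation; a sketch with a hypothetical origin:

```python
# Sketch of deriving a range pixel from an XYZ pixel (hypothetical origin;
# the real reference point comes from RANGE_ORIGIN_VECTOR in the label).
import math

ORIGIN = (0.8, 0.1, -1.9)   # hypothetical camera C point, meters

def range_from_xyz(xyz, origin=ORIGIN):
    if tuple(xyz) == (0.0, 0.0, 0.0):   # XYZ hole propagates as 0.0
        return 0.0
    return math.dist(xyz, origin)

r = range_from_xyz((3.8, 0.1, -1.9))    # a point 3 m away along +X
```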
7) Surface Normal Files
A Surface Normal (UVW) file contains 3 bands of 32-bit floating
point numbers in the Band Sequential order. Alternatively, U, V and
W may be stored in separate single-band files as a U Component RDR,
V Component RDR and W Component RDR, respectively. The single
component RDRs are implicitly the same as the UVW file.
The pixels in a UVW image correspond to the pixels in an XYZ file,
with the same image geometry. However, the pixels are interpreted
as a unit vector representing the normal to the surface at the point
represented by the pixel. U contains the X component of the vector,
V the Y component, and W the Z component. The vector is defined to
point out of the surface (e.g. upward for flat ground). The unit
vector can be referenced to any of the MSL coordinate systems
(specified by the DERIVED_IMAGE_PARAMS Group in the PDS label).
Most UVW images will contain holes, or pixels for which no UVW
value exists. These are caused by many factors such as differences
in overlap, correlation failures, and insufficient neighbors to
compute a surface normal. Holes are indicated by U, V, and W all
having the same specific value. Unlike XYZ, (0,0,0) can never be a
valid value in a UVW file, since the normals are defined to be unit
vectors. Thus there is no ambiguity in the MISSING_CONSTANT as there
is with XYZ, where (0.0,0.0,0.0) is in principle a valid point.
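The contrast with XYZ can be sketched as follows: for unit normals, an
all-zero triple can never be real data, so the hole test is unambiguous.

```python
# Sketch (not mission software) contrasting the UVW and XYZ hole tests:
# a unit normal can never be all zeros, so (0, 0, 0) unambiguously marks
# a hole in a UVW product.
import math

def is_uvw_hole(uvw):
    return tuple(uvw) == (0.0, 0.0, 0.0)

def is_valid_normal(uvw, tol=1e-6):
    return abs(math.hypot(*uvw) - 1.0) <= tol

up = (0.0, 0.0, -1.0)  # "up" in a +Z-down frame (assumed for this sketch)
```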
8) Surface Roughness Maps
The roughness maps contain surface roughness estimates at each pixel
in the image, along with a 'goodness' flag indicating whether the
roughness meets certain criteria.
For each pixel, the surface normal product defines a reference plane.
XYZ pixels in the area of interest are gathered, and the distance to
the plane is computed. Minimum and maximum distances from the plane
are computed, with outliers excluded. Roughness is defined as the
distance between this min and max.
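The min/max-over-a-plane procedure above can be sketched as follows,
with hypothetical patch data (the real products use outlier criteria
defined in the SIS):

```python
# Hedged sketch of the roughness estimate: signed distances from nearby
# XYZ points to the plane through `center` with normal `normal`, then
# peak-to-peak after optionally trimming `trim` extremes from each end.
# All names and data here are hypothetical illustrations.
def roughness(points, center, normal, trim=0):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    dists = sorted(dot(normal, [p - c for p, c in zip(pt, center)])
                   for pt in points)
    if trim:
        dists = dists[trim:-trim]   # crude outlier exclusion
    return dists[-1] - dists[0]

patch = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.02), (0.0, 0.1, -0.01)]
rough = roughness(patch, center=(0.0, 0.0, 0.0), normal=(0.0, 0.0, 1.0))
# rough -> about 0.03 m peak-to-peak for this 3-point patch
```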
9) Slope Products
The slope RDR products represent aspects of the slope of the terrain
as determined by stereo imaging. There are several slope types,
described in further detail in the MSL_CAMERA_SIS.PDF.
10) Arm Reachability Maps
The Arm Reachability Maps contain information about whether or not the
instruments on the arm can reach (either contact or image) the object or
location represented by each pixel in the scene. They are derived from
the XYZ and Surface Normal products.
11) Stereo Anaglyphs
A stereo anaglyph is a method of displaying stereo imagery quickly
and conveniently using conventional display technology (no special
hardware) and red/blue glasses. This is done by displaying the left
eye of the stereo pair in the red channel, and displaying the right
eye in the green and blue channels. An anaglyph data product simply
captures that into a single 3-band color image, which can be
displayed using any standard image display program with no knowledge
that it is a stereo image. The red (first) band contains the left
eye image, while the green and blue (second and third) bands each
contain the right eye image (so the right image is duplicated in
the file).
Anaglyphs are created manually from CAHV-linearized Full Frame
stereo pair EDRs or mosaics. Often the images are contrast
stretched prior to creating the anaglyph. After stretching, the
images are combined into a VICAR cube, creating a single
multi-band image. The final step is adding the PDS label.
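The band layout described above can be sketched as follows, using tiny
grayscale stand-ins for the stereo pair:

```python
# Hedged sketch of anaglyph band assembly (hypothetical tiny grayscale
# images stand in for linearized stereo EDRs).
left  = [[10, 20],
         [30, 40]]
right = [[11, 21],
         [31, 41]]

# Band-sequential 3-band product: red = left eye, green = blue = right eye
anaglyph = [left, right, right]
```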
Software
========
Hazcam camera downlink processing software used by the science and
engineering team during operations is focused on rapid reduction,
calibration, and visualization of images in order to make
discoveries, to accurately and expeditiously characterize the
geologic environment around the rover, and to provide timely input
for operational decisions concerning rover navigation and Robotic
Arm target selection.
Hazcam images can be viewed with the program NASAView, developed by
the PDS and available for a variety of computer platforms from the
PDS web site http://pds.nasa.gov/tools/nasa-view.shtml. There is
no charge for NASAView.
Media/Format
============
The data set will be delivered and made available to the public
through the Planetary Data System web sites.
|