This dictionary contains high level classes and attributes used in imaging and
spectrometer products. It also contains classes with attributes used during active mission
operations. Many of these classes are designed to be extended into local mission dictionaries.
## CHANGE LOG ##
1.4.0.0
- upgraded to v1900 of the IM
- removed specific Autoexposure algorithm classes and introduced generic Algorithm_Parameter class
- new/improved definitions for Companding_Parameters, Downsampling_Parameters, Exposure_Parameters
- removed the following attributes: Data_Correction.data_correction_subtype
- removed the following classes because they were insufficiently multi-mission:
- Derived_Product_Parameters
- Frame_Parameters
- Product_Identification
- Stereo_Product_Parameters
- Vector_Range_Origin
- added exposure_type enumeration
- new flat_field_algorithm attribute
- changed compression_type to compression_class and compression_mode_name to compression_type
- added new enumerations for compression_class and compression_type
- added new Instrument_Device_Currents class
1.5.0.0
- upgraded to v1A10 of the IM
- removed old blocks of commented out XML
1.5.1.0
- changed unit of bandwidth attribute from Units_of_Frequency to Units_of_Length
1.6.0.0
- Upgraded to v1B00 of the IM.
- Re-organized attributes into alphabetical order.
- Re-organized classes to make most commonly used parts of label easier to find.
- Created new classes for housekeeping data (Instrument_State_Parameters, Commanded_Parameters) and placed them at the bottom.
- Re-wrote numerous class and attribute definitions, to make dictionary easier for users to understand.
- Renamed some classes and attributes to remove redundant words.
- Re-designed data correction classes (such as Flat_Field_Correction, Radiometric_Correction, etc.) and
other data processing classes such as Companding, Downsampling, etc. to share a common base class.
- Renamed Compression class and related attributes to Onboard_Compression_*. This is to make it clear
that any image compression was performed for data storage/transmission. PDS4 does not allow compressed images.
- Renamed Filter_Parameters to Optical_Filter
1.6.1.0
- Added Brightness_Correction class and associated subclasses/attributes
- Added Pointing_Correction class and associated subclasses/attributes (moved from IMG_SURFACE LDD)
- Added attributes to Color_Processing class: color_dn_scaling_method, color_dn_scaling_factor
- Corrected typo in local_reference_type_check_optical_filter rule
1.7.0.0
- Upgraded to v1D00 of core IM.
- Added Illumination class and associated subclasses/attributes.
- Added Video class to Commanded_Parameters. (bugfix)
1.7.1.0, T.Hare
- Added filter_position_count, https://github.com/pds-data-dictionaries/ldd-img/issues/25
- Added GOP_Frames, https://github.com/pds-data-dictionaries/ldd-img/issues/26
- Added atmospheric_opacity and atmospheric_opacity_reference, https://github.com/pds-data-dictionaries/ldd-img/issues/27
- Added detector_to_image_flip, https://github.com/pds-data-dictionaries/ldd-img/issues/31
- Changed Flat_Field_Correction from 0:1 to 0:* (unbounded), https://github.com/pds-data-dictionaries/ldd-img/issues/30
- Added Spatial_Filter and Image_Filter classes and 6 new children (*filter_window*),
https://github.com/pds-data-dictionaries/ldd-img/issues/28
- Under Video added frame_index and gop_start_index, https://github.com/pds-data-dictionaries/ldd-img/issues/26
- Under Sampling added saturated_pixel_count, valid_pixel_count and missing_pixel_count attributes,
https://github.com/pds-data-dictionaries/ldd-img/issues/8
- Added Image_Mask and Image_Mask_File classes, with mask_type and horizon_mask_elevation attributes.
The new Image_Mask class was added under both the Imaging and Commanded_Parameters classes,
https://github.com/pds-data-dictionaries/ldd-img/issues/29
- Added mask_transparent_value attribute to Image_Mask_File class
1.7.2.0 P.Geissler
- added classes Companding_Table and Companding_Table_Mapping to accommodate attributes:
input_dn_min, input_dn_max, and output_dn
- added the missing y_center attribute. Fixed saturated_pixel_count, missing_pixel_count, valid_pixel_count
1.7.3.0, T.Hare
- Fixes for new enforced rules at LDDTool v11.2.2
- *_type attributes can no longer be non-enumerated, so
radiometric_type, mask_type, and subframe_type now have enumerations
- Optional attributes may not have nillable set to true,
so current_value, erase_count and voltage_value are now nillable=false
The Autoexposure class contains attributes used
to identify or describe the algorithm used to automatically
calculate the proper exposure time. This is generally based on
some kind of histogram analysis. The specific autoexposure
algorithm used is defined in the processing_algorithm attribute,
and the specific set of attributes needed to describe it will
vary based on the algorithm. Examples of autoexposure algorithms
include "Maki 2003" used on MER, MSL ECAMs, M2020 ECAMS;
"Maurice 2012" used on MSL ChemCam; "Smith 1997" used on Mars
Pathfinder Imager.
The Brightness_Correction class describes
brightness corrections that were applied to an image or mosaic.
Brightness correction is the process of adjusting the DN values
of adjacent frames in a mosaic so they match visually. It may
also involve contrast or vignetting adjustments. The result may
no longer be radiometrically calibrated due to the adjustments.
The processing_algorithm child of Brightness_Correction
describes the type of brightness correction, and should
correspond to the classes within Brightness_Correction_Image. If
the algorithm is "MIXED", multiple algorithms were used, in
which case the specific information in each
Brightness_Correction_Image must be used.
The Brightness_Correction_File class identifies a file
containing brightness correction information. The project SIS
should define the format of this file. Correction information
may appear in the file, in instances of the
Brightness_Correction_Image class, or both (if both, they should
be consistent).
The Brightness_Correction_HSI_Linear class works
just like Brightness_Correction_Linear, except that the color
image is first converted to HSI (Hue, Saturation, Intensity)
space, the correction is applied only to Intensity, and then the
result is converted back to RGB space.
The Brightness_Correction_Image class describes
the brightness correction that was applied to a single image,
whether alone or part of a mosaic. The image this correction
applies to may be identified via the enclosed
Internal_Reference, or via the order in which the
Brightness_Correction_Image objects appear (which matches the
order given in Input_Product_List).
The Brightness_Correction_Linear class describes
a simple linear brightness correction, with an additive
(brightness_offset) and multiplicative (brightness_scale) factor
applied. The result is: output = input * brightness_scale +
brightness_offset. If there are multiple bands, the same
correction is applied to each band.
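The formula above can be sketched directly; the attribute names come from this class, while the function name is illustrative:

```python
def apply_linear_brightness(dn_values, brightness_scale, brightness_offset):
    """Apply a linear brightness correction to a list of DN values.

    The same correction is applied to every band:
        output = input * brightness_scale + brightness_offset
    """
    return [dn * brightness_scale + brightness_offset for dn in dn_values]
```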
The Color_Filter_Array class describes whether
or not an image was acquired using a Color Filter Array (CFA)
and if so, whether and how the CFA pattern was removed. A CFA is
a method for making color images using one exposure on a single
sensor plane, where microfilters of different wavelengths are
put in front of pixels in a specific pattern. The most common
pattern is the Bayer pattern, which has a red, blue, and two
green pixels in every 2x2 pixel square. Although generally used
for RGB color, CFA filters can be of any number and wavelength
(see color_filter_array_type).
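As an illustration, a minimal sketch of how a Bayer CFA assigns a filter color to each detector pixel. The RGGB tile layout shown is an assumption; the actual pattern is given by color_filter_array_type:

```python
# Assumed 2x2 Bayer tile (RGGB layout): red and blue each appear once,
# green twice; the tile repeats across the entire sensor plane.
BAYER_RGGB = (("Red", "Green"),
              ("Green", "Blue"))

def cfa_color(line, sample, pattern=BAYER_RGGB):
    """Return the microfilter color sampled at detector pixel (line, sample)."""
    return pattern[line % 2][sample % 2]
```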
The Color_Processing class contains parameters
describing color correction or processing and how the image is
represented in color.
The Commanded_Parameters class contains
attributes used to identify or describe the commands sent to a
spacecraft to perform one or more actions resulting in the
acquisition of the current data product. These are distinct from
similar values in the root Imaging class which indicate the
state of the image as acquired.
The Companding class describes whether or not
data is or has had its bit depth reduced (for example conversion
from 12 to 8 bits via a lookup table or bit scaling), the venue
where it occurred (Software or Hardware), and the method used to
complete the companding. The processing_algorithm attribute
specifies how data was companded. Generally this will either be
via a lookup table (such as a square root encoding), or by
shifting bits to preserve the high order bits and discard the
low order bits. The value of this keyword is mission specific
but there are recommended values that should apply across
missions when possible: NONE - no scaling. LUTn - use the
numbered lookup table. Lookup tables are defined in the mission
SIS. It is preferred for "n" to be a number but it could be a
name, for example LUT_MMM_3 to indicate LUT 3 for the MMM
instruments (on MSL). MSB_BITn - Shift to make bit "n" the most
significant. Bits start numbering at 0 so MSB_BIT7 means no
shift for a 12->8 bit companding, while MSB_BIT11 means to shift
right 4 bits for a 12->8 bit companding. AUTOSHIFT - Data should
be shifted to preserve the highest value. This value should only
appear in a command echo; one of the MSB_BITn values should be
used in downlinked data to specify what the actual shift
was.
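The MSB_BITn convention can be sketched as follows. Only the shift rule comes from the definition above; the function name and the clamping behavior are illustrative assumptions:

```python
def msb_bit_compand(dn, n, output_bits=8):
    """Compand a DN by bit shifting so that bit n becomes the most significant.

    Bits number from 0, so for an 8-bit output MSB_BIT7 means no shift and
    MSB_BIT11 means shift right 4 bits, discarding the low-order bits.
    """
    shift = n - (output_bits - 1)
    companded = dn >> shift
    # Clamp in case bits above bit n were set (assumed saturation behavior).
    return min(companded, (1 << output_bits) - 1)
```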
The Companding_File class specifies the file
containing the decompanding (inverse LUT) table used to process
the data.
The Companding_Table class specifies the look up
table used to compand the data.
The Companding_Table_Mapping class specifies the
mapping between the input DN range and the output DN as the data
are companded.
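A minimal sketch of such a mapping; the table values below are invented for illustration, as real tables are mission-defined:

```python
# Hypothetical square-root-style companding table: each row maps an input
# DN range [input_dn_min, input_dn_max] onto a single output DN.
COMPANDING_TABLE = [
    {"input_dn_min": 0,  "input_dn_max": 15,  "output_dn": 0},
    {"input_dn_min": 16, "input_dn_max": 63,  "output_dn": 1},
    {"input_dn_min": 64, "input_dn_max": 143, "output_dn": 2},
]

def compand(dn, mapping):
    """Return the output DN whose input range contains dn."""
    for row in mapping:
        if row["input_dn_min"] <= dn <= row["input_dn_max"]:
            return row["output_dn"]
    raise ValueError("DN %d is not covered by the companding table" % dn)
```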
The Correction_Parameter class specifies
identifier(s) and value for a data correction parameter
applicable to the parent class.
The Data_Processing class contains attributes
describing how processing and/or calibration was performed on a
data product. It is not intended to be used on its own; rather
it is intended to be extended by classes specific to a
particular type of processing, such as Shutter_Subtraction,
Flat_Field_Correction, Companding, etc. The attributes of this
class thus become attributes of the extension class.
The Data_Processing_File class contains
attributes which identify a file containing calibration data
that was applied to the science data. It is not intended to be
used on its own; rather it is intended to be extended by classes
specific to a particular type of file, such as Flat_Field_File.
Note that the "name" attribute is the name of the file; this
attribute should only be used if the file is either not included
in an archive, or if the delivery status is unknown by the data
provider. The External_Reference or Internal_Reference class
should be used instead of name if at all
possible.
The Detector class contains attributes describing
the state of the instrument detector. These are values directly
read from the detector and do not necessarily reflect the state
of the image after onboard processing. For example, the entire
image may be read into memory and then subframed in software, in
which case the subframe attributes in this class reflect the
entire image (as read from the detector), whereas those in the
Subframe class represent the final subframe
results.
The Device_Component_State class describes the
state of one component of an imaging instrument or other imaging
device. The meaning of "state" is
device-specific.
The Device_Component_States class provides a
container for the set of states of a component of an imaging
instrument or other imaging device.
The Device_Current class provides the current of
some point on an imaging instrument or other imaging
device.
The Device_Currents class provides a container
for the set of currents of an imaging instrument or other
imaging device.
The Device_Motor_Count class describes the raw
motor count of one actuator on an imaging instrument or other
imaging device (such as a filter wheel, focus motor, or zoom
motor). This information should typically be reported in a more
specific and usable form in other classes, such as a filter
number or wavelength in the Optical_Filter class or a focus
distance in the Focus class.
The Device_Motor_Counts class provides a
container for the set of raw motor counts of actuators on an
imaging instrument or other imaging device (such as a filter
wheel, focus motor, or zoom motor).
The Device_Parameters class identifies where a
measurement was made. It may refer to an individual imaging
instrument, imaging instrument device, or some defined point on
the instrument or device. The class is intended to be extended
(for example, by Device_Temperature) to add the associated
measurement rather than being used directly.
The Device_Temperature class provides a
container for the temperature of some point on an imaging
instrument or other imaging device.
The Device_Temperatures class provides a
container for the set of temperatures of an imaging instrument
or other imaging device.
The Device_Voltage class provides the voltage of
some point on an imaging instrument or other imaging
device.
The Device_Voltages class provides a container
for the set of voltages of an imaging instrument or other
imaging device.
The Downsampling class describes whether or not
downsampling occurred, the venue where it occurred (Software or
Hardware), the method used to downsample, and the pixel
averaging dimensions. A downsampled image is a smaller version
of the image, resulting in reduced resolution of the same
coverage area. The processing_algorithm attribute specifies the
pixel resolution downsample method used. This varies by mission,
but examples from MSL include: 'Mean' - Downsampling done in
software by calculation of the mean; 'Conditional' - Use
hardware binning if downsampling (by mean calculation) and
subframe arguments are consistent.
The Exposure class contains attributes
identifying the image instrument exposure configuration and
image exposure values. As a child of the Imaging class, these
attribute values identify the actual exposure values when the
image was taken. As a child of the Commanded_Parameters class,
these attribute values are those that were commanded to the
spacecraft at the time the image was taken.
The Flat_Field_Correction class specifies how
flat-field correction was performed on this image. This can be
done either algorithmically, using a
Radial_Flat_Field_Correction, or using a
Flat_Field_File.
The Flat_Field_File class specifies the image
used for flat field correction. The image is divided by this
flat field image in order to apply the flat field correction
(which is the opposite of Radial_Flat_Field_Function).
The Focus class contains attributes that
describe the focus or autofocus parameters for an observation.
As a child of Commanded_Parameters, these indicate the focus
settings used to command the instrument. Otherwise, they
indicate the actual focus used by the
observation.
The Focus_Stack class contains attributes that
describe a set of images taken at different focus settings,
which are often merged to create a best-focus image or combined
to extract range information. Focus stacks are also sometimes
called ZStacks.
The Frame class contains attributes providing
information specific to an image frame. A frame consists of a
sequence of measurements made over a specified time interval,
and may include measurements from different instrument
modes.
The ICER_Parameters class contains attributes
describing onboard compression parameters specific to ICER
image compression. ICER is a wavelet-based image compression
format used by the NASA Mars rovers. ICER has both lossy and
lossless compression modes.
The Illumination class provides attributes
describing the illumination sources used to illuminate the
imaging target.
The Image_Compression_Segment class provides
attributes describing each segment into which data was
partitioned for error containment purposes as part of the
compression process.
The Image_Filter class specifies what kind of
image filtering has been done to the image. Image filtering
looks at image intensity rather than the geometry of pixels (cf.
Spatial_Filter).
The Image_Mask class specifies how pixels were masked
(removed) from an image. Masks are typically used to suppress
results in areas where they don't belong, for example masking
off spacecraft hardware or removing pixels that did not meet
some processing threshold.
The Image_Mask_File class identifies a file used for image
masking. The mask_type defines the type of file; if mask_type is
missing then "image" is assumed.
The Imaging class contains classes and
attributes describing both the image product itself and the
imaging instrument. Image product information can include
exposure duration, filters, data correction, sampling, frame,
sub-frames, and how the product was derived. For the imaging
instrument, information can be provided describing the dynamic
physical or operating characteristics of the imaging
instrument.
The Instrument_State class contains classes
providing the values of any dynamic physical or operating
characteristics of the imaging instruments.
The JPEG_Parameters class contains attributes
describing onboard compression parameters specific to Joint
Photographic Experts Group (JPEG) image
compression.
The JPEG_Progressive_Parameters class contains
attributes describing an interlaced progressive JPEG format, in
which data is compressed in multiple passes of progressively
higher detail. This is ideal for large images that will be
displayed while downloading over a slow connection, allowing a
reasonable preview after receiving only a portion of the
data.
The LED_Illumination_Source class provides
attributes describing an individual LED used to illuminate an
imaging target.
The LOCO_Parameters class contains attributes
describing onboard compression parameters specific to Low
Complexity Lossless Compression (LOCO) image compression, a
lossless submode of ICER.
Used when the list values have no units.
The Onboard_Color_Matrix class represents a 3x3
matrix that is used onboard to perform color correction. It is
done after de-Bayering, as all three color bands are needed for
each pixel. The first three elements are multiplied by the R,G,B
(respectively) pixel values and summed to get the output Red
pixel value. Similarly, the second three create the output
Green, and the last three the output Blue. If the label is not
present, no correction was performed.
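The row-by-row multiply-and-sum described above can be sketched as follows (the function name is illustrative):

```python
def apply_color_matrix(rgb, matrix):
    """Apply a 3x3 onboard color-correction matrix to one (R, G, B) pixel.

    Each output band is one matrix row multiplied element-wise by the
    input (R, G, B) values and summed: row 0 gives the output Red,
    row 1 the output Green, and row 2 the output Blue.
    """
    r, g, b = rgb
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in matrix)
```

With the identity matrix the pixel is unchanged, consistent with the statement that an absent label means no correction was performed.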
The Onboard_Compression class contains
attributes describing the compression performed onboard a
spacecraft or instrument for data storage and
transmission.
The Onboard_Responsivity class specifies factors
that have been applied to the R, G, and B cells (respectively)
of the Bayer pattern, before de-Bayering (demosaicking) takes
place. The intent of these is to approximately balance the
filters so the de-Bayering process is not skewed, and EDR/ILT
products look reasonable before full radiometric or color
correction is done on the ground. If these factors are not
present, no correction was performed.
The Optical_Filter class defines the filters
used by the camera optics (not to be confused with image
processing software filters). The filter may be identified by
name, identifier, number, or some combination of
these.
The Pixel_Averaging_Dimensions class provides
the height and width, in pixels, of the area over which pixels
were averaged prior to image compression.
The Pointing_Correction class contains
attributes used to identify and describe the camera model
transformations completed in order to update pointing
information of an image or mosaic.
The Pointing_Correction_File class identifies a
file containing pointing correction
information.
The Pointing_Correction_Image class contains
attributes used to identify and describe the camera model
transformations completed in order to update pointing
information of a single image, whether alone or part of a
mosaic.
The Pointing_Model_Parameter class specifies the
name and value (numeric) parameters needed by the pointing model
identified by the pointing_model_name attribute in the
Pointing_Correction parent class. The meaning of any given
parameter is defined by the pointing model.
The Radial_Flat_Field_Function class specifies
parameters used to generate a synthetic flat field using a
simple radial function of the form: r = (x-x_center)^2 +
(y-y_center)^2 ; flat_field(x,y) = 1 + r0 + r1*r + r2*r^2 +
r3*r^3 . Note that x is in the sample direction of the image,
and y is in the line direction. The image is multiplied by this
function in order to perform a flat field correction (which is
the opposite of Flat_Field_File).
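The function above can be written out directly; parameter names mirror the attribute names in this class:

```python
def radial_flat_field(x, y, x_center, y_center, r0, r1, r2, r3):
    """Evaluate the synthetic flat field at pixel (x, y).

    x is in the sample direction and y in the line direction:
        r = (x - x_center)^2 + (y - y_center)^2
        flat_field(x, y) = 1 + r0 + r1*r + r2*r^2 + r3*r^3
    The image is multiplied by this value to apply the correction.
    """
    r = (x - x_center) ** 2 + (y - y_center) ** 2
    return 1 + r0 + r1 * r + r2 * r ** 2 + r3 * r ** 3
```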
The Radiometric_Correction class is a container
for the type and details of the radiometric calibration
performed on the product.
The Sampling class contains attributes and
classes related to the sampling, scaling, companding, and
compression or reduction in resolution of
data.
The Shutter_Subtraction class specifies
attributes describing the removal from the image of the shutter,
or fixed-pattern.
The Spatial_Filter class specifies what kind of
spatial filtering has been done on the image. Spatial filtering
looks at the geometry of pixels (e.g. XYZ or range values)
rather than their intensity (cf.
Image_Filter).
The Subframe class describes the position and
other optional characteristics of an image subframe, relative to
the original image.
The Video class contains attributes related to
video observations, defined as a regular time series of frames.
The class can be used to describe a single frame within the
video, or the video as a whole.
This section contains the simpleTypes that provide more constraints
than those at the base data type level. The simpleTypes defined here build on the base data
types. This is another component of the common dictionary and therefore falls within the
common namespace.
The active_flag attribute indicates whether or
not the data processing described by the parent class is active.
In general, the presence of the parent class implies it is
active and thus active_flag is optional. The primary purpose for
active_flag is to either explicitly indicate a correction is not
active (for example, if it normally is but was explicitly turned
off), or to be able to provide parameters for historical reasons
that may no longer be relevant to a current correction.
The analog_offset attribute identifies the
analog value that is subtracted from the signal prior to the
analog/digital conversion.
The atmospheric opacity (tau) value used in
radiometric correction.
The atmospheric opacity (tau) target value to
which the image was corrected.
The auto_exposure_data_cut attribute specifies
the DN value which a specified fraction of pixels is permitted
to exceed. The fraction is specified using the
auto_exposure_data_fraction attribute.
The auto_exposure_percent attribute specifies
the auto-exposure early-termination percent. If the desired DN
(auto_exposure_data_cut) is within this percentage of the
measured DN (the DN at which the percentage of pixels above that
DN equals or exceeds the auto_exposure_pixel_fraction), then the
auto exposure algorithm is terminated and the calculated time is
accepted.
The auto_exposure_pixel_fraction attribute
specifies the percentage of pixels whose DN values may exceed
the auto_exposure_data_cut.
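How the three auto_exposure_* attributes interact can be sketched as follows. The function names and the exact form of the termination test are assumptions; the real algorithm is mission-specific:

```python
def measured_dn(pixels, pixel_fraction_pct):
    """DN at which the percentage of pixels above it first reaches
    auto_exposure_pixel_fraction (a hypothetical reading of the definition)."""
    n = len(pixels)
    for dn in sorted(set(pixels), reverse=True):
        above = sum(1 for p in pixels if p > dn)
        if 100.0 * above / n >= pixel_fraction_pct:
            return dn
    return min(pixels)

def exposure_accepted(pixels, data_cut, pixel_fraction_pct, percent):
    """Early-termination test: accept the calculated exposure time when the
    measured DN is within auto_exposure_percent of the desired data cut."""
    m = measured_dn(pixels, pixel_fraction_pct)
    return abs(m - data_cut) <= data_cut * percent / 100.0
```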
The autofocus_step_count attribute specifies
the number of steps (images) to be taken by an autofocus
algorithm.
The autofocus_step_size attribute specifies the
size in motor counts of each (or the initial) step taken by the
focus adjustment mechanism in an autofocus
algorithm.
The bandwidth attribute provides a measure of
the spectral width of a filter. For a root-mean-square detector
this is the effective bandwidth of the filter, i.e. the full
width of an ideal square filter having a flat response over the
bandwidth and zero response elsewhere. Another common method for
measuring bandwidth is Full Width at Half Maximum, which is the
width of a "bump" on a curve or function. It is given by the
distance between points on the curve at which the function
reaches half of its maximum value.
The best_focus_distance attribute specifies the
estimated distance to best focus.
The brightness_offset attribute defines the
additive factor used for a linear brightness
correction.
The brightness_scale attribute defines the
multiplicative factor used for a linear brightness
correction.
The center_filter_wavelength attribute provides
the wavelength of the center of the passband, or the peak
transmissivity, for an instrument filter.
For single-band images, this defines which
component of the color space is represented by this band. This
keyword is not needed for 3-band images, as all bands are
represented.
The color_dn_scaling_factor attribute specifies
the actual value used to scale the color values. This value is
determined using the color_dn_scaling_method.
The color_dn_scaling_method attribute defines
how the color values are scaled. EXPOSURE_NORMALIZED_COLOR means
that the color values have been normalized based on exposure
time, so neighboring images in a mosaic will have the same color
values. DN_COLOR means that the color values are based on the
raw DNs, so images take full advantage of the available dynamic
range but may not match with neighbors in a
mosaic.
Specifies whether the image still has a CFA
pattern ("Encoded"), the CFA pattern has been removed
("Decoded") or it never had a pattern ("No
CFA").
Defines the type of Color Filter Array (CFA)
used to encode multiple colors in a single exposure. The most
common example of this is the Bayer pattern. This is optional if
there is no CFA. Additional attributes, specific to each CFA
type, define whether or not the CFA pattern has been removed,
and if so, how (e.g. bayer_algorithm).
Defines the color space in which this product is
expressed. Some color spaces (e.g. XYZ or xyY) are independent
of illuminant, while for others (e.g. sRGB or pRGB) the
illuminant matters. It is expected that the defined color spaces
will increase over time.
The color_subsampling_mode attribute specifies
the JPEG color subsampling mode used during compression. Valid
values: '4:2:2' - 4:2:2 chroma subsampling, which is the typical
case, '4:4:4' - 4:4:4 chroma sampling, which indicates no
subsampling, 'Grayscale' - indicates a grayscale
image
The companding_state attribute specifies whether
the data is or has had its bit depth reduced, for example
conversion from 12 to 8 bits via a lookup table or bit scaling.
Valid values: None - values have not been companded. Companded -
values are currently companded. Expanded - values have been
companded but are now expanded back to original
size.
The crosstrack_summing attribute provides the
number of detector pixel values in the crosstrack direction that
have been averaged to produce the final output
pixel.
The current_value attribute provides the
current, in the specified units, of an imaging instrument or
some part of the imaging instrument.
The decomposition_stages attribute identifies
the number of stages of decomposition.
The deferred_flag attribute specifies whether
compression was done at the time of image acquisition, or was
deferred until later (typically at downlink time).
The detector_to_image_flip attribute indicates
whether and how the image was flipped (mirror image) along its
optical path through an instrument, from detector to final image
orientation. "Horizontal" means a left-to-right flip, while
"Vertical" means a top-to-bottom flip. Note that if both this
attribute and detector_to_image_rotation exist, the flip is
assumed to have happened before the rotation.
The detector_to_image_rotation attribute
specifies the clockwise rotation, in degrees, that was applied
to an image along its optical path through an instrument, from
detector to final image orientation. Note that if both this
attribute and detector_to_image_flip exist, the flip is assumed
to have happened before the rotation.
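The ordering rule shared by these two attributes (flip first, then rotation) can be sketched as follows (the function name is illustrative):

```python
def detector_to_image(img, flip=None, rotation_cw=0):
    """Apply an optional flip, then a clockwise rotation, to a 2-D pixel list.

    Per the definitions above, when both detector_to_image_flip and
    detector_to_image_rotation are present, the flip happens first.
    """
    if flip == "Horizontal":        # left-to-right mirror
        img = [row[::-1] for row in img]
    elif flip == "Vertical":        # top-to-bottom mirror
        img = img[::-1]
    for _ in range((rotation_cw // 90) % 4):
        # One 90-degree clockwise rotation.
        img = [list(row) for row in zip(*img[::-1])]
    return img
```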
The device_id attribute supplies the identifier
of an imaging instrument, an imaging instrument device, or some
point on the instrument or device.
The device_name attribute supplies the formal
name for an imaging instrument, an imaging instrument device, or
some point on the instrument or device.
The device_state attribute indicates the state
of a sensor or other device associated with the imaging
instrument. These states are interpreted in an
instrument-specific way.
The downtrack_summing attribute provides the
number of detector pixel values in the downtrack direction that
have been averaged to produce the final output
pixel.
Defines the gamma value encoded in this image.
Gamma correction is used to nonlinearly compress the intensities
in an image, and most display systems assume that images are
encoded with an sRGB gamma. Note that this is a string value
because the most common gamma correction ("sRGB") is not
precisely expressible as a gamma exponent. A numeric value
indicates a gamma exponent.
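As an illustration of the two cases above (the piecewise constants are the standard IEC 61966-2-1 sRGB values; the function name and the exponent interpretation are assumptions):

```python
def gamma_encode(linear, gamma):
    """Encode a linear intensity in [0, 1] using the labeled gamma value.

    Per the attribute definition, "sRGB" selects the piecewise sRGB
    transfer curve (not a pure exponent), while a numeric string is
    treated as a plain gamma exponent.
    """
    if gamma == "sRGB":
        # Standard sRGB encoding: linear segment near black, power law above.
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055
    return linear ** (1.0 / float(gamma))
```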
The erase_count attribute specifies the number of times a
detector has been or will be flushed of data in raw counts,
dependent on the parent class for the
attribute.
The error_pixel_count attribute specifies the
number of pixels that are outside a valid DN range, after all
decompression and post decompression processing has been
completed.
The exposure_count attribute provides the number
of exposures taken during a certain interval, such as the
duration of one command. For example, this may include the
number of exposures needed by an autoexposure
algorithm.
The exposure_duration attribute provides the
amount of time the instrument sensor was gathering light from
the scene, such as between opening and closing of a shutter, or
between flushing and readout of a CCD.
The exposure_duration_count attribute specifies
the value, in raw counts, for the amount of time the instrument
sensor was gathering light from the scene, such as between
opening and closing of a shutter, or between flushing and
readout of a CCD. This is the raw count either commanded or
taken directly from telemetry as reported by the spacecraft.
This attribute is the same as the exposure_duration but in DN
counts instead of time, and the translation of
exposure_duration_count to exposure_duration will differ by
mission.
The exposure_duration_threshold specifies the
exposure time threshold in raw counts, when
shutter_subtraction_mode = 'Conditional'.
The exposure_type attribute indicates the
exposure setting on a camera. Valid values: 'Manual' - manual
exposure setting, 'Auto' - autoexposure is applied by the
camera, 'Test' - test exposure setting telling the camera to
return a fixed-pattern test image.
The filter_id attribute provides a short string
identifier for an instrument filter through which an image or
measurement was acquired or which is associated with a given
instrument mode.
The filter_name attribute provides the name,
described in the mission documentation, of the optical filter
through which an image or measurement was
acquired.
The filter_number attribute provides the numeric
identifier of an instrument filter through which an image or
measurement was acquired or which is associated with a given
instrument mode.
The filter_position_count attribute provides the position, in
motor counts, of the filter wheel motor.
The size in pixels of the window used for
filtering in the line dimension. If the window varies across the
image, this could contain the average window or initial window,
as needed by the specific algorithm.
The size in pixels of the window used for
filtering in the sample dimension. If the window varies across
the image, this could contain the average window or initial
window, as needed by the specific algorithm.
The first_line attribute indicates the line
within a source image that corresponds to the first line in a
sub-image.
The first_sample attribute indicates the sample
within a source image that corresponds to the first sample in a
sub-image.
The focus_merge_blending_flag attribute
indicates whether intra-stack image blending has been performed
during a focus merge operation. A value of 'false' means images
were merged without blending.
The focus_merge_registration_flag attribute
indicates whether intra-stack image registration has been
performed during a focus merge operation. A value of 'true'
indicates that intra-stack image registration has been performed
during the focus merge operation, while 'false' indicates that
images have been merged without translation.
The focus_mode attribute specifies the type of
focus command, for example: Autofocus, Manual, ZStack, or
Relative (focus adjustment based on a previous
autofocus).
The focus_position attribute defines, in a
camera-specific way, the focus metric that should be used for
geometric processing of the data (e.g. for creating camera
models). This will often be the focus motor
count.
The focus_position_count attribute specifies a
commanded focus, or the initial focus position used by the
autofocus algorithm.
The focus_stack_flag attribute indicates
whether or not focus stack image products were created during
the autofocus imaging step.
The frame_count attribute indicates the total
number of image frames acquired, such as for a video or focus
stack observation.
The frame_id attribute specifies an
identification for a particular instrument measurement frame. A
frame consists of a sequence of measurements made over a
specified time interval, and may include measurements from
different instrument modes. These sequences repeat from cycle to
cycle and sometimes within a cycle.
The frame_index attribute specifies the sequence
number of this frame in the context of the entire video, i.e.
the first frame of the video would be index 1, up to
frame_count.
The frame_interval attribute defines the time
between the start of successive frames in a video
product.
The frame_rate attribute specifies the
calculated frame rate for video products.
The frame_type_name attribute specifies whether
the image was commanded as part of a stereo pair or as a single
left or right monoscopic image. If frame_type_name = 'Stereo', a
left and a right image should be present.
The gain_mode_id attribute identifies the gain
state of an instrument. Gain is a constant value which is
multiplied with an instrument's output signal to increase or
decrease the level of that output. These modes may vary by
mission so the permissible values should be set by the mission
dictionaries.
The gain_number attribute specifies the gain
value used in the analog to digital conversion. The gain value
is a multiplicative factor used in the analog to digital
conversion.
The gop_frame_count attribute indicates, for
video products compressed into a group of images (Group Of
Pictures or GOP), the number of images in a GOP. This is not
necessarily the total number of frames in the observation (see
frame_count), as the observation may consist of a number of
GOPs.
Videos can be broken into Groups of Pictures
(GOP)s, which group a number of frames together. The
gop_frame_index attribute specifies the frame index within a
Group Of Pictures (GOP) starting at 1. This is distinct from
frame_index, which is the index into the video as a
whole.
Videos can be broken into Groups of Pictures
(GOP)s, which group a number of frames together. The
gop_start_index attribute specifies the index of the first frame
of the GOP (starting at 1). Thus, frame_index = gop_start_index
+ gop_frame_index - 1.
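The index relationship stated above can be written directly as a small function; the frame numbers in the usage line are illustrative.

```python
# Sketch of the GOP index relationship defined above: frame_index is the
# 1-based index into the whole video, gop_start_index is the overall
# index of the GOP's first frame, and gop_frame_index is the 1-based
# index within that GOP.

def frame_index(gop_start_index: int, gop_frame_index: int) -> int:
    return gop_start_index + gop_frame_index - 1

# Third frame of a GOP that begins at overall frame 9:
print(frame_index(9, 3))  # 11
```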
The height_pixels attribute provides the
vertical dimension, in pixels.
Specifies the elevation above which the image is
masked off.
The icer_quality attribute is an ICER-specific
variable for on-board ICER data compression.
The id attribute supplies a short name
(identifier) for the associated value in a group of related
values.
Defines the illuminant that was used in order to
process this image. The valid values are open-ended but examples
of valid values include: None, D65, 3000K,
5000K.
The illumination_wavelength attribute provides
the wavelength of an LED that was used to illuminate this
image.
The input_dn_max attribute provides the value of
the maximum DN in the input image that is assigned a specific DN
in the output image during companding.
The input_dn_min attribute provides the value of
the minimum DN in the input image that is assigned a specific DN
in the output image during companding.
The interframe_delay attribute provides the time
between the end of one frame and the beginning of the next frame
in a video product.
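A sketch of how these timing attributes can relate, under the assumption that a frame's acquisition lasts a fixed duration: the start-to-start frame_interval would then be that duration plus the end-to-start interframe_delay, and frame_rate its reciprocal. The function and parameter names are illustrative, not dictionary attributes.

```python
# Hedged sketch relating frame_interval, frame_rate, and
# interframe_delay, assuming each frame's acquisition takes
# frame_duration_s seconds.

def frame_timing(frame_duration_s: float, interframe_delay_s: float):
    frame_interval = frame_duration_s + interframe_delay_s  # start-to-start
    frame_rate = 1.0 / frame_interval                       # frames per second
    return frame_interval, frame_rate

interval, rate = frame_timing(0.15, 0.05)
print(interval, rate)  # 0.2 s between frame starts -> 5 frames/s
```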
The jpeg_parameter attribute is a JPEG-specific
variable which specifies whether on-board compression is
determined by image quality or by compression factor, based on
the selected on-board compression mode.
The jpeg_quality attribute is a JPEG-specific
variable which identifies the resultant or targeted image
quality index for on-board data compression.
The line_fov attribute specifies the angular
measure of the field of view of an imaged scene, as measured in
the image line direction (generally vertical).
The lines attribute indicates the total number
of data instances along the vertical axis of an image or
sub-image.
Specifies the pixel value in the mask that will
represent transparent (or NoData/null) for the characterized
image. This is normally defined as 0 in the mask layer. Once
defined, any other value in the mask represents opaque or
translucent (in other words, valid) in the characterized
image.
This identifies the type of mask file. Two
enumerations are given, but these can be expanded if
needed.
The max_auto_exposure_iteration_count attribute
specifies the maximum number of exposure iterations the
instrument will perform in order to obtain the requested
exposure.
The maximum size in pixels of the window used
for filtering in the line dimension. If the window is constant
across the image, filter_window_line should be used
instead.
The maximum size in pixels of the window used
for filtering in the sample dimension. If the window is constant
across the image, filter_window_sample should be used
instead.
The maximum_focus_distance attribute specifies
the estimated distance to the farthest pixel with less than 1
pixel of Gaussian blur.
The minimum size in pixels of the window used
for filtering in the line dimension. If the window is constant
across the image, filter_window_line should be used
instead.
The minimum size in pixels of the window used
for filtering in the sample dimension. If the window is constant
across the image, filter_window_sample should be used
instead.
The minimum_focus_distance attribute specifies
the estimated distance to the nearest pixel with less than 1
pixel of Gaussian blur.
The missing_pixel_count attribute identifies
the total number of missing pixels defined by the image or image
segment.
The motor_count attribute specifies the raw
motor counts for the specified device, which indicates the
position of the associated mechanism in a device-specific
way.
Specifies the factor that has been multiplied by
the B pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and G pixel values to
produce the output Blue value.
Specifies the factor that has been multiplied by
the G pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and B pixel values to
produce the output Blue value.
Specifies the factor that has been multiplied by
the R pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied G and B pixel values to
produce the output Blue value.
Specifies the factor that has been multiplied by
the B pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and G pixel values to
produce the output Green value.
Specifies the factor that has been multiplied by
the G pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and B pixel values to
produce the output Green value.
Specifies the factor that has been multiplied by
the R pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied G and B pixel values to
produce the output Green value.
Specifies the factor that has been multiplied by
the B pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and G pixel values to
produce the output Red value.
Specifies the factor that has been multiplied by
the G pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied R and B pixel values to
produce the output Red value.
Specifies the factor that has been multiplied by
the R pixel value after de-Bayering (demosaicking) takes place.
This value is summed with the multiplied G and B pixel values to
produce the output Red value.
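The nine multiplier attributes above together describe a 3x3 color-correction matrix applied to each (R, G, B) pixel after de-Bayering: each output channel is the sum of the three input channels times their respective factors. The sketch below is illustrative; the identity matrix shown is not from any mission.

```python
# Sketch of applying the 3x3 post-demosaicking color matrix described by
# the nine multiplier attributes. Each matrix row holds the (R, G, B)
# factors for one output channel.

def apply_color_matrix(rgb, matrix):
    r, g, b = rgb
    return tuple(m_r * r + m_g * g + m_b * b for (m_r, m_g, m_b) in matrix)

identity = [(1.0, 0.0, 0.0),   # output Red   = 1*R + 0*G + 0*B
            (0.0, 1.0, 0.0),   # output Green = 0*R + 1*G + 0*B
            (0.0, 0.0, 1.0)]   # output Blue  = 0*R + 0*G + 1*B
print(apply_color_matrix((120, 80, 60), identity))  # (120.0, 80.0, 60.0)
```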
The onboard_compression_class attribute
identifies the type of on-board compression used for data
storage and transmission. Note that the onboard_compression_type
identifies the specific compression algorithm used (for example,
ICER), whereas the onboard_compression_class gives a simple
indicator of the type of compression mode. Valid values:
'Lossless', 'Lossy', 'Uncompressed'
The onboard_compression_mode attribute
identifies the method used for on-board compression, performed
for the purpose of data storage and transmission. The value of
this attribute represents the raw integer value for
compression, which is then translated to the full name captured
by the onboard_compression_type attribute.
The onboard_compression_quality attribute is an
indication of compression quality, in the range of 0.0 to 1.0.
Losslessly compressed or uncompressed data have a value of 1.0.
Other values are assigned in a manner specific to the
compression mode, but with the property that a higher value
means better quality. Although the values are not directly
comparable across compression types, this facilitates comparison
of compression quality across images independent of compression
mode.
The onboard_compression_rate attribute provides
the average number of bits needed to represent a pixel for an
image that was compressed on-board for data storage and
transmission.
The onboard_compression_ratio attribute provides
the ratio of the size, in bytes, of the original uncompressed
data object to its compressed form (original size / compressed
size). Onboard compression is performed for data storage and
transmission.
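A worked example of the two metrics defined above: the ratio is original bytes over compressed bytes, and the rate is the average number of compressed bits per pixel. The function name is illustrative.

```python
# Worked example of onboard_compression_ratio (original size /
# compressed size) and onboard_compression_rate (average bits per pixel
# in the compressed form).

def compression_metrics(original_bytes, compressed_bytes, pixel_count):
    ratio = original_bytes / compressed_bytes
    rate_bits_per_pixel = compressed_bytes * 8 / pixel_count
    return ratio, rate_bits_per_pixel

# A 1024x1024 image of 16-bit pixels (2 MiB) compressed to 524288 bytes:
ratio, rate = compression_metrics(2 * 1024 * 1024, 524288, 1024 * 1024)
print(ratio, rate)  # 4.0 (4:1 compression) and 4.0 bits/pixel
```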
The onboard_compression_type attribute
identifies the type of on-board compression used for data
storage and transmission. Valid Values: 'ICER', 'LOCO', 'JPEG',
'JPEG Progressive', 'MSSS Lossless', 'None'.
The output_dn attribute provides the value of
the DN in the output image that is assigned to a given range of
DN in the input image during companding.
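Together with input_dn_min and input_dn_max (defined earlier), output_dn describes one row of a companding table: each input-DN range maps to a single output DN. The table values below are illustrative only, not from any mission.

```python
# Sketch of a companding table built from the input_dn_min,
# input_dn_max, and output_dn attributes: each row maps the input range
# [input_dn_min, input_dn_max] to one output_dn value.
table = [
    {"input_dn_min": 0,  "input_dn_max": 3,    "output_dn": 0},
    {"input_dn_min": 4,  "input_dn_max": 15,   "output_dn": 1},
    {"input_dn_min": 16, "input_dn_max": 4095, "output_dn": 2},
]

def compand(dn: int) -> int:
    for row in table:
        if row["input_dn_min"] <= dn <= row["input_dn_max"]:
            return row["output_dn"]
    raise ValueError(f"DN {dn} outside companding table")

print(compand(10))  # 1
```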
The pointing_model_name attribute specifies
which of several "pointing models" was used to transform the
camera model based on updated pointing information. These
updates are typically derived from mosaic seam corrections. This
attribute and the associated Pointing_Model_Index classes define
what the updated pointing information is, providing enough
information to re-create the camera model from calibration data.
If present, this attribute overrides the default pointing based
on telemetry. The special value "NONE" shall be interpreted the
same as if the attribute is absent (i.e. the default pointing
model should be used). New model names can be created at any
time; the models themselves should be described in a
mission-specific ancillary file. See also the geom:solution_id
attribute within the geom:Camera_Model_Parameters
class.
The pointing_model_solution_id attribute
specifies the identifier of the pointing correction solution
used to derive the model specified via the enclosing
Pointing_Correction class. This identifier should also appear in
the pointing correction file referenced by the
Data_Correction_File. If there is only one identifier in the
correction file, then pointing_model_solution_id may be omitted.
The pointing_model_solution_id attribute may be reused in the
context of pointing corrections, although uniqueness is
recommended. The pointing correction solution ID namespace is
separate from the coordinate system namespace.
The processing_algorithm attribute specifies the
name of the algorithm used to perform the processing specified
by the enclosing class. Algorithm names should be defined in the
project documentation, and/or in the enclosing class definition.
The processing_venue attribute specifies where
the processing described by the parent class was performed.
In cases where each pass of a progressive JPEG
is downlinked separately, the progressive_stage attribute
indicates the highest pass number contained in this image, which
indicates the available level of detail.
The r0 attribute specifies the 0th-order
polynomial coefficient of the function used to describe an
algorithmic flat field. See Radial_Flat_Field_Function for the
formula.
The r1 attribute specifies the 1st-order
polynomial coefficient of the function used to describe an
algorithmic flat field. See Radial_Flat_Field_Function for the
formula.
The r2 attribute specifies the 2nd-order
polynomial coefficient of the function used to describe an
algorithmic flat field. See Radial_Flat_Field_Function for the
formula.
The r3 attribute specifies the
3rd-order polynomial coefficient of the function used to
describe an algorithmic flat field. See
Radial_Flat_Field_Function for the formula.
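The exact formula is defined by the Radial_Flat_Field_Function class; for illustration, the sketch below assumes a cubic polynomial in the radial distance from the center pixel (x_center and y_center are attributes defined later in this dictionary).

```python
# Hedged sketch of a radial flat-field function using the r0..r3
# coefficients and the x_center/y_center attributes. A cubic polynomial
# in radial pixel distance is assumed here; consult
# Radial_Flat_Field_Function for the authoritative formula.
import math

def radial_flat_field(x, y, x_center, y_center, r0, r1, r2, r3):
    rho = math.hypot(x - x_center, y - y_center)  # radial distance in pixels
    return r0 + r1 * rho + r2 * rho**2 + r3 * rho**3

# With only the 0th-order term, the flat field is constant:
print(radial_flat_field(100, 200, 512, 512, 1.0, 0.0, 0.0, 0.0))  # 1.0
```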
The radiometric_type defines the specific type
of radiance measurement. Possible values include "Radiance",
"Spectral Radiance", "Scaled Spectral Radiance". Note: There are
many more possible values, and this definition can be updated to
include more examples over time.
Defines the scaling factor used for Scaled
Radiance or Scaled Spectral Radiance. Scaled radiance is created
by dividing radiance by this factor, which scales the radiance
to what it would be if the sun were at the zenith with a clear
atmosphere.
The raw_count attribute provides the value of
some parameter measured by a spacecraft or instrument sensor in
the raw units reported by that sensor. A separate attribute
should be included alongside the raw_count that translates this
value into the appropriate engineering units, e.g.
temperature_value in degrees C or voltage_value in
Volts.
The readout_rate attribute specifies the clock
rate at which values are read from the sensor.
Specifies the conversion factor between DN and
radiance units that has been applied to the blue channel of an
image.
Specifies the factor that has been applied to
the B cell of the Bayer pattern, before de-Bayering
(demosaicking) takes place.
Specifies the factor that has been applied to
the G cell of the Bayer pattern, before de-Bayering
(demosaicking) takes place.
Specifies the factor that has been applied to
the R cell of the Bayer pattern, before de-Bayering
(demosaicking) takes place.
Specifies the conversion factor between DN and
radiance units that has been applied to the green channel of an
image.
Specifies the conversion factor between DN and
radiance units that has been applied to a panchromatic
image.
Specifies the conversion factor between DN and
radiance units that has been applied to the red channel of an
image.
The sample_bit_mask attribute specifies the
active bits in a sample. Any bit mask is valid in a non-raw
product. Any 8-bit product, whether a scaled raw product or
other, will have the value "2#11111111" and be stored in one
byte. Any 12-bit product, whether an unscaled raw product, or an
ILUT partially-processed product (see companding_method), will
have the value "2#0000111111111111" and be stored in two bytes.
A 15-bit product (e.g. Radiometrically-corrected Calibrated
product type) will have the value "2#0111111111111111" and be
stored in two bytes. Any 32-bit integer product (e.g. Histogram
Raw product) will have the value
"2#11111111111111111111111111111111" and be stored in four
bytes. For floating-point data, sample_bit_mask is not valid and
may be absent. If present, it should be ignored. NOTE: In the
PDS, the domain of sample_bit_mask is dependent upon the
currently-described value in the sample_bits attribute and only
applies to integer values.
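The "2#..." values above use a radix-prefix notation for binary masks. A minimal sketch of parsing such a mask and applying it to stored samples (the function names are illustrative):

```python
# Sketch: parse a sample_bit_mask written in "2#..." radix notation and
# AND it with each stored sample to keep only the active bits.

def parse_bit_mask(text: str) -> int:
    radix, digits = text.split("#", 1)
    return int(digits.rstrip("#"), int(radix))  # tolerate a trailing '#'

def apply_mask(samples, mask_text):
    mask = parse_bit_mask(mask_text)
    return [s & mask for s in samples]

# A 12-bit product stored in 16-bit words:
print(apply_mask([0x0FFF, 0xF123], "2#0000111111111111"))  # [4095, 291]
```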
The sample_bits attribute specifies the logical
or active number of bits in the data, which is distinct from the
physical number of bits (for example, encoding 12-bit data
within 16-bit words). These logical bits are stored in the low
order (least significant) bits, with unused bits filled with 0
(or 1 for negative integers to preserve a two's complement
representation). This is distinct from the valid data range
(specified by valid_minimum and valid_maximum in
Special_Constants class) because all values, including
missing/invalid flag values, must fit within the sample_bits.
The intent is that the data should be able to be sent through a
communication channel that passes only sample_bits with no loss
in fidelity.
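The storage convention described above (logical bits in the low-order positions, upper bits filled with the sign bit for two's complement data) can be sketched as follows; the function name is illustrative.

```python
# Sketch of recovering a signed value stored in the low-order
# sample_bits of a wider word, per the fill convention described above:
# unused high bits hold 0, or 1 for negative two's complement values.

def sign_extend(word: int, sample_bits: int) -> int:
    value = word & ((1 << sample_bits) - 1)   # keep the logical bits
    if value & (1 << (sample_bits - 1)):      # sign bit set?
        value -= 1 << sample_bits             # extend into a negative int
    return value

print(sign_extend(0x0FFF, 12))  # -1: all twelve logical bits set
print(sign_extend(0x0005, 12))  # 5
```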
The sample_fov attribute specifies the angular
measure of the field of view of an imaged scene, as measured in
the image sample direction (generally
horizontal).
The samples attribute indicates the total
number of data instances along the horizontal axis of an image
or sub-image.
The sampling_factor attribute provides the
value N, where every Nth data point was kept from the original
data set by selection, averaging, or taking the median. When
applied to an image object, the single value represented in
sampling_factor applies to both the lines and the
samples.
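The three subsampling strategies named above (selection, averaging, median) can be sketched as one helper; the function name and method labels are illustrative.

```python
# Sketch of sampling_factor = n applied to 1-D data: keep every nth
# point by selection, or reduce each block of n points to its average
# or median.
import statistics

def subsample(data, n, method="selection"):
    if method == "selection":
        return data[::n]
    blocks = [data[i:i + n] for i in range(0, len(data), n)]
    reduce = statistics.mean if method == "average" else statistics.median
    return [reduce(b) for b in blocks]

data = [1, 2, 3, 4, 5, 6, 7, 8]
print(subsample(data, 4))             # [1, 5]
print(subsample(data, 4, "average"))  # [2.5, 6.5]
```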
The saturated_pixel_count attribute provides the
number of pixels which were saturated. This can happen when the
sensor acquired a value too low or too high to be measured
accurately, or if post-processing causes the output pixel value
to fall outside the output range of valid values for the
data or data type.
The segment_count attribute identifies the
number of segments into which the image was partitioned for
error containment purposes.
The segment_number attribute identifies which
compression segment is described in the current Segment class.
The segment_quality attribute identifies the
resultant or targeted image quality index for on-board ICER data
compression. Upon return by the ICER decompress function, the
output quantity segment_quality provides an indication of the
quality of the reconstructed segment. Specifically, the value
returned is a double for which the integer values correspond to
attained min loss values, but in general is an interpolation
between these values. Thus lower values of segment_quality
correspond to higher reconstructed qualities, and a value of 0
indicates lossless compression. Note that the compressed stream
does not directly contain the value of min loss that was given
to the compressor, but the decompressor does know how far along
in the decompression process it got before it ran out of bits;
this information is used to determine segment_quality. In rare
circumstances the decompressor may not be able to determine
segment_quality for a segment that it decompresses. In this case
it sets segment_quality to 1.0. The reconstructed segment might
be either lossy or lossless when this occurs. The technical
condition under which a quality value is not determined is that
the decompressor runs out of the data for the segment before
decoding any bit plane information.
The segment_status attribute provides a bit
mask giving the status of decoding for the compression
segment identified by segment_number. Upon return by the ICER
decompress function, the output quantity of segment_status
contains a number indicating the decode status. The decode
status may have one or more of the following flags set:
SHORTDATASEG FLAG (bit 0): If this flag is set, then the segment
contained so little data that nothing could be reconstructed in
the segment. INCONSISTENTDATA FLAG (bit 1): If this flag is set,
then one or more pieces of information in the segment header
(specifically, image width, image height, n segs, wavelet
filter, n decomps) are inconsistent with the value(s) in the
first (valid) segment. ICER will ignore the data in this
segment. DUPLICATESEG FLAG (bit 2): If this flag is set, then
the segment index given in the header equals that given by a
previous segment. The decompressor will ignore the data in this
segment. BADBITPLANENUMBER FLAG (bit 3): If this flag is set,
then an ICER internal parameter in the header for this segment
has probably been corrupted. The decompressor will ignore the
data in this segment. BADBITPLANECOUNT FLAG (bit 4): If this
flag is set, then an ICER internal parameter in the header for
this segment has probably been corrupted. The decompressor will
ignore the data in this segment. BADDATA FLAG (bit 5): If this
flag is set, then either the parameter combination given in the
header for this segment are not allowed by ICER, or the segment
number is bad. This probably indicates corrupted data. The
decompressor will ignore the data in this segment.
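The flag bits listed above can be decoded with a simple mask check. The constant names below are Python-ized versions of the flag names in the text; the bit positions follow the definition (bits 0 through 5).

```python
# Sketch of decoding the segment_status bit mask. Names are illustrative
# renderings of the flags described above.
FLAGS = {
    0: "SHORT_DATA_SEG",        # too little data to reconstruct segment
    1: "INCONSISTENT_DATA",     # header fields conflict with first segment
    2: "DUPLICATE_SEG",         # segment index repeats a previous segment
    3: "BAD_BIT_PLANE_NUMBER",  # likely corrupted internal parameter
    4: "BAD_BIT_PLANE_COUNT",   # likely corrupted internal parameter
    5: "BAD_DATA",              # disallowed parameters or bad segment number
}

def decode_segment_status(status: int):
    return [name for bit, name in FLAGS.items() if status & (1 << bit)]

print(decode_segment_status(0b000101))  # ['SHORT_DATA_SEG', 'DUPLICATE_SEG']
```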
The sequence_number attribute supplies the
sequence identifier for the associated value in a group of
related values.
The shutter_subtraction_mode attribute specifies
whether shutter subtraction will be performed, or whether it is
dependent on the exposure_duration_threshold_count.
The subframe_type attribute specifies the
method of subframing performed on the image. These methods may
vary by mission so the permissible values should be set by the
mission dictionaries. The current enumerations were added for
the MSL mission and can be expanded if needed.
The temperature_status attribute defines the
status of the associated temperature measurement. The status is
interpreted in a device-specific way, but generally 0 indicates
a successful measurement.
The temperature_value attribute provides the
temperature, in the specified units, of some point on an imaging
instrument or other imaging instrument device.
The valid_pixel_count attribute provides the
total number of pixels tagged as valid. This will generally not
include pixels counted in saturated_pixel_count or
missing_pixel_count.
The value_number attribute provides the value
with no applicable units as named by the associated id, name, or
sequence_number.
The value_string attribute provides the value
with no applicable units as named by the associated id, name, or
sequence_number.
The video_flag attribute indicates whether or
not video products were commanded.
The voltage_value attribute provides
the voltage, in the specified units, of an imaging instrument or
some part of the imaging instrument.
The wavelet_filter attribute specifies the filter
used in the compression and decompression
algorithm.
The width_pixels attribute provides the
horizontal dimension, in pixels.
The x_center attribute specifies the sample
coordinate of the center of the function used to describe an
algorithmic flat field. See Radial_Flat_Field_Function for the
formula.
The y_center attribute specifies the line
coordinate of the center of the function used to describe an
algorithmic flat field. See Radial_Flat_Field_Function for the
formula.
[
{
"dataDictionary": {
"Title": "PDS4 Data Dictionary" ,
"Version": "1.13.0.0" ,
"Date": "Mon May 04 13:19:31 MST 2020" ,
"Description": "This document is a dump of the contents of the PDS4 Data Dictionary" ,
"PropertyMapDictionary": [
]
}
}
]