Android.Hardware.Camera2.CaptureRequest.SensorFrameDuration Property

Syntax

[Android.Runtime.Register("SENSOR_FRAME_DURATION")]
public static CaptureRequest.Key SensorFrameDuration { get; }

See Also

CaptureRequest.ControlAeMode
CaptureRequest.ControlMode
CameraCharacteristics.InfoSupportedHardwareLevel
CameraCharacteristics.ScalerStreamConfigurationMap
CameraCharacteristics.SensorInfoMaxFrameDuration

Value

A CaptureRequest.Key used to get or set the duration from the start of one frame's exposure to the start of the next, in nanoseconds.

Remarks

Duration from start of frame exposure to start of next frame exposure.

The maximum frame rate that can be supported by a camera subsystem is a function of many factors:

  • Requested resolutions of output image streams
  • Availability of binning / skipping modes on the imager
  • The bandwidth of the imager interface
  • The bandwidth of the various ISP processing blocks

Since these factors can vary greatly between different ISPs and sensors, the camera abstraction tries to represent the bandwidth restrictions with as simple a model as possible.

The model presented has the following characteristics:

  • The image sensor is always configured to output the smallest resolution possible given the application's requested output stream sizes. The smallest resolution is defined as being at least as large as the largest requested output stream size; the camera pipeline must never digitally upsample sensor data when the crop region covers the whole sensor. In general, this means that if only small output stream resolutions are configured, the sensor can provide a higher frame rate.
  • Since any request may use any or all the currently configured output streams, the sensor and ISP must be configured to support scaling a single capture to all the streams at the same time. This means the camera pipeline must be ready to produce the largest requested output size without any delay. Therefore, the overall frame rate of a given configured stream set is governed only by the largest requested stream resolution.
  • Using more than one output stream in a request does not affect the frame duration.
  • Certain format-streams may need to do additional background processing before data is consumed/produced by that stream. These processors can run concurrently to the rest of the camera pipeline, but cannot process more than 1 capture at a time.

The necessary information for the application, given the model above, is provided via the CameraCharacteristics.ScalerStreamConfigurationMap field using StreamConfigurationMap#getOutputMinFrameDuration(int, Size). These are used to determine the maximum frame rate / minimum frame duration that is possible for a given stream configuration.

Specifically, the application can use the following rules to determine the minimum frame duration it can request from the camera device:

  1. Let the set of currently configured input/output streams be called S.
  2. Find the minimum frame durations for each stream in S, by looking it up in CameraCharacteristics.ScalerStreamConfigurationMap using StreamConfigurationMap#getOutputMinFrameDuration(int, Size) (with its respective size/format). Let this set of frame durations be called F.
  3. For any given request R, the minimum frame duration allowed for R is the maximum out of all values in F. Let the streams used in R be called S_r.
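The rules above reduce to a simple maximum over the per-stream durations. The following sketch illustrates this; the stream labels and duration values are invented for the example, while on a real device the durations would come from StreamConfigurationMap#getOutputMinFrameDuration(int, Size):

```java
import java.util.HashMap;
import java.util.Map;

public class MinFrameDuration {
    // Rule 3: the minimum frame duration allowed for a request is the
    // maximum out of all values in F (the per-stream minimum durations).
    static long minFrameDurationForRequest(Map<String, Long> f) {
        long max = 0L;
        for (long d : f.values()) {
            max = Math.max(max, d);
        }
        return max;
    }

    public static void main(String[] args) {
        // Rules 1-2: configured streams S and their minimum durations F
        // (example values only).
        Map<String, Long> f = new HashMap<>();
        f.put("preview 1920x1080", 33_333_333L); // ~30 fps
        f.put("jpeg 4032x3024",    50_000_000L); // ~20 fps

        long minDuration = minFrameDurationForRequest(f);
        System.out.println(minDuration);       // 50000000
        System.out.println(1e9 / minDuration); // steady-state fps: 20.0
    }
}
```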

If none of the streams in S_r have a stall time (listed in StreamConfigurationMap#getOutputStallDuration(int,Size) using its respective size/format), then the minimum frame duration from rule 3 determines the steady state frame rate that the application will get if it uses R as a repeating request. Let this special kind of request be called Rsimple.

A repeating request Rsimple can occasionally be interleaved with a single capture of a new request Rstall (one that has at least one in-use stream with a non-zero stall time). If Rstall has the same minimum frame duration as Rsimple, and all buffers from the previous Rstall have already been delivered, this interleaving causes no frame rate loss.

For more details about stalling, see StreamConfigurationMap#getOutputStallDuration(int,Size).
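For intuition, the cost of interleaving a stalling capture can be sketched with a simplified timing model (this is an illustration under assumed values, not the exact HAL accounting):

```java
public class StallModel {
    // Simplified model: each Rsimple capture takes the minimum frame
    // duration; one interleaved Rstall capture additionally pays the
    // stall duration of its stalling stream.
    static long totalTimeNs(int simpleFrames, long frameDurNs, long stallDurNs) {
        return simpleFrames * frameDurNs + (frameDurNs + stallDurNs);
    }

    public static void main(String[] args) {
        long frameDur = 33_333_333L;  // ~30 fps minimum frame duration
        long stallDur = 100_000_000L; // assumed stall duration for a JPEG stream
        // 29 Rsimple captures plus 1 Rstall capture:
        long total = totalTimeNs(29, frameDur, stallDur);
        double avgFps = 30 / (total / 1e9);
        System.out.println(avgFps); // effective rate drops below 30 fps
    }
}
```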

This control is only effective if CaptureRequest.ControlAeMode or CaptureRequest.ControlMode is set to OFF; otherwise the auto-exposure algorithm will override this value.

Units: Nanoseconds

Range of valid values:

See CameraCharacteristics.SensorInfoMaxFrameDuration, CameraCharacteristics.ScalerStreamConfigurationMap. The duration is capped to max(duration, exposureTime + overhead).
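The cap means a requested duration shorter than the exposure time cannot be honored. For example (the overhead value here is an assumed placeholder; the actual overhead is device-specific):

```java
public class FrameDurationCap {
    // The effective frame duration is capped to
    // max(requestedDuration, exposureTime + overhead).
    static long effectiveDuration(long requestedNs, long exposureNs, long overheadNs) {
        return Math.max(requestedNs, exposureNs + overheadNs);
    }

    public static void main(String[] args) {
        long requested = 16_666_667L; // ~60 fps requested
        long exposure  = 20_000_000L; // 20 ms exposure time
        long overhead  = 1_000_000L;  // assumed device-specific overhead
        // Exposure plus overhead exceeds the request, so the frame lengthens:
        System.out.println(effectiveDuration(requested, exposure, overhead)); // 21000000
    }
}
```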

Optional - This value may be null on some devices.

Full capability - Present on all camera devices that report being CameraMetadata.INFO_SUPPORTED_HARDWARE_LEVEL_FULL devices in the CameraCharacteristics.InfoSupportedHardwareLevel key

[Android Documentation]

Requirements

Namespace: Android.Hardware.Camera2
Assembly: Mono.Android (in Mono.Android.dll)
Assembly Versions: 0.0.0.0