Version: 2019.1

MediaEncoder.AddFrame


public bool AddFrame(Texture2D texture);

Parameters

texture: Texture containing the pixels to be written into the track for the current frame.

Returns

bool True if the operation succeeded. False otherwise.

Description

Appends a frame to the file's video track.

Keep the number of video frames and audio samples aligned so that the tracks stay synchronized as closely as possible. For instance, in a file with 30FPS video and 48KHz audio, each addition of one video frame should be followed by the addition of a buffer of 1600 sample frames (48000 / 30).
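The alignment advice above can be sketched as follows. This is a minimal example, assuming a 320x200 RGBA32 texture and a stereo 48KHz audio track; filling the texture and audio buffer with actual content is left out.

```csharp
using System.IO;
using Unity.Collections;
using UnityEditor.Media;
using UnityEngine;

public static class RecorderExample
{
    public static void RecordMovie()
    {
        var videoAttrs = new VideoTrackAttributes
        {
            frameRate = new MediaRational(30),
            width = 320,
            height = 200,
            includeAlpha = false
        };

        var audioAttrs = new AudioTrackAttributes
        {
            sampleRate = new MediaRational(48000),
            channelCount = 2
        };

        // 48000 samples/s at 30 frames/s => 1600 sample frames per video frame.
        int samplesPerVideoFrame = audioAttrs.channelCount *
            audioAttrs.sampleRate.numerator / videoAttrs.frameRate.numerator;

        var encodedFilePath = Path.Combine(Path.GetTempPath(), "my_movie.mp4");

        var tex = new Texture2D((int)videoAttrs.width, (int)videoAttrs.height,
            TextureFormat.RGBA32, false);

        using (var encoder = new MediaEncoder(encodedFilePath, videoAttrs, audioAttrs))
        using (var audioBuffer = new NativeArray<float>(samplesPerVideoFrame, Allocator.Temp))
        {
            for (int i = 0; i < 100; ++i)
            {
                // Fill 'tex' and 'audioBuffer' with the frame's content here...
                encoder.AddFrame(tex);           // one video frame...
                encoder.AddSamples(audioBuffer); // ...followed by 1/30 s of audio
            }
        }
    }
}
```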


public bool AddFrame(int width, int height, int rowBytes, TextureFormat format, NativeArray<byte> data);

Parameters

width: Image width.
height: Image height.
rowBytes: Bytes in one row of pixels. Useful when rows include padding; can be set to 0 if there is no padding.
format: Pixel format. Only TextureFormat.RGBA32 is supported at this time.
data: Bytes containing the image.

Returns

bool True if the operation succeeded. False otherwise.

Description

Appends a frame from a raw buffer to the file's video track.

This version of AddFrame avoids an image copy when the source data is not in a Texture2D, for example when pixel data comes from an AsyncGPUReadbackRequest. For details, see the note about audio/video alignment in the variant of MediaEncoder.AddFrame taking a Texture2D.
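As a sketch of the readback case, the buffer returned by a completed AsyncGPUReadbackRequest can be passed straight to this overload. The helper below is hypothetical; it assumes the readback produced tightly packed RGBA32 data (so rowBytes is 0).

```csharp
using Unity.Collections;
using UnityEditor.Media;
using UnityEngine;
using UnityEngine.Rendering;

public static class RawFrameExample
{
    // Assumes 'request' has completed and holds width*height RGBA32 pixels.
    public static bool AddReadbackFrame(MediaEncoder encoder,
        AsyncGPUReadbackRequest request, int width, int height)
    {
        // GetData<byte>() exposes the readback result without an extra copy.
        NativeArray<byte> pixels = request.GetData<byte>();

        // rowBytes = 0: rows are tightly packed, no per-line padding.
        return encoder.AddFrame(width, height, 0, TextureFormat.RGBA32, pixels);
    }
}
```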
