#GstMeta for carrying SMPTE-291M Ancillary data. Note that all the ADF fields
(@DID to @checksum) are 10-bit values with parity/non-parity high-bits set.
Parent #GstMeta
The field where the ancillary data is located
The channel (luminance or chrominance) in which the ancillary
data is located: 0 if the content is SD or stored in the luminance channel
(default); 1 if HD and stored in the chrominance channel.
The line on which the ancillary data is located (max 11bit). There
are two special values: 0x7ff if no line is specified (default), 0x7fe
to specify the ancillary data is on any valid line before active video
The location of the ancillary data packet in a SDI raster relative
to the start of active video (max 12bits). A value of 0 means the ADF of
the ancillary packet starts immediately following SAV. There are 3
special values: 0xfff: No specified location (default), 0xffe: within
HANC data space, 0xffd: within the ancillary data space located between
SAV and EAV
Data Identifier
Secondary Data Identification (if type 2) or Data Block
Number (if type 1)
The amount of user data
The User data
The checksum of the ADF
Location of a #GstAncillaryMeta.
Progressive or no field specified (default)
Interlaced first field
Interlaced second field
A bufferpool option to enable extra padding. When a bufferpool supports this
option, gst_buffer_pool_config_set_video_alignment() can be called.
When this option is enabled on the bufferpool,
#GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.
An option that can be activated on a bufferpool to request gl texture upload
meta on buffers from the pool.
When this option is enabled on the bufferpool,
@GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.
An option that can be activated on bufferpool to request video metadata
on buffers from the pool.
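A sketch of configuring a pool with these options (here @pool, @caps and
@size are assumed to come from an ALLOCATION query or similar):

|[<!-- language="C" -->
GstStructure *config = gst_buffer_pool_get_config (pool);
GstVideoAlignment align;

gst_buffer_pool_config_set_params (config, caps, size, 2, 0);
/* both options are needed for padded video buffers */
gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT);

gst_video_alignment_reset (&align);
align.padding_bottom = 16;  /* example: pad the height for tiled codecs */
gst_buffer_pool_config_set_video_alignment (config, &align);

gst_buffer_pool_set_config (pool, config);
]|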
Name of the caps feature indicating that the stream is interlaced.
Currently it is only used for video with 'interlace-mode=alternate'
to ensure backwards compatibility for this new mode.
In this mode each buffer carries a single field of interlaced video.
@GST_VIDEO_BUFFER_FLAG_TOP_FIELD and @GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD
indicate whether the buffer carries a top or bottom field. The order of
buffers/fields in the stream and the timestamps on the buffers indicate the
temporal order of the fields.
Top and bottom fields are expected to alternate in this mode.
The frame rate in the caps still signals the frame rate, so the notional field
rate will be twice the frame rate from the caps
(see @GST_VIDEO_INFO_FIELD_RATE_N).
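A minimal sketch of inspecting the field parity of such buffers (assuming
@buf comes from a stream negotiated with interlace-mode=alternate):

|[<!-- language="C" -->
if (GST_VIDEO_BUFFER_IS_TOP_FIELD (buf))
  g_print ("buffer carries the top field\n");
else if (GST_VIDEO_BUFFER_IS_BOTTOM_FIELD (buf))
  g_print ("buffer carries the bottom field\n");
]|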
This interface is implemented by elements which can perform some color
balance operation on video frames they process. For example, modifying
the brightness, contrast, hue or saturation.
Example elements are 'xvimagesink' and 'colorbalance'.
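A minimal sketch of driving the interface (assuming @balance is an element
that implements #GstColorBalance):

|[<!-- language="C" -->
const GList *channels, *l;

channels = gst_color_balance_list_channels (balance);
for (l = channels; l != NULL; l = l->next) {
  GstColorBalanceChannel *channel = l->data;
  gint mid = (channel->min_value + channel->max_value) / 2;

  /* reset every channel to the midpoint of its range */
  gst_color_balance_set_value (balance, channel, mid);
}
]|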
Get the #GstColorBalanceType of this implementation.
The #GstColorBalanceType.
The #GstColorBalance implementation
Retrieve the current value of the indicated channel, between min_value
and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
Retrieve a list of the available channels.
A GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
Sets the current value of the channel to the passed value, which must
be between min_value and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
A helper function called by implementations of the #GstColorBalance
interface. It fires the #GstColorBalance::value-changed signal on the
instance, and the #GstColorBalanceChannel::value-changed signal on the
channel object.
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
Get the #GstColorBalanceType of this implementation.
The #GstColorBalanceType.
The #GstColorBalance implementation
Retrieve the current value of the indicated channel, between min_value
and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
Retrieve a list of the available channels.
A GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
Sets the current value of the channel to the passed value, which must
be between min_value and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
A helper function called by implementations of the #GstColorBalance
interface. It fires the #GstColorBalance::value-changed signal on the
instance, and the #GstColorBalanceChannel::value-changed signal on the
channel object.
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
Fired when the value of the indicated channel has changed.
The #GstColorBalanceChannel
The new value
The #GstColorBalanceChannel object represents a parameter
for modifying the color balance implemented by an element providing the
#GstColorBalance interface. For example, Hue or Saturation.
A string containing a descriptive name for this channel
The minimum valid value for this channel.
The maximum valid value for this channel.
Fired when the value of the indicated channel has changed.
The new value
Color-balance channel class.
the parent class
Color-balance interface.
the parent interface
A GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The #GstColorBalanceType.
The #GstColorBalance implementation
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
An enumeration indicating whether an element implements color balancing
operations in software or in dedicated hardware. In general, dedicated
hardware implementations (such as those provided by xvimagesink) are
preferred.
Color balance is implemented with dedicated
hardware.
Color balance is implemented via software
processing.
This metadata stays relevant as long as video colorspace is unchanged.
This metadata stays relevant as long as video orientation is unchanged.
This metadata stays relevant as long as video size is unchanged.
This metadata is relevant for video streams.
The Navigation interface is used for creating and injecting navigation
related events such as mouse button presses, cursor motion and key presses.
The associated library also provides methods for parsing received events, and
for sending and receiving navigation-related bus events. One main use case is
DVD menu navigation.
The main parts of the API are:
* The GstNavigation interface, implemented by elements which provide an
application with the ability to create and inject navigation events into
the pipeline.
* GstNavigation event handling API. GstNavigation events are created in
response to calls on a GstNavigation interface implementation, and sent in
the pipeline. Upstream elements can use the navigation event API functions
to parse the contents of received messages.
* GstNavigation message handling API. GstNavigation messages may be sent on
the message bus to inform applications of navigation related changes in the
pipeline, such as the mouse moving over a clickable region, or the set of
available angles changing.
The GstNavigation message functions provide functions for creating and
parsing custom bus messages for signaling GstNavigation changes.
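As a sketch, an application's bus watch could react to mouse-over
notifications like this (bus setup is assumed to happen elsewhere):

|[<!-- language="C" -->
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  if (gst_navigation_message_get_type (msg) ==
      GST_NAVIGATION_MESSAGE_MOUSE_OVER) {
    gboolean active;

    if (gst_navigation_message_parse_mouse_over (msg, &active))
      g_print ("pointer is %s a clickable region\n",
          active ? "over" : "outside");
  }
  return TRUE;
}
]|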
Try to retrieve x and y coordinates of a #GstNavigation event.
A boolean indicating success.
The #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
navigation event.
Pointer to a gdouble to receive the y coordinate of the
navigation event.
Inspect a #GstEvent and return the #GstNavigationEventType of the event, or
#GST_NAVIGATION_EVENT_INVALID if the event is not a #GstNavigation event.
A #GstEvent to inspect.
Create a new navigation event for the given navigation command.
a new #GstEvent
The navigation command to use.
Create a new navigation event for the given key press.
a new #GstEvent
A string identifying the key press.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given key release.
a new #GstEvent
A string identifying the released key.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given mouse button press.
a new #GstEvent
The number of the pressed mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given mouse button release.
a new #GstEvent
The number of the released mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the new mouse location.
a new #GstEvent
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the mouse scroll.
a new #GstEvent
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
The x component of the scroll movement.
The y component of the scroll movement.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event signalling that all currently active touch
points are cancelled and should be discarded. For example, under Wayland
this event might be sent when a swipe passes the threshold to be recognized
as a gesture by the compositor.
a new #GstEvent
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for an added touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must stay
unique to this touch point at least until an up event is sent for
the same identifier, or all touch points are cancelled.
The x coordinate of the new touch point.
The y coordinate of the new touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no
data is available.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event signalling the end of a touch frame. Touch
frames signal that all previous down, motion and up events that have not yet
been followed by a touch-frame event should be considered simultaneous.
a new #GstEvent
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for a moved touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must
correlate to exactly one previous touch-down event.
The x coordinate of the touch point.
The y coordinate of the touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no
data is available.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for a removed touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must
correlate to exactly one previous down event, but can be reused
after sending this event.
The x coordinate of the touch point.
The y coordinate of the touch point.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Inspect a #GstNavigation command event and retrieve the enum value of the
associated command.
TRUE if the navigation command could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to GstNavigationCommand to receive the
type of the navigation event.
Note: Modifier keys (as defined in #GstNavigationModifierType) generate
their own [press](GST_NAVIGATION_EVENT_KEY_PRESS) and
[release](GST_NAVIGATION_EVENT_KEY_RELEASE) events, even though their states
are also present on all other related events.
A #GstEvent to inspect.
A pointer to a location to receive
the string identifying the key press. The returned string is owned by the
event, and valid only until the event is unreffed.
TRUE if the event is a #GstNavigation event with associated
modifiers state, otherwise FALSE.
The #GstEvent to modify.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Retrieve the details of either a #GstNavigation mouse button press event or
a mouse button release event. Determine which type the event is using
gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.
TRUE if the button number and both coordinates could be extracted,
otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gint that will receive the button
number associated with the event.
Pointer to a gdouble to receive the x coordinate of the
mouse button event.
Pointer to a gdouble to receive the y coordinate of the
mouse button event.
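For example, an element's event handler might inspect button presses like
this (a hedged sketch; @event is assumed to be a navigation event received
on a pad):

|[<!-- language="C" -->
switch (gst_navigation_event_get_type (event)) {
  case GST_NAVIGATION_EVENT_MOUSE_BUTTON_PRESS:{
    gint button;
    gdouble x, y;

    if (gst_navigation_event_parse_mouse_button_event (event, &button, &x, &y))
      g_print ("button %d pressed at %.0fx%.0f\n", button, x, y);
    break;
  }
  default:
    break;
}
]|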
Inspect a #GstNavigation mouse movement event and extract the coordinates
of the event.
TRUE if both coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Inspect a #GstNavigation mouse scroll event and extract the coordinates
of the event.
TRUE if all coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Pointer to a gdouble to receive the delta_x coordinate of the
mouse movement.
Pointer to a gdouble to receive the delta_y coordinate of the
mouse movement.
Retrieve the details of a #GstNavigation touch-down or touch-motion event.
Determine which type the event is using gst_navigation_event_get_type()
to retrieve the #GstNavigationEventType.
TRUE if all details could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a guint that will receive the
identifier unique to this touch point.
Pointer to a gdouble that will receive the x
coordinate of the touch event.
Pointer to a gdouble that will receive the y
coordinate of the touch event.
Pointer to a gdouble that will receive the
force of the touch event, in the range from 0.0 to 1.0. If pressure
data is not available, NaN will be set instead.
Retrieve the details of a #GstNavigation touch-up event.
TRUE if all details could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a guint that will receive the
identifier unique to this touch point.
Pointer to a gdouble that will receive the x
coordinate of the touch event.
Pointer to a gdouble that will receive the y
coordinate of the touch event.
Try to set x and y coordinates on a #GstNavigation event. The event must
be writable.
A boolean indicating success.
The #GstEvent to modify.
The x coordinate to set.
The y coordinate to set.
Check a bus message to see if it is a #GstNavigation event, and return
the #GstNavigationMessageType identifying the type of the message if so.
The type of the #GstMessage, or
#GST_NAVIGATION_MESSAGE_INVALID if the message is not a #GstNavigation
notification.
A #GstMessage to inspect.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application
that the current angle, or current number of angles available in a
multiangle video has changed.
The new #GstMessage.
A #GstObject to set as source of the new message.
The currently selected angle.
The number of viewing angles now available.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_COMMANDS_CHANGED
The new #GstMessage.
A #GstObject to set as source of the new message.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_EVENT.
The new #GstMessage.
A #GstObject to set as source of the new message.
A navigation #GstEvent
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_MOUSE_OVER.
The new #GstMessage.
A #GstObject to set as source of the new message.
%TRUE if the mouse has entered a clickable area of the display.
%FALSE if it is over a non-clickable area.
Parse a #GstNavigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED
and extract the @cur_angle and @n_angles parameters.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a #guint to receive the new
current angle number, or NULL
A pointer to a #guint to receive the new angle
count, or NULL.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_EVENT
and extract contained #GstEvent. The caller must unref the @event when done
with it.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
a pointer to a #GstEvent to receive
the contained navigation event.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_MOUSE_OVER
and extract the active/inactive flag. If the mouse over event is marked
active, it indicates that the mouse is over a clickable area.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a gboolean to receive the
active/inactive state, or NULL.
Inspect a #GstQuery and return the #GstNavigationQueryType associated with
it if it is a #GstNavigation query.
The #GstNavigationQueryType of the query, or
#GST_NAVIGATION_QUERY_INVALID
The query to inspect
Create a new #GstNavigation angles query. When executed, it will
query the pipeline for the set of currently available angles, which may be
greater than one in a multiangle video.
The new query.
Create a new #GstNavigation commands query. When executed, it will
query the pipeline for the set of currently available commands.
The new query.
Parse the current angle number in the #GstNavigation angles @query into the
#guint pointed to by the @cur_angle variable, and the number of available
angles into the #guint pointed to by the @n_angles variable.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
Pointer to a #guint into which to store the
currently selected angle value from the query, or NULL
Pointer to a #guint into which to store the
number of angles value from the query, or NULL
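A sketch of issuing and parsing the query (assuming @pipeline is in a state
where the query can be answered):

|[<!-- language="C" -->
GstQuery *query = gst_navigation_query_new_angles ();

if (gst_element_query (pipeline, query)) {
  guint cur_angle, n_angles;

  if (gst_navigation_query_parse_angles (query, &cur_angle, &n_angles))
    g_print ("angle %u of %u\n", cur_angle, n_angles);
}
gst_query_unref (query);
]|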
Parse the number of commands in the #GstNavigation commands @query.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the number of commands in this query.
Parse the #GstNavigation command query and retrieve the @nth command from
it into @cmd. If the list contains fewer elements than @nth, @cmd will be
set to #GST_NAVIGATION_COMMAND_INVALID.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the nth command to retrieve.
a pointer to store the nth command into.
Set the #GstNavigation angles query result field in @query.
a #GstQuery
the current viewing angle to set.
the number of viewing angles to set.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_cmds.
a #GstQuery
the number of commands to set.
A list of #GstNavigationCommand values, @n_cmds entries long.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_cmds.
a #GstQuery
the number of commands to set.
An array containing @n_cmds
#GstNavigationCommand values.
sending a navigation event.
Use #GstNavigationInterface.send_event_simple() instead.
Sends an event to the navigation interface.
The navigation interface instance
The event to send
Sends the indicated command to the navigation interface.
The navigation interface instance
The command to issue
Sends an event to the navigation interface.
The navigation interface instance
The event to send
The navigation interface instance
The type of the key event. Recognised values are "key-press" and
"key-release"
Character representation of the key. This is typically as produced
by XKeysymToString.
Sends a mouse event to the navigation interface. Mouse event coordinates
are sent relative to the display space of the related output area. This is
usually the size in pixels of the window associated with the element
implementing the #GstNavigation interface.
The navigation interface instance
The type of mouse event, as a text string. Recognised values are
"mouse-button-press", "mouse-button-release" and "mouse-move".
The button number of the button being pressed or released. Pass 0
for mouse-move events.
The x coordinate of the mouse event.
The y coordinate of the mouse event.
Sends a mouse scroll event to the navigation interface. Mouse event coordinates
are sent relative to the display space of the related output area. This is
usually the size in pixels of the window associated with the element
implementing the #GstNavigation interface.
The navigation interface instance
The x coordinate of the mouse event.
The y coordinate of the mouse event.
The delta_x coordinate of the mouse event.
The delta_y coordinate of the mouse event.
A set of commands that may be issued to an element providing the
#GstNavigation interface. The available commands can be queried via
the gst_navigation_query_new_commands() query.
For convenience in handling DVD navigation, the MENU commands are aliased as:
GST_NAVIGATION_COMMAND_DVD_MENU = @GST_NAVIGATION_COMMAND_MENU1
GST_NAVIGATION_COMMAND_DVD_TITLE_MENU = @GST_NAVIGATION_COMMAND_MENU2
GST_NAVIGATION_COMMAND_DVD_ROOT_MENU = @GST_NAVIGATION_COMMAND_MENU3
GST_NAVIGATION_COMMAND_DVD_SUBPICTURE_MENU = @GST_NAVIGATION_COMMAND_MENU4
GST_NAVIGATION_COMMAND_DVD_AUDIO_MENU = @GST_NAVIGATION_COMMAND_MENU5
GST_NAVIGATION_COMMAND_DVD_ANGLE_MENU = @GST_NAVIGATION_COMMAND_MENU6
GST_NAVIGATION_COMMAND_DVD_CHAPTER_MENU = @GST_NAVIGATION_COMMAND_MENU7
An invalid command entry
Execute navigation menu command 1. For DVD,
this enters the DVD root menu, or exits back to the title from the menu.
Execute navigation menu command 2. For DVD,
this jumps to the DVD title menu.
Execute navigation menu command 3. For DVD,
this jumps into the DVD root menu.
Execute navigation menu command 4. For DVD,
this jumps to the Subpicture menu.
Execute navigation menu command 5. For DVD,
this jumps to the audio menu.
Execute navigation menu command 6. For DVD,
this jumps to the angles menu.
Execute navigation menu command 7. For DVD,
this jumps to the chapter menu.
Select the next button to the left in a menu,
if such a button exists.
Select the next button to the right in a menu,
if such a button exists.
Select the button above the current one in a
menu, if such a button exists.
Select the button below the current one in a
menu, if such a button exists.
Activate (click) the currently selected
button in a menu, if such a button exists.
Switch to the previous angle in a
multiangle feature.
Switch to the next angle in a multiangle
feature.
Enum values for the various events that an element implementing the
GstNavigation interface might send up the pipeline. Touch events have been
inspired by the libinput API, and have the same meaning here.
Returned from
gst_navigation_event_get_type() when the passed event is not a navigation event.
A key press event. Use
gst_navigation_event_parse_key_event() to extract the details from the event.
A key release event. Use
gst_navigation_event_parse_key_event() to extract the details from the event.
A mouse button press event. Use
gst_navigation_event_parse_mouse_button_event() to extract the details from the
event.
A mouse button release event. Use
gst_navigation_event_parse_mouse_button_event() to extract the details from the
event.
A mouse movement event. Use
gst_navigation_event_parse_mouse_move_event() to extract the details from the
event.
A navigation command event. Use
gst_navigation_event_parse_command() to extract the details from the event.
A mouse scroll event. Use gst_navigation_event_parse_mouse_scroll_event()
to extract the details from the event.
An event describing a new touch point, which will be assigned an identifier
that is unique to it for the duration of its movement on the screen.
Use gst_navigation_event_parse_touch_event() to extract the details
from the event.
An event describing the movement of an active touch point across
the screen. Use gst_navigation_event_parse_touch_event() to extract
the details from the event.
An event describing a removed touch point. After this event,
its identifier may be reused for any new touch points.
Use gst_navigation_event_parse_touch_up_event() to extract the details
from the event.
An event signaling the end of a sequence of simultaneous touch events.
An event cancelling all currently active touch points.
Navigation interface.
the parent interface
The navigation interface instance
The event to send
A set of notifications that may be received on the bus when navigation
related status changes.
Returned from
gst_navigation_message_get_type() when the passed message is not a
navigation message.
Sent when the mouse moves over or leaves a
clickable region of the output, such as a DVD menu button.
Sent when the set of available commands
changes and should be re-queried by interested applications.
Sent when display angles in a multi-angle
feature (such as a multiangle DVD) change - either angles have appeared or
disappeared.
Sent when a navigation event was not handled
by any element in the pipeline (Since: 1.6)
Flags to indicate the state of modifier keys and mouse buttons
in events.
Typical modifier keys are Shift, Control, Meta, Super, Hyper, Alt, Compose,
Apple, CapsLock or ShiftLock.
the Shift key.
the Control key.
the third modifier key
the fourth modifier key
the fifth modifier key
the sixth modifier key
the seventh modifier key
the first mouse button (usually the left button).
the second mouse button (usually the right button).
the third mouse button (usually the mouse wheel button or middle button).
the fourth mouse button (typically the "Back" button).
the fifth mouse button (typically the "Forward" button).
the Super modifier
the Hyper modifier
the Meta modifier
A mask covering all entries in #GstNavigationModifierType.
the Meta modifier
Types of navigation interface queries.
invalid query
command query
viewing angle query
Returns the #GstVideoAncillaryDID16 of the ancillary data.
a #GstVideoAncillary
Check if GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD is set on @buf (Since: 1.18).
a #GstBuffer
Check if GST_VIDEO_BUFFER_FLAG_TOP_FIELD is set on @buf (Since: 1.18).
a #GstBuffer
Generic caps string for video, for use in pad templates.
string format that describes the pixel layout, as string
(e.g. "I420", "RGB", "YV12", "YUY2", "AYUV", etc.)
Generic caps string for video, for use in pad templates.
Requires caps features as a string, e.g.
"memory:SystemMemory".
string format that describes the pixel layout, as string
(e.g. "I420", "RGB", "YV12", "YUY2", "AYUV", etc.)
The entire set of flags for the @frame
a #GstVideoCodecFrame
Checks whether the given @flag is set
a #GstVideoCodecFrame
a flag to check for
This macro sets the given bits
a #GstVideoCodecFrame
Flag to set, can be any number of bits in guint32.
This macro unsets the given bits.
a #GstVideoCodecFrame
Flag to unset
Tests if the buffer should only be decoded but not sent downstream.
a #GstVideoCodecFrame
Tests if the frame must be encoded as a keyframe. Applies only to
frames provided to encoders. Decoders can safely ignore this field.
a #GstVideoCodecFrame
Tests if encoder should output stream headers before outputting the
resulting encoded buffer for the given frame.
Applies only to frames provided to encoders. Decoders can safely
ignore this field.
a #GstVideoCodecFrame
Tests if the frame is a synchronization point (like a keyframe).
Decoder implementations can use this to detect keyframes.
a #GstVideoCodecFrame
Sets the buffer to not be sent downstream.
Decoder implementations can use this if they have frames that
are not meant to be displayed.
Encoder implementations can safely ignore this field.
a #GstVideoCodecFrame
Sets the frame to be a synchronization point (like a keyframe).
Encoder implementations should set this accordingly.
Decoders implementing parsing features should set this when they
detect such a synchronization point.
a #GstVideoCodecFrame
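A sketch of how these macros are typically combined in a decoder's
handle_frame() implementation (the helper name is hypothetical):

|[<!-- language="C" -->
if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame))
  my_decoder_flush_references (dec);   /* hypothetical helper */

if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
  GST_LOG ("frame %u will be decoded but not pushed",
      frame->system_frame_number);
]|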
#GstVideoAlphaMode, the alpha mode to use.
Default is #GST_VIDEO_ALPHA_MODE_COPY.
#G_TYPE_DOUBLE, the alpha color value to use.
Default is 1.0.
#G_TYPE_BOOLEAN, whether gst_video_converter_frame() will return immediately
without waiting for the conversion to complete. A subsequent
gst_video_converter_frame_finish() must be performed to ensure completion of the
conversion before subsequent use. Default %FALSE
#G_TYPE_UINT, the border color to use if #GST_VIDEO_CONVERTER_OPT_FILL_BORDER
is set to %TRUE. The color is in ARGB format.
Default 0xff000000
#GstVideoChromaMode, set the chroma resample mode for subsampled
formats. Default is #GST_VIDEO_CHROMA_MODE_FULL.
#GstVideoChromaMethod, The resampler method to use for
chroma resampling. Other options for the resampler can be used, see
the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_LINEAR
#G_TYPE_INT, height in the destination frame, default destination height
#G_TYPE_INT, width in the destination frame, default destination width
#G_TYPE_INT, x position in the destination frame, default 0
#G_TYPE_INT, y position in the destination frame, default 0
#GstVideoDitherMethod, The dither method to use when
changing bit depth.
Default is #GST_VIDEO_DITHER_BAYER.
#G_TYPE_UINT, The quantization amount to dither to. Components will be
quantized to multiples of this value.
Default is 1
#G_TYPE_BOOLEAN, if the destination rectangle does not fill the complete
destination image, render a border with
#GST_VIDEO_CONVERTER_OPT_BORDER_ARGB. Otherwise the unused pixels in the
destination are untouched. Default %TRUE.
#GstVideoGammaMode, set the gamma mode.
Default is #GST_VIDEO_GAMMA_MODE_NONE.
#GstVideoMatrixMode, set the color matrix conversion mode for
converting between Y'PbPr and non-linear RGB (R'G'B').
Default is #GST_VIDEO_MATRIX_MODE_FULL.
#GstVideoPrimariesMode, set the primaries conversion mode.
Default is #GST_VIDEO_PRIMARIES_MODE_NONE.
#GstVideoResamplerMethod, The resampler method to use for
resampling. Other options for the resampler can be used, see
the #GstVideoResampler. Default is #GST_VIDEO_RESAMPLER_METHOD_CUBIC
#G_TYPE_UINT, The number of taps for the resampler.
Default is 0: let the resampler choose a good value.
#G_TYPE_INT, source height to convert, default source height
#G_TYPE_INT, source width to convert, default source width
#G_TYPE_INT, source x position to start conversion, default 0
#G_TYPE_INT, source y position to start conversion, default 0
#G_TYPE_UINT, maximum number of threads to use. Default 1, 0 for the number
of cores.
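Putting a few of these options together, a converter that scales into a
sub-rectangle of the destination might be created like this (a sketch;
@in_info and @out_info are assumed to be filled in with
gst_video_info_set_format()):

|[<!-- language="C" -->
GstStructure *config = gst_structure_new ("GstVideoConverter",
    GST_VIDEO_CONVERTER_OPT_DEST_X, G_TYPE_INT, 10,
    GST_VIDEO_CONVERTER_OPT_DEST_Y, G_TYPE_INT, 10,
    GST_VIDEO_CONVERTER_OPT_DEST_WIDTH, G_TYPE_INT, 300,
    GST_VIDEO_CONVERTER_OPT_DEST_HEIGHT, G_TYPE_INT, 220,
    NULL);
GstVideoConverter *convert =
    gst_video_converter_new (&in_info, &out_info, config);

/* config is now owned by the converter */
gst_video_converter_frame (convert, &in_frame, &out_frame);
gst_video_converter_free (convert);
]|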
Utility function that video decoder elements can use in case they encountered
a data processing error that may be fatal for the current "data unit" but
need not prevent subsequent decoding. Such errors are counted and if there
are too many, as configured in the context's max_errors, the pipeline will
post an error message and the application will be requested to stop further
media processing. Otherwise, it is considered a "glitch" and only a warning
is logged. In either case, @ret is set to the proper value to
return to upstream/caller (indicating either GST_FLOW_ERROR or GST_FLOW_OK).
the base video decoder element that generates the error
element defined weight of the error, added to error count
like CORE, LIBRARY, RESOURCE or STREAM (see #gstreamer-GstGError)
error code defined for that domain (see #gstreamer-GstGError)
the message to display (format string and args enclosed in
parentheses)
debugging information for the message (format string and args
enclosed in parentheses)
variable to receive return value
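A sketch of typical usage inside a decoder's handle_frame() implementation:

|[<!-- language="C" -->
GstFlowReturn ret = GST_FLOW_OK;

GST_VIDEO_DECODER_ERROR (decoder, 1, STREAM, DECODE,
    ("could not decode frame"), ("corrupt unit, skipping"), ret);
/* ret is now GST_FLOW_OK (counted as a glitch) or GST_FLOW_ERROR
 * (too many errors were accumulated) */
return ret;
]|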
Gives the segment of the element.
base decoder instance
Default maximum number of errors tolerated before signaling error.
Gives the segment of the element.
base decoder instance
The name of the template for the sink pad.
Gives the pointer to the sink #GstPad object of the element.
a #GstVideoDecoder
The name of the template for the source pad.
Gives the pointer to the source #GstPad object of the element.
a #GstVideoDecoder
Obtain a lock to protect the decoder function from concurrent access.
video decoder instance
Release the lock that protects the decoder function from concurrent access.
video decoder instance
Generic caps string for video with DMABuf (GST_CAPS_FEATURE_MEMORY_DMABUF)
feature, for use in pad templates. As the drm-format is supposed to be defined
at run-time, it is not predefined here.
Gives the segment of the element.
base encoder instance
Gives the segment of the element.
base encoder instance
The name of the template for the sink pad.
Gives the pointer to the sink #GstPad object of the element.
a #GstVideoEncoder
The name of the template for the source pad.
Gives the pointer to the source #GstPad object of the element.
a #GstVideoEncoder
Obtain a lock to protect the encoder function from concurrent access.
video encoder instance
Release the lock that protects the encoder function from concurrent access.
video encoder instance
List of all video formats, for use in template caps strings.
Formats are sorted by decreasing "quality", using these criteria by priority:
- number of components
- depth
- subsampling factor of the width
- subsampling factor of the height
- number of planes
- native endianness preferred
- pixel stride
- poffset
- prefer non-complex formats
- prefer YUV formats over RGB ones
- prefer I420 over YV12
- format name
Declare all video formats as a string.
Formats are sorted by decreasing "quality", using these criteria by priority:
- number of components
- depth
- subsampling factor of the width
- subsampling factor of the height
- number of planes
- native endianness preferred
- pixel stride
- poffset
- prefer non-complex formats
- prefer YUV formats over RGB ones
- prefer I420 over YV12
- format name
This is similar to %GST_VIDEO_FORMATS_ALL but includes formats like DMA_DRM
that do not have a software converter. This should be used for passthrough
template caps.
This is similar to %GST_VIDEO_FORMATS_ALL_STR but includes formats like
DMA_DRM for which no software converter exists. This should be used for
passthrough template caps.
This macro checks if %GST_VIDEO_FORMAT_FLAG_SUBTILES is set. When this
flag is set, it means that the tile sizes must be scaled as per the
subsampling.
a #GstVideoFormatInfo
Tests that the given #GstVideoFormatInfo represents a valid un-encoded
format.
Number of planes. This is the number of planes the pixel layout is
organized in in memory. The number of planes can be less than the
number of components (e.g. Y,U,V,A or R, G, B, A) when multiple
components are packed into one plane.
Examples: RGB/RGBx/RGBA: 1 plane, 3/3/4 components;
I420: 3 planes, 3 components; NV21/NV12: 2 planes, 3 components.
a #GstVideoFormatInfo
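As a short sketch, the plane/component distinction can be inspected at run
time like this:

|[<!-- language="C" -->
const GstVideoFormatInfo *finfo =
    gst_video_format_get_info (GST_VIDEO_FORMAT_NV12);

/* NV12: 2 planes (Y and interleaved UV), 3 components */
g_print ("%s: %d planes, %d components\n",
    GST_VIDEO_FORMAT_INFO_NAME (finfo),
    GST_VIDEO_FORMAT_INFO_N_PLANES (finfo),
    GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo));
]|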
Plane number where the given component can be found. A plane may
contain data for multiple components.
a #GstVideoFormatInfo
the component index
pixel stride for the given component. This is the amount of bytes to the
pixel immediately to the right, so basically bytes from one pixel to the
next. When bits < 8, the stride is expressed in bits.
Examples: for 24-bit RGB, the pixel stride would be 3 bytes, while it
would be 4 bytes for RGBx or ARGB, and 8 bytes for ARGB64 or AYUV64.
For planar formats such as I420 the pixel stride is usually 1. For
YUY2 it would be 2 bytes.
a #GstVideoFormatInfo
the component index
Row stride in bytes, that is number of bytes from the first pixel component
of a row to the first pixel component in the next row. This might include
some row padding (memory not actually used for anything, to make sure the
beginning of the next row is aligned in a particular way).
a #GstVideoFormatInfo
an array of strides
the component index
See #GstVideoTileInfo.height.
Returns the tile height.
a #GstVideoFormatInfo
the plane index
Provides the size in bytes of a tile in the specified @plane. This replaces
the width and height shift, which was limited to power of two dimensions.
a #GstVideoFormatInfo
the plane index
See #GstVideoTileInfo.stride.
Returns the stride of one tile. Regardless of the internal details of the
tile (which could be a complex system with subtiles), the tile size should
always match the tile height multiplied by the tile stride.
a #GstVideoFormatInfo
the plane index
See #GstVideoTileInfo.width.
Return the width of one tile in pixels, or zero if it is not an integer.
a #GstVideoFormatInfo
the plane index
The height of a field. It's the height of the full frame unless split-field
(alternate) interlacing is in use.
The padded height in pixels of a plane (padded size divided by the plane stride).
In case of GST_VIDEO_INTERLACE_MODE_ALTERNATE info, this macro returns the
plane heights used to hold a single field, not the full frame.
The size passed as third argument is the size of the pixel data and should
not contain any extra metadata padding.
It is not valid to use this macro with a TILED format.
G_TYPE_DOUBLE, B parameter of the cubic filter. The B
parameter controls the blurriness. Values between 0.0 and
2.0 are accepted. 1/3 is the default.
Below are some values of popular filters:

                B       C
Hermite         0.0     0.0
Spline          1.0     0.0
Catmull-Rom     0.0     1/2
Mitchell        1/3     1/3
Robidoux        0.3782  0.3109
Robidoux Sharp  0.2620  0.3690
Robidoux Soft   0.6796  0.1602
G_TYPE_DOUBLE, C parameter of the cubic filter. The C
parameter controls the Keys alpha value. Values between 0.0 and
2.0 are accepted. 1/3 is the default.
See #GST_VIDEO_RESAMPLER_OPT_CUBIC_B for some more common values
G_TYPE_DOUBLE, specifies the size of filter envelope for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
1.0 and 5.0. 2.0 is the default.
G_TYPE_INT, limits the maximum number of taps to use.
16 is the default.
G_TYPE_DOUBLE, specifies sharpening of the filter for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
0.0 and 1.0. 0.0 is the default.
G_TYPE_DOUBLE, specifies sharpness of the filter for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
0.5 and 1.5. 1.0 is the default.
#GstVideoDitherMethod, The dither method to use for propagating
quantization errors.
Cast @obj to a #GstVideoSink without runtime type check.
a #GstVideoSink or derived object
Get the sink #GstPad of @obj.
a #GstVideoSink
Use this macro to create new tile modes.
the mode number to create
the tile mode type
Encode the number of tiles in X and Y into the stride.
number of tiles in X
number of tiles in Y
Check if @mode is an indexed tile type
a tile mode
Get the tile mode type of @mode
the tile mode
Extract the number of tiles in X from the stride value.
plane stride
Extract the number of tiles in Y from the stride value.
plane stride
Active Format Description (AFD)
For details, see Table 6.14 Active Format in:
ATSC Digital Television Standard:
Part 4 – MPEG-2 Video System Characteristics
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and Active Format Description in Complete list of AFD codes
https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes
and SMPTE ST2016-1
parent #GstMeta
0 for progressive or field 1 and 1 for field 2
#GstVideoAFDSpec that applies to @afd
#GstVideoAFDValue AFD value
Enumeration of the different standards that may apply to AFD data:
0) ETSI/DVB:
https://www.etsi.org/deliver/etsi_ts/101100_101199/101154/02.01.01_60/ts_101154v020101p.pdf
1) ATSC A/53:
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
2) SMPTE ST2016-1:
AFD value is from DVB/ETSI standard
AFD value is from ATSC A/53 standard
Enumeration of the various values for Active Format Description (AFD).
AFD should be included in video user data whenever the rectangular
picture area containing useful information does not extend to the full height or width of the coded
frame. AFD data may also be included in user data when the rectangular picture area containing
useful information extends to the full height and width of the coded frame.
For details, see Table 6.14 Active Format in:
ATSC Digital Television Standard:
Part 4 – MPEG-2 Video System Characteristics
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and Active Format Description in Complete list of AFD codes
https://en.wikipedia.org/wiki/Active_Format_Description#Complete_list_of_AFD_codes
and SMPTE ST2016-1
Notes:
1) AFD 0 is undefined for ATSC and SMPTE ST2016-1, indicating that AFD data is not available:
If Bar Data is not present, AFD '0000' indicates that exact information
is not available and the active image should be assumed to be the same as the coded frame.
AFD '0000' accompanied by Bar Data signals that the active image's aspect ratio is narrower than 16:9,
but is not 4:3 or 14:9. As the exact aspect ratio cannot be conveyed by AFD alone, wherever possible,
AFD '0000' should be accompanied by Bar Data to define the exact vertical or horizontal extent
of the active image.
2) AFD 0 is reserved for DVB/ETSI
3) values 1, 5, 6, 7, and 12 are reserved for both ATSC and DVB/ETSI
4) values 2 and 3 are not recommended for ATSC, but are valid for DVB/ETSI
Unavailable (see note 1 above).
For 4:3 coded frame, letterbox 16:9 image,
at top of the coded frame. For 16:9 coded frame, full frame 16:9 image,
the same as the coded frame.
For 4:3 coded frame, letterbox 14:9 image,
at top of the coded frame. For 16:9 coded frame, pillarbox 14:9 image,
horizontally centered in the coded frame.
For 4:3 coded frame, letterbox image with an aspect ratio
greater than 16:9, vertically centered in the coded frame. For 16:9 coded frame,
letterbox image with an aspect ratio greater than 16:9.
For 4:3 coded frame, full frame 4:3 image,
the same as the coded frame. For 16:9 coded frame, full frame 16:9 image, the same as
the coded frame.
For 4:3 coded frame, full frame 4:3 image, the same as
the coded frame. For 16:9 coded frame, pillarbox 4:3 image, horizontally centered in the
coded frame.
For 4:3 coded frame, letterbox 16:9 image, vertically centered in
the coded frame with all image areas protected. For 16:9 coded frame, full frame 16:9 image,
with all image areas protected.
For 4:3 coded frame, letterbox 14:9 image, vertically centered in
the coded frame. For 16:9 coded frame, pillarbox 14:9 image, horizontally centered in the
coded frame.
For 4:3 coded frame, full frame 4:3 image, with alternative 14:9
center. For 16:9 coded frame, pillarbox 4:3 image, with alternative 14:9 center.
For 4:3 coded frame, letterbox 16:9 image, with alternative 14:9
center. For 16:9 coded frame, full frame 16:9 image, with alternative 14:9 center.
For 4:3 coded frame, letterbox 16:9 image, with alternative 4:3
center. For 16:9 coded frame, full frame 16:9 image, with alternative 4:3 center.
Extra buffer metadata for performing an affine transformation using a 4x4
matrix. The transformation matrix can be composed with
gst_video_affine_transformation_meta_apply_matrix().
The vertices operated on are all in the range 0 to 1, not in
Normalized Device Coordinates (-1 to +1). Points in this space
are assumed to have an origin at (0.5, 0.5, 0.5) in a left-handed coordinate
system with the x-axis moving horizontally (positive values to the right),
the y-axis moving vertically (positive values up the screen) and the z-axis
perpendicular to the screen (positive values into the screen).
parent #GstMeta
the column-major 4x4 transformation matrix
Apply a transformation using the given 4x4 transformation matrix.
Performs the multiplication, meta->matrix X matrix.
a #GstVideoAffineTransformationMeta
a 4x4 transformation matrix to be applied
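As a hedged sketch, a vertical flip around the (0.5, 0.5) origin described
above could be composed onto a buffer like this (column-major layout, so the
translation sits in the last four entries):

|[<!-- language="C" -->
static const gfloat flip_matrix[16] = {
  1.0f,  0.0f, 0.0f, 0.0f,
  0.0f, -1.0f, 0.0f, 0.0f,
  0.0f,  0.0f, 1.0f, 0.0f,
  0.0f,  1.0f, 0.0f, 1.0f,   /* y' = 1 - y */
};
GstVideoAffineTransformationMeta *meta =
    gst_buffer_add_video_affine_transformation_meta (buffer);

gst_video_affine_transformation_meta_apply_matrix (meta, flip_matrix);
]|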
VideoAggregator can accept AYUV, ARGB and BGRA video streams. For each of the requested
sink pads it will compare the incoming geometry and framerate to define the
output parameters: output video frames will have the geometry of the
biggest incoming video stream and the framerate of the fastest incoming one.
VideoAggregator will do colorspace conversion.
Z-order for each input stream can be configured on the
#GstVideoAggregatorPad.
The returned #GstTaskPool is used internally for performing parallel
video format conversions/scaling/etc during the
#GstVideoAggregatorPadClass::prepare_frame_start() process.
Subclasses can add their own operation to perform using the returned
#GstTaskPool during #GstVideoAggregatorClass::aggregate_frames().
the #GstTaskPool that can be used by subclasses
for performing concurrent operations
the #GstVideoAggregator
Causes the element to aggregate on a timeout even when no live source is
connected to its sinks. See #GstAggregator:min-upstream-latency for a
companion property: in the vast majority of cases where you plan to plug in
live sources with a non-zero latency, you should set it to a non-zero value.
The #GstVideoInfo representing the currently set
srcpad caps.
An implementation of GstPad that can be used with #GstVideoAggregator.
See #GstVideoAggregator for more details.
Requests that the pad check and update its converter before the next usage,
picking up any changes that have happened.
a #GstVideoAggregatorPad
Finish preparing @prepared_frame.
If overridden, `prepare_frame_start` must also be overridden.
the #GstVideoAggregatorPad
the parent #GstVideoAggregator
the #GstVideoFrame to prepare into
Begin preparing the frame from the pad buffer and set it on @prepared_frame.
If overridden, `prepare_frame_finish` must also be overridden.
the #GstVideoAggregatorPad
the parent #GstVideoAggregator
the input #GstBuffer to prepare
the #GstVideoFrame to prepare into
Returns the currently queued buffer that is going to be used
for the current output frame.
This must only be called from the #GstVideoAggregatorClass::aggregate_frames virtual method,
or from the #GstVideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.
The return value is only valid until #GstVideoAggregatorClass::aggregate_frames or #GstVideoAggregatorPadClass::prepare_frame
returns.
The currently queued buffer
a #GstVideoAggregatorPad
Returns the currently prepared video frame that has to be aggregated into
the current output frame.
This must only be called from the #GstVideoAggregatorClass::aggregate_frames virtual method,
or from the #GstVideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.
The return value is only valid until #GstVideoAggregatorClass::aggregate_frames or #GstVideoAggregatorPadClass::prepare_frame
returns.
The currently prepared video frame
a #GstVideoAggregatorPad
Checks if the pad currently has a buffer queued that is going to be used
for the current output frame.
This must only be called from the #GstVideoAggregatorClass::aggregate_frames virtual method,
or from the #GstVideoAggregatorPadClass::prepare_frame virtual method of the aggregator pads.
%TRUE if the pad has currently a buffer queued
a #GstVideoAggregatorPad
Allows specifying that this pad requires an output format with alpha.
a #GstVideoAggregatorPad
%TRUE if this pad requires alpha output
The #GstVideoInfo currently set on the pad
the #GstVideoAggregatorPad
the parent #GstVideoAggregator
the input #GstBuffer to prepare
the #GstVideoFrame to prepare into
the #GstVideoAggregatorPad
the parent #GstVideoAggregator
the #GstVideoFrame to prepare into
An implementation of GstPad that can be used with #GstVideoAggregator.
See #GstVideoAggregator for more details.
Extra alignment parameters for the memory of video buffers. This
structure is usually used to configure the bufferpool if it supports the
#GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT.
extra pixels on the top
extra pixels on the bottom
extra pixels on the left side
extra pixels on the right side
array with extra alignment requirements for the strides
Set @align to its default values with no padding and no alignment.
a #GstVideoAlignment
Different alpha modes.
When input and output have alpha, it will be copied.
When the input has no alpha, alpha will be set to
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
set all alpha to
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
multiply all alpha with
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE.
When the input format has no alpha but the output format has, the
alpha value will be set to #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
Video Ancillary data, according to SMPTE-291M specification.
Note that the contents of the data are always stored as 8-bit data (i.e. do not contain
the parity check bits).
The Data Identifier
The Secondary Data Identifier (if type 2) or the Data
Block Number (if type 1)
The amount of data (in bytes) in @data (max 255 bytes)
The user data content of the Ancillary packet.
Does not contain the ADF, DID, SDID nor CS.
Some known types of Ancillary Data identifiers.
CEA 708 Ancillary data according to SMPTE 334
CEA 608 Ancillary data according to SMPTE 334
AFD/Bar Ancillary data according to SMPTE 2016-3 (Since: 1.18)
Bar data should be included in video user data
whenever the rectangular picture area containing useful information
does not extend to the full height or width of the coded frame
and AFD alone is insufficient to describe the extent of the image.
Note: either vertical or horizontal bars are specified, but not both.
For more details, see:
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
and SMPTE ST2016-1
parent #GstMeta
0 for progressive or field 1 and 1 for field 2
if true then bar data specifies letterbox, otherwise pillarbox
If @is_letterbox is true, then the value specifies the
last line of a horizontal letterbox bar area at top of reconstructed frame.
Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox
bar area at the left side of the reconstructed frame
If @is_letterbox is true, then the value specifies the
first line of a horizontal letterbox bar area at bottom of reconstructed frame.
Otherwise, it specifies the first horizontal
luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.
Additional video buffer flags. These flags can potentially be used on any
buffers carrying closed caption data, or video data - even encoded data.
Note that these are only valid for #GstCaps of type: video/... and caption/...
They can conflict with other extended buffer flags.
If the #GstBuffer is interlaced. In mixed
interlace-mode, this flag specifies if the frame is
interlaced or progressive.
If the #GstBuffer is interlaced, then the first field
in the video frame is the top field. If unset, the
bottom field is first.
If the #GstBuffer is interlaced, then the first field
(as defined by the %GST_VIDEO_BUFFER_FLAG_TFF flag setting)
is repeated.
If the #GstBuffer is interlaced, then only the
first field (as defined by the %GST_VIDEO_BUFFER_FLAG_TFF
flag setting) is to be displayed (Since: 1.16).
The #GstBuffer contains one or more specific views,
such as left or right eye view. This flag is set on
any buffer that contains non-mono content - even for
streams that contain only a single viewpoint. In mixed
mono / non-mono streams, the absence of the flag marks
mono buffers.
When conveying stereo/multiview content with
frame-by-frame methods, this flag marks the first buffer
in a bundle of frames that belong together.
The video frame has the top field only. This is the
same as GST_VIDEO_BUFFER_FLAG_TFF |
GST_VIDEO_BUFFER_FLAG_ONEFIELD (Since: 1.16).
Use GST_VIDEO_BUFFER_IS_TOP_FIELD() to check for this flag.
If the #GstBuffer is interlaced, then only the
first field (as defined by the %GST_VIDEO_BUFFER_FLAG_TFF
flag setting) is to be displayed (Since: 1.16).
The video frame has the bottom field only. This is
the same as GST_VIDEO_BUFFER_FLAG_ONEFIELD
(GST_VIDEO_BUFFER_FLAG_TFF flag unset) (Since: 1.16).
Use GST_VIDEO_BUFFER_IS_BOTTOM_FIELD() to check for this flag.
The #GstBuffer contains the end of a video field or frame
boundary such as the last subframe or packet (Since: 1.18).
Offset to define more flags
Create a new bufferpool that can allocate video frames. This bufferpool
supports all the video bufferpool options.
a new #GstBufferPool to allocate video frames
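A sketch of setting one up (assuming @caps and a matching #GstVideoInfo
were obtained during negotiation):

|[<!-- language="C" -->
GstBufferPool *pool = gst_video_buffer_pool_new ();
GstStructure *config = gst_buffer_pool_get_config (pool);

gst_buffer_pool_config_set_params (config, caps, info.size, 2, 0);
gst_buffer_pool_config_add_option (config, GST_BUFFER_POOL_OPTION_VIDEO_META);
gst_buffer_pool_set_config (pool, config);
gst_buffer_pool_set_active (pool, TRUE);
]|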
Extra buffer metadata providing Closed Caption data.
parent #GstMeta
The type of Closed Caption contained in the meta.
The Closed Caption data.
The size in bytes of @data
The various known types of Closed Caption (CC).
Unknown type of CC
CEA-608 as byte pairs. Note that
this format is not recommended since it does not specify to
which field the caption comes from and therefore assumes
it comes from the first field (and that there is no information
on the second field). Use @GST_VIDEO_CAPTION_TYPE_CEA708_RAW
if you wish to store CEA-608 from two fields and prefix each byte pair
with 0xFC for the first field and 0xFD for the second field.
CEA-608 as byte triplets as defined
in SMPTE S334-1 Annex A. The second and third byte of the byte triplet
is the raw CEA608 data, the first byte is a bitfield: The top/7th bit is
0 for the second field, 1 for the first field, bits 6 and 5 are 0 and
bits 4 to 0 are a 5 bit unsigned integer that represents the line
offset relative to the base-line of the original image format (line 9
for 525-line field 1, line 272 for 525-line field 2, line 5 for
625-line field 1 and line 318 for 625-line field 2).
CEA-708 as cc_data byte triplets. They
can also contain 608-in-708 and the first byte of each triplet has to
be inspected for detecting the type.
CEA-708 (and optionally CEA-608) in
a CDP (Caption Distribution Packet) defined by SMPTE S-334-2.
Contains the whole CDP (starting with 0x9669).
Parses fixed Closed Caption #GstCaps and returns the corresponding caption
type, or %GST_VIDEO_CAPTION_TYPE_UNKNOWN.
#GstVideoCaptionType.
Fixed #GstCaps to parse
Creates new caps corresponding to @type.
new #GstCaps
#GstVideoCaptionType
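A minimal sketch of the round trip between the two helpers:

|[<!-- language="C" -->
GstCaps *caps =
    gst_video_caption_type_to_caps (GST_VIDEO_CAPTION_TYPE_CEA708_CDP);
GstVideoCaptionType type = gst_video_caption_type_from_caps (caps);

g_assert (type == GST_VIDEO_CAPTION_TYPE_CEA708_CDP);
gst_caps_unref (caps);
]|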
Extra flags that influence the result from gst_video_chroma_resample_new().
no flags
the input is interlaced
Different subsampling and upsampling methods
Duplicates the chroma samples when
upsampling and drops when subsampling
Uses linear interpolation to reconstruct
missing chroma and averaging to subsample
Different chroma downsampling and upsampling modes
do full chroma up and down sampling
only perform chroma upsampling
only perform chroma downsampling
disable chroma resampling
Perform resampling of @width chroma pixels in @lines.
a #GstVideoChromaResample
pixel lines
the number of pixels on one line
Free @resample
a #GstVideoChromaResample
The resampler must be fed @n_lines at a time. The first line should be
at @offset.
a #GstVideoChromaResample
the number of input lines
the first line
Create a new resampler object for the given parameters. When @h_factor or
@v_factor is > 0, upsampling will be used, otherwise subsampling is
performed.
a new #GstVideoChromaResample that should be freed with
gst_video_chroma_resample_free() after usage.
a #GstVideoChromaMethod
a #GstVideoChromaSite
#GstVideoChromaFlags
the #GstVideoFormat
horizontal resampling factor
vertical resampling factor
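A hedged sketch of a horizontal chroma upsampler (the resampler operates on
AYUV-style lines; @lines and @width are assumed to come from the caller's
frame handling):

|[<!-- language="C" -->
GstVideoChromaResample *resample =
    gst_video_chroma_resample_new (GST_VIDEO_CHROMA_METHOD_LINEAR,
    GST_VIDEO_CHROMA_SITE_NONE, GST_VIDEO_CHROMA_FLAG_NONE,
    GST_VIDEO_FORMAT_AYUV, 1, 0);
guint n_lines;
gint offset;

gst_video_chroma_resample_get_info (resample, &n_lines, &offset);
/* feed n_lines line pointers at a time, starting at offset */
gst_video_chroma_resample (resample, lines, width);
gst_video_chroma_resample_free (resample);
]|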
Various Chroma sitings.
unknown cositing
no cositing
chroma is horizontally cosited
chroma is vertically cosited
chroma samples are sited on alternate lines
chroma samples cosited with luma samples
jpeg style cositing, also for mpeg1 and mjpeg
mpeg2 style cositing
DV style cositing
Convert @s to a #GstVideoChromaSite
a #GstVideoChromaSite or %GST_VIDEO_CHROMA_SITE_UNKNOWN when @s does
not contain a valid chroma-site description.
a chromasite string
Converts @site to its string representation.
a string representation of @site
or %NULL if @site contains undefined value or
is equal to %GST_VIDEO_CHROMA_SITE_UNKNOWN
a #GstVideoChromaSite
This meta is primarily for internal use in GStreamer elements to support
VP8/VP9 transparent video stored into WebM or Matroska containers, or
transparent static AV1 images. Nothing prevents you from using this meta
for custom purposes, but it generally can't be used to easily add support
for alpha channels to CODECs or formats that don't support that out of the
box.
parent #GstMeta
the encoded alpha frame
#GstMetaInfo pointer that describes #GstVideoCodecAlphaMeta.
A #GstVideoCodecFrame represents a video frame both in raw and
encoded form.
Unique identifier for the frame. Use this if you need
to get hold of the frame later (like when data is being decoded).
Typical usage in decoders is to set this on the opaque value provided
to the library and get back the frame using gst_video_decoder_get_frame()
Decoding timestamp
Presentation timestamp
Duration of the frame
Distance in frames from the last synchronization point.
the input #GstBuffer that created this frame. The buffer is owned
by the frame and references to the frame instead of the buffer should
be kept.
the output #GstBuffer. Implementations should set this either
directly, or by using the
gst_video_decoder_allocate_output_frame() or
gst_video_decoder_allocate_output_buffer() methods. The buffer is
owned by the frame and references to the frame instead of the
buffer should be kept.
Running time when the frame will be used.
Gets private data set on the frame by the subclass via
gst_video_codec_frame_set_user_data() previously.
The previously set user_data
a #GstVideoCodecFrame
Increases the refcount of the given frame by one.
@frame
a #GstVideoCodecFrame
Sets @user_data on the frame, along with the #GDestroyNotify that will be
called when the frame is freed. This allows the subclass to attach private
data to frames.
If @user_data was previously set, then the previously set @notify will be
called before the @user_data is replaced.
a #GstVideoCodecFrame
private data
a #GDestroyNotify
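A minimal sketch of attaching per-frame data from a decoder subclass;
MyFrameCtx is a hypothetical structure and @frame a #GstVideoCodecFrame
owned by the subclass:
|[<!-- language="C" -->
typedef struct { guint slice_count; } MyFrameCtx;  /* hypothetical */

MyFrameCtx *ctx = g_new0 (MyFrameCtx, 1);

/* the frame will free @ctx with g_free() when it is released */
gst_video_codec_frame_set_user_data (frame, ctx, g_free);
...
/* later, e.g. when the codec library hands the frame back */
ctx = gst_video_codec_frame_get_user_data (frame);
]|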
Decreases the refcount of the frame. If the refcount reaches 0, the frame
will be freed.
a #GstVideoCodecFrame
Flags for #GstVideoCodecFrame
is the frame only meant to be decoded
is the frame a synchronization point (keyframe)
should the output frame be made a keyframe
should the encoder output stream headers
The buffer data is corrupted.
Structure representing the state of an incoming or outgoing video
stream for encoders and decoders.
Decoders and encoders will receive such a state through their
respective @set_format vmethods.
Decoders and encoders can set the downstream state, by using the
gst_video_decoder_set_output_state() or
gst_video_encoder_set_output_state() methods.
The #GstVideoInfo describing the stream
The #GstCaps used in the caps negotiation of the pad.
a #GstBuffer corresponding to the
'codec_data' field of a stream, or NULL.
The #GstCaps for allocation query and pool
negotiation. Since: 1.10
Mastering display color volume information (HDR metadata) for the stream.
Content light level information for the stream.
Increases the refcount of the given state by one.
@state
a #GstVideoCodecState
Decreases the refcount of the state. If the refcount reaches 0, the state
will be freed.
a #GstVideoCodecState
The color matrix is used to convert between Y'PbPr and
non-linear RGB (R'G'B')
unknown matrix
identity matrix. Order of coefficients is
actually GBR, also IEC 61966-2-1 (sRGB)
FCC Title 47 Code of Federal Regulations 73.682 (a)(20)
ITU-R BT.709 color matrix, also ITU-R BT1361
/ IEC 61966-2-4 xvYCC709 / SMPTE RP177 Annex B
ITU-R BT.601 color matrix, also SMPTE170M / ITU-R BT1358 525 / ITU-R BT1700 NTSC
SMPTE 240M color matrix
ITU-R BT.2020 color matrix. Since: 1.6
Converts the @value to the #GstVideoColorMatrix
The matrix coefficients (MatrixCoefficients) value is
defined by "ISO/IEC 23001-8 Section 7.3 Table 4"
and "ITU-T H.273 Table 4".
"H.264 Table E-5" and "H.265 Table E.5" share the identical values.
the matched #GstVideoColorMatrix
an ITU-T H.273 matrix coefficients value
Get the coefficients used to convert between Y'PbPr and R'G'B' using @matrix.
When:
|[
(0.0 <= [Y',R',G',B'] <= 1.0)
(-0.5 <= [Pb,Pr] <= 0.5)
]|
the general conversion is given by:
|[
Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))
]|
and the other way around:
|[
R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb)
]|
TRUE if @matrix was a YUV color format and @Kr and @Kb contain valid
values.
a #GstVideoColorMatrix
result red channel coefficient
result blue channel coefficient
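For example, a minimal sketch computing Y' from a normalized R'G'B' triple
with the BT.709 coefficients, following the first equation above:
|[<!-- language="C" -->
gdouble Kr, Kb, Y;
gdouble R = 0.5, G = 0.25, B = 0.75;

if (gst_video_color_matrix_get_Kr_Kb (GST_VIDEO_COLOR_MATRIX_BT709, &Kr, &Kb))
  Y = Kr * R + (1 - Kr - Kb) * G + Kb * B;
]|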
Converts #GstVideoColorMatrix to the "matrix coefficients"
(MatrixCoefficients) value defined by "ISO/IEC 23001-8 Section 7.3 Table 4"
and "ITU-T H.273 Table 4".
"H.264 Table E-5" and "H.265 Table E.5" share the identical values.
The value of ISO/IEC 23001-8 matrix coefficients.
a #GstVideoColorMatrix
The color primaries define how to transform linear RGB values to and from
the CIE XYZ colorspace.
unknown color primaries
BT709 primaries, also ITU-R BT1361 / IEC
61966-2-4 / SMPTE RP177 Annex B
BT470M primaries, also FCC Title 47 Code
of Federal Regulations 73.682 (a)(20)
BT470BG primaries, also ITU-R BT601-6
625 / ITU-R BT1358 625 / ITU-R BT1700 625 PAL & SECAM
SMPTE170M primaries, also ITU-R
BT601-6 525 / ITU-R BT1358 525 / ITU-R BT1700 NTSC
SMPTE240M primaries
Generic film (colour filters using
Illuminant C)
ITU-R BT2020 primaries. Since: 1.6
Adobe RGB primaries. Since: 1.8
SMPTE ST 428 primaries (CIE 1931
XYZ). Since: 1.16
SMPTE RP 431 primaries (ST 431-2
(2011) / DCI P3). Since: 1.16
SMPTE EG 432 primaries (ST 432-1
(2010) / P3 D65). Since: 1.16
EBU 3213 primaries (JEDEC P22
phosphors). Since: 1.16
Converts the @value to the #GstVideoColorPrimaries
The colour primaries (ColourPrimaries) value is
defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2".
"H.264 Table E-3" and "H.265 Table E.3" share the identical values.
the matched #GstVideoColorPrimaries
an ITU-T H.273 colour primaries value
Get information about the chromaticity coordinates of @primaries.
a #GstVideoColorPrimariesInfo for @primaries.
a #GstVideoColorPrimaries
Checks whether @primaries and @other are functionally equivalent
TRUE if @primaries and @other can be considered equivalent.
a #GstVideoColorPrimaries
another #GstVideoColorPrimaries
Converts #GstVideoColorPrimaries to the "colour primaries" (ColourPrimaries)
value defined by "ISO/IEC 23001-8 Section 7.1 Table 2"
and "ITU-T H.273 Table 2".
"H.264 Table E-3" and "H.265 Table E.3" share the identical values.
The value of ISO/IEC 23001-8 colour primaries.
a #GstVideoColorPrimaries
Structure describing the chromaticity coordinates of an RGB system. These
values can be used to construct a matrix to transform RGB to and from the
XYZ colorspace.
a #GstVideoColorPrimaries
reference white x coordinate
reference white y coordinate
red x coordinate
red y coordinate
green x coordinate
green y coordinate
blue x coordinate
blue y coordinate
Possible color range values. These constants are defined for 8 bit color
values and can be scaled for other bit depths.
unknown range
[0..255] for 8 bit components
[16..235] for 8 bit components. Chroma has
[16..240] range.
Compute the offset and scale values for each component of @info. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
@info and @range.
a #GstVideoColorRange
a #GstVideoFormatInfo
output offsets
output scale
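A minimal sketch normalizing an 8-bit limited-range luma sample to
[0.0 .. 1.0]:
|[<!-- language="C" -->
const GstVideoFormatInfo *finfo = gst_video_format_get_info (GST_VIDEO_FORMAT_I420);
gint offset[GST_VIDEO_MAX_COMPONENTS], scale[GST_VIDEO_MAX_COMPONENTS];
guint8 luma = 128;
gdouble normalized;

gst_video_color_range_offsets (GST_VIDEO_COLOR_RANGE_16_235, finfo, offset, scale);
/* (c - offset) / scale maps the component into [0.0 .. 1.0] */
normalized = ((gdouble) luma - offset[0]) / scale[0];
]|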
Structure describing the color info.
the color range. This is the valid range for the samples.
It is used to convert the samples to Y'PbPr values.
the color matrix. Used to convert between Y'PbPr and
non-linear RGB (R'G'B')
the transfer function. used to convert between R'G'B' and RGB
color primaries. used to convert between R'G'B' and CIE XYZ
Parse the colorimetry string and update @cinfo with the parsed
values.
%TRUE if @color points to valid colorimetry info.
a #GstVideoColorimetry
a colorimetry string
Compare the 2 colorimetry sets for equality
%TRUE if @cinfo and @other are equal.
a #GstVideoColorimetry
another #GstVideoColorimetry
Compare the 2 colorimetry sets for functional equality
%TRUE if @cinfo and @other are equivalent.
a #GstVideoColorimetry
bitdepth of a format associated with @cinfo
another #GstVideoColorimetry
bitdepth of a format associated with @other
Check if the colorimetry information in @info matches that of the
string @color.
%TRUE if @color conveys the same colorimetry info as the color
information in @info.
a #GstVideoInfo
a colorimetry string
Make a string representation of @cinfo.
a string representation of @cinfo
or %NULL if all the entries of @cinfo are unknown values.
a #GstVideoColorimetry
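A minimal sketch parsing the well-known "bt709" colorimetry string and
serializing it back:
|[<!-- language="C" -->
GstVideoColorimetry cinfo;
gchar *s;

if (gst_video_colorimetry_from_string (&cinfo, "bt709")) {
  s = gst_video_colorimetry_to_string (&cinfo);
  g_print ("parsed colorimetry: %s\n", s);
  g_free (s);
}
]|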
Content light level information specified in CEA-861.3, Appendix A.
the maximum content light level
(abbreviated to MaxCLL) in candelas per square meter (cd/m^2 and nit)
the maximum frame average light level
(abbreviated to MaxFALL) in candelas per square meter (cd/m^2 and nit)
Set the content light level information from @linfo on @caps
%TRUE if @linfo was successfully set to @caps
a #GstVideoContentLightLevel
a #GstCaps
Parse @caps and update @linfo
%TRUE if @caps has #GstVideoContentLightLevel and it could be parsed
a #GstVideoContentLightLevel
a #GstCaps
Parse the value of the content-light-level caps field and update @linfo
with the parsed values.
%TRUE if @linfo points to valid #GstVideoContentLightLevel.
a #GstVideoContentLightLevel
a content-light-level string from caps
Initialize @linfo
a #GstVideoContentLightLevel
Checks equality between @linfo and @other.
%TRUE if @linfo and @other are equal.
a #GstVideoContentLightLevel
a #GstVideoContentLightLevel
Convert @linfo to its string representation.
a string representation of @linfo.
a #GstVideoContentLightLevel
Convert the pixels of @src into @dest using @convert.
If #GST_VIDEO_CONVERTER_OPT_ASYNC_TASKS is %TRUE then this function will
return immediately and needs to be followed by a call to
gst_video_converter_frame_finish().
a #GstVideoConverter
a #GstVideoFrame
a #GstVideoFrame
Wait for a previous async conversion performed using
gst_video_converter_frame() to complete.
a #GstVideoConverter
Free @convert
a #GstVideoConverter
Get the current configuration of @convert.
a #GstStructure that remains valid for as long as @convert is valid
or until gst_video_converter_set_config() is called.
a #GstVideoConverter
Retrieve the input format of @convert.
a #GstVideoInfo
a #GstVideoConverter
Retrieve the output format of @convert.
a #GstVideoInfo
a #GstVideoConverter
Set @config as extra configuration for @convert.
If the parameters in @config can not be set exactly, this function returns
%FALSE and will try to update as much state as possible. The new state can
then be retrieved and refined with gst_video_converter_get_config().
Look at the `GST_VIDEO_CONVERTER_OPT_*` fields to check valid configuration
options and values.
%TRUE when @config could be set.
a #GstVideoConverter
a #GstStructure
Create a new converter object to convert between @in_info and @out_info
with @config.
Returns (nullable): a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
Create a new converter object to convert between @in_info and @out_info
with @config.
The optional @pool can be used to spawn threads; this is useful when
creating new converters rapidly, for example when updating cropping.
Returns (nullable): a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
a #GstTaskPool to spawn threads from
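A minimal sketch of a one-shot conversion; @in_buf and @out_buf are assumed
to be buffers sized for the respective #GstVideoInfo:
|[<!-- language="C" -->
GstVideoInfo in_info, out_info;
GstVideoFrame in_frame, out_frame;
GstVideoConverter *convert;

gst_video_info_set_format (&in_info, GST_VIDEO_FORMAT_I420, 640, 480);
gst_video_info_set_format (&out_info, GST_VIDEO_FORMAT_BGRA, 640, 480);

convert = gst_video_converter_new (&in_info, &out_info, NULL);
if (convert != NULL) {
  gst_video_frame_map (&in_frame, &in_info, in_buf, GST_MAP_READ);
  gst_video_frame_map (&out_frame, &out_info, out_buf, GST_MAP_WRITE);
  gst_video_converter_frame (convert, &in_frame, &out_frame);
  gst_video_frame_unmap (&out_frame);
  gst_video_frame_unmap (&in_frame);
  gst_video_converter_free (convert);
}
]|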
Extra buffer metadata describing image cropping.
parent #GstMeta
the horizontal offset
the vertical offset
the cropped width
the cropped height
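A minimal sketch of reading the crop rectangle attached to a buffer, if any:
|[<!-- language="C" -->
GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (buffer);

if (crop != NULL)
  g_print ("crop %ux%u at %u,%u\n", crop->width, crop->height, crop->x, crop->y);
]|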
This base class is for video decoders turning encoded data into raw video
frames.
The GstVideoDecoder base class and derived subclasses should cooperate as
follows:
## Configuration
* Initially, GstVideoDecoder calls @start when the decoder element
is activated, which allows the subclass to perform any global setup.
* GstVideoDecoder calls @set_format to inform the subclass of caps
describing input video data that it is about to receive, including
possibly configuration data.
While unlikely, it might be called more than once if changing input
parameters requires reconfiguration.
* Incoming data buffers are processed as needed, described in Data
Processing below.
* GstVideoDecoder calls @stop at end of all processing.
## Data processing
* The base class gathers input data, and optionally allows subclass
to parse this into subsequently manageable chunks, typically
corresponding to and referred to as 'frames'.
* Each input frame is provided in turn to the subclass' @handle_frame
callback.
* When the subclass enables subframe mode with `gst_video_decoder_set_subframe_mode`,
the base class will provide the same input frame to the subclass'
@handle_frame callback multiple times, with a different input buffer each
time. During these calls, the subclass needs to take
ownership of the input buffer, as @GstVideoCodecFrame.input_buffer
will have been changed before the next subframe buffer is received.
The subclass will call `gst_video_decoder_have_last_subframe`
when a new input frame can be created by the base class.
Every subframe will share the same @GstVideoCodecFrame.output_buffer
to write the decoding result. The subclass is responsible for protecting
its access.
* If codec processing results in decoded data, the subclass should call
@gst_video_decoder_finish_frame to have decoded data pushed
downstream. In subframe mode
the subclass should call @gst_video_decoder_finish_subframe until the
last subframe where it should call @gst_video_decoder_finish_frame.
The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER
on buffers or using its own logic to collect the subframes.
In case of decoding failure, the subclass must call
@gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe,
to allow the base class to do timestamp and offset tracking, and possibly
to requeue the frame for a later attempt in the case of reverse playback.
## Shutdown phase
* The GstVideoDecoder class calls @stop to inform the subclass that data
parsing will be stopped.
## Additional Notes
* Seeking/Flushing
* When the pipeline is seeked or otherwise flushed, the subclass is
informed via a call to its @reset callback, with the hard parameter
set to true. This indicates the subclass should drop any internal data
queues and timestamps and prepare for a fresh set of buffers to arrive
for parsing and decoding.
* End Of Stream
* At end-of-stream, the subclass @parse function may be called some final
times with the at_eos parameter set to true, indicating that the element
should not expect any more data to arrive, and it should parse any
remaining frames and call gst_video_decoder_have_frame() if possible.
The subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It also
needs to provide information about the output caps, when they are known.
This may be when the base class calls the subclass' @set_format function,
though it might be during decoding, before calling
@gst_video_decoder_finish_frame. This is done via
@gst_video_decoder_set_output_state
The subclass is also responsible for providing (presentation) timestamps
(likely based on corresponding input ones). If that is not applicable
or possible, the base class provides limited framerate based interpolation.
Similarly, the base class provides some limited (legacy) seeking support
if specifically requested by the subclass, as full-fledged support
should rather be left to an upstream demuxer, parser or the like. This simple
approach caters for seeking and duration reporting using estimated input
bitrates. To enable it, a subclass should call
@gst_video_decoder_set_estimate_rate to enable handling of incoming
byte-streams.
The base class provides some support for reverse playback, in particular
in case incoming data is not packetized or upstream does not provide
fragments on keyframe boundaries. However, the subclass should then be
prepared for the parsing and frame processing stage to occur separately
(in normal forward processing, the latter immediately follows the former).
The subclass also needs to ensure the parsing stage properly marks
keyframes, unless it knows the upstream elements will do so properly for
incoming data.
The bare minimum that a functional subclass needs to implement is:
* Provide pad templates
* Inform the base class of output caps via
@gst_video_decoder_set_output_state
* Parse input data, if it is not considered packetized from upstream
Data will be provided to @parse which should invoke
@gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
separate the data belonging to each video frame.
* Accept data in @handle_frame and provide decoded results to
@gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
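A hedged sketch of that last step for a packetized decoder; my_decode() is a
hypothetical call into the codec library, and the output state is assumed to
have been configured in @set_format:
|[<!-- language="C" -->
static GstFlowReturn
my_dec_handle_frame (GstVideoDecoder * decoder, GstVideoCodecFrame * frame)
{
  GstFlowReturn ret;

  ret = gst_video_decoder_allocate_output_frame (decoder, frame);
  if (ret != GST_FLOW_OK) {
    gst_video_decoder_release_frame (decoder, frame);
    return ret;
  }

  /* my_decode() is hypothetical: decode the input into the output buffer */
  if (!my_decode (frame->input_buffer, frame->output_buffer))
    return gst_video_decoder_drop_frame (decoder, frame);

  return gst_video_decoder_finish_frame (decoder, frame);
}
]|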
The #GstVideoDecoder
The frame to handle
%TRUE if the decoder should be drained afterwards.
The #GstVideoDecoder
Timestamp of the missing data
Duration of the missing data
Negotiates the currently configured #GstVideoCodecState with downstream
elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked
again if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
Removes the next @n_bytes of input data and adds them to the currently parsed frame.
a #GstVideoDecoder
the number of bytes to add
Helper function that allocates a buffer to hold a video frame for @decoder's
current #GstVideoCodecState.
You should use gst_video_decoder_allocate_output_frame() instead of this
function, if possible at all.
allocated buffer, or NULL if no buffer could be
allocated (e.g. when downstream is flushing or shutting down)
a #GstVideoDecoder
Helper function that allocates a buffer to hold a video frame for @decoder's
current #GstVideoCodecState. Subclass should already have configured video
state and set src pad caps.
The buffer allocated here is owned by the frame and you should only
keep references to the frame, not the buffer.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoDecoder
a #GstVideoCodecFrame
Same as #gst_video_decoder_allocate_output_frame except it allows passing
#GstBufferPoolAcquireParams to the internal gst_buffer_pool_acquire_buffer() call.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoDecoder
a #GstVideoCodecFrame
a #GstBufferPoolAcquireParams
Similar to gst_video_decoder_finish_frame(), but drops @frame in any
case and posts a QoS message with the frame's details on the bus.
In any case, the frame is considered finished and released.
a #GstFlowReturn, usually GST_FLOW_OK.
a #GstVideoDecoder
the #GstVideoCodecFrame to drop
Drops input data.
The frame is not considered finished until the whole frame
is finished or dropped by the subclass.
a #GstFlowReturn, usually GST_FLOW_OK.
a #GstVideoDecoder
the #GstVideoCodecFrame
@frame should have a valid decoded data buffer, whose metadata fields
are then appropriately set according to frame data and pushed downstream.
If no output data is provided, @frame is considered skipped.
In any case, the frame is considered finished and released.
After calling this function the output buffer of the frame is to be
considered read-only. This function will also change the metadata
of the buffer.
a #GstFlowReturn resulting from sending data downstream
a #GstVideoDecoder
a decoded #GstVideoCodecFrame
Indicates that a subframe has finished being decoded
by the subclass. This method should be called for all subframes
except the last subframe where @gst_video_decoder_finish_frame
should be called instead.
a #GstFlowReturn, usually GST_FLOW_OK.
a #GstVideoDecoder
the #GstVideoCodecFrame
Lets #GstVideoDecoder sub-classes know the memory @allocator
used by the base class and its @params.
Unref the @allocator after use.
a #GstVideoDecoder
the #GstAllocator used
the #GstAllocationParams of @allocator
the instance of the #GstBufferPool used
by the decoder; free it after use
a #GstVideoDecoder
currently configured byte to time conversion setting
a #GstVideoDecoder
Get a pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame identified by @frame_number.
a #GstVideoDecoder
system_frame_number of a frame
Get all pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame.
a #GstVideoDecoder
Queries the number of the last subframe received by
the decoder baseclass in the @frame.
the current subframe index received in subframe mode, 1 otherwise.
a #GstVideoDecoder
the #GstVideoCodecFrame to update
Query the configured decoder latency. Results will be returned via
@min_latency and @max_latency.
a #GstVideoDecoder
address of variable in which to store the
configured minimum latency, or %NULL
address of variable in which to store the
configured maximum latency, or %NULL
Determines maximum possible decoding time for @frame that will
allow it to decode and arrive in time (as determined by QoS events).
In particular, a negative result means decoding in time is no longer possible
and should therefore happen as soon as possible, possibly skipping
non-essential processing.
max decoding time.
a #GstVideoDecoder
a #GstVideoCodecFrame
currently configured decoder tolerated error count.
a #GstVideoDecoder
Queries decoder required format handling.
%TRUE if required format handling is enabled.
a #GstVideoDecoder
Queries if the decoder requires a sync point before it starts outputting
data in the beginning.
%TRUE if a sync point is required in the beginning.
a #GstVideoDecoder
Get the oldest pending unfinished #GstVideoCodecFrame
oldest pending unfinished #GstVideoCodecFrame.
a #GstVideoDecoder
Get the #GstVideoCodecState currently describing the output stream.
#GstVideoCodecState describing format of video data.
a #GstVideoDecoder
Queries whether input data is considered packetized or not by the
base class.
TRUE if input data is considered packetized.
a #GstVideoDecoder
Returns the number of bytes previously added to the current frame
by calling gst_video_decoder_add_to_frame().
The number of bytes pending for the current frame
a #GstVideoDecoder
Queries the number of subframes in the frame processed by
the decoder baseclass.
the number of subframes processed so far in subframe mode.
a #GstVideoDecoder
the #GstVideoCodecFrame to update
The current QoS proportion.
a #GstVideoDecoder
current QoS proportion, or %NULL
Queries whether input data is considered as subframes or not by the
base class. If FALSE, each input buffer will be considered as a full
frame.
TRUE if input data is considered as sub frames.
a #GstVideoDecoder
Gathers all data collected for the currently parsed frame, together with the
corresponding metadata, and passes it along for further processing, i.e. @handle_frame.
a #GstFlowReturn
a #GstVideoDecoder
Indicates that the last subframe has been processed by the decoder
in @frame. This will release the current frame in the video decoder,
allowing new frames to be received from upstream elements. This method
must be called in the subclass @handle_frame callback.
a #GstFlowReturn, usually GST_FLOW_OK.
a #GstVideoDecoder
the #GstVideoCodecFrame to update
Sets the video decoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with gst_video_decoder_merge_tags().
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
MT safe.
a #GstVideoDecoder
a #GstTagList to merge, or NULL to unset
previously-set tags
the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
Negotiates the currently configured #GstVideoCodecState with downstream
elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked
again if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
Returns caps that express @caps (or sink template caps if @caps == NULL)
restricted to resolution/format/... combinations supported by downstream
elements.
a #GstCaps owned by caller
a #GstVideoDecoder
initial caps
filter caps
Similar to gst_video_decoder_drop_frame(), but simply releases @frame
without any processing other than removing it from list of pending frames,
after which it is considered finished and released.
a #GstVideoDecoder
the #GstVideoCodecFrame to release
Allows the #GstVideoDecoder subclass to request from the base class that
a new sync should be requested from upstream, and that @frame was the frame
when the subclass noticed that a new sync point is required. A reason for
the subclass to do this could be missing reference frames, for example.
The base class will then request a new sync point from upstream as long as
the time that passed since the last one exceeds
#GstVideoDecoder:min-force-key-unit-interval.
The subclass can signal via @flags how the frames until the next sync point
should be handled:
* If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT is selected then
all following input frames until the next sync point are discarded.
This can be useful if the lack of a sync point will prevent all further
decoding and the decoder implementation is not very robust in handling
missing reference frames.
* If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT is selected
then all output frames following @frame are marked as corrupted via
%GST_BUFFER_FLAG_CORRUPTED. Corrupted frames can be automatically
dropped by the base class, see #GstVideoDecoder:discard-corrupted-frames.
Subclasses can manually mark frames as corrupted via %GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED
before calling gst_video_decoder_finish_frame().
a #GstVideoDecoder
a #GstVideoCodecFrame
#GstVideoDecoderRequestSyncPointFlags
Allows the base class to perform estimated byte-to-time conversion.
a #GstVideoDecoder
whether to enable byte to time conversion
Same as #gst_video_decoder_set_output_state() but also allows you to set
the interlacing mode.
the newly configured output state.
a #GstVideoDecoder
a #GstVideoFormat
A #GstVideoInterlaceMode
The width in pixels
The height in pixels
An optional reference #GstVideoCodecState
Lets #GstVideoDecoder sub-classes tell the baseclass what the decoder latency
is. If the provided values changed from previously provided ones, this will
also post a LATENCY message on the bus so the pipeline can reconfigure its
global latency.
a #GstVideoDecoder
minimum latency
maximum latency
Sets the number of tolerated decoder errors, where a tolerated one is only
warned about, but exceeding the tolerated count leads to a fatal error. You
can set -1 to never return fatal errors. The default is
GST_VIDEO_DECODER_MAX_ERRORS.
The '-1' option was added in 1.4
a #GstVideoDecoder
max tolerated errors
Configures decoder format needs. If enabled, the subclass needs to be
negotiated with format caps before it can process any data. It will then
never be handed any data before it has been configured.
Otherwise, it might be handed data without having been configured and
is then expected to be able to do so either by default
or based on the input data.
a #GstVideoDecoder
new state
Configures whether the decoder requires a sync point before it starts
outputting data in the beginning. If enabled, the base class will discard
all non-sync point frames in the beginning and after a flush, and will not
pass them to the subclass.
If the first frame is not a sync point, the base class will request a sync
point via the force-key-unit event.
a #GstVideoDecoder
new state
Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
as the output state for the decoder.
Any previously set output state on @decoder will be replaced by the newly
created one.
If the subclass wishes to copy over existing fields (like pixel aspect ratio,
or framerate) from an existing #GstVideoCodecState, it can be provided as a
@reference.
If the subclass wishes to override some fields from the output state (like
pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
The new output state will only take effect (set on pads and buffers) starting
from the next call to #gst_video_decoder_finish_frame().
the newly configured output state.
a #GstVideoDecoder
a #GstVideoFormat
The width in pixels
The height in pixels
An optional reference #GstVideoCodecState
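A minimal sketch, typically run from @set_format once the stream geometry is
known; @input_state is assumed to be the state received there:
|[<!-- language="C" -->
GstVideoCodecState *output_state;

output_state = gst_video_decoder_set_output_state (decoder,
    GST_VIDEO_FORMAT_I420, 1280, 720, input_state);
/* refine the returned state here if needed, then drop the reference */
gst_video_codec_state_unref (output_state);
]|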
Allows the base class to consider input data as packetized or not. If the
input is packetized, then the @parse method will not be called.
a #GstVideoDecoder
whether the input data should be considered as packetized.
If this is set to TRUE, it informs the base class that the subclass
can receive the data at a granularity lower than one frame.
Note that in this mode, the subclass has two options. It can either
require the presence of a GST_VIDEO_BUFFER_FLAG_MARKER to mark the
end of a frame. Or it can operate in such a way that it will decode
a single frame at a time. In this second case, every buffer that
arrives at the element is considered part of the same frame until
gst_video_decoder_finish_frame() is called.
In either case, the same #GstVideoCodecFrame will be passed to the
GstVideoDecoderClass:handle_frame vmethod repeatedly with a
different GstVideoCodecFrame:input_buffer every time until the end of the
frame has been signaled using either method.
This method must be called during the decoder subclass @set_format call.
a #GstVideoDecoder
whether the input data should be considered as subframes.
Lets #GstVideoDecoder sub-classes decide if they want the sink pad
to use the default pad query handler to reply to accept-caps queries.
By setting this to true it is possible to further customize the default
handler with %GST_PAD_SET_ACCEPT_INTERSECT and
%GST_PAD_SET_ACCEPT_TEMPLATE
a #GstVideoDecoder
if the default pad accept-caps query handling should be used
GstVideoDecoderRequestSyncPointFlags to use for the automatically
requested sync points if `automatic-request-sync-points` is enabled.
If set to %TRUE the decoder will automatically request sync points when
it seems like a good idea, e.g. if the first frames are not key frames or
if packet loss was reported by upstream.
If set to %TRUE the decoder will discard frames that are marked as
corrupted instead of outputting them.
Maximum number of tolerated consecutive decode errors. See
gst_video_decoder_set_max_errors() for more details.
Minimum interval between force-key-unit events sent upstream by the
decoder. Setting this to 0 will cause every event to be handled, setting
this to %GST_CLOCK_TIME_NONE will cause every event to be ignored.
See gst_video_event_new_upstream_force_key_unit() for more details about
force-key-unit events.
If set to %TRUE the decoder will handle QoS events received
from downstream elements.
This includes dropping output frames which are detected as late
using the metrics reported by those events.
Subclasses can override any of the available virtual methods or not, as
needed. At minimum @handle_frame needs to be overridden, and @set_format
likely as well. If non-packetized input is supported or expected,
@parse needs to be overridden as well.
The #GstVideoDecoder
The frame to handle
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
%TRUE if the decoder should be drained afterwards.
The #GstVideoDecoder
Timestamp of the missing data
Duration of the missing data
Flags to be used in combination with gst_video_decoder_request_sync_point().
See the function documentation for more details.
discard all following
input until the next sync point.
discard all following
output until the next sync point.
The interface allows unified access to control flipping and rotation
operations of video-sources or operators.
#GstVideoDirectionInterface interface.
parent interface type.
GstVideoDither provides implementations of several dithering algorithms
that can be applied to lines of video pixels to quantize and dither them.
Free @dither
a #GstVideoDither
Dither @width pixels starting from offset @x in @line using @dither.
@y is the line number of @line in the output image.
a #GstVideoDither
pointer to the pixels of the line
x coordinate
y coordinate
the width
Make a new dither object for dithering lines of @format using the
algorithm described by @method.
Each component will be quantized to a multiple of @quantizer. Better
performance is achieved when @quantizer is a power of 2.
@width is the width of the lines that this ditherer will handle.
a new #GstVideoDither
a #GstVideoDitherMethod
a #GstVideoDitherFlags
a #GstVideoFormat
quantizer
the width of the lines
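A minimal sketch, assuming 16-bit ARGB64 lines dithered down to an effective
8 bits per component; @width, @height and @lines are assumed to describe the
image:
|[<!-- language="C" -->
guint quantizer[GST_VIDEO_MAX_COMPONENTS] = { 256, 256, 256, 256 };
GstVideoDither *dither;
guint y;

dither = gst_video_dither_new (GST_VIDEO_DITHER_FLOYD_STEINBERG,
    GST_VIDEO_DITHER_FLAG_NONE, GST_VIDEO_FORMAT_ARGB64, quantizer, width);
for (y = 0; y < height; y++)
  gst_video_dither_line (dither, lines[y], 0, y, width);
gst_video_dither_free (dither);
]|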
Extra flags that influence the result from gst_video_dither_new().
no flags
the input is interlaced
quantize values in addition to adding dither.
Different dithering methods to use.
no dithering
propagate rounding errors downwards
Dither with floyd-steinberg error diffusion
Dither with Sierra Lite error diffusion
ordered dither using a bayer pattern
This base class is for video encoders turning raw video into
encoded video data.
GstVideoEncoder and subclass should cooperate as follows.
## Configuration
* Initially, GstVideoEncoder calls @start when the encoder element
is activated, which allows subclass to perform any global setup.
* GstVideoEncoder calls @set_format to inform subclass of the format
of input video data that it is about to receive. Subclass should
setup for encoding and configure base class as appropriate
(e.g. latency). While unlikely, it might be called more than once,
if changing input parameters requires reconfiguration. Baseclass
will ensure that processing of current configuration is finished.
* GstVideoEncoder calls @stop at end of all processing.
## Data processing
* Base class collects input data and metadata into a frame and hands
this to subclass' @handle_frame.
* If codec processing results in encoded data, subclass should call
@gst_video_encoder_finish_frame to have encoded data pushed
downstream.
* If implemented, baseclass calls subclass @pre_push just prior to
pushing to allow subclasses to modify some metadata on the buffer.
If it returns GST_FLOW_OK, the buffer is pushed downstream.
* GstVideoEncoderClass will handle both srcpad and sinkpad events.
Sink events will be passed to subclass if @event callback has been
provided.
## Shutdown phase
* GstVideoEncoder class calls @stop to inform the subclass that data
parsing will be stopped.
Subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It should
also be able to provide fixed src pad caps in @getcaps by the time it calls
@gst_video_encoder_finish_frame.
Things that the subclass needs to take care of:
* Provide pad templates
* Provide source pad caps before pushing the first buffer
* Accept data in @handle_frame and provide encoded results to
@gst_video_encoder_finish_frame.
The #GstVideoEncoder:qos property will enable the Quality-of-Service
features of the encoder which gather statistics about the real-time
performance of the downstream elements. If enabled, subclasses can
use gst_video_encoder_get_max_encode_time() to check if input frames
are already late and drop them right away to give a chance to the
pipeline to catch up.
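A hedged sketch of @handle_frame for an encoder; my_encode() and
my_is_keyframe() are hypothetical calls into the codec library:
|[<!-- language="C" -->
static GstFlowReturn
my_enc_handle_frame (GstVideoEncoder * encoder, GstVideoCodecFrame * frame)
{
  /* my_encode() is hypothetical: returns the encoded data as a #GstBuffer */
  frame->output_buffer = my_encode (frame->input_buffer);
  if (frame->output_buffer == NULL)
    return GST_FLOW_ERROR;

  if (my_is_keyframe (frame))   /* hypothetical */
    GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);

  return gst_video_encoder_finish_frame (encoder, frame);
}
]|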
Negotiates the currently configured #GstVideoCodecState with downstream
elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked
again if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Helper function that allocates a buffer to hold an encoded video frame
for @encoder's current #GstVideoCodecState.
allocated buffer
a #GstVideoEncoder
size of the buffer
Helper function that allocates a buffer to hold an encoded video frame for @encoder's
current #GstVideoCodecState. Subclass should already have configured video
state and set src pad caps.
The buffer allocated here is owned by the frame and you should only
keep references to the frame, not the buffer.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoEncoder
a #GstVideoCodecFrame
size of the buffer
@frame must have a valid encoded data buffer, whose metadata fields
are then appropriately set according to frame data, or no buffer at
all if the frame should be dropped.
It is subsequently pushed downstream or provided to @pre_push.
In any case, the frame is considered finished and released.
After calling this function the output buffer of the frame is to be
considered read-only. This function will also change the metadata
of the buffer.
a #GstFlowReturn resulting from sending data downstream
a #GstVideoEncoder
an encoded #GstVideoCodecFrame
If multiple subframes are produced for one input frame then use this method
for each subframe, except for the last one. Before calling this function,
you need to fill frame->output_buffer with the encoded buffer to push.
You must call #gst_video_encoder_finish_frame() for the last sub-frame
to tell the encoder that the frame has been fully encoded.
This function will change the metadata of @frame and frame->output_buffer
will be pushed downstream.
a #GstFlowReturn resulting from pushing the buffer downstream.
a #GstVideoEncoder
a #GstVideoCodecFrame being encoded
Lets #GstVideoEncoder sub-classes know the memory @allocator
used by the base class and its @params.
Unref the @allocator after use.
a #GstVideoEncoder
the #GstAllocator used
the #GstAllocationParams of @allocator
Get a pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame identified by @frame_number.
a #GstVideoEncoder
system_frame_number of a frame
Get all pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame.
a #GstVideoEncoder
Query the configured encoding latency. Results will be returned via
@min_latency and @max_latency.
a #GstVideoEncoder
address of variable in which to store the
configured minimum latency, or %NULL
address of variable in which to store the
configured maximum latency, or %NULL
Determines maximum possible encoding time for @frame that will
allow it to encode and arrive in time (as determined by QoS events).
In particular, a negative result means encoding in time is no longer possible
and should therefore happen as soon as possible, possibly skipping
non-essential processing.
If no QoS events have been received from downstream, or if
#GstVideoEncoder:qos is disabled this function returns #G_MAXINT64.
max encoding time.
a #GstVideoEncoder
a #GstVideoCodecFrame
Returns the minimum force-keyunit interval, see gst_video_encoder_set_min_force_key_unit_interval()
for more details.
the minimum force-keyunit interval
the encoder
Get the oldest unfinished pending #GstVideoCodecFrame
oldest unfinished pending #GstVideoCodecFrame
a #GstVideoEncoder
Get the current #GstVideoCodecState
#GstVideoCodecState describing format of video data.
a #GstVideoEncoder
Checks if @encoder is currently configured to handle Quality-of-Service
events from downstream.
%TRUE if the encoder is configured to perform Quality-of-Service.
the encoder
Sets the video encoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with gst_video_encoder_merge_tags().
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
MT safe.
a #GstVideoEncoder
a #GstTagList to merge, or NULL to unset
previously-set tags
the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
Negotiates the currently configured #GstVideoCodecState with downstream
elements. GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked
again if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Returns caps that express @caps (or sink template caps if @caps == NULL)
restricted to resolution/format/... combinations supported by downstream
elements (e.g. muxers).
a #GstCaps owned by caller
a #GstVideoEncoder
initial caps
filter caps
Set the codec headers to be sent downstream whenever requested.
a #GstVideoEncoder
a list of #GstBuffer containing the codec header
Informs baseclass of encoding latency. If the provided values changed from
previously provided ones, this will also post a LATENCY message on the bus
so the pipeline can reconfigure its global latency.
a #GstVideoEncoder
minimum latency
maximum latency
Sets the minimum interval for requesting keyframes based on force-keyunit
events. Setting this to 0 will cause every event to be handled; setting this
to %GST_CLOCK_TIME_NONE causes force-keyunit events to be ignored.
the encoder
minimum interval
Request a minimal value for the PTS passed to handle_frame.
For streams with reordered frames this can be used to ensure that there
is enough time to accommodate the first DTS, which may be less than the first PTS.
a #GstVideoEncoder
minimal PTS that will be passed to handle_frame
Creates a new #GstVideoCodecState with the specified caps as the output state
for the encoder.
Any previously set output state on @encoder will be replaced by the newly
created one.
The specified @caps should not contain any resolution, pixel-aspect-ratio,
framerate, codec-data, .... Those should be specified instead in the returned
#GstVideoCodecState.
If the subclass wishes to copy over existing fields (like pixel aspect ratio,
or framerate) from an existing #GstVideoCodecState, it can be provided as a
@reference.
If the subclass wishes to override some fields from the output state (like
pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
The new output state will only take effect (set on pads and buffers) starting
from the next call to #gst_video_encoder_finish_frame().
the newly configured output state.
a #GstVideoEncoder
the #GstCaps to use for the output
An optional reference @GstVideoCodecState
Configures @encoder to handle Quality-of-Service events from downstream.
the encoder
the new qos value.
Minimum interval between force-keyunit requests in nanoseconds. See
gst_video_encoder_set_min_force_key_unit_interval() for more details.
Subclasses can override any of the available virtual methods or not, as
needed. At minimum @handle_frame needs to be overridden, and @set_format
and @get_caps are likely needed as well.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Field order of interlaced content. This is only valid for
interlace-mode=interleaved and not interlace-mode=mixed. In the case of
mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via
buffer flags.
unknown field order for interlaced content.
The actual field order is signalled via buffer flags.
top field is first
bottom field is first
Convert @order to a #GstVideoFieldOrder
the #GstVideoFieldOrder of @order or
#GST_VIDEO_FIELD_ORDER_UNKNOWN when @order is not a valid
string representation for a #GstVideoFieldOrder.
a field order
Convert @order to its string representation.
@order as a string.
a #GstVideoFieldOrder
Provides useful functions and a base class for video filters.
The videofilter will by default enable QoS on the parent GstBaseTransform
to implement frame dropping.
The video filter class structure.
the parent class structure
Extra video flags
no flags
a variable fps is selected, fps_n and fps_d
denote the maximum fps of the video
Each color has been scaled by the alpha
value.
Enum value describing the most common video formats.
See the [GStreamer raw video format design document](https://gstreamer.freedesktop.org/documentation/additional/design/mediatype-video-raw.html#formats)
for details about the layout and packing of these formats in memory.
Unknown or unset video format id
Encoded video format. Only ever use that in caps for
special video formats in combination with non-system
memory GstCapsFeatures where it does not make sense
to specify a real video format.
planar 4:2:0 YUV
planar 4:2:0 YVU (like I420 but UV planes swapped)
packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...)
packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...)
sparse rgb packed into 32 bit, space last
sparse reverse rgb packed into 32 bit, space last
sparse rgb packed into 32 bit, space first
sparse reverse rgb packed into 32 bit, space first
rgb with alpha channel last
reverse rgb with alpha channel last
rgb with alpha channel first
reverse rgb with alpha channel first
RGB packed into 24 bits without padding (`R-G-B-R-G-B`)
reverse RGB packed into 24 bits without padding (`B-G-R-B-G-R`)
planar 4:1:1 YUV
planar 4:2:2 YUV
packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...)
planar 4:4:4 YUV
packed 4:2:2 10-bit YUV, complex format
packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order
planar 4:2:0 YUV with interleaved UV plane
planar 4:2:0 YUV with interleaved VU plane
8-bit grayscale
16-bit grayscale, most significant byte first
16-bit grayscale, least significant byte first
packed 4:4:4 YUV (Y-U-V ...)
rgb 5-6-5 bits per component
reverse rgb 5-6-5 bits per component
rgb 5-5-5 bits per component
reverse rgb 5-5-5 bits per component
packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
planar 4:4:2:0 AYUV
8-bit paletted RGB
planar 4:1:0 YUV
planar 4:1:0 YUV (like YUV9 but UV planes swapped)
packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...)
rgb with alpha channel first, 16 bits (native endianness) per channel
packed 4:4:4 YUV with alpha channel, 16 bits (native endianness) per channel (A0-Y0-U0-V0 ...)
packed 4:4:4 RGB, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 8 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:2:2 YUV with interleaved UV plane (Since: 1.2)
planar 4:4:4 YUV with interleaved UV plane (Since: 1.2)
NV12 with 64x32 tiling in zigzag pattern (Since: 1.4)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:2:2 YUV with interleaved VU plane (Since: 1.6)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
packed 4:4:4 YUV (U-Y-V ...) (Since: 1.10)
packed 4:2:2 YUV (V0-Y0-U0-Y1 V2-Y2-U2-Y3 V4 ...)
planar 4:4:4:4 ARGB, 8 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
10-bit grayscale, packed into 32bit words (2 bits padding) (Since: 1.14)
10-bit variant of @GST_VIDEO_FORMAT_NV12, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
10-bit variant of @GST_VIDEO_FORMAT_NV16, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
Fully packed variant of NV12_10LE32 (Since: 1.16)
packed 4:2:2 YUV, 10 bits per channel (Since: 1.16)
packed 4:4:4 YUV, 10 bits per channel(A-V-Y-U...) (Since: 1.16)
packed 4:4:4 YUV with alpha channel (V0-U0-Y0-A0...) (Since: 1.16)
packed 4:4:4 RGB with alpha channel(B-G-R-A), 10 bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.16)
packed 4:4:4 RGB with alpha channel(R-G-B-A), 10 bits for R/G/B channel and MSB 2 bits for alpha channel (Since: 1.18)
planar 4:4:4 YUV, 16 bits per channel (Since: 1.18)
planar 4:4:4 YUV, 16 bits per channel (Since: 1.18)
planar 4:2:0 YUV with interleaved UV plane, 16 bits per channel (Since: 1.18)
planar 4:2:0 YUV with interleaved UV plane, 16 bits per channel (Since: 1.18)
planar 4:2:0 YUV with interleaved UV plane, 12 bits per channel (Since: 1.18)
planar 4:2:0 YUV with interleaved UV plane, 12 bits per channel (Since: 1.18)
packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V) (Since: 1.18)
packed 4:2:2 YUV, 12 bits per channel (Y-U-Y-V) (Since: 1.18)
packed 4:4:4:4 YUV, 12 bits per channel(U-Y-V-A...) (Since: 1.18)
packed 4:4:4:4 YUV, 12 bits per channel(U-Y-V-A...) (Since: 1.18)
NV12 with 4x4 tiles in linear order.
NV12 with 32x32 tiles in linear order.
Planar 4:4:4 RGB, R-G-B order
Planar 4:4:4 RGB, B-G-R order
Planar 4:2:0 YUV with interleaved UV plane with alpha as
3rd plane.
RGB with alpha channel first, 16 bits (little endian)
per channel.
RGB with alpha channel first, 16 bits (big endian)
per channel.
RGB with alpha channel last, 16 bits (little endian)
per channel.
RGB with alpha channel last, 16 bits (big endian)
per channel.
Reverse RGB with alpha channel last, 16 bits (little endian)
per channel.
Reverse RGB with alpha channel last, 16 bits (big endian)
per channel.
Reverse RGB with alpha channel first, 16 bits (little endian)
per channel.
Reverse RGB with alpha channel first, 16 bits (big endian)
per channel.
NV12 with 16x32 Y tiles and 16x16 UV tiles.
NV12 with 8x128 tiles in linear order.
NV12 10bit big endian with 8x128 tiles in linear order.
@GST_VIDEO_FORMAT_NV12_10LE40 with 4x4 pixels tiles (5 bytes
per tile row). This format is produced by Verisilicon/Hantro decoders.
@GST_VIDEO_FORMAT_DMA_DRM represents the DMA DRM special format. It's
only used with memory:DMABuf #GstCapsFeatures, where an extra
parameter (drm-format) is required to define the image format and
its memory layout.
Mediatek 10bit NV12 little endian with 16x32 tiles in linear order, tile 2
bits.
Mediatek 10bit NV12 little endian with 16x32 tiles in linear order, raster
2 bits.
planar 4:4:2:2 YUV, 8 bits per channel
planar 4:4:4:4 YUV, 8 bits per channel
planar 4:4:4:4 YUV, 12 bits per channel
planar 4:4:4:4 YUV, 12 bits per channel
planar 4:4:2:2 YUV, 12 bits per channel
planar 4:4:2:2 YUV, 12 bits per channel
planar 4:4:2:0 YUV, 12 bits per channel
planar 4:4:2:0 YUV, 12 bits per channel
planar 4:4:4:4 YUV, 16 bits per channel
planar 4:4:4:4 YUV, 16 bits per channel
planar 4:4:2:2 YUV, 16 bits per channel
planar 4:4:2:2 YUV, 16 bits per channel
planar 4:4:2:0 YUV, 16 bits per channel
planar 4:4:2:0 YUV, 16 bits per channel
planar 4:4:4 RGB, 16 bits per channel
planar 4:4:4 RGB, 16 bits per channel
packed RGB with alpha, 8 bits per channel
Converts a FOURCC value into the corresponding #GstVideoFormat.
If the FOURCC cannot be represented by #GstVideoFormat,
#GST_VIDEO_FORMAT_UNKNOWN is returned.
the #GstVideoFormat describing the FOURCC value
a FOURCC value representing raw YUV video
Find the #GstVideoFormat for the given parameters.
a #GstVideoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do
not specify a known format.
the amount of bits used for a pixel
the amount of bits used to store a pixel. This value is bigger than
@depth
the endianness of the masks, #G_LITTLE_ENDIAN or #G_BIG_ENDIAN
the red mask
the green mask
the blue mask
the alpha mask, or 0 if no alpha mask
Convert the @format string to its #GstVideoFormat.
the #GstVideoFormat for @format or GST_VIDEO_FORMAT_UNKNOWN when the
string is not a known format.
a format string
Get the #GstVideoFormatInfo for @format
The #GstVideoFormatInfo for @format.
a #GstVideoFormat
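A minimal sketch looking up a format by its caps string and inspecting it:
|[<!-- language="C" -->
GstVideoFormat fmt = gst_video_format_from_string ("NV12");
const GstVideoFormatInfo *finfo = gst_video_format_get_info (fmt);

g_print ("%s: %u planes, %u components\n",
    GST_VIDEO_FORMAT_INFO_NAME (finfo),
    GST_VIDEO_FORMAT_INFO_N_PLANES (finfo),
    GST_VIDEO_FORMAT_INFO_N_COMPONENTS (finfo));
]|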
Get the default palette of @format. This is the palette used in the pack
function for paletted formats.
the default palette of @format or %NULL when
@format does not have a palette.
a #GstVideoFormat
size of the palette in bytes
Converts a #GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If @format has
no corresponding FOURCC value, 0 is returned.
the FOURCC corresponding to @format
a #GstVideoFormat video format
Returns a string containing a descriptive name for
the #GstVideoFormat if there is one, or NULL otherwise.
the name corresponding to @format
a #GstVideoFormat video format
The different video flags that a format info can have.
The video format is YUV, components are numbered
0=Y, 1=U, 2=V.
The video format is RGB, components are numbered
0=R, 1=G, 2=B.
The video is gray, there is one gray component
with index 0.
The video format has an alpha component, with
component index 3.
The video format has data stored in little
endianness.
The video format has a palette. The palette
is stored in the second plane and indexes are stored in the first plane.
The video format has a complex layout that
can't be described with the usual information in the #GstVideoFormatInfo.
This format can be used in a
#GstVideoFormatUnpack and #GstVideoFormatPack function.
The format is tiled, there is tiling information
in the last plane.
The tile size varies per plane according to the subsampling.
Information for a video format.
#GstVideoFormat
string representation of the format
a human-readable description of the format
#GstVideoFormatFlags
The number of bits used to pack data items. This can be less than 8
when multiple pixels are stored in a byte. For values > 8, multiple bytes
should be read according to the endianness flag before applying the shift
and mask.
the number of components in the video format.
the number of bits to shift away to get the component data
the depth in bits for each component
the pixel stride of each component. This is the amount of
bytes to the pixel immediately to the right. When bits < 8, the stride is
expressed in bits. For 24-bit RGB, this would be 3 bytes, for example,
while it would be 4 bytes for RGBx or ARGB.
the number of planes for this format. The number of planes can be
less than the amount of components when multiple components are packed into
one plane.
the plane number where a component can be found
the offset in the plane where the first pixel of the components
can be found.
subsampling factor of the width for the component. Use
GST_VIDEO_SUB_SCALE to scale a width.
subsampling factor of the height for the component. Use
GST_VIDEO_SUB_SCALE to scale a height.
the format of the unpacked pixels. This format must have the
#GST_VIDEO_FORMAT_FLAG_UNPACK flag set.
an unpack function for this format
the amount of lines that will be packed
a pack function for this format
The tiling mode
The width of a tile, in bytes, represented as a shift. DEPRECATED,
use the tile_info[] array instead.
The height of a tile, in bytes, represented as a shift. DEPRECATED,
use the tile_info[] array instead.
Information about the tiles for each of the planes.
Fill @components with the component numbers (indices) of all the components packed in plane @p
for the format @info. A value of -1 in @components indicates that no more
components are packed in the plane.
#GstVideoFormatInfo
a plane number
array used to store component numbers
Extrapolate @plane stride from the first stride of an image. This helper is
useful to support legacy APIs where only one stride is supported.
The extrapolated stride for @plane
#GstVideoFormatInfo
a plane number
The first plane stride
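A minimal sketch deriving the I420 chroma plane stride from a luma stride
reported by a legacy API:
|[<!-- language="C" -->
const GstVideoFormatInfo *finfo = gst_video_format_get_info (GST_VIDEO_FORMAT_I420);
gint stride0 = 1280;   /* luma stride from the legacy API */
gint stride1 = gst_video_format_info_extrapolate_stride (finfo, 1, stride0);
/* stride1 is 640: the chroma planes of I420 are horizontally subsampled */
]|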
Packs @width pixels from @src to the given planes and strides in the
format @info. The pixels from source have each component interleaved
and will be packed into the planes in @data.
This function operates on pack_lines lines, meaning that @src should
contain at least pack_lines lines with a stride of @sstride and @y
should be a multiple of pack_lines.
Subsampled formats will use the horizontally and vertically cosited
component from the source. Subsampling should be performed before
packing.
Because this function does not have an x coordinate, it is not possible to
pack pixels starting from an unaligned position. For tiled images this
means that packing should start from a tile coordinate. For subsampled
formats this means that a complete pixel needs to be packed.
a #GstVideoFormatInfo
flags to control the packing
a source array
the source array stride
pointers to the destination data planes
strides of the destination planes
the chroma siting of the target when subsampled (not used)
the y position in the image to pack to
the amount of pixels to pack.
Unpacks @width pixels from the given planes and strides containing data of
format @info. The pixels will be unpacked into @dest with each component
interleaved as per @info's unpack_format, which will usually be one of
#GST_VIDEO_FORMAT_ARGB, #GST_VIDEO_FORMAT_AYUV, #GST_VIDEO_FORMAT_ARGB64 or
#GST_VIDEO_FORMAT_AYUV64 depending on the format to unpack.
@dest should at least be big enough to hold @width * bytes_per_pixel bytes
where bytes_per_pixel relates to the unpack format and will usually be
either 4 or 8 depending on the unpack format. bytes_per_pixel will be
the same as the pixel stride for plane 0 for the above formats.
For subsampled formats, the components will be duplicated in the destination
array. Reconstruction of the missing components can be performed in a
separate step after unpacking.
a #GstVideoFormatInfo
flags to control the unpacking
a destination array
pointers to the data planes
strides of the planes
the x position in the image to start from
the y position in the image to start from
the amount of pixels to unpack.
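As a rough sketch, unpacking one line of an already-mapped #GstVideoFrame
through its format's unpack function could look like this (@line and @width
are assumed to be within the frame's bounds):
|[<!-- language="C" -->
  const GstVideoFormatInfo *finfo = frame.info.finfo;
  // the unpack formats use 4 bytes per pixel, or 8 for the 16-bit variants,
  // so 8 bytes per pixel is always large enough
  guint8 *dest = g_malloc (width * 8);

  finfo->unpack_func (finfo, GST_VIDEO_PACK_FLAG_NONE, dest,
      frame.data, frame.info.stride, 0, line, width);
  // ... process the interleaved components in dest ...
  g_free (dest);
]|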
A video frame obtained from gst_video_frame_map()
the #GstVideoInfo
#GstVideoFrameFlags for the frame
the mapped buffer
pointer to metadata if any
id of the mapped frame. The id can, for example, be used to
identify the frame in case of multiview video.
pointers to the plane data
mappings of the planes
Copy the contents from @src to @dest.
Note: Since 1.18, @dest dimensions are allowed to be
smaller than @src dimensions.
TRUE if the contents could be copied.
a #GstVideoFrame
a #GstVideoFrame
Copy the plane with index @plane from @src to @dest.
Note: Since 1.18, @dest dimensions are allowed to be
smaller than @src dimensions.
TRUE if the contents could be copied.
a #GstVideoFrame
a #GstVideoFrame
a plane
Unmap the memory previously mapped with gst_video_frame_map().
a #GstVideoFrame
Use @info and @buffer to fill in the values of @frame. @frame is usually
allocated on the stack, and you will pass its address to
gst_video_frame_map(), which will then fill in the structure with the
various video-specific information you need to access the pixels of the
video buffer. You can then use accessor macros such as
GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(),
GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc.
to get to the pixels.
|[<!-- language="C" -->
  GstVideoFrame vframe;
  ...
  // set RGB pixels to black one at a time
  if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
    guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
    guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
    guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
    guint h, w;

    for (h = 0; h < height; ++h) {
      for (w = 0; w < width; ++w) {
        guint8 *pixel = pixels + h * stride + w * pixel_stride;

        memset (pixel, 0, pixel_stride);
      }
    }
    gst_video_frame_unmap (&vframe);
  }
  ...
]|
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
The purpose of this function is to make it easy for you to get to the video
pixels in a generic way, without you having to worry too much about details
such as whether the video data is allocated in one contiguous memory chunk
or multiple memory chunks (e.g. one for each plane); or if custom strides
and custom plane offsets are used or not (as signalled by GstVideoMeta on
each buffer). This function will just fill the #GstVideoFrame structure
with the right values and if you use the accessor macros everything will
just work and you can access the data easily. It also maps the underlying
memory chunks for you.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
#GstMapFlags
Use @info and @buffer to fill in the values of @frame with the video frame
information of frame @id.
When @id is -1, the default frame is mapped. When @id != -1, this function
will return %FALSE when there is no GstVideoMeta with that id.
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
the frame id to map
#GstMapFlags
Extra video frame flags
no flags
The video frame is interlaced. In mixed
interlace-mode, this flag specifies if the frame is interlaced or
progressive.
The video frame has the top field first
The video frame has the repeat flag
The video frame has one field
The video contains one or
more non-mono views
The video frame is the first
in a set of corresponding views provided as sequential frames.
The video frame has the top field only. This
is the same as GST_VIDEO_FRAME_FLAG_TFF | GST_VIDEO_FRAME_FLAG_ONEFIELD
(Since: 1.16).
The video frame has one field
The video frame has the bottom field
only. This is the same as GST_VIDEO_FRAME_FLAG_ONEFIELD
(GST_VIDEO_FRAME_FLAG_TFF flag unset) (Since: 1.16).
Additional mapping flags for gst_video_frame_map().
Don't take another reference of the buffer and store it in
the GstVideoFrame. This makes sure that the buffer stays
writable while the frame is mapped, but requires that the
buffer reference stays valid until the frame is unmapped again.
Offset to define more flags
The orientation of the GL texture.
Top line first in memory, left row first
Bottom line first in memory, left row first
Top line first in memory, right row first
Bottom line first in memory, right row first
The GL texture type.
Luminance texture, GL_LUMINANCE
Luminance-alpha texture, GL_LUMINANCE_ALPHA
RGB 565 texture, GL_RGB
RGB texture, GL_RGB
RGBA texture, GL_RGBA
R texture, GL_RED_EXT
RG texture, GL_RG_EXT
Extra buffer metadata for uploading a buffer to an OpenGL texture
ID. The caller of gst_video_gl_texture_upload_meta_upload() must
have OpenGL set up and call this from a thread where it is valid
to upload something to an OpenGL texture.
parent #GstMeta
Orientation of the textures
Number of textures that are generated
Type of each texture
Uploads the buffer which owns the meta to a specific texture ID.
%TRUE if uploading succeeded, %FALSE otherwise.
a #GstVideoGLTextureUploadMeta
the texture IDs to upload to
disable gamma handling
convert between input and output gamma
Different gamma conversion modes
Information describing image properties. This information can be filled
in from GstCaps with gst_video_info_from_caps(). The information is also used
to store the specific video info when mapping a video frame with
gst_video_frame_map().
Use the provided macros to access the info in this structure.
the format info of the video
the interlace mode
additional video flags
the width of the video
the height of the video
the default size of one frame
the number of views for multiview video
a #GstVideoChromaSite.
the colorimetry info
the pixel-aspect-ratio numerator
the pixel-aspect-ratio denominator
the framerate numerator
the framerate denominator
offsets of the planes
strides of the planes
Allocate a new #GstVideoInfo that is also initialized with
gst_video_info_init().
a new #GstVideoInfo. Free with gst_video_info_free().
Parse @caps to generate a #GstVideoInfo.
A #GstVideoInfo, or %NULL if @caps couldn't be parsed
a #GstCaps
Adjust the offset and stride fields in @info so that the padding and
stride alignment in @align is respected.
Extra padding will be added to the right side when stride alignment padding
is required and @align will be updated with the new padding values.
%FALSE if alignment could not be applied, e.g. because the
size of a frame can't be represented as a 32 bit integer (Since: 1.12)
a #GstVideoInfo
alignment parameters
Extra padding will be added to the right side when stride alignment padding
is required and @align will be updated with the new padding values.
This variant of gst_video_info_align() provides the updated size, in bytes,
of each video plane after the alignment, including all horizontal and vertical
paddings.
In case of GST_VIDEO_INTERLACE_MODE_ALTERNATE info, the returned sizes are the
ones used to hold a single field, not the full frame.
%FALSE if alignment could not be applied, e.g. because the
size of a frame can't be represented as a 32 bit integer
a #GstVideoInfo
alignment parameters
array used to store the plane sizes
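A minimal sketch of requesting padding and retrieving the per-plane sizes;
the format, dimensions and padding values here are illustrative:
|[<!-- language="C" -->
  GstVideoInfo info;
  GstVideoAlignment align;
  gsize plane_size[GST_VIDEO_MAX_PLANES];
  guint i;

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 320, 240);
  gst_video_alignment_reset (&align);
  align.padding_right = 16;    // pad lines out to a wider stride
  align.padding_bottom = 8;    // extra lines below the image

  if (gst_video_info_align_full (&info, &align, plane_size)) {
    for (i = 0; i < GST_VIDEO_INFO_N_PLANES (&info); i++)
      g_print ("plane %u: %" G_GSIZE_FORMAT " bytes\n", i, plane_size[i]);
  }
]|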
Converts among various #GstFormat types. This function handles
GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For
raw video, GST_FORMAT_DEFAULT corresponds to video frames. This
function can be used to handle pad queries of the type GST_QUERY_CONVERT.
TRUE if the conversion was successful.
a #GstVideoInfo
#GstFormat of the @src_value
value to convert
#GstFormat of the @dest_value
pointer to destination value
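For example, converting a frame count to a duration might look like this
(the 30 fps rate and frame count are illustrative assumptions):
|[<!-- language="C" -->
  GstVideoInfo info;
  gint64 duration;

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 320, 240);
  info.fps_n = 30;
  info.fps_d = 1;

  // how much time do 90 frames (GST_FORMAT_DEFAULT) represent?
  if (gst_video_info_convert (&info, GST_FORMAT_DEFAULT, 90,
          GST_FORMAT_TIME, &duration))
    g_print ("90 frames last %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (duration));
]|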
Copy a GstVideoInfo structure.
a new #GstVideoInfo. Free with gst_video_info_free().
a #GstVideoInfo
Free a GstVideoInfo structure previously allocated with gst_video_info_new()
or gst_video_info_copy().
a #GstVideoInfo
Compares two #GstVideoInfo and returns whether they are equal or not
%TRUE if @info and @other are equal, else %FALSE.
a #GstVideoInfo
a #GstVideoInfo
Set the default info for a video frame of @format and @width and @height.
Note: This initializes @info first; no values are preserved. This function
does not set the offsets correctly for interlaced vertically
subsampled formats.
%FALSE if the returned video info is invalid, e.g. because the
size of a frame can't be represented as a 32 bit integer (Since: 1.12)
a #GstVideoInfo
the format
a width
a height
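A minimal sketch; the NV12 format and 1920x1080 dimensions are illustrative:
|[<!-- language="C" -->
  GstVideoInfo info;
  GstCaps *caps;

  if (gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 1920, 1080)) {
    g_print ("one frame needs %" G_GSIZE_FORMAT " bytes\n",
        GST_VIDEO_INFO_SIZE (&info));
    caps = gst_video_info_to_caps (&info);
    // ... use caps for negotiation ...
    gst_caps_unref (caps);
  }
]|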
Same as gst_video_info_set_format() but also allows setting the interlace
mode.
%FALSE if the returned video info is invalid, e.g. because the
size of a frame can't be represented as a 32 bit integer.
a #GstVideoInfo
the format
a #GstVideoInterlaceMode
a width
a height
Convert the values of @info into a #GstCaps.
a new #GstCaps containing the info of @info.
a #GstVideoInfo
Parse @caps and update @info.
TRUE if @caps could be parsed
#GstVideoInfo
a #GstCaps
Initialize @info with default values.
a #GstVideoInfo
Information describing a DMABuf image properties. It wraps #GstVideoInfo and
adds DRM information such as drm-fourcc and drm-modifier, required for
negotiation and mapping.
the associated #GstVideoInfo
the fourcc defined by drm
the drm modifier
Allocate a new #GstVideoInfoDmaDrm that is also initialized with
gst_video_info_dma_drm_init().
a new #GstVideoInfoDmaDrm.
Free it with gst_video_info_dma_drm_free().
Parse @caps to generate a #GstVideoInfoDmaDrm. Please note that @caps
should be a DMA DRM caps; gst_video_is_dma_drm_caps() can be used to
verify this before calling this function.
A #GstVideoInfoDmaDrm,
or %NULL if @caps couldn't be parsed.
a #GstCaps
Free a #GstVideoInfoDmaDrm structure previously allocated with
gst_video_info_dma_drm_new()
a #GstVideoInfoDmaDrm
Convert the values of @drm_info into a #GstCaps. Please note that the
returned @caps will be a DMA DRM caps, which sets the format field to
DMA_DRM and contains a new drm-format field. The value of the drm-format
field is composed of a DRM fourcc and a modifier, such as
NV12:0x0100000000000002.
a new #GstCaps containing the
info in @drm_info.
a #GstVideoInfoDmaDrm
Convert the #GstVideoInfoDmaDrm into a traditional #GstVideoInfo with
a recognized video format. For DMA kind memory, the non-linear DMA format
should be recognized as #GST_VIDEO_FORMAT_DMA_DRM. This helper function
sets @info's video format to the default value according to @drm_info's
drm_fourcc field.
%TRUE if @info is converted correctly.
a #GstVideoInfoDmaDrm
#GstVideoInfo
Parse @caps and update @info. Please note that @caps should be
a DMA DRM caps; gst_video_is_dma_drm_caps() can be used to verify
this before calling this function.
TRUE if @caps could be parsed
#GstVideoInfoDmaDrm
a #GstCaps
Fills @drm_info if @info's format has a valid DRM format and @modifier is
also valid.
%TRUE if @drm_info is filled correctly.
#GstVideoInfoDmaDrm
a #GstVideoInfo
the associated modifier value.
Initialize @drm_info with default values.
a #GstVideoInfoDmaDrm
The possible values of the #GstVideoInterlaceMode describing the interlace
mode of the stream.
all frames are progressive
2 fields are interleaved in one video
frame. Extra buffer flags describe the field order.
frames contain both interlaced and
progressive video; the buffer flags describe the frame and fields.
2 fields are stored in one buffer, use the
frame ID to get access to the required field. For multiview (the
'views' property > 1) the fields of view N can be found at frame ID
(N * 2) and (N * 2) + 1.
Each field has only half the number of lines as noted in the
height property. This mode requires multiple GstVideoMeta metadata
to describe the fields.
1 field is stored in one buffer,
@GST_VIDEO_BUFFER_FLAG_TOP_FIELD or @GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD
indicates whether the buffer is carrying the top or bottom field,
respectively. The top and bottom field buffers must alternate in the
pipeline with this mode (Since: 1.16).
Convert @mode to a #GstVideoInterlaceMode
the #GstVideoInterlaceMode of @mode or
#GST_VIDEO_INTERLACE_MODE_PROGRESSIVE when @mode is not a valid
string representation for a #GstVideoInterlaceMode.
a mode
Convert @mode to its string representation.
@mode as a string.
a #GstVideoInterlaceMode
Mastering display color volume information defined by SMPTE ST 2086
(a.k.a. static HDR metadata).
the xy coordinates of the primaries in the CIE 1931 color space.
Index 0 contains red, 1 green and 2 blue. Each value is normalized
to 50000 (i.e. expressed in units of 0.00002).
the xy coordinates of the white point in the CIE 1931 color space.
Each value is normalized to 50000 (i.e. expressed in units of 0.00002).
the maximum value of display luminance,
in units of 0.0001 candelas per square metre (cd/m^2, i.e. nits)
the minimum value of display luminance,
in units of 0.0001 candelas per square metre (cd/m^2, i.e. nits)
Set string representation of @minfo to @caps
%TRUE if @minfo was successfully set to @caps
a #GstVideoMasteringDisplayInfo
a #GstCaps
Parse @caps and update @minfo
%TRUE if @caps has #GstVideoMasteringDisplayInfo and could be parsed
a #GstVideoMasteringDisplayInfo
a #GstCaps
Initialize @minfo
a #GstVideoMasteringDisplayInfo
Checks equality between @minfo and @other.
%TRUE if @minfo and @other are equal.
a #GstVideoMasteringDisplayInfo
a #GstVideoMasteringDisplayInfo
Convert @minfo to its string representation
a string representation of @minfo
a #GstVideoMasteringDisplayInfo
Extract #GstVideoMasteringDisplayInfo from @mastering
%TRUE if @minfo was filled with @mastering
a #GstVideoMasteringDisplayInfo
a #GstStructure representing #GstVideoMasteringDisplayInfo
Used to represent display_primaries and white_point of
#GstVideoMasteringDisplayInfo struct. See #GstVideoMasteringDisplayInfo
the x coordinate in the CIE 1931 color space, in units of 0.00002.
the y coordinate in the CIE 1931 color space, in units of 0.00002.
Different color matrix conversion modes
do conversion between color matrices
use the input color matrix to convert
to and from R'G'B'
use the output color matrix to convert
to and from R'G'B'
disable color matrix conversion.
Extra buffer metadata describing image properties
This meta can also be used by downstream elements to specify their
buffer layout requirements for upstream. Upstream should try to
fit those requirements, if possible, in order to prevent buffer copies.
This is done by passing a custom #GstStructure to
gst_query_add_allocation_meta() when handling the ALLOCATION query.
This structure should be named 'video-meta' and can have the following
fields:
- padding-top (uint): extra pixels on the top
- padding-bottom (uint): extra pixels on the bottom
- padding-left (uint): extra pixels on the left side
- padding-right (uint): extra pixels on the right side
The padding fields have the same semantic as #GstVideoMeta.alignment
and so represent the paddings requested on produced video buffers.
Since 1.24 it can be serialized using gst_meta_serialize() and
gst_meta_deserialize().
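As a hedged sketch, a downstream element answering the ALLOCATION query
might advertise its padding requirements like this (@query is assumed to be
the allocation query being handled, and the padding values are illustrative):
|[<!-- language="C" -->
  GstStructure *params;

  params = gst_structure_new ("video-meta",
      "padding-top", G_TYPE_UINT, 2,
      "padding-bottom", G_TYPE_UINT, 2,
      "padding-left", G_TYPE_UINT, 0,
      "padding-right", G_TYPE_UINT, 16, NULL);
  gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, params);
  gst_structure_free (params);
]|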
parent #GstMeta
the buffer this metadata belongs to
additional video flags
the video format
identifier of the frame
the video width
the video height
the number of planes in the image
array of offsets for the planes. This field might not always be
valid; it is used by the default implementation of @map.
array of strides for the planes. This field might not always be
valid; it is used by the default implementation of @map.
the paddings and alignment constraints of the video buffer.
It is up to the caller of `gst_buffer_add_video_meta_full()` to set it
using gst_video_meta_set_alignment(); if not set, it defaults
to no padding and no alignment. Since: 1.18
Compute the padded height of each plane from @meta (padded size
divided by stride).
It is not valid to call this function with a meta associated to a
TILED video format.
%TRUE if @meta's alignment is valid and @plane_height has been
updated, %FALSE otherwise
a #GstVideoMeta
array used to store the plane height
Compute the size, in bytes, of each video plane described in @meta including
any padding and alignment constraint defined in @meta->alignment.
%TRUE if @meta's alignment is valid and @plane_size has been
updated, %FALSE otherwise
a #GstVideoMeta
array used to store the plane sizes
Map the video plane with index @plane in @meta and return a pointer to the
first byte of the plane and the stride of the plane.
TRUE if the map operation was successful.
a #GstVideoMeta
a plane
a #GstMapInfo
the data of @plane
the stride of @plane
#GstMapFlags
Set the alignment of @meta to @alignment. This function checks that
the paddings defined in @alignment are compatible with the strides
defined in @meta and will fail to update if they are not.
%TRUE if @meta's alignment has been updated, %FALSE if not
a #GstVideoMeta
a #GstVideoAlignment
Unmap a previously mapped plane with gst_video_meta_map().
TRUE if the memory was successfully unmapped.
a #GstVideoMeta
a plane
a #GstMapInfo
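A minimal sketch of reading plane 0 through the meta's map function
(@buffer is assumed to carry a #GstVideoMeta):
|[<!-- language="C" -->
  GstVideoMeta *meta = gst_buffer_get_video_meta (buffer);
  GstMapInfo map;
  gpointer data;
  gint stride;

  if (meta && gst_video_meta_map (meta, 0, &map, &data, &stride, GST_MAP_READ)) {
    // ... read the pixels of plane 0 from data, one line is stride bytes ...
    gst_video_meta_unmap (meta, 0, &map);
  }
]|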
Extra data passed to a video transform #GstMetaTransformFunction such as:
"gst-video-scale".
the input #GstVideoInfo
the output #GstVideoInfo
Get the #GQuark for the "gst-video-scale" metadata transform operation.
a #GQuark
GstVideoMultiviewFlags are used to indicate extra properties of a
stereo/multiview stream beyond the frame layout and buffer mapping
that is conveyed in the #GstVideoMultiviewMode.
No flags
For stereo streams, the
normal arrangement of left and right views is reversed.
The left view is vertically
mirrored.
The left view is horizontally
mirrored.
The right view is
vertically mirrored.
The right view is
horizontally mirrored.
For frame-packed
multiview modes, indicates that the individual
views have been encoded with half the true width or height
and should be scaled back up for display. This flag
is used for overriding input layout interpretation
by adjusting pixel-aspect-ratio.
For side-by-side, column interleaved or checkerboard packings, the
pixel width will be doubled. For row interleaved and top-bottom
encodings, pixel height will be doubled.
The video stream contains both
mono and multiview portions, signalled on each buffer by the
absence or presence of the @GST_VIDEO_BUFFER_FLAG_MULTIPLE_VIEW
buffer flag.
See #GstVideoMultiviewFlags.
#GstVideoMultiviewFramePacking represents the subset of #GstVideoMultiviewMode
values that can be applied to any video frame without needing extra metadata.
It can be used by elements that provide a property to override the
multiview interpretation of a video stream when the video doesn't contain
any markers.
This enum is used (for example) on playbin, to re-interpret a played
video stream as a stereoscopic video. The individual enum values are
equivalent to and have the same value as the matching #GstVideoMultiviewMode.
A special value indicating
no frame packing info.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are
provided in the left and right half of the frame respectively.
Left and right eye
views are provided in the left and right half of the frame, but
have been sampled using quincunx method, with half-pixel offset
between the 2 views.
Alternating vertical
columns of pixels represent the left and right eye view respectively.
Alternating horizontal
rows of pixels represent the left and right eye view respectively.
The top half of the frame
contains the left eye, and the bottom half the right eye.
Pixels are arranged with
alternating pixels representing left and right eye views in a
checkerboard fashion.
All possible stereoscopic 3D and multiview representations.
In conjunction with #GstVideoMultiviewFlags, describes how
multiview content is being transported in the stream.
A special value indicating
no multiview information. Used in GstVideoInfo and other places to
indicate that no specific multiview handling has been requested or
provided. This value is never carried on caps.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are
provided in the left and right half of the frame respectively.
Left and right eye
views are provided in the left and right half of the frame, but
have been sampled using quincunx method, with half-pixel offset
between the 2 views.
Alternating vertical
columns of pixels represent the left and right eye view respectively.
Alternating horizontal
rows of pixels represent the left and right eye view respectively.
The top half of the frame
contains the left eye, and the bottom half the right eye.
Pixels are arranged with
alternating pixels representing left and right eye views in a
checkerboard fashion.
Left and right eye views
are provided in separate frames alternately.
Multiple
independent views are provided in separate frames in sequence.
This method only applies to raw video buffers at the moment.
Specific view identification is via the `GstVideoMultiviewMeta`
and #GstVideoMeta(s) on raw video buffers.
Multiple views are
provided as separate #GstMemory framebuffers attached to each
#GstBuffer, described by the `GstVideoMultiviewMeta`
and #GstVideoMeta(s)
The #GstVideoMultiviewMode value
Given a string from a caps multiview-mode field,
output the corresponding #GstVideoMultiviewMode
or #GST_VIDEO_MULTIVIEW_MODE_NONE
multiview-mode field string from caps
Given a #GstVideoMultiviewMode returns the multiview-mode caps string
for insertion into a caps structure
The caps string representation of the mode, or NULL if invalid.
A #GstVideoMultiviewMode value
This interface allows unified access to control flipping and autocentering
operations of video sources or operators.
Parses the "image-orientation" tag and transforms it into the
#GstVideoOrientationMethod enum.
TRUE if there was a valid "image-orientation" tag in the taglist.
A #GstTagList
The location where to return the orientation.
Get the horizontal centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the horizontal flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Set the horizontal centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the horizontal flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Set the vertical centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the vertical flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Get the horizontal centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the horizontal flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Set the horizontal centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the horizontal flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Set the vertical centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the vertical flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
#GstVideoOrientationInterface interface.
parent interface type.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
The different video orientation methods.
Identity (no rotation)
Rotate clockwise 90 degrees
Rotate 180 degrees
Rotate counter-clockwise 90 degrees
Flip horizontally
Flip vertically
Flip across upper left/lower right diagonal
Flip across upper right/lower left diagonal
Select flip method based on image-orientation tag
Current status depends on plugin internal setup
The #GstVideoOverlay interface is used for 2 main purposes:
* To get a grab on the Window where the video sink element is going to render.
This is achieved by either being informed about the Window identifier that
the video sink element generated, or by forcing the video sink element to use
a specific Window identifier for rendering.
* To force a redrawing of the latest video frame the video sink element
displayed on the Window. Indeed if the #GstPipeline is in #GST_STATE_PAUSED
state, moving the Window around will damage its content. Application
developers will want to handle the Expose events themselves and force the
video sink element to refresh the Window's content.
Using the Window created by the video sink is probably the simplest
scenario; in some cases, though, it might not be flexible enough for
application developers if they need to catch events such as mouse moves
and button clicks.
Setting a specific Window identifier on the video sink element is the most
flexible solution but it has some issues. Indeed the application needs to set
its Window identifier at the right time to avoid internal Window creation
from the video sink element. To solve this issue a #GstMessage is posted on
the bus to inform the application that it should set the Window identifier
immediately. Here is an example on how to do that correctly:
|[
static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
  Window win;

  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  // disp and root are the X11 Display and root window, set up elsewhere
  win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);
  XSetWindowBackgroundPixmap (disp, win, None);
  XMapRaised (disp, win);
  XSync (disp, FALSE);
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
      win);
  gst_message_unref (message);
  return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
  ...
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
      NULL);
  ...
}
]|
## Two basic usage scenarios
There are two basic usage scenarios: in the simplest case, the application
uses #playbin or #playsink or knows exactly what particular element is used
for video output, which is usually the case when the application creates
the videosink to use (e.g. #xvimagesink, #ximagesink, etc.) itself; in this
case, the application can just create the videosink element, create and
realize the window to render the video on and then
call gst_video_overlay_set_window_handle() directly with the XID or native
window handle, before starting up the pipeline.
As #playbin and #playsink implement the video overlay interface and proxy
it transparently to the actual video sink even if it is created later, this
case also applies when using these elements.
In the other and more common case, the application does not know in advance
what GStreamer video sink element will be used for video output. This is
usually the case when an element such as #autovideosink is used.
In this case, the video sink element itself is created
asynchronously from a GStreamer streaming thread some time after the
pipeline has been started up. When that happens, however, the video sink
will need to know right then whether to render onto an already existing
application window or whether to create its own window. This is when it
posts a prepare-window-handle message, and that is also why this message needs
to be handled in a sync bus handler which will be called from the streaming
thread directly (because the video sink will need an answer right then).
As response to the prepare-window-handle element message in the bus sync
handler, the application may use gst_video_overlay_set_window_handle() to tell
the video sink to render onto an existing window surface. At this point the
application should already have obtained the window handle / XID, so it
just needs to set it. It is generally not advisable to call any GUI toolkit
functions or window system functions from the streaming thread in which the
prepare-window-handle message is handled, because most GUI toolkits and
windowing systems are not thread-safe at all and a lot of care would be
required to co-ordinate the toolkit and window system calls of the
different threads (Gtk+ users please note: prior to Gtk+ 2.18
`GDK_WINDOW_XID` was just a simple structure access, so generally fine to do
within the bus sync handler; this macro was changed to a function call in
Gtk+ 2.18 and later, which is likely to cause problems when called from a
sync handler; see below for a better approach without `GDK_WINDOW_XID`
used in the callback).
## GstVideoOverlay and Gtk+
|[
#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>  // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h>  // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  if (video_window_handle != 0) {
    GstVideoOverlay *overlay;

    // GST_MESSAGE_SRC (message) will be the video sink element
    overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
    gst_video_overlay_set_window_handle (overlay, video_window_handle);
  } else {
    g_warning ("Should have obtained video_window_handle by now!");
  }

  gst_message_unref (message);
  return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
  // Tell Gtk+/Gdk to create a native window for this widget instead of
  // drawing onto the parent widget.
  // This is here just for pedagogical purposes, GDK_WINDOW_XID will call
  // it as well in newer Gtk versions
  if (!gdk_window_ensure_native (widget->window))
    g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif
#ifdef GDK_WINDOWING_X11
  {
    gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (widget));
    video_window_handle = xid;
  }
#endif
#ifdef GDK_WINDOWING_WIN32
  {
    HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (widget));
    video_window_handle = (guintptr) wnd;
  }
#endif
}
...
int
main (int argc, char **argv)
{
  GtkWidget *video_window;
  GtkWidget *app_window;
  ...
  app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  ...
  video_window = gtk_drawing_area_new ();
  g_signal_connect (video_window, "realize",
      G_CALLBACK (video_widget_realize_cb), NULL);
  gtk_widget_set_double_buffered (video_window, FALSE);
  ...
  // usually the video_window will not be directly embedded into the
  // application window like this, but there will be many other widgets
  // and the video window will be embedded in one of them instead
  gtk_container_add (GTK_CONTAINER (app_window), video_window);
  ...
  // show the GUI
  gtk_widget_show_all (app_window);

  // realize window now so that the video window gets created and we can
  // obtain its XID/HWND before the pipeline is started up and the videosink
  // asks for the XID/HWND of the window to render onto
  gtk_widget_realize (video_window);

  // we should have the XID/HWND now
  g_assert (video_window_handle != 0);
  ...
  // set up sync handler for setting the xid once the pipeline is started
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
      NULL);
  gst_object_unref (bus);
  ...
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  ...
}
]|
## GstVideoOverlay and Qt
|[
#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[])
{
  if (!g_thread_supported ())
    g_thread_init (NULL);

  gst_init (&argc, &argv);
  QApplication app(argc, argv);
  app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit()));

  // prepare the pipeline
  GstElement *pipeline = gst_pipeline_new ("xvoverlay");
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  // prepare the ui
  QWidget window;
  window.resize(320, 240);
  window.show();

  WId xwinid = window.winId();
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);

  // run the pipeline
  GstStateChangeReturn sret = gst_element_set_state (pipeline,
      GST_STATE_PLAYING);
  if (sret == GST_STATE_CHANGE_FAILURE) {
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    // Exit application
    QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
  }

  int ret = app.exec();

  window.hide();
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return ret;
}
]|
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will install a "render-rectangle" property on the
class.
The class on which the properties will be installed
The first free property ID to use
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will parse and set the render rectangle calling
gst_video_overlay_set_render_rectangle().
%TRUE if the @property_id matches the GstVideoOverlay property
The instance on which the property is set
The highest property ID.
The property ID
The #GValue to be set
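As a hedged sketch of how these two helpers might be wired into an element
(the MySink names and the PROP_ enum are hypothetical):
|[<!-- language="C" -->
enum
{
  PROP_0,
  // ... the element's own properties ...
  PROP_LAST
};

static void
my_sink_class_init (MySinkClass * klass)
{
  GObjectClass *gobject_class = G_OBJECT_CLASS (klass);

  // ... install the element's own properties first ...
  gst_video_overlay_install_properties (gobject_class, PROP_LAST);
}

static void
my_sink_set_property (GObject * object, guint prop_id,
    const GValue * value, GParamSpec * pspec)
{
  // let the helper handle "render-rectangle"
  if (gst_video_overlay_set_property (object, PROP_LAST, prop_id, value))
    return;
  // ... handle the element's own properties here ...
}
]|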
Tell an overlay that it has been exposed. This will redraw the current frame
in the drawable even if the pipeline is PAUSED.
a #GstVideoOverlay to expose.
Tell an overlay that it should handle events from the window system. These
events are forwarded upstream as navigation events. In some window systems,
events are not propagated in the window hierarchy if a client is listening
for them. This method allows you to disable event handling completely
for the #GstVideoOverlay.
a #GstVideoOverlay to expose.
a #gboolean indicating if events should be handled or not.
This will call the video overlay's set_window_handle method. You
should use this method to tell an overlay to display video output in a
specific window (e.g. an XWindow on X11). Passing 0 as the @handle will
tell the overlay to stop using that window and create an internal one.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
Tell an overlay that it has been exposed. This will redraw the current frame
in the drawable even if the pipeline is PAUSED.
a #GstVideoOverlay to expose.
This will post a "have-window-handle" element message on the bus.
This function should only be used by video overlay plugin developers.
a #GstVideoOverlay which got a window
a platform-specific handle referencing the window
Tell an overlay that it should handle events from the window system. These
events are forwarded upstream as navigation events. In some window systems,
events are not propagated in the window hierarchy if a client is listening
for them. This method allows you to disable event handling completely
for the #GstVideoOverlay.
a #GstVideoOverlay to expose.
a #gboolean indicating if events should be handled or not.
This will post a "prepare-window-handle" element message on the bus
to give applications an opportunity to call
gst_video_overlay_set_window_handle() before a plugin creates its own
window.
This function should only be used by video overlay plugin developers.
a #GstVideoOverlay which does not yet have a Window handle set
Configure a subregion as a video target within the window set by
gst_video_overlay_set_window_handle(). If this is not used or not supported,
the video will fill the whole area of the window set as the overlay.
By specifying the rectangle, the video can be overlaid onto a specific region
of that window only. After setting the new rectangle, one should call
gst_video_overlay_expose() to force a redraw. To unset the region, pass -1 for
the @width and @height parameters.
This method is needed for non-fullscreen video overlay in UI toolkits that
do not support subwindows.
%FALSE if not supported by the sink.
a #GstVideoOverlay
the horizontal offset of the render area inside the window
the vertical offset of the render area inside the window
the width of the render area inside the window
the height of the render area inside the window
This will call the video overlay's set_window_handle method. You
should use this method to tell an overlay to display video output in a
specific window (e.g. an XWindow on X11). Passing 0 as the @handle will
tell the overlay to stop using that window and create an internal one.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
Functions to create and handle overlay compositions on video buffers.
An overlay composition describes one or more overlay rectangles to be
blended on top of a video buffer.
This API serves two main purposes:
* it can be used to attach overlay information (subtitles or logos)
to non-raw video buffers such as GL/VAAPI/VDPAU surfaces. The actual
blending of the overlay can then be done by e.g. the video sink that
processes these non-raw buffers.
* it can also be used to blend overlay rectangles on top of raw video
buffers, thus consolidating blending functionality for raw video in
one place.
Together, this allows existing overlay elements to easily handle raw
and non-raw video as input without major changes (once the overlays
have been put into a #GstVideoOverlayComposition object anyway) - for raw
video the overlay can just use the blending function to blend the data
on top of the video, and for surface buffers it can just attach them to
the buffer and let the sink render the overlays.
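A minimal sketch of building a composition and attaching it to a buffer
(@pixels is assumed to be a #GstBuffer with ARGB data and #GstVideoMeta set,
and @outbuf the video buffer to decorate):
|[<!-- language="C" -->
  GstVideoOverlayRectangle *rect;
  GstVideoOverlayComposition *comp;

  rect = gst_video_overlay_rectangle_new_raw (pixels, 10, 10, 100, 50,
      GST_VIDEO_OVERLAY_FORMAT_FLAG_NONE);
  comp = gst_video_overlay_composition_new (rect);
  gst_video_overlay_rectangle_unref (rect);

  // attach the composition to a (possibly non-raw) video buffer
  gst_buffer_add_video_overlay_composition_meta (outbuf, comp);
  gst_video_overlay_composition_unref (comp);
]|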
Creates a new video overlay composition object to hold one or more
overlay rectangles.
Note that since 1.20 it is allowed to pass %NULL for @rectangle.
a new #GstVideoOverlayComposition. Unref with
gst_video_overlay_composition_unref() when no longer needed.
a #GstVideoOverlayRectangle to add to the
composition
Adds an overlay rectangle to an existing overlay composition object. This
must be done right after creating the overlay composition.
a #GstVideoOverlayComposition
a #GstVideoOverlayRectangle to add to the
composition
Blends the overlay rectangles in @comp on top of the raw video data
contained in @video_buf. The data in @video_buf must be writable and
mapped appropriately.
Since @video_buf data is read and will be modified, it ought to be
mapped with the GST_MAP_READWRITE flag.
a #GstVideoOverlayComposition
a #GstVideoFrame containing raw video data in a
supported format. It should be mapped using GST_MAP_READWRITE
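For raw video, a hedged sketch of blending could look like this (@vinfo,
@video_buf and @comp are assumed to exist):
|[<!-- language="C" -->
  GstVideoFrame frame;

  if (gst_video_frame_map (&frame, &vinfo, video_buf, GST_MAP_READWRITE)) {
    gst_video_overlay_composition_blend (comp, &frame);
    gst_video_frame_unmap (&frame);
  }
]|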
Makes a copy of @comp and all contained rectangles, so that it is possible
to modify the composition and contained rectangles (e.g. add additional
rectangles or change the render co-ordinates or render dimension). The
actual overlay pixel data buffers contained in the rectangles are not
copied.
a new #GstVideoOverlayComposition equivalent
to @comp.
a #GstVideoOverlayComposition to copy
Returns the @n-th #GstVideoOverlayRectangle contained in @comp.
the @n-th rectangle, or NULL if @n is out of
bounds. This will not return a new reference; the caller will need to
obtain her own reference using gst_video_overlay_rectangle_ref()
if needed.
a #GstVideoOverlayComposition
number of the rectangle to get
Returns the sequence number of this composition. Sequence numbers are
monotonically increasing and unique for overlay compositions and rectangles
(meaning there will never be a rectangle with the same sequence number as
a composition).
the sequence number of @comp
a #GstVideoOverlayComposition
Takes ownership of @comp and returns a version of @comp that is writable
(i.e. can be modified). Will either return @comp right away, or create a
new writable copy of @comp and unref @comp itself. All the contained
rectangles will also be copied, but the actual overlay pixel data buffers
contained in the rectangles are not copied.
a writable #GstVideoOverlayComposition
equivalent to @comp.
a #GstVideoOverlayComposition to copy
Returns the number of #GstVideoOverlayRectangle<!-- -->s contained in @comp.
the number of rectangles
a #GstVideoOverlayComposition
Extra buffer metadata describing image overlay data.
parent #GstMeta
the attached #GstVideoOverlayComposition
Overlay format flags.
no flags
RGB are premultiplied by A/255.
a global-alpha value != 1 is set.
#GstVideoOverlay interface
parent interface type.
a #GstVideoOverlay to expose.
a #GstVideoOverlay to expose.
a #gboolean indicating if events should be handled or not.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
An opaque video overlay rectangle object. A rectangle contains a single
overlay rectangle which can be added to a composition.
Creates a new video overlay rectangle with ARGB or AYUV pixel data.
The layout in case of ARGB of the components in memory is B-G-R-A
on little-endian platforms
(corresponding to #GST_VIDEO_FORMAT_BGRA) and A-R-G-B on big-endian
platforms (corresponding to #GST_VIDEO_FORMAT_ARGB). In other words,
pixels are treated as 32-bit words and the lowest 8 bits then contain
the blue component value and the highest 8 bits contain the alpha
component value. Unless specified in the flags, the RGB values are
non-premultiplied. This is the format that is used by most hardware,
and also by many rendering libraries, such as Cairo.
The pixel data buffer must have #GstVideoMeta set.
a new #GstVideoOverlayRectangle. Unref with
gst_video_overlay_rectangle_unref() when no longer needed.
a #GstBuffer pointing to the pixel memory
the X co-ordinate on the video where the top-left corner of this
overlay rectangle should be rendered to
the Y co-ordinate on the video where the top-left corner of this
overlay rectangle should be rendered to
the render width of this rectangle on the video
the render height of this rectangle on the video
flags
Makes a copy of @rectangle, so that it is possible to modify it
(e.g. to change the render co-ordinates or render dimension). The
actual overlay pixel data buffers contained in the rectangle are not
copied.
a new #GstVideoOverlayRectangle equivalent
to @rectangle.
a #GstVideoOverlayRectangle to copy
Retrieves the flags associated with a #GstVideoOverlayRectangle.
This is useful if the caller can handle both premultiplied alpha and
non premultiplied alpha, for example. By knowing whether the rectangle
uses premultiplied or not, it can request the pixel data in the format
it is stored in, to avoid unnecessary conversion.
the #GstVideoOverlayFormatFlags associated with the rectangle.
a #GstVideoOverlayRectangle
Retrieves the global-alpha value associated with a #GstVideoOverlayRectangle.
the global-alpha value associated with the rectangle.
a #GstVideoOverlayRectangle
a #GstBuffer holding the ARGB pixel data with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference, the caller should obtain a reference of her own
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
a #GstBuffer holding the AYUV pixel data with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference, the caller should obtain a reference of her own
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
a #GstBuffer holding the pixel data with
format as originally provided and specified in video meta with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference, the caller should obtain a reference of her own
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the ARGB pixel data with
#GstVideoMeta set. This function does not return a reference, the caller
should obtain a reference of her own with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the AYUV pixel data with
#GstVideoMeta set. This function does not return a reference, the caller
should obtain a reference of her own with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the pixel data with
#GstVideoMeta set. This function does not return a reference, the caller
should obtain a reference of her own with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if he wants to apply global-alpha himself. If the flag is not set
global_alpha is applied internally before returning the pixel-data.
Retrieves the render position and render dimension of the overlay
rectangle on the video.
TRUE if valid render dimensions were retrieved.
a #GstVideoOverlayRectangle
address where to store the X render offset
address where to store the Y render offset
address where to store the render width
address where to store the render height
Returns the sequence number of this rectangle. Sequence numbers are
monotonically increasing and unique for overlay compositions and rectangles
(meaning there will never be a rectangle with the same sequence number as
a composition).
Using the sequence number of a rectangle as an indicator for changed
pixel-data of a rectangle is dangerous. Some API calls, like e.g.
gst_video_overlay_rectangle_set_global_alpha(), automatically update
the per-rectangle sequence number, which is misleading for renderers/
consumers that handle global-alpha themselves. For them the
pixel-data returned by gst_video_overlay_rectangle_get_pixels_*()
won't be different for different global-alpha values. In this case a
renderer could also use the GstBuffer pointers as a hint for changed
pixel-data.
the sequence number of @rectangle
a #GstVideoOverlayRectangle
Sets the global alpha value associated with a #GstVideoOverlayRectangle.
Per-pixel alpha values are multiplied with this value. Valid
values: 0 <= global_alpha <= 1; 1 to deactivate.
@rectangle must be writable, meaning its refcount must be 1. You can
make the rectangles inside a #GstVideoOverlayComposition writable using
gst_video_overlay_composition_make_writable() or
gst_video_overlay_composition_copy().
a #GstVideoOverlayRectangle
Global alpha value (0 to 1.0)
Sets the render position and dimensions of the rectangle on the video.
This function is mainly for elements that modify the size of the video
in some way (e.g. through scaling or cropping) and need to adjust the
details of any overlays to match the operation that changed the size.
@rectangle must be writable, meaning its refcount must be 1. You can
make the rectangles inside a #GstVideoOverlayComposition writable using
gst_video_overlay_composition_make_writable() or
gst_video_overlay_composition_copy().
a #GstVideoOverlayRectangle
render X position of rectangle on video
render Y position of rectangle on video
render width of rectangle
render height of rectangle
The different flags that can be used when packing and unpacking.
No flag
When the source has a smaller depth
than the target format, set the least significant bits of the target
to 0. This is likely slightly faster but less accurate. When this flag
is not specified, the most significant bits of the source are duplicated
in the least significant bits of the destination.
The source is interlaced. The unpacked
format will be interlaced as well with each line containing
information from alternating fields. (Since: 1.2)
Different primaries conversion modes
disable conversion between primaries
do conversion between primaries only
when it can be merged with color matrix conversion.
fast conversion between primaries
Helper structure representing a rectangular area.
X coordinate of rectangle's top-left point
Y coordinate of rectangle's top-left point
width of the rectangle
height of the rectangle
Extra buffer metadata describing an image region of interest
parent #GstMeta
GQuark describing the semantics of the ROI (e.g. a face, a pedestrian)
identifier of this particular ROI
identifier of its parent ROI, used e.g. for ROI hierarchisation.
x component of upper-left corner
y component of upper-left corner
bounding box width
bounding box height
list of #GstStructure containing element-specific params for downstream,
see gst_video_region_of_interest_meta_add_param(). (Since: 1.14)
Attach element-specific parameters to @meta meant to be used by downstream
elements which may handle this ROI.
The name of @s is used to identify the element these parameters are meant for.
This is typically used to tell encoders how they should encode this specific region.
For example, a structure named "roi/x264enc" could be used to give the
QP offsets this encoder should use when encoding the region described in @meta.
Multiple parameters can be defined for the same meta so different encoders
can be supported by cross-platform applications.
a #GstVideoRegionOfInterestMeta
a #GstStructure
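A minimal sketch; the "roi/x264enc" structure name follows the convention
described above, while the "delta-qp" field is a hypothetical encoder
parameter:
|[<!-- language="C" -->
  GstVideoRegionOfInterestMeta *meta =
      gst_buffer_add_video_region_of_interest_meta (buffer, "face",
          10, 10, 100, 100);

  // ownership of the structure is taken by the meta
  gst_video_region_of_interest_meta_add_param (meta,
      gst_structure_new ("roi/x264enc", "delta-qp", G_TYPE_INT, -5, NULL));
]|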
Retrieve the parameter for @meta having @name as structure name,
or %NULL if there is none.
See also: gst_video_region_of_interest_meta_add_param()
a #GstStructure
a #GstVideoRegionOfInterestMeta
a name.
#GstVideoResampler is a structure which holds the information
required to perform various kinds of resampling filtering.
the input size
the output size
the maximum number of taps
the number of phases
array with the source offset for each output element
array with the phase to use for each output element
array with new number of taps for each phase
the taps for all phases
Clear a previously initialized #GstVideoResampler @resampler.
a #GstVideoResampler
Different resampler flags.
no flags
when no taps are given, half the
number of calculated taps. This can be used when making scalers
for the different fields of an interlaced picture. Since: 1.10
Different subsampling and upsampling methods
Duplicates the samples when
upsampling and drops when downsampling
Uses linear interpolation to reconstruct
missing samples and averaging to downsample
Uses cubic interpolation
Uses sinc interpolation
Uses lanczos interpolation
H.264/H.265 metadata from SEI User Data Unregistered messages
parent #GstMeta
User Data Unregistered UUID
Unparsed data buffer
Size of the data buffer
#GstMetaInfo pointer that describes #GstVideoSEIUserDataUnregisteredMeta.
#GstVideoScaler is a utility object for rescaling and resampling
video frames using various interpolation / sampling methods.
Scale a rectangle of pixels in @src with @src_stride to @dest with
@dest_stride using the horizontal scaler @hscale and the vertical
scaler @vscale.
One or both of @hscale and @vscale can be NULL to only perform scaling in
one dimension or do a copy without scaling.
@x and @y are the coordinates in the destination image to process.
a horizontal #GstVideoScaler
a vertical #GstVideoScaler
a #GstVideoFormat for @srcs and @dest
source pixels
source pixels stride
destination pixels
destination pixels stride
the horizontal destination offset
the vertical destination offset
the number of output pixels to scale
the number of output lines to scale
Combine a scaler for Y and UV into one scaler for the packed @format.
a new horizontal videoscaler for @format.
a scaler for the Y component
a scaler for the U and V components
the input video format
the output video format
Free a previously allocated #GstVideoScaler @scale.
a #GstVideoScaler
For a given pixel at @out_offset, get the first required input pixel at
@in_offset and the @n_taps filter coefficients.
Note that for interlaced content, @in_offset needs to be incremented by
2 to get the next input line.
an array of @n_taps gdouble values with filter coefficients.
a #GstVideoScaler
an output offset
result input offset
result n_taps
Get the maximum number of taps for @scale.
the maximum number of taps
a #GstVideoScaler
Horizontally scale the pixels in @src to @dest, starting from @dest_offset
for @width samples.
a #GstVideoScaler
a #GstVideoFormat for @src and @dest
source pixels
destination pixels
the horizontal destination offset
the number of pixels to scale
Vertically combine @width pixels in the lines in @src_lines to @dest.
@dest is the location of the target line at @dest_offset and
@srcs are the input lines for @dest_offset.
a #GstVideoScaler
a #GstVideoFormat for @srcs and @dest
source pixels lines
destination pixels
the vertical destination offset
the number of pixels to scale
Make a new @method video scaler. @in_size source lines/pixels will
be scaled to @out_size destination lines/pixels.
@n_taps specifies the number of pixels to use from the source for one output
pixel. If @n_taps is 0, this function chooses a good value automatically based
on the @method and @in_size/@out_size.
a #GstVideoScaler
a #GstVideoResamplerMethod
#GstVideoScalerFlags
number of taps to use
number of source elements
number of destination elements
extra options
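A minimal sketch of the typical lifecycle, assuming hypothetical
@src_line/@dest_line buffers of appropriate size:
|[<!-- language="C" -->
/* Sketch: scale one line of ARGB pixels from 1920 to 1280 wide.
 * Taps are chosen automatically because n_taps is 0. */
GstVideoScaler *scale =
    gst_video_scaler_new (GST_VIDEO_RESAMPLER_METHOD_LANCZOS,
        GST_VIDEO_SCALER_FLAG_NONE, 0, 1920, 1280, NULL);

gst_video_scaler_horizontal (scale, GST_VIDEO_FORMAT_ARGB,
    src_line, dest_line, 0, 1280);

gst_video_scaler_free (scale);
]|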
Different scale flags.
no flags
Set up a scaler for interlaced content
Provides useful functions and a base class for video sinks.
GstVideoSink will configure the default base sink to drop frames that
arrive later than 20ms as this is considered the default threshold for
observing out-of-sync frames.
Use gst_video_center_rect() instead.
the #GstVideoRectangle describing the source area
the #GstVideoRectangle describing the destination area
a pointer to a #GstVideoRectangle which will receive the result area
a #gboolean indicating if scaling should be applied or not
Notifies the subclass of changed #GstVideoInfo.
A #GstCaps.
A #GstVideoInfo corresponding to @caps.
Whether to show video frames during preroll. If set to %FALSE, video
frames will only be rendered in PLAYING state.
video width (derived class needs to set this)
video height (derived class needs to set this)
The video sink class structure. Derived classes should override the
@show_frame virtual function.
the parent class structure
A #GstCaps.
A #GstVideoInfo corresponding to @caps.
Description of a tile. This structure allows describing arbitrary tile
dimensions and sizes.
The width in pixels of a tile. This value can be zero if the number of
pixels per line is not an integer value.
The stride (in bytes) of a tile line. Regardless of whether the tile has
sub-tiles, this stride multiplied by the tile height should be equal to
#GstVideoTileInfo.size. This value is used to translate into a linear
stride when older APIs are being used to expose this format.
The size in bytes of a tile. This value must be divisible by
#GstVideoTileInfo.stride.
Enum value describing the available tiling modes.
Unknown or unset tile mode
Every four adjacent blocks - two
horizontally and two vertically are grouped together and are located
in memory in Z or flipped Z order. In case of odd rows, the last row
of blocks is arranged in linear order.
Tiles are in row order.
Enum value describing the most common tiling types.
Tiles are indexed. Use
gst_video_tile_get_index () to retrieve the tile at the requested
coordinates.
@field_count must be 0 for progressive video and 1 or 2 for interlaced.
A representation of a SMPTE time code.
@hours must be positive and less than 24. Will wrap around otherwise.
@minutes and @seconds must be positive and less than 60.
@frames must be less than or equal to @config.fps_n / @config.fps_d
These values are *NOT* automatically normalized.
the corresponding #GstVideoTimeCodeConfig
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
@field_count is 0 for progressive, 1 or 2 for interlaced.
@latest_daily_jam reference is stolen from the caller.
a new #GstVideoTimeCode with the given values.
The values are not checked for being in a valid range. To see if your
timecode actually has valid content, use gst_video_time_code_is_valid().
Numerator of the frame rate
Denominator of the frame rate
The latest daily jam of the #GstVideoTimeCode
#GstVideoTimeCodeFlags
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
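For illustration, a sketch of creating a simple non-drop progressive
timecode (values chosen here are arbitrary examples):
|[<!-- language="C" -->
/* Sketch: a 25/1 non-drop progressive timecode at 01:30:00:00,
 * with no daily jam and field_count 0. */
GstVideoTimeCode *tc = gst_video_time_code_new (25, 1, NULL,
    GST_VIDEO_TIME_CODE_FLAGS_NONE, 1, 30, 0, 0, 0);

if (gst_video_time_code_is_valid (tc))
  gst_video_time_code_increment_frame (tc);

gst_video_time_code_free (tc);
]|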
a new empty, invalid #GstVideoTimeCode
The resulting config->latest_daily_jam is set to
midnight, and timecode is set to the given time.
This might return a completely invalid timecode; use
gst_video_time_code_new_from_date_time_full() to get %NULL
instead in that case.
the #GstVideoTimeCode representation of @dt.
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
The resulting config->latest_daily_jam is set to
midnight, and timecode is set to the given time.
the #GstVideoTimeCode representation of @dt, or %NULL if
no valid timecode could be created.
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
a new #GstVideoTimeCode from the given string or %NULL
if the string could not be parsed.
The string that represents the #GstVideoTimeCode
Adds or subtracts @frames frames to/from @tc. @tc needs to
contain valid data, as verified by gst_video_time_code_is_valid().
a valid #GstVideoTimeCode
How many frames to add or subtract
This makes a component-wise addition of @tc_inter to @tc. For example,
adding ("01:02:03:04", "00:01:00:00") will return "01:03:03:04".
When it comes to drop-frame timecodes,
adding ("00:00:00;00", "00:01:00:00") will return "00:01:00;02"
because of drop-frame oddities. However,
adding ("00:09:00;02", "00:01:00:00") will return "00:10:00;00"
because this time we can have an exact minute.
A new #GstVideoTimeCode with @tc_inter added or %NULL
if the interval can't be added.
The #GstVideoTimeCode where the diff should be added. This
must contain valid timecode values.
The #GstVideoTimeCodeInterval to add to @tc.
The interval must contain valid values, except that for drop-frame
timecode, it may also contain timecodes which would normally
be dropped. These are then corrected to the next reasonable timecode.
Initializes @tc with empty/zero/NULL values and frees any memory
it might currently use.
a #GstVideoTimeCode
Compares @tc1 and @tc2. If both have latest daily jam information, it is
taken into account. Otherwise, it is assumed that the daily jam of both
@tc1 and @tc2 was at the same time. Both time codes must be valid.
1 if @tc1 is after @tc2, -1 if @tc1 is before @tc2, 0 otherwise.
a valid #GstVideoTimeCode
another valid #GstVideoTimeCode
a new #GstVideoTimeCode with the same values as @tc.
a #GstVideoTimeCode
how many frames have passed since the daily jam of @tc.
a valid #GstVideoTimeCode
Frees @tc.
a #GstVideoTimeCode
Adds one frame to @tc.
a valid #GstVideoTimeCode
@field_count is 0 for progressive, 1 or 2 for interlaced.
@latest_daily_jam reference is stolen from the caller.
Initializes @tc with the given values.
The values are not checked for being in a valid range. To see if your
timecode actually has valid content, use gst_video_time_code_is_valid().
a #GstVideoTimeCode
Numerator of the frame rate
Denominator of the frame rate
The latest daily jam of the #GstVideoTimeCode
#GstVideoTimeCodeFlags
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
The resulting config->latest_daily_jam is set to midnight, and timecode is
set to the given time.
Will assert on invalid parameters; use gst_video_time_code_init_from_date_time_full()
if you need to handle invalid parameters gracefully.
an uninitialized #GstVideoTimeCode
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
The resulting config->latest_daily_jam is set to
midnight, and timecode is set to the given time.
%TRUE if @tc could be correctly initialized to a valid timecode
a #GstVideoTimeCode
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
whether @tc is a valid timecode (supported frame rate,
hours/minutes/seconds/frames not overflowing)
#GstVideoTimeCode to check
how many nsec have passed since the daily jam of @tc.
a valid #GstVideoTimeCode
The @tc.config->latest_daily_jam is required to be non-NULL.
the #GDateTime representation of @tc or %NULL if @tc
has no daily jam.
A valid #GstVideoTimeCode to convert
the SMPTE ST 2059-1:2015 string representation of @tc. That will
take the form hh:mm:ss:ff. The last separator (between seconds and frames)
may vary:
';' for drop-frame, non-interlaced content and for drop-frame interlaced
field 2
',' for drop-frame interlaced field 1
':' for non-drop-frame, non-interlaced content and for non-drop-frame
interlaced field 2
'.' for non-drop-frame interlaced field 1
A #GstVideoTimeCode to convert
Supported frame rates: 30000/1001, 60000/1001 (both with and without drop
frame), and integer frame rates e.g. 25/1, 30/1, 50/1, 60/1.
The configuration of the time code.
Numerator of the frame rate
Denominator of the frame rate
the corresponding #GstVideoTimeCodeFlags
The latest daily jam information, if present, or NULL
Flags related to the time code information.
For drop frame, only 30000/1001 and 60000/1001 frame rates are supported.
No flags
Whether we have drop frame rate
Whether we have interlaced video
A representation of a difference between two #GstVideoTimeCode instances.
Will not necessarily correspond to a real timecode (e.g. 00:00:10;00)
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
a new #GstVideoTimeCodeInterval with the given values.
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
@tc_inter_str must only have ":" as separators.
a new #GstVideoTimeCodeInterval from the given string
or %NULL if the string could not be parsed.
The string that represents the #GstVideoTimeCodeInterval
Initializes @tc with empty/zero/NULL values.
a #GstVideoTimeCodeInterval
a new #GstVideoTimeCodeInterval with the same values as @tc.
a #GstVideoTimeCodeInterval
Frees @tc.
a #GstVideoTimeCodeInterval
Initializes @tc with the given values.
a #GstVideoTimeCodeInterval
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
Extra buffer metadata describing the GstVideoTimeCode of the frame.
Each frame is assumed to have its own timecode, i.e. they are not
automatically incremented/interpolated.
parent #GstMeta
the GstVideoTimeCode to attach
The video transfer function defines the formula for converting between
non-linear RGB (R'G'B') and linear RGB
unknown transfer function
linear RGB, gamma 1.0 curve
Gamma 1.8 curve
Gamma 2.0 curve
Gamma 2.2 curve
Gamma 2.2 curve with a linear segment in the lower
range, also ITU-R BT470M / ITU-R BT1700 625 PAL &
SECAM / ITU-R BT1361
Gamma 2.2 curve with a linear segment in the
lower range
Gamma 2.4 curve with a linear segment in the lower
range. IEC 61966-2-1 (sRGB or sYCC)
Gamma 2.8 curve, also ITU-R BT470BG
Logarithmic transfer characteristic
100:1 range
Logarithmic transfer characteristic
316.22777:1 range (100 * sqrt(10) : 1)
Gamma 2.2 curve with a linear segment in the lower
range. Used for BT.2020 with 12 bits per
component. Since: 1.6
Gamma 2.19921875. Since: 1.8
Rec. ITU-R BT.2020-2 with 10 bits per component.
(functionally the same as the values
GST_VIDEO_TRANSFER_BT709 and GST_VIDEO_TRANSFER_BT601).
Since: 1.18
SMPTE ST 2084 for 10, 12, 14, and 16-bit systems.
Known as perceptual quantization (PQ)
Since: 1.18
Association of Radio Industries and Businesses (ARIB)
STD-B67 and Rec. ITU-R BT.2100-1 hybrid log-gamma (HLG) system
Since: 1.18
also known as SMPTE170M / ITU-R BT1358 525 or 625 / ITU-R BT1700 NTSC
Convert @val to its gamma decoded value. This is the inverse operation of
gst_video_color_transfer_encode().
For a non-linear value L' in the range [0..1], conversion to the linear
L is in general performed with a power function like:
|[
L = L' ^ gamma
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma decoded value of @val
a #GstVideoTransferFunction
a value
Convert @val to its gamma encoded value.
For a linear value L in the range [0..1], conversion to the non-linear
(gamma encoded) L' is in general performed with a power function like:
|[
L' = L ^ (1 / gamma)
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma encoded value of @val
a #GstVideoTransferFunction
a value
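A short sketch of the encode/decode round trip described above, using
BT.709 as an example transfer function:
|[<!-- language="C" -->
/* Sketch: round-trip a linear value through the BT.709 transfer
 * function; decode is the inverse of encode. */
gdouble linear = 0.5;
gdouble encoded =
    gst_video_transfer_function_encode (GST_VIDEO_TRANSFER_BT709, linear);
gdouble decoded =
    gst_video_transfer_function_decode (GST_VIDEO_TRANSFER_BT709, encoded);
/* decoded is now (approximately) 0.5 again */
]|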
Converts the @value to the #GstVideoTransferFunction
The transfer characteristics (TransferCharacteristics) value is
defined by "ISO/IEC 23001-8 Section 7.2 Table 3"
and "ITU-T H.273 Table 3".
"H.264 Table E-4" and "H.265 Table E.4" share the identical values.
the matched #GstVideoTransferFunction
an ITU-T H.273 transfer characteristics value
Returns whether @from_func and @to_func are equivalent. There are cases
(e.g. BT601, BT709, and BT2020_10) where several functions are functionally
identical. In these cases, when doing conversion, we should consider them
as equivalent. Also, BT2020_12 is the same as the aforementioned three for
less than 12 bits per pixel.
TRUE if @from_func and @to_func can be considered equivalent.
#GstVideoTransferFunction to convert from
bits per pixel to convert from
#GstVideoTransferFunction to convert into
bits per pixel to convert into
Converts #GstVideoTransferFunction to the "transfer characteristics"
(TransferCharacteristics) value defined by "ISO/IEC 23001-8 Section 7.2 Table 3"
and "ITU-T H.273 Table 3".
"H.264 Table E-4" and "H.265 Table E.4" share the identical values.
The value of ISO/IEC 23001-8 transfer characteristics.
a #GstVideoTransferFunction
An encoder for writing ancillary data to the
Vertical Blanking Interval lines of component signals.
Create a new #GstVideoVBIEncoder for the specified @format and @pixel_width.
The new #GstVideoVBIEncoder or %NULL if the @format and/or @pixel_width
is not supported.
a #GstVideoFormat
The width in pixels to use
Stores Video Ancillary data, according to SMPTE-291M specification.
Note that the contents of the data are always read as 8bit data (i.e. do not contain
the parity check bits).
%TRUE if enough space was left in the current line, %FALSE
otherwise.
a #GstVideoVBIEncoder
%TRUE if composite ADF should be created, component otherwise
The Data Identifier
The Secondary Data Identifier (if type 2) or the Data
Block Number (if type 1)
The user data content of the Ancillary packet.
Does not contain the ADF, DID, SDID nor CS.
The amount of data (in bytes) in @data (max 255 bytes)
Frees the @encoder.
a #GstVideoVBIEncoder
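A hedged usage sketch; the @payload, @payload_size and @line variables,
as well as the DID/SDID values, are hypothetical:
|[<!-- language="C" -->
/* Sketch: encode one ancillary packet into a VBI line of a
 * 1920-pixel wide v210 signal. */
GstVideoVBIEncoder *encoder =
    gst_video_vbi_encoder_new (GST_VIDEO_FORMAT_v210, 1920);

if (gst_video_vbi_encoder_add_ancillary (encoder, FALSE,
        0x61, 0x01, payload, payload_size))
  gst_video_vbi_encoder_write_line (encoder, line);

gst_video_vbi_encoder_free (encoder);
]|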
A parser for detecting and extracting #GstVideoAncillary data from
Vertical Blanking Interval lines of component signals.
Create a new #GstVideoVBIParser for the specified @format and @pixel_width.
The new #GstVideoVBIParser or %NULL if the @format and/or @pixel_width
is not supported.
a #GstVideoFormat
The width in pixels to use
Provide a new line of data to the @parser. Call gst_video_vbi_parser_get_ancillary()
to get the Ancillary data that might be present on that line.
a #GstVideoVBIParser
The line of data to parse
Frees the @parser.
a #GstVideoVBIParser
Parse the line provided previously by gst_video_vbi_parser_add_line().
%GST_VIDEO_VBI_PARSER_RESULT_OK if ancillary data was found and
@anc was filled. %GST_VIDEO_VBI_PARSER_RESULT_DONE if there wasn't any
data.
a #GstVideoVBIParser
a #GstVideoAncillary to store the eventual ancillary data
Return values for #GstVideoVBIParser
No line was provided, or no more Ancillary data was found.
A #GstVideoAncillary was found.
An error occurred
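A minimal parsing-loop sketch for the parser API above; @line is a
hypothetical buffer holding one line of VBI data:
|[<!-- language="C" -->
/* Sketch: extract all ancillary packets from one VBI line of a
 * 1920-pixel wide UYVY signal. */
GstVideoVBIParser *parser =
    gst_video_vbi_parser_new (GST_VIDEO_FORMAT_UYVY, 1920);
GstVideoAncillary anc;

gst_video_vbi_parser_add_line (parser, line);
while (gst_video_vbi_parser_get_ancillary (parser, &anc) ==
    GST_VIDEO_VBI_PARSER_RESULT_OK) {
  /* inspect anc.DID, anc.SDID_block_number, anc.data, ... */
}

gst_video_vbi_parser_free (parser);
]|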
Adds a new #GstAncillaryMeta to the @buffer. The caller is responsible for setting the appropriate
fields.
A new #GstAncillaryMeta, or %NULL if an error happened.
A #GstBuffer
Attaches #GstVideoAFDMeta metadata to @buffer with the given
parameters.
the #GstVideoAFDMeta on @buffer.
a #GstBuffer
0 for progressive or field 1 and 1 for field 2
#GstVideoAFDSpec that applies to AFD value
#GstVideoAFDValue AFD enumeration
Attaches GstVideoAffineTransformationMeta metadata to @buffer with
the given parameters.
the #GstVideoAffineTransformationMeta on @buffer.
a #GstBuffer
Attaches #GstVideoBarMeta metadata to @buffer with the given
parameters.
the #GstVideoBarMeta on @buffer.
See Table 6.11 Bar Data Syntax
https://www.atsc.org/wp-content/uploads/2015/03/a_53-Part-4-2009.pdf
a #GstBuffer
0 for progressive or field 1 and 1 for field 2
if true then bar data specifies letterbox, otherwise pillarbox
If @is_letterbox is true, then the value specifies the
last line of a horizontal letterbox bar area at top of reconstructed frame.
Otherwise, it specifies the last horizontal luminance sample of a vertical pillarbox
bar area at the left side of the reconstructed frame
If @is_letterbox is true, then the value specifies the
first line of a horizontal letterbox bar area at bottom of reconstructed frame.
Otherwise, it specifies the first horizontal
luminance sample of a vertical pillarbox bar area at the right side of the reconstructed frame.
Attaches #GstVideoCaptionMeta metadata to @buffer with the given
parameters.
the #GstVideoCaptionMeta on @buffer.
a #GstBuffer
The type of Closed Caption to add
The Closed Caption data
The size of @data in bytes
Attaches a #GstVideoCodecAlphaMeta metadata to @buffer with
the given alpha buffer.
the #GstVideoCodecAlphaMeta on @buffer.
a #GstBuffer
a #GstBuffer
Attaches GstVideoGLTextureUploadMeta metadata to @buffer with the given
parameters.
the #GstVideoGLTextureUploadMeta on @buffer.
a #GstBuffer
the #GstVideoGLTextureOrientation
the number of textures
array of #GstVideoGLTextureType
the function to upload the buffer to a specific texture ID
user data for the implementor of @upload
function to copy @user_data
function to free @user_data
Attaches GstVideoMeta metadata to @buffer with the given parameters and the
default offsets and strides for @format and @width x @height.
This function calculates the default offsets and strides and then calls
gst_buffer_add_video_meta_full() with them.
the #GstVideoMeta on @buffer.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
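For example, a sketch attaching default video meta to a hypothetical
@buffer:
|[<!-- language="C" -->
/* Sketch: attach default video meta describing a 320x240 I420
 * frame; offsets and strides are computed automatically. */
GstVideoMeta *meta = gst_buffer_add_video_meta (buffer,
    GST_VIDEO_FRAME_FLAG_NONE, GST_VIDEO_FORMAT_I420, 320, 240);
]|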
Attaches GstVideoMeta metadata to @buffer with the given parameters.
the #GstVideoMeta on @buffer.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
number of planes
offset of each plane
stride of each plane
Sets an overlay composition on a buffer. The buffer will obtain its own
reference to the composition, meaning this function does not take ownership
of @comp.
a #GstVideoOverlayCompositionMeta
a #GstBuffer
a #GstVideoOverlayComposition
Attaches #GstVideoRegionOfInterestMeta metadata to @buffer with the given
parameters.
the #GstVideoRegionOfInterestMeta on @buffer.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
Attaches #GstVideoRegionOfInterestMeta metadata to @buffer with the given
parameters.
the #GstVideoRegionOfInterestMeta on @buffer.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
Attaches #GstVideoSEIUserDataUnregisteredMeta metadata to @buffer with the given
parameters.
the #GstVideoSEIUserDataUnregisteredMeta on @buffer.
a #GstBuffer
User Data Unregistered UUID
SEI User Data Unregistered buffer
size of the data buffer
Attaches #GstVideoTimeCodeMeta metadata to @buffer with the given
parameters.
the #GstVideoTimeCodeMeta on @buffer, or
(since 1.16) %NULL if the timecode was invalid.
a #GstBuffer
a #GstVideoTimeCode
Attaches #GstVideoTimeCodeMeta metadata to @buffer with the given
parameters.
the #GstVideoTimeCodeMeta on @buffer, or
(since 1.16) %NULL if the timecode was invalid.
a #GstBuffer
framerate numerator
framerate denominator
a #GDateTime for the latest daily jam
a #GstVideoTimeCodeFlags
hours since the daily jam
minutes since the daily jam
seconds since the daily jam
frames since the daily jam
fields since the daily jam
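A hedged sketch of the full variant; @buffer and the @jam #GDateTime are
hypothetical:
|[<!-- language="C" -->
/* Sketch: attach a 30000/1001 drop-frame timecode of 00:00:10;00
 * to a buffer. */
GstVideoTimeCodeMeta *meta =
    gst_buffer_add_video_time_code_meta_full (buffer, 30000, 1001,
        jam, GST_VIDEO_TIME_CODE_FLAGS_DROP_FRAME, 0, 0, 10, 0, 0);
]|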
Gets the #GstAncillaryMeta that might be present on @b.
Note: it is quite likely that more than one ancillary meta is present on
a given buffer. This function only returns the first one. See
gst_buffer_iterate_ancillary_meta() for a way to iterate over all
ancillary metas of the buffer.
A #GstBuffer
Gets the #GstVideoAFDMeta that might be present on @b.
Note: there may be two #GstVideoAFDMeta structs for interlaced video.
A #GstBuffer
Gets the #GstVideoBarMeta that might be present on @b.
A #GstBuffer
Gets the #GstVideoCaptionMeta that might be present on @b.
A #GstBuffer
Helper macro to get #GstVideoCodecAlphaMeta from an existing #GstBuffer.
A #GstBuffer pointer, must be writable.
Find the #GstVideoMeta on @buffer with the lowest @id.
Buffers can contain multiple #GstVideoMeta metadata items when dealing with
multiview buffers.
the #GstVideoMeta with lowest id (usually 0) or %NULL when there
is no such metadata on @buffer.
a #GstBuffer
Find the #GstVideoMeta on @buffer with the given @id.
Buffers can contain multiple #GstVideoMeta metadata items when dealing with
multiview buffers.
the #GstVideoMeta with @id or %NULL when there is no such metadata
on @buffer.
a #GstBuffer
a metadata id
Find the #GstVideoRegionOfInterestMeta on @buffer with the given @id.
Buffers can contain multiple #GstVideoRegionOfInterestMeta metadata items if
multiple regions of interests are marked on a frame.
the #GstVideoRegionOfInterestMeta with @id or %NULL when there is
no such metadata on @buffer.
a #GstBuffer
a metadata id
Gets the GstVideoSEIUserDataUnregisteredMeta that might be present on @b.
A #GstBuffer
Retrieves the next #GstAncillaryMeta after the current one according to
@s. If @s points to %NULL, the first #GstAncillaryMeta will be returned (if
any).
@s will be updated with an opaque state pointer.
A #GstBuffer
An opaque state pointer
Get the video alignment from the bufferpool configuration @config
in @align
%TRUE if @config could be parsed correctly.
a #GstStructure
a #GstVideoAlignment
Set the video alignment in @align to the bufferpool configuration
@config
a #GstStructure
a #GstVideoAlignment
This library contains some helper functions and includes the
videosink and videofilter base classes.
A collection of objects and methods to assist with handling Ancillary
Data present in the Vertical Blanking Interval as well as Closed Captions.
The functions gst_video_chroma_from_string() and gst_video_chroma_to_string() convert
between #GstVideoChromaSite and string descriptions.
#GstVideoChromaResample is a utility object for resampling chroma planes
and converting between different chroma sampling sitings.
Special GstBufferPool subclass for raw video buffers.
Allows configuration of video-specific requirements such as
stride alignments or pixel padding, and can also be configured
to automatically add #GstVideoMeta to the buffers.
A collection of objects and methods to assist with SEI User Data Unregistered
metadata in H.264 and H.265 streams.
Convenience function to check if the given message is a
"prepare-window-handle" message from a #GstVideoOverlay.
whether @msg is a "prepare-window-handle" message
a #GstMessage
Try to retrieve x and y coordinates of a #GstNavigation event.
A boolean indicating success.
The #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
navigation event.
Pointer to a gdouble to receive the y coordinate of the
navigation event.
Inspect a #GstEvent and return the #GstNavigationEventType of the event, or
#GST_NAVIGATION_EVENT_INVALID if the event is not a #GstNavigation event.
A #GstEvent to inspect.
Create a new navigation event for the given navigation command.
a new #GstEvent
The navigation command to use.
Create a new navigation event for the given key press.
a new #GstEvent
A string identifying the key press.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given key release.
a new #GstEvent
A string identifying the released key.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given mouse button press.
a new #GstEvent
The number of the pressed mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the given mouse button release.
a new #GstEvent
The number of the released mouse button.
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the new mouse location.
a new #GstEvent
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for the mouse scroll.
a new #GstEvent
The x coordinate of the mouse cursor.
The y coordinate of the mouse cursor.
The x component of the scroll movement.
The y component of the scroll movement.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event signalling that all currently active touch
points are cancelled and should be discarded. For example, under Wayland
this event might be sent when a swipe passes the threshold to be recognized
as a gesture by the compositor.
a new #GstEvent
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for an added touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must stay
unique to this touch point at least until an up event is sent for
the same identifier, or all touch points are cancelled.
The x coordinate of the new touch point.
The y coordinate of the new touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no
data is available.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event signalling the end of a touch frame. Touch
frames signal that all previous down, motion and up events not followed by
another touch frame event already should be considered simultaneous.
a new #GstEvent
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for a moved touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must
correlate to exactly one previous touch_start event.
The x coordinate of the touch point.
The y coordinate of the touch point.
Pressure data of the touch point, from 0.0 to 1.0, or NaN if no
data is available.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Create a new navigation event for a removed touch point.
a new #GstEvent
A number uniquely identifying this touch point. It must
correlate to exactly one previous down event, but can be reused
after sending this event.
The x coordinate of the touch point.
The y coordinate of the touch point.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Inspect a #GstNavigation command event and retrieve the enum value of the
associated command.
TRUE if the navigation command could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to GstNavigationCommand to receive the
type of the navigation event.
Note: Modifier key press (#GST_NAVIGATION_EVENT_KEY_PRESS) and
release (#GST_NAVIGATION_EVENT_KEY_RELEASE) events, as defined in
#GstNavigationModifierType, are generated even if those states are
present on all other related events.
A #GstEvent to inspect.
A pointer to a location to receive
the string identifying the key press. The returned string is owned by the
event, and valid only until the event is unreffed.
TRUE if the event is a #GstNavigation event with associated
modifiers state, otherwise FALSE.
The #GstEvent to modify.
a bit-mask representing the state of the modifier keys (e.g. Control,
Shift and Alt).
Retrieve the details of either a #GstNavigation mouse button press event or
a mouse button release event. Determine which type the event is using
gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.
TRUE if the button number and both coordinates could be extracted,
otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gint that will receive the button
number associated with the event.
Pointer to a gdouble to receive the x coordinate of the
mouse button event.
Pointer to a gdouble to receive the y coordinate of the
mouse button event.
Inspect a #GstNavigation mouse movement event and extract the coordinates
of the event.
TRUE if both coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Inspect a #GstNavigation mouse scroll event and extract the coordinates
of the event.
TRUE if all coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Pointer to a gdouble to receive the delta_x coordinate of the
mouse movement.
Pointer to a gdouble to receive the delta_y coordinate of the
mouse movement.
Retrieve the details of a #GstNavigation touch-down or touch-motion event.
Determine which type the event is using gst_navigation_event_get_type()
to retrieve the #GstNavigationEventType.
TRUE if all details could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a guint that will receive the
identifier unique to this touch point.
Pointer to a gdouble that will receive the x
coordinate of the touch event.
Pointer to a gdouble that will receive the y
coordinate of the touch event.
Pointer to a gdouble that will receive the
force of the touch event, in the range from 0.0 to 1.0. If pressure
data is not available, NaN will be set instead.
Retrieve the details of a #GstNavigation touch-up event.
TRUE if all details could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a guint that will receive the
identifier unique to this touch point.
Pointer to a gdouble that will receive the x
coordinate of the touch event.
Pointer to a gdouble that will receive the y
coordinate of the touch event.
Try to set x and y coordinates on a #GstNavigation event. The event must
be writable.
A boolean indicating success.
The #GstEvent to modify.
The x coordinate to set.
The y coordinate to set.
Check a bus message to see if it is a #GstNavigation event, and return
the #GstNavigationMessageType identifying the type of the message if so.
The type of the #GstMessage, or
#GST_NAVIGATION_MESSAGE_INVALID if the message is not a #GstNavigation
notification.
A #GstMessage to inspect.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application
that the current angle, or current number of angles available in a
multiangle video has changed.
The new #GstMessage.
A #GstObject to set as source of the new message.
The currently selected angle.
The number of viewing angles now available.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_COMMANDS_CHANGED
The new #GstMessage.
A #GstObject to set as source of the new message.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_EVENT.
The new #GstMessage.
A #GstObject to set as source of the new message.
A navigation #GstEvent
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_MOUSE_OVER.
The new #GstMessage.
A #GstObject to set as source of the new message.
%TRUE if the mouse has entered a clickable area of the display.
%FALSE if it is over a non-clickable area.
Parse a #GstNavigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED
and extract the @cur_angle and @n_angles parameters.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a #guint to receive the new
current angle number, or NULL
A pointer to a #guint to receive the new angle
count, or NULL.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_EVENT
and extract contained #GstEvent. The caller must unref the @event when done
with it.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
a pointer to a #GstEvent to receive
the contained navigation event.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_MOUSE_OVER
and extract the active/inactive flag. If the mouse over event is marked
active, it indicates that the mouse is over a clickable area.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a gboolean to receive the
active/inactive state, or NULL.
Inspect a #GstQuery and return the #GstNavigationQueryType associated with
it if it is a #GstNavigation query.
The #GstNavigationQueryType of the query, or
#GST_NAVIGATION_QUERY_INVALID
The query to inspect
Create a new #GstNavigation angles query. When executed, it will
query the pipeline for the set of currently available angles, which may be
greater than one in a multiangle video.
The new query.
Create a new #GstNavigation commands query. When executed, it will
query the pipeline for the set of currently available commands.
The new query.
Parse the current angle number in the #GstNavigation angles @query into the
#guint pointed to by the @cur_angle variable, and the number of available
angles into the #guint pointed to by the @n_angles variable.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
Pointer to a #guint into which to store the
currently selected angle value from the query, or NULL
Pointer to a #guint into which to store the
number of angles value from the query, or NULL
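As a usage illustration, a sketch issuing the angles query against a
hypothetical @element and parsing the result:
|[<!-- language="C" -->
/* Sketch: query a pipeline element for the available angles. */
GstQuery *query = gst_navigation_query_new_angles ();
guint cur_angle, n_angles;

if (gst_element_query (element, query) &&
    gst_navigation_query_parse_angles (query, &cur_angle, &n_angles))
  g_print ("angle %u of %u\n", cur_angle, n_angles);

gst_query_unref (query);
]|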
Parse the number of commands in the #GstNavigation commands @query.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the number of commands in this query.
Parse the #GstNavigation command query and retrieve the @nth command from
it into @cmd. If the list contains fewer elements than @nth, @cmd will be
set to #GST_NAVIGATION_COMMAND_INVALID.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the nth command to retrieve.
a pointer to store the nth command into.
Set the #GstNavigation angles query result field in @query.
a #GstQuery
the current viewing angle to set.
the number of viewing angles to set.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_cmds.
a #GstQuery
the number of commands to set.
An array containing @n_cmds
#GstNavigationCommand values.
Lets you blend the @src image into the @dest image
The #GstVideoFrame where to blend @src in
the #GstVideoFrame that we want to blend into
The x offset in pixel where the @src image should be blended
the y offset in pixel where the @src image should be blended
the global_alpha each per-pixel alpha value is multiplied
with
Scales a buffer containing RGBA (or AYUV) video. This is an internal
helper function which is used to scale subtitle overlays, and may be
deprecated in the near future. Use #GstVideoScaler to scale video buffers
instead.
the #GstVideoInfo describing the video data in @src_buffer
the source buffer containing video pixels to scale
the height in pixels to scale the video data in @src_buffer to
the width in pixels to scale the video data in @src_buffer to
pointer to a #GstVideoInfo structure that will be filled in
with the details for @dest_buffer
a pointer to a #GstBuffer variable, which will be
set to a newly-allocated buffer containing the scaled pixels.
Given the Pixel Aspect Ratio and size of an input video frame, and the
pixel aspect ratio of the intended display device, calculates the actual
display ratio the video will be rendered with.
A boolean indicating success and a calculated Display Ratio in the
dar_n and dar_d parameters.
The return value is FALSE in the case of integer overflow or other error.
Numerator of the calculated display_ratio
Denominator of the calculated display_ratio
Width of the video frame in pixels
Height of the video frame in pixels
Numerator of the pixel aspect ratio of the input video.
Denominator of the pixel aspect ratio of the input video.
Numerator of the pixel aspect ratio of the display device
Denominator of the pixel aspect ratio of the display device
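A worked sketch: PAL 720x576 video with a 16:15 pixel aspect ratio shown
on a square-pixel display yields a 4:3 display ratio:
|[<!-- language="C" -->
/* Sketch: 720*16 : 576*15 = 11520 : 8640, which reduces to 4:3. */
guint dar_n, dar_d;

if (gst_video_calculate_display_ratio (&dar_n, &dar_d,
        720, 576, 16, 15, 1, 1))
  g_print ("display ratio %u:%u\n", dar_n, dar_d);
]|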
Parses fixed Closed Caption #GstCaps and returns the corresponding caption
type, or %GST_VIDEO_CAPTION_TYPE_UNKNOWN.
#GstVideoCaptionType.
Fixed #GstCaps to parse
Creates new caps corresponding to @type.
new #GstCaps
#GstVideoCaptionType
Takes the @src rectangle and positions it at the center of the @dst
rectangle, with or without @scaling. It handles clipping if the @src
rectangle is bigger than the @dst one and @scaling is set to FALSE.
a pointer to #GstVideoRectangle describing the source area
a pointer to #GstVideoRectangle describing the destination area
a pointer to a #GstVideoRectangle which will receive the result area
a #gboolean indicating if scaling should be applied or not
Convert @s to a #GstVideoChromaSite
Use gst_video_chroma_site_from_string() instead.
a #GstVideoChromaSite or %GST_VIDEO_CHROMA_SITE_UNKNOWN when @s does
not contain a valid chroma description.
a chromasite string
Perform resampling of @width chroma pixels in @lines.
a #GstVideoChromaResample
pixel lines
the number of pixels on one line
Create a new resampler object for the given parameters. When @h_factor or
@v_factor is > 0, upsampling will be used, otherwise subsampling is
performed.
a new #GstVideoChromaResample that should be freed with
gst_video_chroma_resample_free() after usage.
a #GstVideoChromaMethod
a #GstVideoChromaSite
#GstVideoChromaFlags
the #GstVideoFormat
horizontal resampling factor
vertical resampling factor
Convert @s to a #GstVideoChromaSite
a #GstVideoChromaSite or %GST_VIDEO_CHROMA_SITE_UNKNOWN when @s does
not contain a valid chroma-site description.
a chromasite string
Converts @site to its string representation.
a string representation of @site
or %NULL if @site contains undefined value or
is equal to %GST_VIDEO_CHROMA_SITE_UNKNOWN
a #GstVideoChromaSite
Converts @site to its string representation.
Use gst_video_chroma_site_to_string() instead.
a string describing @site.
a #GstVideoChromaSite
#GType for the #GstVideoCodecAlphaMeta structure.
#GstMetaInfo pointer that describes #GstVideoCodecAlphaMeta.
Converts the @value to the #GstVideoColorMatrix
The matrix coefficients (MatrixCoefficients) value is
defined by "ISO/IEC 23001-8 Section 7.3 Table 4"
and "ITU-T H.273 Table 4".
"H.264 Table E-5" and "H.265 Table E.5" share the identical values.
the matched #GstVideoColorMatrix
an ITU-T H.273 matrix coefficients value
Get the coefficients used to convert between Y'PbPr and R'G'B' using @matrix.
When:
|[
0.0 <= [Y',R',G',B'] <= 1.0
-0.5 <= [Pb,Pr] <= 0.5
]|
the general conversion is given by:
|[
Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))
]|
and the other way around:
|[
R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb)
]|
TRUE if @matrix was a YUV color format and @Kr and @Kb contain valid
values.
a #GstVideoColorMatrix
result red channel coefficient
result blue channel coefficient
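For instance, a sketch retrieving the BT.709 luma coefficients used in
the formulas above:
|[<!-- language="C" -->
/* Sketch: look up the BT.709 coefficients
 * (Kr = 0.2126, Kb = 0.0722). */
gdouble Kr, Kb;

if (gst_video_color_matrix_get_Kr_Kb (GST_VIDEO_COLOR_MATRIX_BT709,
        &Kr, &Kb))
  g_print ("Kr=%f Kb=%f\n", Kr, Kb);
]|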
Converts #GstVideoColorMatrix to the "matrix coefficients"
(MatrixCoefficients) value defined by "ISO/IEC 23001-8 Section 7.3 Table 4"
and "ITU-T H.273 Table 4".
"H.264 Table E-5" and "H.265 Table E.5" share the identical values.
The value of ISO/IEC 23001-8 matrix coefficients.
a #GstVideoColorMatrix
Converts the @value to the #GstVideoColorPrimaries
The colour primaries (ColourPrimaries) value is
defined by "ISO/IEC 23001-8 Section 7.1 Table 2" and "ITU-T H.273 Table 2".
"H.264 Table E-3" and "H.265 Table E.3" share the identical values.
the matched #GstVideoColorPrimaries
an ITU-T H.273 colour primaries value
Get information about the chromaticity coordinates of @primaries.
a #GstVideoColorPrimariesInfo for @primaries.
a #GstVideoColorPrimaries
Checks whether @primaries and @other are functionally equivalent
TRUE if @primaries and @other can be considered equivalent.
a #GstVideoColorPrimaries
another #GstVideoColorPrimaries
Converts #GstVideoColorPrimaries to the "colour primaries" (ColourPrimaries)
value defined by "ISO/IEC 23001-8 Section 7.1 Table 2"
and "ITU-T H.273 Table 2".
"H.264 Table E-3" and "H.265 Table E.3" share the identical values.
The value of ISO/IEC 23001-8 colour primaries.
a #GstVideoColorPrimaries
Compute the offset and scale values for each component of @info. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
@info and @range.
a #GstVideoColorRange
a #GstVideoFormatInfo
output offsets
output scale
Use gst_video_transfer_function_decode() instead.
a #GstVideoTransferFunction
a value
Use gst_video_transfer_function_encode() instead.
a #GstVideoTransferFunction
a value
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
The converted #GstSample, or %NULL if an error happened (in which case @err
will point to the #GError).
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
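A hedged sketch converting a hypothetical @sample to a PNG image with a
one-second timeout:
|[<!-- language="C" -->
/* Sketch: convert a video sample to a PNG image. */
GError *err = NULL;
GstCaps *to_caps = gst_caps_new_empty_simple ("image/png");
GstSample *png =
    gst_video_convert_sample (sample, to_caps, GST_SECOND, &err);

gst_caps_unref (to_caps);
if (png == NULL)
  g_printerr ("conversion failed: %s\n", err->message);
]|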
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video formats or any image formats (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
@callback will be called after conversion, when an error occurred or if conversion didn't
finish after @timeout. @callback will always be called from the thread default
%GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used,
this will always be the global default main context.
@destroy_notify will be called after the callback was called and @user_data is not needed
anymore.
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
%GstVideoConvertSampleCallback that will be called after conversion.
extra data that will be passed to the @callback
%GDestroyNotify to be called after @user_data is not needed anymore
Create a new converter object to convert between @in_info and @out_info
with @config.
a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
Create a new converter object to convert between @in_info and @out_info
with @config.
The optional @pool can be used to spawn threads, this is useful when
creating new converters rapidly, for example when updating cropping.
a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
a #GstTaskPool to spawn threads from
Make a new dither object for dithering lines of @format using the
algorithm described by @method.
Each component will be quantized to a multiple of @quantizer. Better
performance is achieved when @quantizer is a power of 2.
@width is the width of the lines that this ditherer will handle.
a new #GstVideoDither
a #GstVideoDitherMethod
a #GstVideoDitherFlags
a #GstVideoFormat
quantizer
the width of the lines
Converts the video format into a dma drm fourcc. If no
matching fourcc is found, DRM_FORMAT_INVALID is returned.
the DRM_FORMAT_* corresponding to the @format.
a #GstVideoFormat
Converts the @format_str string into the drm fourcc value. The @modifier
is also parsed if requested. Note that @format_str should follow the
fourcc:modifier style, such as NV12:0x0100000000000002.
The drm fourcc value or DRM_FORMAT_INVALID if @format_str is
invalid.
a drm format string
Location to return the modifier parsed from @format_str,
or %NULL to ignore it.
Converts a dma drm fourcc into the video format. If no matching
video format is found, GST_VIDEO_FORMAT_UNKNOWN is returned.
the GST_VIDEO_FORMAT_* corresponding to the @fourcc.
the dma drm value.
Returns a string containing the drm kind format, such as
NV12:0x0100000000000002, or %NULL otherwise.
the drm kind string composed
of @fourcc and @modifier.
a drm fourcc value.
the associated modifier value.
Checks if an event is a force key unit event. Returns true for both upstream
and downstream force key unit events.
%TRUE if the event is a valid force key unit event
A #GstEvent to check
Creates a new downstream force key unit event. A downstream force key unit
event can be sent down the pipeline to request downstream elements to produce
a key unit. A downstream force key unit event must also be sent when handling
an upstream force key unit event to notify downstream that the latter has been
handled.
To parse an event created by gst_video_event_new_downstream_force_key_unit() use
gst_video_event_parse_downstream_force_key_unit().
The new GstEvent
the timestamp of the buffer that starts a new key unit
the stream_time of the buffer that starts a new key unit
the running_time of the buffer that starts a new key unit
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
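A sketch of creating and pushing such an event; @timestamp,
@stream_time, @running_time, @count and @srcpad are hypothetical values
taken from the element's current state:
|[<!-- language="C" -->
/* Sketch: request a key unit downstream, including stream headers. */
GstEvent *event = gst_video_event_new_downstream_force_key_unit (
    timestamp, stream_time, running_time, TRUE, count);

gst_pad_push_event (srcpad, event);
]|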
Creates a new Still Frame event. If @in_still is %TRUE, then the event
represents the start of a still frame sequence. If it is %FALSE, then
the event ends a still frame sequence.
To parse an event created by gst_video_event_new_still_frame() use
gst_video_event_parse_still_frame().
The new GstEvent
boolean value for the still-frame state of the event.
Creates a new upstream force key unit event. An upstream force key unit event
can be sent to request upstream elements to produce a key unit.
@running_time can be set to request a new key unit at a specific
running_time. If set to GST_CLOCK_TIME_NONE, upstream elements will produce a
new key unit as soon as possible.
To parse an event created by gst_video_event_new_upstream_force_key_unit() use
gst_video_event_parse_upstream_force_key_unit().
The new GstEvent
the running_time at which a new key unit should be produced
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
Get timestamp, stream-time, running-time, all-headers and count in the force
key unit event. See gst_video_event_new_downstream_force_key_unit() for a
full description of the downstream force key unit event.
@running_time will be adjusted for any pad offsets of pads it was passing through.
%TRUE if the event is a valid downstream force key unit event.
A #GstEvent to parse
A pointer to the timestamp in the event
A pointer to the stream-time in the event
A pointer to the running-time in the event
A pointer to the all_headers flag in the event
A pointer to the count field of the event
Parse a #GstEvent, identify if it is a Still Frame event, and
return the still-frame state from the event if it is.
If the event represents the start of a still frame, the @in_still
variable will be set to TRUE, otherwise FALSE. It is OK to pass NULL for
the @in_still variable in order to just check whether the event is a
valid still-frame event.
Create a still frame event using gst_video_event_new_still_frame()
%TRUE if the event is a valid still-frame event. %FALSE if not
A #GstEvent to parse
A boolean to receive the still-frame status from the event, or NULL
Get running-time, all-headers and count in the force key unit event. See
gst_video_event_new_upstream_force_key_unit() for a full description of the
upstream force key unit event.
Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()
@running_time will be adjusted for any pad offsets of pads it was passing through.
%TRUE if the event is a valid upstream force-key-unit event. %FALSE if not
A #GstEvent to parse
A pointer to the running_time in the event
A pointer to the all_headers flag in the event
A pointer to the count field in the event
Convert @order to a #GstVideoFieldOrder
the #GstVideoFieldOrder of @order or
#GST_VIDEO_FIELD_ORDER_UNKNOWN when @order is not a valid
string representation for a #GstVideoFieldOrder.
a field order
Convert @order to its string representation.
@order as a string.
a #GstVideoFieldOrder
Converts a FOURCC value into the corresponding #GstVideoFormat.
If the FOURCC cannot be represented by #GstVideoFormat,
#GST_VIDEO_FORMAT_UNKNOWN is returned.
the #GstVideoFormat describing the FOURCC value
a FOURCC value representing raw YUV video
Find the #GstVideoFormat for the given parameters.
a #GstVideoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do
not specify a known format.
the amount of bits used for a pixel
the amount of bits used to store a pixel. This value is bigger than
@depth
the endianness of the masks, #G_LITTLE_ENDIAN or #G_BIG_ENDIAN
the red mask
the green mask
the blue mask
the alpha mask, or 0 if no alpha mask
Convert the @format string to its #GstVideoFormat.
the #GstVideoFormat for @format or GST_VIDEO_FORMAT_UNKNOWN when the
string is not a known format.
a format string
Get the #GstVideoFormatInfo for @format
The #GstVideoFormatInfo for @format.
a #GstVideoFormat
Get the default palette of @format. This is the palette used in the
pack function for paletted formats.
the default palette of @format or %NULL when
@format does not have a palette.
a #GstVideoFormat
size of the palette in bytes
Converts a #GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If @format has
no corresponding FOURCC value, 0 is returned.
the FOURCC corresponding to @format
a #GstVideoFormat video format
Returns a string containing a descriptive name for
the #GstVideoFormat if there is one, or NULL otherwise.
the name corresponding to @format
a #GstVideoFormat video format
Return all the raw video formats supported by GStreamer, including
special opaque formats such as %GST_VIDEO_FORMAT_DMA_DRM for which
no software conversion exists. This should be used for passthrough
template caps.
an array of #GstVideoFormat
the number of elements in the returned array
Return all the raw video formats supported by GStreamer.
an array of #GstVideoFormat
the number of elements in the returned array
Use @info and @buffer to fill in the values of @frame. @frame is usually
allocated on the stack, and you will pass the address to the #GstVideoFrame
structure allocated on the stack; gst_video_frame_map() will then fill in
the structures with the various video-specific information you need to access
the pixels of the video buffer. You can then use accessor macros such as
GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(),
GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc.
to get to the pixels.
|[<!-- language="C" -->
GstVideoFrame vframe;
...
// set RGB pixels to black one at a time
if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
  guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
  guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
  guint width = GST_VIDEO_FRAME_WIDTH (&vframe);
  guint height = GST_VIDEO_FRAME_HEIGHT (&vframe);
  guint h, w;

  for (h = 0; h < height; ++h) {
    for (w = 0; w < width; ++w) {
      guint8 *pixel = pixels + h * stride + w * pixel_stride;
      memset (pixel, 0, pixel_stride);
    }
  }
  gst_video_frame_unmap (&vframe);
}
...
]|
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
The purpose of this function is to make it easy for you to get to the video
pixels in a generic way, without you having to worry too much about details
such as whether the video data is allocated in one contiguous memory chunk
or multiple memory chunks (e.g. one for each plane); or if custom strides
and custom plane offsets are used or not (as signalled by GstVideoMeta on
each buffer). This function will just fill the #GstVideoFrame structure
with the right values and if you use the accessor macros everything will
just work and you can access the data easily. It also maps the underlying
memory chunks for you.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
#GstMapFlags
Use @info and @buffer to fill in the values of @frame with the video frame
information of frame @id.
When @id is -1, the default frame is mapped. When @id != -1, this function
will return %FALSE when there is no GstVideoMeta with that id.
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
the frame id to map
#GstMapFlags
Given the nominal duration of one video frame,
this function will check some standard framerates for
a close match (within 0.1%) and return one if possible.
It will calculate an arbitrary framerate if no close
match was found, and return %FALSE.
It also returns %FALSE if a duration of 0 is passed.
%TRUE if a close "standard" framerate was
recognised, and %FALSE otherwise.
Nominal duration of one frame
Numerator of the calculated framerate
Denominator of the calculated framerate
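A worked example: a frame duration of 33366666 ns is within 0.1% of
1001/30000 s, so the standard NTSC framerate is recognised:
|[<!-- language="C" -->
/* Sketch: recover 30000/1001 from a nominal frame duration. */
gint fps_n, fps_d;

if (gst_video_guess_framerate (33366666, &fps_n, &fps_d))
  g_print ("%d/%d\n", fps_n, fps_d);
]|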
Parse @caps and update @info. Note that @caps should be dma drm caps;
gst_video_is_dma_drm_caps() can be used to verify this before calling
this function.
TRUE if @caps could be parsed
#GstVideoInfoDmaDrm
a #GstCaps
Fills @drm_info if @info's format has a valid drm format and @modifier is also
valid
%TRUE if @drm_info is filled correctly.
#GstVideoInfoDmaDrm
a #GstVideoInfo
the associated modifier value.
Initialize @drm_info with default values.
a #GstVideoInfoDmaDrm
Parse @caps and update @info.
TRUE if @caps could be parsed
#GstVideoInfo
a #GstCaps
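For example, a sketch filling a #GstVideoInfo from hypothetical fixed
raw video @caps:
|[<!-- language="C" -->
/* Sketch: parse caps into a stack-allocated GstVideoInfo. */
GstVideoInfo info;

if (gst_video_info_from_caps (&info, caps))
  g_print ("%dx%d\n", GST_VIDEO_INFO_WIDTH (&info),
      GST_VIDEO_INFO_HEIGHT (&info));
]|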
Initialize @info with default values.
a #GstVideoInfo
Convert @mode to a #GstVideoInterlaceMode
the #GstVideoInterlaceMode of @mode or
#GST_VIDEO_INTERLACE_MODE_PROGRESSIVE when @mode is not a valid
string representation for a #GstVideoInterlaceMode.
a mode
Convert @mode to its string representation.
@mode as a string.
a #GstVideoInterlaceMode
Given a frame's dimensions and pixel aspect ratio, this function will
calculate the frame's aspect ratio and compare it against a set of
common well-known "standard" aspect ratios.
%TRUE if a known "standard" aspect ratio was
recognised, and %FALSE otherwise.
Width of the video frame
Height of the video frame
Pixel aspect ratio numerator
Pixel aspect ratio denominator
Check whether @caps is a dma drm kind of caps. Note that
the caps should be fixed.
%TRUE if the caps is a dma drm caps.
a #GstCaps
Return a generic raw video caps for formats defined in @formats.
If @formats is %NULL returns a caps for all the supported raw video formats,
see gst_video_formats_raw().
a video @GstCaps
an array of raw #GstVideoFormat, or %NULL
the size of @formats
Return a generic raw video caps for formats defined in @formats with features
@features.
If @formats is %NULL returns a caps for all the supported video formats,
see gst_video_formats_raw().
a video @GstCaps
an array of raw #GstVideoFormat, or %NULL
the size of @formats
the #GstCapsFeatures to set on the caps
Extract #GstVideoMasteringDisplayInfo from @mastering
%TRUE if @minfo was filled with @mastering
a #GstVideoMasteringDisplayInfo
a #GstStructure representing #GstVideoMasteringDisplayInfo
Get the #GQuark for the "gst-video-scale" metadata transform operation.
a #GQuark
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed stereo
video modes with double the height of a single view for use in
caps negotiations. Currently this is top-bottom and row-interleaved.
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed
stereo video modes that have double the width/height of a single
view for use in caps negotiation. Currently this is just
'checkerboard' layout.
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed stereo
video modes with double the width of a single view for use in
caps negotiations. Currently this is side-by-side, side-by-side-quincunx
and column-interleaved.
A const #GValue containing a list of mono video modes
Utility function that returns a #GValue with a GstList of mono video
modes (mono/left/right) for use in caps negotiations.
A const #GValue containing a list of 'unpacked' stereo video modes
Utility function that returns a #GValue with a GstList of unpacked
stereo video modes (separated/frame-by-frame/frame-by-frame-multiview)
for use in caps negotiations.
A boolean indicating whether the
#GST_VIDEO_MULTIVIEW_FLAGS_HALF_ASPECT flag should be set.
Utility function that heuristically guesses whether a
frame-packed stereoscopic video contains half width/height
encoded views, or full-frame views, by looking at the
overall display aspect ratio.
A #GstVideoMultiviewMode
Video frame width in pixels
Video frame height in pixels
Numerator of the video pixel-aspect-ratio
Denominator of the video pixel-aspect-ratio
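A sketch of the typical use: a 1920x1080 side-by-side stream with square
pixels would otherwise display each view at 32:9, so half-aspect is the
likely answer:
|[
GstVideoMultiviewFlags flags = GST_VIDEO_MULTIVIEW_FLAGS_NONE;

if (gst_video_multiview_guess_half_aspect (
        GST_VIDEO_MULTIVIEW_MODE_SIDE_BY_SIDE, 1920, 1080, 1, 1))
  flags |= GST_VIDEO_MULTIVIEW_FLAGS_HALF_ASPECT;
]|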
The #GstVideoMultiviewMode value
Given a string from a caps multiview-mode field,
returns the corresponding #GstVideoMultiviewMode,
or #GST_VIDEO_MULTIVIEW_MODE_NONE if the string is not recognised.
multiview-mode field string from caps
Given a #GstVideoMultiviewMode returns the multiview-mode caps string
for insertion into a caps structure
The caps string representation of the mode, or %NULL if invalid.
A #GstVideoMultiviewMode value
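A round trip through both helpers:
|[
GstVideoMultiviewMode mode;

mode = gst_video_multiview_mode_from_caps_string ("side-by-side");
if (mode != GST_VIDEO_MULTIVIEW_MODE_NONE)
  g_print ("%s\n", gst_video_multiview_mode_to_caps_string (mode));
]|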
Utility function that transforms the width/height/PAR
and multiview mode and flags of a #GstVideoInfo into
the requested mode.
A #GstVideoInfo structure to operate on
A #GstVideoMultiviewMode value
A set of #GstVideoMultiviewFlags
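For instance, assuming @info currently describes a frame-packed stream, it
can be rewritten to separated left/right views:
|[
/* info is assumed to describe e.g. a side-by-side stream */
gst_video_multiview_video_info_change_mode (&info,
    GST_VIDEO_MULTIVIEW_MODE_SEPARATED, GST_VIDEO_MULTIVIEW_FLAGS_NONE);
]|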
Parses the "image-orientation" tag and transforms it into the
#GstVideoOrientationMethod enum.
%TRUE if there was a valid "image-orientation" tag in the taglist.
A #GstTagList
The location where to return the orientation.
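A sketch, assuming @taglist came from a tag event on the stream:
|[
GstVideoOrientationMethod method;

if (gst_video_orientation_from_tag (taglist, &method) &&
    method == GST_VIDEO_ORIENTATION_90R)
  g_print ("source wants a 90 degree clockwise rotation\n");
]|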
This helper should be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. It installs the "render-rectangle" property into the
class.
The class on which the properties will be installed
The first free property ID to use
This helper should be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. It parses and sets the render rectangle by calling
gst_video_overlay_set_render_rectangle().
%TRUE if @property_id matches a #GstVideoOverlay property
The instance on which the property is set
The highest property ID.
The property ID
The #GValue to be set
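The two helpers are designed to be used together. A sketch for a
hypothetical MySink element (the type and function names are assumptions):
|[
enum { PROP_0, PROP_LAST };

static void my_sink_set_property (GObject * object, guint prop_id,
    const GValue * value, GParamSpec * pspec);

static void
my_sink_class_init (MySinkClass * klass)
{
  GObjectClass *gobject_class = G_OBJECT_CLASS (klass);

  gobject_class->set_property = my_sink_set_property;
  /* installs "render-rectangle" starting at PROP_LAST */
  gst_video_overlay_install_properties (gobject_class, PROP_LAST);
}

static void
my_sink_set_property (GObject * object, guint prop_id,
    const GValue * value, GParamSpec * pspec)
{
  if (!gst_video_overlay_set_property (object, PROP_LAST, prop_id, value))
    G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
}
]|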
Make a new @method video scaler. @in_size source lines/pixels will
be scaled to @out_size destination lines/pixels.
@n_taps specifies the number of pixels to use from the source for one output
pixel. If @n_taps is 0, this function chooses a good value automatically based
on the @method and @in_size/@out_size.
a #GstVideoScaler
a #GstVideoResamplerMethod
#GstVideoScalerFlags
number of taps to use
number of source elements
number of destination elements
extra options
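A sketch of setting up a horizontal downscale; Lanczos and the sizes are
arbitrary choices:
|[
GstVideoScaler *scaler;

/* n_taps = 0 lets the scaler pick a tap count for the method */
scaler = gst_video_scaler_new (GST_VIDEO_RESAMPLER_METHOD_LANCZOS,
    GST_VIDEO_SCALER_FLAG_NONE, 0, 1920, 1280, NULL);

/* ... call gst_video_scaler_horizontal() per line ... */

gst_video_scaler_free (scaler);
]|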
#GType for the #GstVideoSEIUserDataUnregisteredMeta structure.
#GstMetaInfo pointer that describes #GstVideoSEIUserDataUnregisteredMeta.
Parses and returns the Precision Time Stamp (ST 0604) from the SEI User Data
Unregistered buffer.
%TRUE if the data is a Precision Time Stamp and it was parsed correctly
a #GstVideoSEIUserDataUnregisteredMeta
User Data Unregistered UUID
The parsed Precision Time Stamp SEI
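A sketch of pulling the timestamp off a decoded frame, assuming @buffer
carries the meta:
|[
GstVideoSEIUserDataUnregisteredMeta *meta =
    gst_buffer_get_video_sei_user_data_unregistered_meta (buffer);
guint8 status;
guint64 timestamp;

if (meta != NULL &&
    gst_video_sei_user_data_unregistered_parse_precision_time_stamp (
        meta, &status, &timestamp))
  g_print ("precision time stamp: %" G_GUINT64_FORMAT "\n", timestamp);
]|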
Get the tile index of the tile at coordinates @x and @y in the tiled
image of @x_tiles by @y_tiles.
Use this method when @mode is of type %GST_VIDEO_TILE_TYPE_INDEXED.
the index of the tile at @x and @y in the tiled image of
@x_tiles by @y_tiles.
a #GstVideoTileMode
x coordinate
y coordinate
number of horizontal tiles
number of vertical tiles
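For example, with the linearly indexed mode the tile index is simply
y * x_tiles + x, which can then be scaled by the tile size to get a byte
offset (the tile geometry below is an assumption):
|[
/* tile (3, 2) in a 10x8 grid: linear mode yields 2 * 10 + 3 = 23 */
guint idx = gst_video_tile_get_index (GST_VIDEO_TILE_MODE_LINEAR, 3, 2, 10, 8);

/* with e.g. 64x32-byte tiles, the tile starts at this offset */
gsize offset = idx * 64 * 32;
]|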
Convert @val to its gamma decoded value. This is the inverse operation of
gst_video_color_transfer_encode().
For a non-linear value L' in the range [0..1], conversion to the linear
L is in general performed with a power function like:
|[
L = L' ^ gamma
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma decoded value of @val
a #GstVideoTransferFunction
a value
Convert @val to its gamma encoded value.
For a linear value L in the range [0..1], conversion to the non-linear
(gamma encoded) L' is in general performed with a power function like:
|[
L' = L ^ (1 / gamma)
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma encoded value of @val
a #GstVideoTransferFunction
a value
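A round trip through both functions; BT.709 is just one choice of @func:
|[
gdouble encoded, linear;

encoded = gst_video_transfer_function_encode (GST_VIDEO_TRANSFER_BT709, 0.18);
linear = gst_video_transfer_function_decode (GST_VIDEO_TRANSFER_BT709, encoded);
/* linear is ~0.18 again, up to rounding */
]|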
Converts @value to the corresponding #GstVideoTransferFunction
The transfer characteristics (TransferCharacteristics) value is
defined by "ISO/IEC 23001-8 Section 7.2 Table 3"
and "ITU-T H.273 Table 3".
"H.264 Table E-4" and "H.265 Table E.4" share the identical values.
the matched #GstVideoTransferFunction
an ITU-T H.273 transfer characteristics value
Returns whether @from_func and @to_func are equivalent. There are cases
(e.g. BT601, BT709, and BT2020_10) where several functions are functionally
identical. In these cases, when doing conversion, we should consider them
as equivalent. Also, BT2020_12 is the same as the aforementioned three for
less than 12 bits per pixel.
%TRUE if @from_func and @to_func can be considered equivalent.
#GstVideoTransferFunction to convert from
bits per pixel to convert from
#GstVideoTransferFunction to convert into
bits per pixel to convert into
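For example, BT601 and BT709 use the same curve, so a converter can skip the
gamma stage between them:
|[
if (gst_video_transfer_function_is_equivalent (GST_VIDEO_TRANSFER_BT601, 8,
        GST_VIDEO_TRANSFER_BT709, 8)) {
  /* no transfer-function conversion needed */
}
]|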
Converts #GstVideoTransferFunction to the "transfer characteristics"
(TransferCharacteristics) value defined by "ISO/IEC 23001-8 Section 7.2 Table 3"
and "ITU-T H.273 Table 3".
"H.264 Table E-4" and "H.265 Table E.4" share the identical values.
The value of ISO/IEC 23001-8 transfer characteristics.
a #GstVideoTransferFunction
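Together with gst_video_transfer_function_from_iso(), this gives a lossless
mapping for the code points both specs define; H.273 value 1 (BT.709) is
used here as an example:
|[
GstVideoTransferFunction func;

func = gst_video_transfer_function_from_iso (1);   /* 1 == BT.709 */
g_assert (gst_video_transfer_function_to_iso (func) == 1);
]|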
This object is used to convert video frames from one format to another.
The object can perform conversion of:
* video format
* video colorspace
* chroma-siting
* video size