A bufferpool option to enable extra padding. When a bufferpool supports this
option, gst_buffer_pool_config_set_video_alignment() can be called.
When this option is enabled on the bufferpool,
#GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.
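The stride adjustment implied by the extra padding can be illustrated in plain C. The helper below is a standalone sketch, not the library code, assuming stride_align entries are bitmasks of the form 2^n - 1 as used with #GstVideoAlignment:

```c
/* Illustrative sketch (not GStreamer code): round a plane stride up to
 * satisfy a stride_align entry from #GstVideoAlignment, where the entry
 * is a bitmask of the form 2^n - 1. */
static unsigned int
align_stride (unsigned int stride, unsigned int align_mask)
{
  /* Add the mask, then clear the low bits it covers. */
  return (stride + align_mask) & ~align_mask;
}
```

For example, a 100-byte stride with a stride_align entry of 15 (16-byte alignment) becomes 112 bytes; an already-aligned stride is unchanged.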
An option that can be activated on a bufferpool to request gl texture upload
meta on buffers from the pool.
When this option is enabled on the bufferpool,
@GST_BUFFER_POOL_OPTION_VIDEO_META should also be enabled.
An option that can be activated on a bufferpool to request video metadata
on buffers from the pool.
This interface is implemented by elements which can perform some color
balance operation on video frames they process. For example, modifying
the brightness, contrast, hue or saturation.
Example elements are 'xvimagesink' and 'colorbalance'.
Get the #GstColorBalanceType of this implementation.
The #GstColorBalanceType.
The #GstColorBalance implementation
Retrieve the current value of the indicated channel, between min_value
and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
Retrieve a list of the available channels.
A
GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
Sets the current value of the channel to the passed value, which must
be between min_value and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
A helper function called by implementations of the GstColorBalance
interface. It fires the #GstColorBalance::value-changed signal on the
instance, and the #GstColorBalanceChannel::value-changed signal on the
channel object.
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
Get the #GstColorBalanceType of this implementation.
The #GstColorBalanceType.
The #GstColorBalance implementation
Retrieve the current value of the indicated channel, between min_value
and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
Retrieve a list of the available channels.
A
GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
Sets the current value of the channel to the passed value, which must
be between min_value and max_value.
See Also: The #GstColorBalanceChannel.min_value and
#GstColorBalanceChannel.max_value members of the
#GstColorBalanceChannel object.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
A helper function called by implementations of the GstColorBalance
interface. It fires the #GstColorBalance::value-changed signal on the
instance, and the #GstColorBalanceChannel::value-changed signal on the
channel object.
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
Fired when the value of the indicated channel has changed.
The #GstColorBalanceChannel
The new value
The #GstColorBalanceChannel object represents a parameter
for modifying the color balance implemented by an element providing the
#GstColorBalance interface. For example, Hue or Saturation.
A string containing a descriptive name for this channel
The minimum valid value for this channel.
The maximum valid value for this channel.
Fired when the value of the indicated channel has changed.
The new value
Color-balance channel class.
the parent class
Color-balance interface.
the parent interface
A
GList containing pointers to #GstColorBalanceChannel
objects. The list is owned by the #GstColorBalance
instance and must not be freed.
A #GstColorBalance instance
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The new value for the channel.
The current value of the channel.
A #GstColorBalance instance
A #GstColorBalanceChannel instance
The #GstColorBalanceType.
The #GstColorBalance implementation
A #GstColorBalance instance
A #GstColorBalanceChannel whose value has changed
The new value of the channel
An enumeration indicating whether an element implements color balancing
operations in software or in dedicated hardware. In general, dedicated
hardware implementations (such as those provided by xvimagesink) are
preferred.
Color balance is implemented with dedicated
hardware.
Color balance is implemented via software
processing.
This metadata stays relevant as long as video colorspace is unchanged.
This metadata stays relevant as long as video orientation is unchanged.
This metadata stays relevant as long as video size is unchanged.
This metadata is relevant for video streams.
The Navigation interface is used for creating and injecting navigation related
events such as mouse button presses, cursor motion and key presses. The associated
library also provides methods for parsing received events, and for sending and
receiving navigation related bus events. One main use case is DVD menu navigation.
The main parts of the API are:
* The GstNavigation interface, implemented by elements which provide an application
with the ability to create and inject navigation events into the pipeline.
* GstNavigation event handling API. GstNavigation events are created in response to
calls on a GstNavigation interface implementation, and sent in the pipeline. Upstream
elements can use the navigation event API functions to parse the contents of received
messages.
* GstNavigation message handling API. GstNavigation messages may be sent on the message
bus to inform applications of navigation related changes in the pipeline, such as the
mouse moving over a clickable region, or the set of available angles changing.
The GstNavigation message functions provide functions for creating and parsing
custom bus messages for signaling GstNavigation changes.
Inspect a #GstEvent and return the #GstNavigationEventType of the event, or
#GST_NAVIGATION_EVENT_INVALID if the event is not a #GstNavigation event.
A #GstEvent to inspect.
Inspect a #GstNavigation command event and retrieve the enum value of the
associated command.
TRUE if the navigation command could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to GstNavigationCommand to receive the
type of the navigation event.
A #GstEvent to inspect.
A pointer to a location to receive
the string identifying the key press. The returned string is owned by the
event, and valid only until the event is unreffed.
Retrieve the details of either a #GstNavigation mouse button press event or
a mouse button release event. Determine which type the event is using
gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.
TRUE if the button number and both coordinates could be extracted,
otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gint that will receive the button
number associated with the event.
Pointer to a gdouble to receive the x coordinate of the
mouse button event.
Pointer to a gdouble to receive the y coordinate of the
mouse button event.
Inspect a #GstNavigation mouse movement event and extract the coordinates
of the event.
TRUE if both coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Check a bus message to see if it is a #GstNavigation event, and return
the #GstNavigationMessageType identifying the type of the message if so.
The type of the #GstMessage, or
#GST_NAVIGATION_MESSAGE_INVALID if the message is not a #GstNavigation
notification.
A #GstMessage to inspect.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application
that the current angle, or current number of angles available in a
multiangle video has changed.
The new #GstMessage.
A #GstObject to set as source of the new message.
The currently selected angle.
The number of viewing angles now available.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_COMMANDS_CHANGED
The new #GstMessage.
A #GstObject to set as source of the new message.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_EVENT.
The new #GstMessage.
A #GstObject to set as source of the new message.
A navigation #GstEvent
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_MOUSE_OVER.
The new #GstMessage.
A #GstObject to set as source of the new message.
%TRUE if the mouse has entered a clickable area of the display.
%FALSE if it is over a non-clickable area.
Parse a #GstNavigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED
and extract the @cur_angle and @n_angles parameters.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a #guint to receive the new
current angle number, or NULL
A pointer to a #guint to receive the new angle
count, or NULL.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_EVENT
and extract contained #GstEvent. The caller must unref the @event when done
with it.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
a pointer to a #GstEvent to receive
the contained navigation event.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_MOUSE_OVER
and extract the active/inactive flag. If the mouse over event is marked
active, it indicates that the mouse is over a clickable area.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a gboolean to receive the
active/inactive state, or NULL.
Inspect a #GstQuery and return the #GstNavigationQueryType associated with
it if it is a #GstNavigation query.
The #GstNavigationQueryType of the query, or
#GST_NAVIGATION_QUERY_INVALID
The query to inspect
Create a new #GstNavigation angles query. When executed, it will
query the pipeline for the set of currently available angles, which may be
greater than one in a multiangle video.
The new query.
Create a new #GstNavigation commands query. When executed, it will
query the pipeline for the set of currently available commands.
The new query.
Parse the current angle number in the #GstNavigation angles @query into the
#guint pointed to by the @cur_angle variable, and the number of available
angles into the #guint pointed to by the @n_angles variable.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
Pointer to a #guint into which to store the
currently selected angle value from the query, or NULL
Pointer to a #guint into which to store the
number of angles value from the query, or NULL
Parse the number of commands in the #GstNavigation commands @query.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the number of commands in this query.
Parse the #GstNavigation command query and retrieve the @nth command from
it into @cmd. If the list contains fewer elements than @nth, @cmd will be
set to #GST_NAVIGATION_COMMAND_INVALID.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the nth command to retrieve.
a pointer to store the nth command into.
Set the #GstNavigation angles query result field in @query.
a #GstQuery
the current viewing angle to set.
the number of viewing angles to set.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_cmds.
a #GstQuery
the number of commands to set.
A list of @GstNavigationCommand values, @n_cmds entries long.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_cmds.
a #GstQuery
the number of commands to set.
An array containing @n_cmds
@GstNavigationCommand values.
Sends the indicated command to the navigation interface.
The navigation interface instance
The command to issue
The navigation interface instance
The type of the key event. Recognised values are "key-press" and
"key-release"
Character representation of the key. This is typically as produced
by XKeysymToString.
Sends a mouse event to the navigation interface. Mouse event coordinates
are sent relative to the display space of the related output area. This is
usually the size in pixels of the window associated with the element
implementing the #GstNavigation interface.
The navigation interface instance
The type of mouse event, as a text string. Recognised values are
"mouse-button-press", "mouse-button-release" and "mouse-move".
The button number of the button being pressed or released. Pass 0
for mouse-move events.
The x coordinate of the mouse event.
The y coordinate of the mouse event.
A set of commands that may be issued to an element providing the
#GstNavigation interface. The available commands can be queried via
the gst_navigation_query_new_commands() query.
For convenience in handling DVD navigation, the MENU commands are aliased as:
GST_NAVIGATION_COMMAND_DVD_MENU = @GST_NAVIGATION_COMMAND_MENU1
GST_NAVIGATION_COMMAND_DVD_TITLE_MENU = @GST_NAVIGATION_COMMAND_MENU2
GST_NAVIGATION_COMMAND_DVD_ROOT_MENU = @GST_NAVIGATION_COMMAND_MENU3
GST_NAVIGATION_COMMAND_DVD_SUBPICTURE_MENU = @GST_NAVIGATION_COMMAND_MENU4
GST_NAVIGATION_COMMAND_DVD_AUDIO_MENU = @GST_NAVIGATION_COMMAND_MENU5
GST_NAVIGATION_COMMAND_DVD_ANGLE_MENU = @GST_NAVIGATION_COMMAND_MENU6
GST_NAVIGATION_COMMAND_DVD_CHAPTER_MENU = @GST_NAVIGATION_COMMAND_MENU7
An invalid command entry
Execute navigation menu command 1. For DVD,
this enters the DVD root menu, or exits back to the title from the menu.
Execute navigation menu command 2. For DVD,
this jumps to the DVD title menu.
Execute navigation menu command 3. For DVD,
this jumps into the DVD root menu.
Execute navigation menu command 4. For DVD,
this jumps to the Subpicture menu.
Execute navigation menu command 5. For DVD,
this jumps to the audio menu.
Execute navigation menu command 6. For DVD,
this jumps to the angles menu.
Execute navigation menu command 7. For DVD,
this jumps to the chapter menu.
Select the next button to the left in a menu,
if such a button exists.
Select the next button to the right in a menu,
if such a button exists.
Select the button above the current one in a
menu, if such a button exists.
Select the button below the current one in a
menu, if such a button exists.
Activate (click) the currently selected
button in a menu, if such a button exists.
Switch to the previous angle in a
multiangle feature.
Switch to the next angle in a multiangle
feature.
Enum values for the various events that an element implementing the
GstNavigation interface might send up the pipeline.
Returned from
gst_navigation_event_get_type() when the passed event is not a navigation event.
A key press event. Use
gst_navigation_event_parse_key_event() to extract the details from the event.
A key release event. Use
gst_navigation_event_parse_key_event() to extract the details from the event.
A mouse button press event. Use
gst_navigation_event_parse_mouse_button_event() to extract the details from the
event.
A mouse button release event. Use
gst_navigation_event_parse_mouse_button_event() to extract the details from the
event.
A mouse movement event. Use
gst_navigation_event_parse_mouse_move_event() to extract the details from the
event.
A navigation command event. Use
gst_navigation_event_parse_command() to extract the details from the event.
Navigation interface.
the parent interface
A set of notifications that may be received on the bus when navigation
related status changes.
Returned from
gst_navigation_message_get_type() when the passed message is not a
navigation message.
Sent when the mouse moves over or leaves a
clickable region of the output, such as a DVD menu button.
Sent when the set of available commands
changes and should be re-queried by interested applications.
Sent when display angles in a multi-angle
feature (such as a multiangle DVD) change - either angles have appeared or
disappeared.
Sent when a navigation event was not handled
by any element in the pipeline (Since 1.6)
Types of navigation interface queries.
invalid query
command query
viewing angle query
#GST_TYPE_VIDEO_ALPHA_MODE, the alpha mode to use.
Default is #GST_VIDEO_ALPHA_MODE_COPY.
#G_TYPE_DOUBLE, the alpha color value to use.
Default to 1.0
#G_TYPE_UINT, the border color to use if #GST_VIDEO_CONVERTER_OPT_FILL_BORDER
is set to %TRUE. The color is in ARGB format.
Default 0xff000000
#GST_TYPE_VIDEO_CHROMA_MODE, set the chroma resample mode for subsampled
formats. Default is #GST_VIDEO_CHROMA_MODE_FULL.
#GST_TYPE_RESAMPLER_METHOD, The resampler method to use for
chroma resampling. Other options for the resampler can be used, see
the #GstResampler. Default is #GST_RESAMPLER_METHOD_LINEAR
#G_TYPE_INT, height in the destination frame, default destination height
#G_TYPE_INT, width in the destination frame, default destination width
#G_TYPE_INT, x position in the destination frame, default 0
#G_TYPE_INT, y position in the destination frame, default 0
#GST_TYPE_VIDEO_DITHER_METHOD, The dither method to use when
changing bit depth.
Default is #GST_VIDEO_DITHER_BAYER.
#G_TYPE_UINT, The quantization amount to dither to. Components will be
quantized to multiples of this value.
Default is 1
#G_TYPE_BOOLEAN, if the destination rectangle does not fill the complete
destination image, render a border with
#GST_VIDEO_CONVERTER_OPT_BORDER_ARGB. Otherwise the unused pixels in the
destination are untouched. Default %TRUE.
#GST_TYPE_VIDEO_GAMMA_MODE, set the gamma mode.
Default is #GST_VIDEO_GAMMA_MODE_NONE.
#GST_TYPE_VIDEO_MATRIX_MODE, set the color matrix conversion mode for
converting between Y'PbPr and non-linear RGB (R'G'B').
Default is #GST_VIDEO_MATRIX_MODE_FULL.
#GST_TYPE_VIDEO_PRIMARIES_MODE, set the primaries conversion mode.
Default is #GST_VIDEO_PRIMARIES_MODE_NONE.
#GST_TYPE_RESAMPLER_METHOD, The resampler method to use for
resampling. Other options for the resampler can be used, see
the #GstResampler. Default is #GST_RESAMPLER_METHOD_CUBIC
#G_TYPE_UINT, The number of taps for the resampler.
Default is 0: let the resampler choose a good value.
#G_TYPE_INT, source height to convert, default source height
#G_TYPE_INT, source width to convert, default source width
#G_TYPE_INT, source x position to start conversion, default 0
#G_TYPE_INT, source y position to start conversion, default 0
#G_TYPE_UINT, maximum number of threads to use. Default 1, 0 for the number
of cores.
Default maximum number of errors tolerated before signaling error.
The name of the templates for the sink pad.
The name of the templates for the source pad.
The name of the templates for the sink pad.
The name of the templates for the source pad.
Video formats supported by gst_video_overlay_composition_blend(), for
use in overlay elements' pad template caps.
G_TYPE_DOUBLE, B parameter of the cubic filter. The B
parameter controls the blurriness. Values between 0.0 and
2.0 are accepted. 1/3 is the default.
Below are the B and C values of some popular filters:

                   B       C
  Hermite          0.0     0.0
  Spline           1.0     0.0
  Catmull-Rom      0.0     1/2
  Mitchell         1/3     1/3
  Robidoux         0.3782  0.3109
  Robidoux Sharp   0.2620  0.3690
  Robidoux Soft    0.6796  0.1602
G_TYPE_DOUBLE, C parameter of the cubic filter. The C
parameter controls the Keys alpha value. Values between 0.0 and
2.0 are accepted. 1/3 is the default.
See #GST_VIDEO_RESAMPLER_OPT_CUBIC_B for some more common values
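The B and C pairs above parameterize the well-known Mitchell-Netravali cubic family. As a standalone illustration of how the two options interact (a sketch, not the GStreamer resampler code), the kernel can be written as:

```c
#include <math.h>

/* Mitchell-Netravali cubic reconstruction kernel, parameterized by the
 * same B and C values that GST_VIDEO_RESAMPLER_OPT_CUBIC_B and
 * GST_VIDEO_RESAMPLER_OPT_CUBIC_C select. Illustrative sketch only. */
static double
cubic_kernel (double x, double B, double C)
{
  double ax = fabs (x);

  if (ax < 1.0)
    return ((12.0 - 9.0 * B - 6.0 * C) * ax * ax * ax
        + (-18.0 + 12.0 * B + 6.0 * C) * ax * ax
        + (6.0 - 2.0 * B)) / 6.0;
  if (ax < 2.0)
    return ((-B - 6.0 * C) * ax * ax * ax
        + (6.0 * B + 30.0 * C) * ax * ax
        + (-12.0 * B - 48.0 * C) * ax
        + (8.0 * B + 24.0 * C)) / 6.0;
  return 0.0;                   /* support is (-2, 2) */
}
```

With B = 1, C = 0 this gives the B-spline kernel (value 2/3 at x = 0); with B = 0, C = 1/2 it gives Catmull-Rom (value 1 at x = 0).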
G_TYPE_DOUBLE, specifies the size of the filter envelope for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
1.0 and 5.0. 2.0 is the default.
G_TYPE_INT, limits the maximum number of taps to use.
16 is the default.
G_TYPE_DOUBLE, specifies sharpening of the filter for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
0.0 and 1.0. 0.0 is the default.
G_TYPE_DOUBLE, specifies sharpness of the filter for
@GST_VIDEO_RESAMPLER_METHOD_LANCZOS. Values are clamped between
0.5 and 1.5. 1.0 is the default.
#GST_TYPE_VIDEO_DITHER_METHOD, The dither method to use for propagating
quantization errors.
Extra buffer metadata for performing an affine transformation using a 4x4
matrix. The transformation matrix can be composed with
gst_video_affine_transformation_meta_apply_matrix().
The vertices operated on are all in the range 0 to 1, not in
Normalized Device Coordinates (-1 to +1). Points in this space are
assumed to have an origin at (0.5, 0.5, 0.5) in a left-handed coordinate
system with the x-axis moving horizontally (positive values to the right),
the y-axis moving vertically (positive values up the screen) and the z-axis
perpendicular to the screen (positive values into the screen).
parent #GstMeta
the column-major 4x4 transformation matrix
Apply a transformation using the given 4x4 transformation matrix.
Performs the multiplication meta->matrix * matrix.
a #GstVideoAffineTransformationMeta
a 4x4 transformation matrix to be applied
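As a standalone illustration of the column-major multiplication involved (a sketch, not the GStreamer implementation):

```c
/* Illustrative sketch (not GStreamer code): multiply two column-major
 * 4x4 matrices, res = a * b, the composition described above. */
static void
mat4_multiply (const float a[16], const float b[16], float res[16])
{
  int i, j, k;

  for (i = 0; i < 4; i++) {       /* column of the result */
    for (j = 0; j < 4; j++) {     /* row of the result */
      float s = 0.0f;
      for (k = 0; k < 4; k++)
        s += a[k * 4 + j] * b[i * 4 + k];   /* column-major indexing */
      res[i * 4 + j] = s;
    }
  }
}
```

Composing with the identity matrix leaves the other matrix unchanged, which is a quick sanity check for the indexing.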
Extra alignment parameters for the memory of video buffers. This
structure is usually used to configure the bufferpool if it supports the
#GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT.
extra pixels on the top
extra pixels on the bottom
extra pixels on the left side
extra pixels on the right side
array with extra alignment requirements for the strides
Set @align to its default values with no padding and no alignment.
a #GstVideoAlignment
Different alpha modes.
When input and output have alpha, it will be copied.
When the input has no alpha, alpha will be set to
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
set all alpha to
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
multiply all alpha with
#GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE.
When the input format has no alpha but the output format has, the
alpha value will be set to #GST_VIDEO_CONVERTER_OPT_ALPHA_VALUE
Additional video buffer flags. These flags can potentially be used on any
buffers carrying video data - even encoded data.
Note that these are only valid for #GstCaps of type: video/...
They can conflict with other extended buffer flags.
If the #GstBuffer is interlaced. In mixed
interlace-mode, this flag specifies whether the frame is
interlaced or progressive.
If the #GstBuffer is interlaced, then the first field
in the video frame is the top field. If unset, the
bottom field is first.
If the #GstBuffer is interlaced, then the first field
(as defined by the %GST_VIDEO_BUFFER_TFF flag setting)
is repeated.
If the #GstBuffer is interlaced, then only the
first field (as defined by the %GST_VIDEO_BUFFER_TFF
flag setting) is to be displayed.
The #GstBuffer contains one or more specific views,
such as left or right eye view. This flag is set on
any buffer that contains non-mono content - even for
streams that contain only a single viewpoint. In mixed
mono / non-mono streams, the absence of the flag marks
mono buffers.
When conveying stereo/multiview content with
frame-by-frame methods, this flag marks the first buffer
in a bundle of frames that belong together.
Offset to define more flags
Create a new bufferpool that can allocate video frames. This bufferpool
supports all the video bufferpool options.
a new #GstBufferPool to allocate video frames
Extra flags that influence the result from gst_video_chroma_resample_new().
no flags
the input is interlaced
Different subsampling and upsampling methods
Duplicates the chroma samples when
upsampling and drops when subsampling
Uses linear interpolation to reconstruct
missing chroma and averaging to subsample
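As a standalone illustration of the difference between the two methods (a sketch, not the library implementation), subsampling one chroma row by two can either keep every second sample or average neighbouring pairs:

```c
/* Illustrative sketch (not GStreamer code): 2x horizontal chroma
 * subsampling. "Nearest" keeps every second sample; "linear" takes the
 * rounded average of each pair, as the two methods above describe. */
static void
chroma_downsample_nearest (const unsigned char *in, unsigned char *out,
    int n_out)
{
  int i;
  for (i = 0; i < n_out; i++)
    out[i] = in[2 * i];                     /* drop every second sample */
}

static void
chroma_downsample_linear (const unsigned char *in, unsigned char *out,
    int n_out)
{
  int i;
  for (i = 0; i < n_out; i++)
    out[i] = (unsigned char) ((in[2 * i] + in[2 * i + 1] + 1) / 2);
}
```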
Different chroma downsampling and upsampling modes
do full chroma up and down sampling
only perform chroma upsampling
only perform chroma downsampling
disable chroma resampling
Perform resampling of @width chroma pixels in @lines.
a #GstVideoChromaResample
pixel lines
the number of pixels on one line
Free @resample
a #GstVideoChromaResample
The resampler must be fed @n_lines at a time. The first line should be
at @offset.
a #GstVideoChromaResample
the number of input lines
the first line
Create a new resampler object for the given parameters. When @h_factor or
@v_factor is > 0, upsampling will be used, otherwise subsampling is
performed.
a new #GstVideoChromaResample that should be freed with
gst_video_chroma_resample_free() after usage.
a #GstVideoChromaMethod
a #GstVideoChromaSite
#GstVideoChromaFlags
the #GstVideoFormat
horizontal resampling factor
vertical resampling factor
Various Chroma sitings.
unknown cositing
no cositing
chroma is horizontally cosited
chroma is vertically cosited
chroma samples are sited on alternate lines
chroma samples cosited with luma samples
jpeg style cositing, also for mpeg1 and mjpeg
mpeg2 style cositing
DV style cositing
A #GstVideoCodecFrame represents a video frame both in raw and
encoded form.
Unique identifier for the frame. Use this if you need
to get hold of the frame later (like when data is being decoded).
Typical usage in decoders is to set this on the opaque value provided
to the library and get back the frame using gst_video_decoder_get_frame()
Decoding timestamp
Presentation timestamp
Duration of the frame
Distance in frames from the last synchronization point.
the input #GstBuffer that created this frame. The buffer is owned
by the frame and references to the frame instead of the buffer should
be kept.
the output #GstBuffer. Implementations should set this either
directly, or by using the
@gst_video_decoder_allocate_output_frame() or
@gst_video_decoder_allocate_output_buffer() methods. The buffer is
owned by the frame and references to the frame instead of the
buffer should be kept.
Running time when the frame will be used.
Gets private data set on the frame by the subclass via
gst_video_codec_frame_set_user_data() previously.
The previously set user_data
a #GstVideoCodecFrame
Increases the refcount of the given frame by one.
@buf
a #GstVideoCodecFrame
Sets @user_data on the frame and the #GDestroyNotify that will be called when
the frame is freed. Allows the subclass to attach private data to frames.
If @user_data was previously set, the previously set @notify will be called
before the @user_data is replaced.
a #GstVideoCodecFrame
private data
a #GDestroyNotify
Decreases the refcount of the frame. If the refcount reaches 0, the frame
will be freed.
a #GstVideoCodecFrame
Flags for #GstVideoCodecFrame
is the frame only meant to be decoded
is the frame a synchronization point (keyframe)
should the output frame be made a keyframe
should the encoder output stream headers
Structure representing the state of an incoming or outgoing video
stream for encoders and decoders.
Decoders and encoders will receive such a state through their
respective @set_format vmethods.
Decoders and encoders can set the downstream state, by using the
@gst_video_decoder_set_output_state() or
@gst_video_encoder_set_output_state() methods.
The #GstVideoInfo describing the stream
The #GstCaps used in the caps negotiation of the pad.
a #GstBuffer corresponding to the
'codec_data' field of a stream, or NULL.
The #GstCaps for allocation query and pool
negotiation. Since: 1.10
Increases the refcount of the given state by one.
@buf
a #GstVideoCodecState
Decreases the refcount of the state. If the refcount reaches 0, the state
will be freed.
a #GstVideoCodecState
The color matrix is used to convert between Y'PbPr and
non-linear RGB (R'G'B')
unknown matrix
identity matrix
FCC color matrix
ITU-R BT.709 color matrix
ITU-R BT.601 color matrix
SMPTE 240M color matrix
ITU-R BT.2020 color matrix. Since: 1.6
Get the coefficients used to convert between Y'PbPr and R'G'B' using @matrix.
When:
|[
(0.0 <= [Y',R',G',B'] <= 1.0)
(-0.5 <= [Pb,Pr] <= 0.5)
]|
the general conversion is given by:
|[
Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))
]|
and the other way around:
|[
R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb)
]|
TRUE if @matrix was a YUV color format and @Kr and @Kb contain valid
values.
a #GstVideoColorMatrix
result red channel coefficient
result blue channel coefficient
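As a standalone sketch of the forward conversion above (not the library code), using the BT.709 coefficients Kr = 0.2126 and Kb = 0.0722:

```c
/* Illustrative sketch (not GStreamer code): the forward conversion
 * above, with Kr and Kb supplied by the caller. For BT.709,
 * Kr = 0.2126 and Kb = 0.0722. */
static void
rgb_to_ypbpr (double Kr, double Kb, double R, double G, double B,
    double *Y, double *Pb, double *Pr)
{
  *Y = Kr * R + (1.0 - Kr - Kb) * G + Kb * B;
  *Pb = (B - *Y) / (2.0 * (1.0 - Kb));
  *Pr = (R - *Y) / (2.0 * (1.0 - Kr));
}
```

White (R' = G' = B' = 1) maps to Y' = 1, Pb = Pr = 0, and pure blue maps to Pb = 0.5, the upper end of the Pb range stated above.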
The color primaries define how to transform linear RGB values to and from
the CIE XYZ colorspace.
unknown color primaries
BT709 primaries
BT470M primaries
BT470BG primaries
SMPTE170M primaries
SMPTE240M primaries
Generic film
BT2020 primaries. Since: 1.6
Adobe RGB primaries. Since: 1.8
Get information about the chromaticity coordinates of @primaries.
a #GstVideoColorPrimariesInfo for @primaries.
a #GstVideoColorPrimaries
Structure describing the chromaticity coordinates of an RGB system. These
values can be used to construct a matrix to transform RGB to and from the
XYZ colorspace.
a #GstVideoColorPrimaries
reference white x coordinate
reference white y coordinate
red x coordinate
red y coordinate
green x coordinate
green y coordinate
blue x coordinate
blue y coordinate
Possible color range values. These constants are defined for 8 bit color
values and can be scaled for other bit depths.
unknown range
[0..255] for 8 bit components
[16..235] for 8 bit components. Chroma has
[16..240] range.
Compute the offset and scale values for each component of @info. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
@info and @range.
a #GstVideoColorRange
a #GstVideoFormatInfo
output offsets
output scale
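For illustration, with 8-bit luma in the [16..235] range the pair is offset = 16 and scale = 219 (that is, 235 - 16). Applying the formulas above in a standalone sketch (not the library code):

```c
/* Illustrative sketch (not GStreamer code): map a component value to
 * [0.0 .. 1.0] with an offset/scale pair as described above, and back. */
static double
normalize_component (int c, int offset, int scale)
{
  return (double) (c - offset) / (double) scale;
}

static int
denormalize_component (double v, int offset, int scale)
{
  return (int) (v * (double) scale + 0.5) + offset;   /* round to nearest */
}
```

With offset = 16 and scale = 219, the legal 8-bit luma values 16 and 235 map to 0.0 and 1.0 respectively, and the reverse operation recovers them.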
Structure describing the color info.
the color range. This is the valid range for the samples.
It is used to convert the samples to Y'PbPr values.
the color matrix. Used to convert between Y'PbPr and
non-linear RGB (R'G'B')
the transfer function. used to convert between R'G'B' and RGB
color primaries. used to convert between R'G'B' and CIE XYZ
Parse the colorimetry string and update @cinfo with the parsed
values.
%TRUE if @color points to valid colorimetry info.
a #GstVideoColorimetry
a colorimetry string
Compare the 2 colorimetry sets for equality
%TRUE if @cinfo and @other are equal.
a #GstVideoColorimetry
another #GstVideoColorimetry
Check if the colorimetry information in @info matches that of the
string @color.
%TRUE if @color conveys the same colorimetry info as the color
information in @info.
a #GstVideoInfo
a colorimetry string
Make a string representation of @cinfo.
a string representation of @cinfo.
a #GstVideoColorimetry
Convert the pixels of @src into @dest using @convert.
a #GstVideoConverter
a #GstVideoFrame
a #GstVideoFrame
Free @convert
a #GstVideoConverter
Get the current configuration of @convert.
a #GstStructure that remains valid for as long as @convert is valid
or until gst_video_converter_set_config() is called.
a #GstVideoConverter
Set @config as extra configuration for @convert.
If the parameters in @config can not be set exactly, this function returns
%FALSE and will try to update as much state as possible. The new state can
then be retrieved and refined with gst_video_converter_get_config().
Look at the #GST_VIDEO_CONVERTER_OPT_* fields to check valid configuration
options and values.
%TRUE when @config could be set.
a #GstVideoConverter
a #GstStructure
Create a new converter object to convert between @in_info and @out_info
with @config.
a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
Extra buffer metadata describing image cropping.
parent #GstMeta
the horizontal offset
the vertical offset
the cropped width
the cropped height
This base class is for video decoders turning encoded data into raw video
frames.
The GstVideoDecoder base class and derived subclasses should cooperate as
follows:
## Configuration
* Initially, GstVideoDecoder calls @start when the decoder element
is activated, which allows the subclass to perform any global setup.
* GstVideoDecoder calls @set_format to inform the subclass of caps
describing input video data that it is about to receive, including
possibly configuration data.
While unlikely, it might be called more than once, if changing input
parameters require reconfiguration.
* Incoming data buffers are processed as needed, described in Data
Processing below.
* GstVideoDecoder calls @stop at end of all processing.
## Data processing
* The base class gathers input data, and optionally allows subclass
to parse this into subsequently manageable chunks, typically
corresponding to and referred to as 'frames'.
* Each input frame is provided in turn to the subclass' @handle_frame
callback.
The ownership of the frame is given to the @handle_frame callback.
* If codec processing results in decoded data, the subclass should call
@gst_video_decoder_finish_frame to have decoded data pushed
downstream. Otherwise, the subclass must call
@gst_video_decoder_drop_frame, to allow the base class to do timestamp
and offset tracking, and possibly to requeue the frame for a later
attempt in the case of reverse playback.
## Shutdown phase
* The GstVideoDecoder class calls @stop to inform the subclass that data
parsing will be stopped.
## Additional Notes
* Seeking/Flushing
* When the pipeline is seeked or otherwise flushed, the subclass is
informed via a call to its @reset callback, with the hard parameter
set to true. This indicates the subclass should drop any internal data
queues and timestamps and prepare for a fresh set of buffers to arrive
for parsing and decoding.
* End Of Stream
* At end-of-stream, the subclass @parse function may be called some final
times with the at_eos parameter set to true, indicating that the element
should not expect any more data to be arriving, and it should parse any
remaining frames and call gst_video_decoder_have_frame() if possible.
The subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It also
needs to provide information about the output caps, when they are known.
This may be when the base class calls the subclass' @set_format function,
though it might be during decoding, before calling
@gst_video_decoder_finish_frame. This is done via
@gst_video_decoder_set_output_state
The subclass is also responsible for providing (presentation) timestamps
(likely based on corresponding input ones). If that is not applicable
or possible, the base class provides limited framerate based interpolation.
Similarly, the base class provides some limited (legacy) seeking support
if specifically requested by the subclass, as full-fledged support
should rather be left to upstream demuxer, parser or alike. This simple
approach caters for seeking and duration reporting using estimated input
bitrates. To enable it, a subclass should call
@gst_video_decoder_set_estimate_rate to enable handling of incoming
byte-streams.
The base class provides some support for reverse playback, in particular
in case incoming data is not packetized or upstream does not provide
fragments on keyframe boundaries. However, the subclass should then be
prepared for the parsing and frame processing stage to occur separately
(in normal forward processing, the latter immediately follows the former).
The subclass also needs to ensure the parsing stage properly marks
keyframes, unless it knows the upstream elements will do so properly for
incoming data.
The bare minimum that a functional subclass needs to implement is:
* Provide pad templates
* Inform the base class of output caps via
@gst_video_decoder_set_output_state
* Parse input data, if it is not considered packetized from upstream
Data will be provided to @parse which should invoke
@gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
separate the data belonging to each video frame.
* Accept data in @handle_frame and provide decoded results to
@gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
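As a rough illustration of the last point, a minimal @handle_frame for a packetized decoder might look like the following sketch; `my_codec_decode()` is a placeholder for subclass-specific decoding code, and the output state is assumed to have been configured already:
|[<!-- language="C" -->
static GstFlowReturn
my_decoder_handle_frame (GstVideoDecoder *decoder, GstVideoCodecFrame *frame)
{
  GstFlowReturn ret;

  /* the output state is assumed to have been configured in @set_format
   * via gst_video_decoder_set_output_state() */
  ret = gst_video_decoder_allocate_output_frame (decoder, frame);
  if (ret != GST_FLOW_OK)
    return ret;

  /* hypothetical codec call: decode input_buffer into output_buffer */
  if (my_codec_decode (frame->input_buffer, frame->output_buffer))
    return gst_video_decoder_finish_frame (decoder, frame);

  /* decoding failed: let the base class keep timestamp/offset tracking */
  return gst_video_decoder_drop_frame (decoder, frame);
}
]|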
Negotiate with downstream elements on the currently configured #GstVideoCodecState.
GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked again
if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
Removes next @n_bytes of input data and adds it to currently parsed frame.
a #GstVideoDecoder
the number of bytes to add
Helper function that allocates a buffer to hold a video frame for @decoder's
current #GstVideoCodecState.
You should use gst_video_decoder_allocate_output_frame() instead of this
function, if possible at all.
allocated buffer, or NULL if no buffer could be
allocated (e.g. when downstream is flushing or shutting down)
a #GstVideoDecoder
Helper function that allocates a buffer to hold a video frame for @decoder's
current #GstVideoCodecState. Subclass should already have configured video
state and set src pad caps.
The buffer allocated here is owned by the frame and you should only
keep references to the frame, not the buffer.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoDecoder
a #GstVideoCodecFrame
Same as #gst_video_decoder_allocate_output_frame except it allows passing
#GstBufferPoolAcquireParams to the underlying gst_buffer_pool_acquire_buffer() call.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoDecoder
a #GstVideoCodecFrame
a #GstBufferPoolAcquireParams
Similar to gst_video_decoder_finish_frame(), but drops @frame in any
case and posts a QoS message with the frame's details on the bus.
In any case, the frame is considered finished and released.
a #GstFlowReturn, usually GST_FLOW_OK.
a #GstVideoDecoder
the #GstVideoCodecFrame to drop
@frame should have a valid decoded data buffer, whose metadata fields
are then appropriately set according to frame data and pushed downstream.
If no output data is provided, @frame is considered skipped.
In any case, the frame is considered finished and released.
After calling this function the output buffer of the frame is to be
considered read-only. This function will also change the metadata
of the buffer.
a #GstFlowReturn resulting from sending data downstream
a #GstVideoDecoder
a decoded #GstVideoCodecFrame
Lets #GstVideoDecoder sub-classes know the memory @allocator
used by the base class and its @params.
Unref the @allocator after use.
a #GstVideoDecoder
the #GstAllocator
used
the
#GstAllocatorParams of @allocator
the instance of the #GstBufferPool used
by the decoder; free it after use
a #GstVideoDecoder
currently configured byte to time conversion setting
a #GstVideoDecoder
Get a pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame identified by @frame_number.
a #GstVideoDecoder
system_frame_number of a frame
Get all pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame.
a #GstVideoDecoder
Query the configured decoder latency. Results will be returned via
@min_latency and @max_latency.
a #GstVideoDecoder
address of variable in which to store the
configured minimum latency, or %NULL
address of variable in which to store the
configured maximum latency, or %NULL
Determines maximum possible decoding time for @frame that will
allow it to decode and arrive in time (as determined by QoS events).
In particular, a negative result means decoding in time is no longer possible
and should therefore occur as soon as possible, possibly skipping work.
max decoding time.
a #GstVideoDecoder
a #GstVideoCodecFrame
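A hedged usage sketch inside @handle_frame, assuming the element wants simple QoS-driven frame dropping:
|[<!-- language="C" -->
/* Sketch: skip decoding when QoS says the frame is already too late. */
GstClockTimeDiff deadline =
    gst_video_decoder_get_max_decode_time (decoder, frame);
if (deadline < 0) {
  /* negative: the frame can no longer be decoded in time */
  return gst_video_decoder_drop_frame (decoder, frame);
}
]|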
currently configured decoder tolerated error count.
a #GstVideoDecoder
Queries decoder required format handling.
%TRUE if required format handling is enabled.
a #GstVideoDecoder
Get the oldest pending unfinished #GstVideoCodecFrame
oldest pending unfinished #GstVideoCodecFrame.
a #GstVideoDecoder
Get the #GstVideoCodecState currently describing the output stream.
#GstVideoCodecState describing format of video data.
a #GstVideoDecoder
Queries whether input data is considered packetized or not by the
base class.
TRUE if input data is considered packetized.
a #GstVideoDecoder
Returns the number of bytes previously added to the current frame
by calling gst_video_decoder_add_to_frame().
The number of bytes pending for the current frame
a #GstVideoDecoder
The current QoS proportion.
a #GstVideoDecoder
current QoS proportion, or %NULL
Gathers all data collected for the currently parsed frame together with
corresponding metadata and passes it along for further processing, i.e. @handle_frame.
a #GstFlowReturn
a #GstVideoDecoder
Sets the video decoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with gst_video_decoder_merge_tags().
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
MT safe.
a #GstVideoDecoder
a #GstTagList to merge, or NULL to unset
previously-set tags
the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
Negotiate with downstream elements on the currently configured #GstVideoCodecState.
GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked again
if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
Returns caps that express @caps (or sink template caps if @caps == NULL)
restricted to resolution/format/... combinations supported by downstream
elements.
a #GstCaps owned by caller
a #GstVideoDecoder
initial caps
filter caps
Similar to gst_video_decoder_drop_frame(), but simply releases @frame
without any processing other than removing it from list of pending frames,
after which it is considered finished and released.
a #GstVideoDecoder
the #GstVideoCodecFrame to release
Allows the baseclass to perform estimated byte-to-time conversion.
a #GstVideoDecoder
whether to enable byte to time conversion
Lets #GstVideoDecoder sub-classes tell the baseclass what the decoder
latency is. Will also post a LATENCY message on the bus so the pipeline
can reconfigure its global latency.
a #GstVideoDecoder
minimum latency
maximum latency
Sets the number of tolerated decoder errors, where a tolerated error is only
warned about, while exceeding the count leads to a fatal error. You can set
-1 for never returning fatal errors. Default is
GST_VIDEO_DECODER_MAX_ERRORS.
The '-1' option was added in 1.4
a #GstVideoDecoder
max tolerated errors
Configures decoder format handling. If enabled, the subclass needs to be
negotiated with format caps before it can process any data. It will then
never be handed any data before it has been configured.
Otherwise, it might be handed data without having been configured and
is then expected to be able to cope, either by default
or based on the input data.
a #GstVideoDecoder
new state
Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
as the output state for the decoder.
Any previously set output state on @decoder will be replaced by the newly
created one.
If the subclass wishes to copy over existing fields (like pixel aspect ratio,
or framerate) from an existing #GstVideoCodecState, it can be provided as a
@reference.
If the subclass wishes to override some fields from the output state (like
pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
The new output state will only take effect (set on pads and buffers) starting
from the next call to #gst_video_decoder_finish_frame().
the newly configured output state.
a #GstVideoDecoder
a #GstVideoFormat
The width in pixels
The height in pixels
An optional reference #GstVideoCodecState
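For example, a subclass that has just parsed the stream geometry might set its output state along these lines; the NV12 format, the 1920x1080 size, the framerate override and the `input_state` reference are illustrative assumptions:
|[<!-- language="C" -->
/* Sketch: configure the decoder's output state once the stream
 * geometry is known; @input_state carries over fields such as
 * pixel-aspect-ratio and framerate. */
GstVideoCodecState *out_state =
    gst_video_decoder_set_output_state (decoder,
        GST_VIDEO_FORMAT_NV12, 1920, 1080, input_state);

/* optional: override fields before the state takes effect */
out_state->info.fps_n = 30;
out_state->info.fps_d = 1;
gst_video_codec_state_unref (out_state);
]|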
Allows baseclass to consider input data as packetized or not. If the
input is packetized, then the @parse method will not be called.
a #GstVideoDecoder
whether the input data should be considered as packetized.
Lets #GstVideoDecoder sub-classes decide if they want the sink pad
to use the default pad query handler to reply to accept-caps queries.
By setting this to true it is possible to further customize the default
handler with %GST_PAD_SET_ACCEPT_INTERSECT and
%GST_PAD_SET_ACCEPT_TEMPLATE
a #GstVideoDecoder
if the default pad accept-caps query handling should be used
Subclasses can override any of the available virtual methods or not, as
needed. At minimum @handle_frame needs to be overridden, and @set_format
is likely needed as well. If non-packetized input is supported or expected,
@parse needs to be overridden as well.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoDecoder
The interface allows unified access to control flipping and rotation
operations of video-sources or operators.
#GstVideoDirectionInterface interface.
parent interface type.
GstVideoDither provides implementations of several dithering algorithms
that can be applied to lines of video pixels to quantize and dither them.
Free @dither
a #GstVideoDither
Dither @width pixels starting from offset @x in @line using @dither.
@y is the line number of @line in the output image.
a #GstVideoDither
pointer to the pixels of the line
x coordinate
y coordinate
the width
Make a new dither object for dithering lines of @format using the
algorithm described by @method.
Each component will be quantized to a multiple of @quantizer. Better
performance is achieved when @quantizer is a power of 2.
@width is the width of the lines that this ditherer will handle.
a new #GstVideoDither
a #GstVideoDitherMethod
a #GstVideoDitherFlags
a #GstVideoFormat
quantizer
the width of the lines
Extra flags that influence the result from gst_video_dither_new().
no flags
the input is interlaced
quantize values in addition to adding dither.
Different dithering methods to use.
no dithering
propagate rounding errors downwards
Dither with floyd-steinberg error diffusion
Dither with Sierra Lite error diffusion
ordered dither using a bayer pattern
This base class is for video encoders turning raw video into
encoded video data.
GstVideoEncoder and subclass should cooperate as follows.
## Configuration
* Initially, GstVideoEncoder calls @start when the encoder element
is activated, which allows subclass to perform any global setup.
* GstVideoEncoder calls @set_format to inform subclass of the format
of input video data that it is about to receive. Subclass should
setup for encoding and configure base class as appropriate
(e.g. latency). While unlikely, it might be called more than once,
if changing input parameters require reconfiguration. Baseclass
will ensure that processing of current configuration is finished.
* GstVideoEncoder calls @stop at end of all processing.
## Data processing
* Base class collects input data and metadata into a frame and hands
this to subclass' @handle_frame.
* If codec processing results in encoded data, subclass should call
@gst_video_encoder_finish_frame to have encoded data pushed
downstream.
* If implemented, baseclass calls subclass @pre_push just prior to
pushing to allow subclasses to modify some metadata on the buffer.
If it returns GST_FLOW_OK, the buffer is pushed downstream.
* GstVideoEncoderClass will handle both srcpad and sinkpad events.
Sink events will be passed to subclass if @event callback has been
provided.
## Shutdown phase
* GstVideoEncoder class calls @stop to inform the subclass that data
parsing will be stopped.
Subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It should
also be able to provide fixed src pad caps in @getcaps by the time it calls
@gst_video_encoder_finish_frame.
Things that subclass need to take care of:
* Provide pad templates
* Provide source pad caps before pushing the first buffer
* Accept data in @handle_frame and provide encoded results to
@gst_video_encoder_finish_frame.
The #GstVideoEncoder:qos property will enable the Quality-of-Service
features of the encoder which gather statistics about the real-time
performance of the downstream elements. If enabled, subclasses can
use gst_video_encoder_get_max_encode_time() to check if input frames
are already late and drop them right away to give a chance to the
pipeline to catch up.
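With the qos property enabled, a subclass could drop late frames along these lines. This is a sketch of one possible policy, not a prescribed one; real encoders may prefer lowering quality over dropping:
|[<!-- language="C" -->
/* Sketch: in @handle_frame, give up on frames that QoS reports
 * as already too late to be useful downstream. */
if (gst_video_encoder_is_qos_enabled (encoder)) {
  GstClockTimeDiff deadline =
      gst_video_encoder_get_max_encode_time (encoder, frame);
  if (deadline < 0) {
    GST_DEBUG_OBJECT (encoder, "frame is late, dropping");
    /* no output buffer set on the frame: it is dropped */
    return gst_video_encoder_finish_frame (encoder, frame);
  }
}
]|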
Negotiate with downstream elements on the currently configured #GstVideoCodecState.
GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked again
if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Helper function that allocates a buffer to hold an encoded video frame
for @encoder's current #GstVideoCodecState.
allocated buffer
a #GstVideoEncoder
size of the buffer
Helper function that allocates a buffer to hold an encoded video frame for @encoder's
current #GstVideoCodecState. Subclass should already have configured video
state and set src pad caps.
The buffer allocated here is owned by the frame and you should only
keep references to the frame, not the buffer.
%GST_FLOW_OK if an output buffer could be allocated
a #GstVideoEncoder
a #GstVideoCodecFrame
size of the buffer
@frame must have a valid encoded data buffer, whose metadata fields
are then appropriately set according to frame data, or no buffer at
all if the frame should be dropped.
It is subsequently pushed downstream or provided to @pre_push.
In any case, the frame is considered finished and released.
After calling this function the output buffer of the frame is to be
considered read-only. This function will also change the metadata
of the buffer.
a #GstFlowReturn resulting from sending data downstream
a #GstVideoEncoder
an encoded #GstVideoCodecFrame
Lets #GstVideoEncoder sub-classes know the memory @allocator
used by the base class and its @params.
Unref the @allocator after use.
a #GstVideoEncoder
the #GstAllocator
used
the
#GstAllocatorParams of @allocator
Get a pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame identified by @frame_number.
a #GstVideoEncoder
system_frame_number of a frame
Get all pending unfinished #GstVideoCodecFrame
pending unfinished #GstVideoCodecFrame.
a #GstVideoEncoder
Query the configured encoding latency. Results will be returned via
@min_latency and @max_latency.
a #GstVideoEncoder
address of variable in which to store the
configured minimum latency, or %NULL
address of variable in which to store the
configured maximum latency, or %NULL
Determines maximum possible encoding time for @frame that will
allow it to encode and arrive in time (as determined by QoS events).
In particular, a negative result means encoding in time is no longer possible
and should therefore occur as soon as possible, possibly skipping work.
If no QoS events have been received from downstream, or if
#GstVideoEncoder:qos is disabled this function returns #G_MAXINT64.
max encoding time.
a #GstVideoEncoder
a #GstVideoCodecFrame
Get the oldest unfinished pending #GstVideoCodecFrame
oldest unfinished pending #GstVideoCodecFrame
a #GstVideoEncoder
Get the current #GstVideoCodecState
#GstVideoCodecState describing format of video data.
a #GstVideoEncoder
Checks if @encoder is currently configured to handle Quality-of-Service
events from downstream.
%TRUE if the encoder is configured to perform Quality-of-Service.
the encoder
Sets the video encoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with gst_video_encoder_merge_tags().
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
MT safe.
a #GstVideoEncoder
a #GstTagList to merge, or NULL to unset
previously-set tags
the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
Negotiate with downstream elements on the currently configured #GstVideoCodecState.
GST_PAD_FLAG_NEED_RECONFIGURE is unmarked in any case, but marked again
if negotiation fails.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Returns caps that express @caps (or sink template caps if @caps == NULL)
restricted to resolution/format/... combinations supported by downstream
elements (e.g. muxers).
a #GstCaps owned by caller
a #GstVideoEncoder
initial caps
filter caps
Set the codec headers to be sent downstream whenever requested.
a #GstVideoEncoder
a list of #GstBuffer containing the codec header
Informs baseclass of encoding latency.
a #GstVideoEncoder
minimum latency
maximum latency
Request minimal value for PTS passed to handle_frame.
For streams with reordered frames this can be used to ensure that there
is enough time to accommodate the first DTS, which may be less than the first PTS.
Since: 1.6
a #GstVideoEncoder
minimal PTS that will be passed to handle_frame
Creates a new #GstVideoCodecState with the specified caps as the output state
for the encoder.
Any previously set output state on @encoder will be replaced by the newly
created one.
The specified @caps should not contain any resolution, pixel-aspect-ratio,
framerate, codec-data, .... Those should be specified instead in the returned
#GstVideoCodecState.
If the subclass wishes to copy over existing fields (like pixel aspect ratio,
or framerate) from an existing #GstVideoCodecState, it can be provided as a
@reference.
If the subclass wishes to override some fields from the output state (like
pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
The new output state will only take effect (set on pads and buffers) starting
from the next call to #gst_video_encoder_finish_frame().
the newly configured output state.
a #GstVideoEncoder
the #GstCaps to use for the output
An optional reference #GstVideoCodecState
Configures @encoder to handle Quality-of-Service events from downstream.
the encoder
the new qos value.
Subclasses can override any of the available virtual methods or not, as
needed. At minimum @handle_frame needs to be overridden, and @set_format
and @get_caps are likely needed as well.
%TRUE if the negotiation succeeded, else %FALSE.
a #GstVideoEncoder
Field order of interlaced content. This is only valid for
interlace-mode=interleaved and not interlace-mode=mixed. In the case of
mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via
buffer flags.
unknown field order for interlaced content.
The actual field order is signalled via buffer flags.
top field is first
bottom field is first
Convert @order to a #GstVideoFieldOrder
the #GstVideoFieldOrder of @order or
#GST_VIDEO_FIELD_ORDER_UNKNOWN when @order is not a valid
string representation for a #GstVideoFieldOrder.
a field order
Convert @order to its string representation.
@order as a string or %NULL if @order is invalid.
a #GstVideoFieldOrder
Provides useful functions and a base class for video filters.
The videofilter will by default enable QoS on the parent GstBaseTransform
to implement frame dropping.
The video filter class structure.
the parent class structure
Extra video flags
no flags
a variable fps is selected, fps_n and fps_d
denote the maximum fps of the video
Each color has been scaled by the alpha
value.
Enum value describing the most common video formats.
Unknown or unset video format id
Encoded video format. Only ever use that in caps for
special video formats in combination with non-system
memory GstCapsFeatures where it does not make sense
to specify a real video format.
planar 4:2:0 YUV
planar 4:2:0 YVU (like I420 but UV planes swapped)
packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...)
packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...)
sparse rgb packed into 32 bit, space last
sparse reverse rgb packed into 32 bit, space last
sparse rgb packed into 32 bit, space first
sparse reverse rgb packed into 32 bit, space first
rgb with alpha channel last
reverse rgb with alpha channel last
rgb with alpha channel first
reverse rgb with alpha channel first
rgb
reverse rgb
planar 4:1:1 YUV
planar 4:2:2 YUV
packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...)
planar 4:4:4 YUV
packed 4:2:2 10-bit YUV, complex format
packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order
planar 4:2:0 YUV with interleaved UV plane
planar 4:2:0 YUV with interleaved VU plane
8-bit grayscale
16-bit grayscale, most significant byte first
16-bit grayscale, least significant byte first
packed 4:4:4 YUV (Y-U-V ...)
rgb 5-6-5 bits per component
reverse rgb 5-6-5 bits per component
rgb 5-5-5 bits per component
reverse rgb 5-5-5 bits per component
packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
planar 4:4:2:0 AYUV
8-bit paletted RGB
planar 4:1:0 YUV
planar 4:1:0 YUV (like YUV9 but UV planes swapped)
packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...)
rgb with alpha channel first, 16 bits per channel
packed 4:4:4 YUV with alpha channel, 16 bits per channel (A0-Y0-U0-V0 ...)
packed 4:4:4 RGB, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 8 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:2:2 YUV with interleaved UV plane (Since: 1.2)
planar 4:4:4 YUV with interleaved UV plane (Since: 1.2)
NV12 with 64x32 tiling in zigzag pattern (Since: 1.4)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:2:2 YUV with interleaved VU plane (Since: 1.6)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
packed 4:4:4 YUV (U-Y-V ...) (Since: 1.10)
packed 4:2:2 YUV (V0-Y0-U0-Y1 V2-Y2-U2-Y3 V4 ...)
planar 4:4:4:4 ARGB, 8 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
10-bit grayscale, packed into 32bit words (2 bits padding) (Since: 1.14)
10-bit variant of @GST_VIDEO_FORMAT_NV12, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
10-bit variant of @GST_VIDEO_FORMAT_NV16, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
Converts a FOURCC value into the corresponding #GstVideoFormat.
If the FOURCC cannot be represented by #GstVideoFormat,
#GST_VIDEO_FORMAT_UNKNOWN is returned.
the #GstVideoFormat describing the FOURCC value
a FOURCC value representing raw YUV video
Find the #GstVideoFormat for the given parameters.
a #GstVideoFormat or GST_VIDEO_FORMAT_UNKNOWN when the parameters do
not specify a known format.
the amount of bits used for a pixel
the amount of bits used to store a pixel. This value is bigger than
@depth
the endianness of the masks, #G_LITTLE_ENDIAN or #G_BIG_ENDIAN
the red mask
the green mask
the blue mask
the alpha mask, or 0 if no alpha mask
Convert the @format string to its #GstVideoFormat.
the #GstVideoFormat for @format or GST_VIDEO_FORMAT_UNKNOWN when the
string is not a known format.
a format string
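A small round-trip sketch using this function together with gst_video_format_to_string(); the "I420" string is just an example value:
|[<!-- language="C" -->
/* Sketch: map between format strings and enum values. */
GstVideoFormat fmt = gst_video_format_from_string ("I420");
if (fmt != GST_VIDEO_FORMAT_UNKNOWN)
  g_print ("parsed format: %s\n", gst_video_format_to_string (fmt));
]|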
Get the #GstVideoFormatInfo for @format
The #GstVideoFormatInfo for @format.
a #GstVideoFormat
Get the default palette of @format. This is the palette used in the pack
function for paletted formats.
the default palette of @format or %NULL when
@format does not have a palette.
a #GstVideoFormat
size of the palette in bytes
Converts a #GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If @format has
no corresponding FOURCC value, 0 is returned.
the FOURCC corresponding to @format
a #GstVideoFormat video format
Returns a string containing a descriptive name for
the #GstVideoFormat if there is one, or NULL otherwise.
the name corresponding to @format
a #GstVideoFormat video format
The different video flags that a format info can have.
The video format is YUV, components are numbered
0=Y, 1=U, 2=V.
The video format is RGB, components are numbered
0=R, 1=G, 2=B.
The video is gray, there is one gray component
with index 0.
The video format has an alpha component with
index 3.
The video format has data stored in little
endianness.
The video format has a palette. The palette
is stored in the second plane and indexes are stored in the first plane.
The video format has a complex layout that
can't be described with the usual information in the #GstVideoFormatInfo.
This format can be used in a
#GstVideoFormatUnpack and #GstVideoFormatPack function.
The format is tiled, there is tiling information
in the last plane.
Information for a video format.
#GstVideoFormat
string representation of the format
user readable description of the format
#GstVideoFormatFlags
The number of bits used to pack data items. This can be less than 8
when multiple pixels are stored in a byte. For values > 8, multiple bytes
should be read according to the endianness flag before applying the shift
and mask.
the number of components in the video format.
the number of bits to shift away to get the component data
the depth in bits for each component
the pixel stride of each component. This is the amount of
bytes to the pixel immediately to the right. When bits < 8, the stride is
expressed in bits. For 24-bit RGB, this would be 3 bytes, for example,
while it would be 4 bytes for RGBx or ARGB.
the number of planes for this format. The number of planes can be
less than the amount of components when multiple components are packed into
one plane.
the plane number where a component can be found
the offset in the plane where the first pixel of the components
can be found.
subsampling factor of the width for the component. Use
GST_VIDEO_SUB_SCALE to scale a width.
subsampling factor of the height for the component. Use
GST_VIDEO_SUB_SCALE to scale a height.
the format of the unpacked pixels. This format must have the
#GST_VIDEO_FORMAT_FLAG_UNPACK flag set.
an unpack function for this format
the amount of lines that will be packed
a pack function for this format
The tiling mode
The width of a tile, in bytes, represented as a shift
The height of a tile, in bytes, represented as a shift
Packs @width pixels from @src to the given planes and strides in the
format @info. The pixels from source have each component interleaved
and will be packed into the planes in @data.
This function operates on pack_lines lines, meaning that @src should
contain at least pack_lines lines with a stride of @sstride and @y
should be a multiple of pack_lines.
Subsampled formats will use the horizontally and vertically cosited
component from the source. Subsampling should be performed before
packing.
Because this function does not have an x coordinate, it is not possible to
pack pixels starting from an unaligned position. For tiled images this
means that packing should start from a tile coordinate. For subsampled
formats this means that a complete pixel needs to be packed.
a #GstVideoFormatInfo
flags to control the packing
a source array
the source array stride
pointers to the destination data planes
strides of the destination planes
the chroma siting of the target when subsampled (not used)
the y position in the image to pack to
the amount of pixels to pack.
Unpacks @width pixels from the given planes and strides containing data of
format @info. The pixels will be unpacked into @dest with each component
interleaved as per @info's unpack_format, which will usually be one of
#GST_VIDEO_FORMAT_ARGB, #GST_VIDEO_FORMAT_AYUV, #GST_VIDEO_FORMAT_ARGB64 or
#GST_VIDEO_FORMAT_AYUV64 depending on the format to unpack.
@dest should at least be big enough to hold @width * bytes_per_pixel bytes
where bytes_per_pixel relates to the unpack format and will usually be
either 4 or 8 depending on the unpack format. bytes_per_pixel will be
the same as the pixel stride for plane 0 for the above formats.
For subsampled formats, the components will be duplicated in the destination
array. Reconstruction of the missing components can be performed in a
separate step after unpacking.
a #GstVideoFormatInfo
flags to control the unpacking
a destination array
pointers to the data planes
strides of the planes
the x position in the image to start from
the y position in the image to start from
the amount of pixels to unpack.
A video frame obtained from gst_video_frame_map()
the #GstVideoInfo
#GstVideoFrameFlags for the frame
the mapped buffer
pointer to metadata if any
id of the mapped frame. The id can, for example, be used to
identify the frame in case of multiview video.
pointers to the plane data
mappings of the planes
Copy the contents from @src to @dest.
TRUE if the contents could be copied.
a #GstVideoFrame
a #GstVideoFrame
Copy the plane with index @plane from @src to @dest.
TRUE if the contents could be copied.
a #GstVideoFrame
a #GstVideoFrame
a plane
Use @info and @buffer to fill in the values of @frame. @frame is usually
allocated on the stack, and you pass the address of that stack-allocated
#GstVideoFrame structure to gst_video_frame_map(), which will then fill in
the structure with the various video-specific information you need to access
the pixels of the video buffer. You can then use accessor macros such as
GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(),
GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc.
to get to the pixels.
|[<!-- language="C" -->
GstVideoFrame vframe;
...
// set RGB pixels to black one at a time
if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
  guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
  guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);

  for (h = 0; h < height; ++h) {
    for (w = 0; w < width; ++w) {
      guint8 *pixel = pixels + h * stride + w * pixel_stride;

      memset (pixel, 0, pixel_stride);
    }
  }
  gst_video_frame_unmap (&vframe);
}
...
]|
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
The purpose of this function is to make it easy for you to get to the video
pixels in a generic way, without you having to worry too much about details
such as whether the video data is allocated in one contiguous memory chunk
or multiple memory chunks (e.g. one for each plane); or if custom strides
and custom plane offsets are used or not (as signalled by GstVideoMeta on
each buffer). This function will just fill the #GstVideoFrame structure
with the right values and if you use the accessor macros everything will
just work and you can access the data easily. It also maps the underlying
memory chunks for you.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
#GstMapFlags
Use @info and @buffer to fill in the values of @frame with the video frame
information of frame @id.
When @id is -1, the default frame is mapped. When @id != -1, this function
will return %FALSE when there is no GstVideoMeta with that id.
All video planes of @buffer will be mapped and the pointers will be set in
@frame->data.
%TRUE on success.
pointer to #GstVideoFrame
a #GstVideoInfo
the buffer to map
the frame id to map
#GstMapFlags
Unmap the memory previously mapped with gst_video_frame_map().
a #GstVideoFrame
Extra video frame flags
no flags
The video frame is interlaced. In mixed
interlace-mode, this flag specifies if the frame is interlaced or
progressive.
The video frame has the top field first
The video frame has the repeat flag
The video frame has one field
The video contains one or
more non-mono views
The video frame is the first
in a set of corresponding views provided as sequential frames.
Additional mapping flags for gst_video_frame_map().
Don't take another reference of the buffer and store it in
the GstVideoFrame. This makes sure that the buffer stays
writable while the frame is mapped, but requires that the
buffer reference stays valid until the frame is unmapped again.
Offset to define more flags
The orientation of the GL texture.
Top line first in memory, left row first
Bottom line first in memory, left row first
Top line first in memory, right row first
Bottom line first in memory, right row first
The GL texture type.
Luminance texture, GL_LUMINANCE
Luminance-alpha texture, GL_LUMINANCE_ALPHA
RGB 565 texture, GL_RGB
RGB texture, GL_RGB
RGBA texture, GL_RGBA
R texture, GL_RED_EXT
RG texture, GL_RG_EXT
Extra buffer metadata for uploading a buffer to an OpenGL texture
ID. The caller of gst_video_gl_texture_upload_meta_upload() must
have OpenGL set up and call this from a thread where it is valid
to upload something to an OpenGL texture.
parent #GstMeta
Orientation of the textures
Number of textures that are generated
Type of each texture
Uploads the buffer which owns the meta to a specific texture ID.
%TRUE if uploading succeeded, %FALSE otherwise.
a #GstVideoGLTextureUploadMeta
the texture IDs to upload to
disable gamma handling
convert between input and output gamma
Different gamma conversion modes
Information describing image properties. This information can be filled
in from GstCaps with gst_video_info_from_caps(). The information is also used
to store the specific video info when mapping a video frame with
gst_video_frame_map().
Use the provided macros to access the info in this structure.
the format info of the video
the interlace mode
additional video flags
the width of the video
the height of the video
the default size of one frame
the number of views for multiview video
a #GstVideoChromaSite.
the colorimetry info
the pixel-aspect-ratio numerator
the pixel-aspect-ratio denominator
the framerate numerator
the framerate denominator
offsets of the planes
strides of the planes
Allocate a new #GstVideoInfo that is also initialized with
gst_video_info_init().
a new #GstVideoInfo. Free with gst_video_info_free().
Adjust the offset and stride fields in @info so that the padding and
stride alignment in @align is respected.
Extra padding will be added to the right side when stride alignment padding
is required and @align will be updated with the new padding values.
%FALSE if alignment could not be applied, e.g. because the
size of a frame can't be represented as a 32 bit integer (Since: 1.12)
a #GstVideoInfo
alignment parameters
Converts among various #GstFormat types. This function handles
GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For
raw video, GST_FORMAT_DEFAULT corresponds to video frames. This
function can be used to handle pad queries of the type GST_QUERY_CONVERT.
TRUE if the conversion was successful.
a #GstVideoInfo
#GstFormat of the @src_value
value to convert
#GstFormat of the @dest_value
pointer to destination value
Copy a GstVideoInfo structure.
a new #GstVideoInfo. Free with gst_video_info_free().
a #GstVideoInfo
Free a GstVideoInfo structure previously allocated with gst_video_info_new()
or gst_video_info_copy().
a #GstVideoInfo
Parse @caps and update @info.
TRUE if @caps could be parsed
a #GstVideoInfo
a #GstCaps
Initialize @info with default values.
a #GstVideoInfo
Compares two #GstVideoInfo and returns whether they are equal or not
%TRUE if @info and @other are equal, else %FALSE.
a #GstVideoInfo
a #GstVideoInfo
Set the default info for a video frame of @format and @width and @height.
Note: This initializes @info first, no values are preserved. This function
does not set the offsets correctly for interlaced vertically
subsampled formats.
%FALSE if the returned video info is invalid, e.g. because the
size of a frame can't be represented as a 32 bit integer (Since: 1.12)
a #GstVideoInfo
the format
a width
a height
Convert the values of @info into a #GstCaps.
a new #GstCaps containing the info of @info.
a #GstVideoInfo
The possible values of the #GstVideoInterlaceMode describing the interlace
mode of the stream.
all frames are progressive
2 fields are interleaved in one video
frame. Extra buffer flags describe the field order.
frames contain both interlaced and
progressive video; the buffer flags describe the frame and fields.
2 fields are stored in one buffer, use the
frame ID to get access to the required field. For multiview (the
'views' property > 1) the fields of view N can be found at frame ID
(N * 2) and (N * 2) + 1.
Each field has only half the number of lines as noted in the
height property. This mode requires multiple GstVideoMeta metadata
to describe the fields.
Convert @mode to a #GstVideoInterlaceMode
the #GstVideoInterlaceMode of @mode or
#GST_VIDEO_INTERLACE_MODE_PROGRESSIVE when @mode is not a valid
string representation for a #GstVideoInterlaceMode.
a mode
Convert @mode to its string representation.
@mode as a string or NULL if @mode is invalid.
a #GstVideoInterlaceMode
Different color matrix conversion modes
do conversion between color matrices
use the input color matrix to convert
to and from R'G'B'
use the output color matrix to convert
to and from R'G'B'
disable color matrix conversion.
Extra buffer metadata describing image properties
parent #GstMeta
the buffer this metadata belongs to
additional video flags
the video format
identifier of the frame
the video width
the video height
the number of planes in the image
array of offsets for the planes. This field might not always be
valid, it is used by the default implementation of @map.
array of strides for the planes. This field might not always be
valid, it is used by the default implementation of @map.
Map the video plane with index @plane in @meta and return a pointer to the
first byte of the plane and the stride of the plane.
TRUE if the map operation was successful.
a #GstVideoMeta
a plane
a #GstMapInfo
the data of @plane
the stride of @plane
#GstMapFlags
Unmap a previously mapped plane with gst_video_meta_map().
TRUE if the memory was successfully unmapped.
a #GstVideoMeta
a plane
a #GstMapInfo
Extra data passed to a video transform #GstMetaTransformFunction such as:
"gst-video-scale".
the input #GstVideoInfo
the output #GstVideoInfo
Get the #GQuark for the "gst-video-scale" metadata transform operation.
a #GQuark
GstVideoMultiviewFlags are used to indicate extra properties of a
stereo/multiview stream beyond the frame layout and buffer mapping
that is conveyed in the #GstVideoMultiviewMode.
No flags
For stereo streams, the
normal arrangement of left and right views is reversed.
The left view is vertically
mirrored.
The left view is horizontally
mirrored.
The right view is
vertically mirrored.
The right view is
horizontally mirrored.
For frame-packed
multiview modes, indicates that the individual
views have been encoded with half the true width or height
and should be scaled back up for display. This flag
is used for overriding input layout interpretation
by adjusting pixel-aspect-ratio.
For side-by-side, column interleaved or checkerboard packings, the
pixel width will be doubled. For row interleaved and top-bottom
encodings, pixel height will be doubled.
The video stream contains both
mono and multiview portions, signalled on each buffer by the
absence or presence of the @GST_VIDEO_BUFFER_FLAG_MULTIPLE_VIEW
buffer flag.
#GstVideoMultiviewFramePacking represents the subset of #GstVideoMultiviewMode
values that can be applied to any video frame without needing extra metadata.
It can be used by elements that provide a property to override the
multiview interpretation of a video stream when the video doesn't contain
any markers.
This enum is used (for example) on playbin, to re-interpret a played
video stream as a stereoscopic video. The individual enum values are
equivalent to and have the same value as the matching #GstVideoMultiviewMode.
A special value indicating
no frame packing info.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are
provided in the left and right half of the frame respectively.
Left and right eye
views are provided in the left and right half of the frame, but
have been sampled using quincunx method, with half-pixel offset
between the 2 views.
Alternating vertical
columns of pixels represent the left and right eye view respectively.
Alternating horizontal
rows of pixels represent the left and right eye view respectively.
The top half of the frame
contains the left eye, and the bottom half the right eye.
Pixels are arranged with
alternating pixels representing left and right eye views in a
checkerboard fashion.
All possible stereoscopic 3D and multiview representations.
In conjunction with #GstVideoMultiviewFlags, describes how
multiview content is being transported in the stream.
A special value indicating
no multiview information. Used in GstVideoInfo and other places to
indicate that no specific multiview handling has been requested or
provided. This value is never carried on caps.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are
provided in the left and right half of the frame respectively.
Left and right eye
views are provided in the left and right half of the frame, but
have been sampled using quincunx method, with half-pixel offset
between the 2 views.
Alternating vertical
columns of pixels represent the left and right eye view respectively.
Alternating horizontal
rows of pixels represent the left and right eye view respectively.
The top half of the frame
contains the left eye, and the bottom half the right eye.
Pixels are arranged with
alternating pixels representing left and right eye views in a
checkerboard fashion.
Left and right eye views
are provided in separate frames alternately.
Multiple
independent views are provided in separate frames in sequence.
This method only applies to raw video buffers at the moment.
Specific view identification is via the #GstVideoMultiviewMeta
and #GstVideoMeta(s) on raw video buffers.
Multiple views are
provided as separate #GstMemory framebuffers attached to each
#GstBuffer, described by the #GstVideoMultiviewMeta
and #GstVideoMeta(s)
The #GstVideoMultiviewMode value
Given a string from a caps multiview-mode field,
output the corresponding #GstVideoMultiviewMode
or #GST_VIDEO_MULTIVIEW_MODE_NONE
multiview-mode field string from caps
The caps string representation of the mode, or NULL if invalid.
Given a #GstVideoMultiviewMode returns the multiview-mode caps string
for insertion into a caps structure
A #GstVideoMultiviewMode value
The interface allows unified access to control flipping and autocenter
operation of video-sources or operators.
Get the horizontal centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the horizontal flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Set the horizontal centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the horizontal flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Set the vertical centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the vertical flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Get the horizontal centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the horizontal flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical centering offset from the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
Get the vertical flipping state (%TRUE for flipped) from the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
Set the horizontal centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the horizontal flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
Set the vertical centering offset for the given object.
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
Set the vertical flipping state (%TRUE for flipped) for the given object.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
#GstVideoOrientationInterface interface.
parent interface type.
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
return location for the result
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
%TRUE in case the element supports flipping
#GstVideoOrientation interface of a #GstElement
use flipping
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
%TRUE in case the element supports centering
#GstVideoOrientation interface of a #GstElement
centering offset
The different video orientation methods.
Identity (no rotation)
Rotate clockwise 90 degrees
Rotate 180 degrees
Rotate counter-clockwise 90 degrees
Flip horizontally
Flip vertically
Flip across upper left/lower right diagonal
Flip across upper right/lower left diagonal
Select flip method based on image-orientation tag
Current status depends on plugin internal setup
The #GstVideoOverlay interface is used for 2 main purposes:
* To get a grab on the Window where the video sink element is going to render.
This is achieved by either being informed about the Window identifier that
the video sink element generated, or by forcing the video sink element to use
a specific Window identifier for rendering.
* To force a redrawing of the latest video frame the video sink element
displayed on the Window. Indeed if the #GstPipeline is in #GST_STATE_PAUSED
state, moving the Window around will damage its content. Application
developers will want to handle the Expose events themselves and force the
video sink element to refresh the Window's content.
Using the Window created by the video sink is probably the simplest scenario;
in some cases, though, it might not be flexible enough for application
developers if they need to catch events such as mouse moves and button
clicks.
Setting a specific Window identifier on the video sink element is the most
flexible solution but it has some issues. Indeed the application needs to set
its Window identifier at the right time to avoid internal Window creation
from the video sink element. To solve this issue a #GstMessage is posted on
the bus to inform the application that it should set the Window identifier
immediately. Here is an example on how to do that correctly:
|[
static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);

  XSetWindowBackgroundPixmap (disp, win, None);
  XMapRaised (disp, win);
  XSync (disp, FALSE);

  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
      win);

  gst_message_unref (message);

  return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
  ...
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
      NULL);
  ...
}
]|
## Two basic usage scenarios
There are two basic usage scenarios: in the simplest case, the application
uses #playbin or #playsink or knows exactly what particular element is used
for video output, which is usually the case when the application creates
the videosink to use (e.g. #xvimagesink, #ximagesink, etc.) itself; in this
case, the application can just create the videosink element, create and
realize the window to render the video on and then
call gst_video_overlay_set_window_handle() directly with the XID or native
window handle, before starting up the pipeline.
As #playbin and #playsink implement the video overlay interface and proxy
it transparently to the actual video sink even if it is created later, this
case also applies when using these elements.
In the other and more common case, the application does not know in advance
what GStreamer video sink element will be used for video output. This is
usually the case when an element such as #autovideosink is used.
In this case, the video sink element itself is created
asynchronously from a GStreamer streaming thread some time after the
pipeline has been started up. When that happens, however, the video sink
will need to know right then whether to render onto an already existing
application window or whether to create its own window. This is when it
posts a prepare-window-handle message, and that is also why this message needs
to be handled in a sync bus handler which will be called from the streaming
thread directly (because the video sink will need an answer right then).
As response to the prepare-window-handle element message in the bus sync
handler, the application may use gst_video_overlay_set_window_handle() to tell
the video sink to render onto an existing window surface. At this point the
application should already have obtained the window handle / XID, so it
just needs to set it. It is generally not advisable to call any GUI toolkit
functions or window system functions from the streaming thread in which the
prepare-window-handle message is handled, because most GUI toolkits and
windowing systems are not thread-safe at all and a lot of care would be
required to co-ordinate the toolkit and window system calls of the
different threads (Gtk+ users please note: prior to Gtk+ 2.18
GDK_WINDOW_XID() was just a simple structure access, so generally fine to do
within the bus sync handler; this macro was changed to a function call in
Gtk+ 2.18 and later, which is likely to cause problems when called from a
sync handler; see below for a better approach without GDK_WINDOW_XID()
used in the callback).
## GstVideoOverlay and Gtk+
|[
#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>  // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h>  // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  if (video_window_handle != 0) {
    GstVideoOverlay *overlay;

    // GST_MESSAGE_SRC (message) will be the video sink element
    overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
    gst_video_overlay_set_window_handle (overlay, video_window_handle);
  } else {
    g_warning ("Should have obtained video_window_handle by now!");
  }

  gst_message_unref (message);
  return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
  // Tell Gtk+/Gdk to create a native window for this widget instead of
  // drawing onto the parent widget.
  // This is here just for pedagogical purposes, GDK_WINDOW_XID will call
  // it as well in newer Gtk versions
  if (!gdk_window_ensure_native (widget->window))
    g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif

#ifdef GDK_WINDOWING_X11
  {
    gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (widget));
    video_window_handle = xid;
  }
#endif
#ifdef GDK_WINDOWING_WIN32
  {
    HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (widget));
    video_window_handle = (guintptr) wnd;
  }
#endif
}
...
int
main (int argc, char **argv)
{
  GtkWidget *video_window;
  GtkWidget *app_window;
  ...
  app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  ...
  video_window = gtk_drawing_area_new ();
  g_signal_connect (video_window, "realize",
      G_CALLBACK (video_widget_realize_cb), NULL);
  gtk_widget_set_double_buffered (video_window, FALSE);
  ...
  // usually the video_window will not be directly embedded into the
  // application window like this, but there will be many other widgets
  // and the video window will be embedded in one of them instead
  gtk_container_add (GTK_CONTAINER (app_window), video_window);
  ...
  // show the GUI
  gtk_widget_show_all (app_window);

  // realize window now so that the video window gets created and we can
  // obtain its XID/HWND before the pipeline is started up and the videosink
  // asks for the XID/HWND of the window to render onto
  gtk_widget_realize (video_window);

  // we should have the XID/HWND now
  g_assert (video_window_handle != 0);
  ...
  // set up sync handler for setting the xid once the pipeline is started
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
      NULL);
  gst_object_unref (bus);
  ...
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  ...
}
]|
## GstVideoOverlay and Qt
|[
#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>

#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[])
{
  if (!g_thread_supported ())
    g_thread_init (NULL);

  gst_init (&argc, &argv);
  QApplication app(argc, argv);
  app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit ()));

  // prepare the pipeline
  GstElement *pipeline = gst_pipeline_new ("xvoverlay");
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  // prepare the ui
  QWidget window;
  window.resize(320, 240);
  window.show();

  WId xwinid = window.winId();
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);

  // run the pipeline
  GstStateChangeReturn sret = gst_element_set_state (pipeline,
      GST_STATE_PLAYING);
  if (sret == GST_STATE_CHANGE_FAILURE) {
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    // Exit application
    QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
  }

  int ret = app.exec();

  window.hide();
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return ret;
}
]|
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will install a "render-rectangle" property into the
class.
Since: 1.14
The class on which the properties will be installed
The first free property ID to use
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will parse and set the render rectangle calling
gst_video_overlay_set_render_rectangle().
%TRUE if the @property_id matches the GstVideoOverlay property
Since: 1.14
The instance on which the property is set
The highest property ID.
The property ID
The #GValue to be set
Tell an overlay that it has been exposed. This will redraw the current frame
in the drawable even if the pipeline is PAUSED.
a #GstVideoOverlay to expose.
Tell an overlay that it should handle events from the window system. These
events are forwarded upstream as navigation events. In some window systems,
events are not propagated in the window hierarchy if a client is listening
for them. This method allows you to disable event handling completely
from the #GstVideoOverlay.
a #GstVideoOverlay
a #gboolean indicating if events should be handled or not.
This will call the video overlay's set_window_handle method. You
should use this method to tell an overlay to display video output to a
specific window (e.g. an XWindow on X11). Passing 0 as the @handle will
tell the overlay to stop using that window and create an internal one.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
Tell an overlay that it has been exposed. This will redraw the current frame
in the drawable even if the pipeline is PAUSED.
a #GstVideoOverlay to expose.
This will post a "have-window-handle" element message on the bus.
This function should only be used by video overlay plugin developers.
a #GstVideoOverlay which got a window
a platform-specific handle referencing the window
Tell an overlay that it should handle events from the window system. These
events are forwarded upstream as navigation events. In some window systems,
events are not propagated in the window hierarchy if a client is listening
for them. This method allows you to disable event handling completely
from the #GstVideoOverlay.
a #GstVideoOverlay
a #gboolean indicating if events should be handled or not.
This will post a "prepare-window-handle" element message on the bus
to give applications an opportunity to call
gst_video_overlay_set_window_handle() before a plugin creates its own
window.
This function should only be used by video overlay plugin developers.
a #GstVideoOverlay which does not yet have an Window handle set
Configure a subregion as a video target within the window set by
gst_video_overlay_set_window_handle(). If this is not used or not supported
the video will fill the area of the window set as the overlay to 100%.
By specifying the rectangle, the video can be overlaid onto a specific region
of that window only. After setting the new rectangle one should call
gst_video_overlay_expose() to force a redraw. To unset the region pass -1 for
the @width and @height parameters.
This method is needed for non-fullscreen video overlay in UI toolkits that
do not support subwindows.
%FALSE if not supported by the sink.
a #GstVideoOverlay
the horizontal offset of the render area inside the window
the vertical offset of the render area inside the window
the width of the render area inside the window
the height of the render area inside the window
This will call the video overlay's set_window_handle method. You
should use this method to tell an overlay to display video output to a
specific window (e.g. an XWindow on X11). Passing 0 as the @handle will
tell the overlay to stop using that window and create an internal one.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
Functions to create and handle overlay compositions on video buffers.
An overlay composition describes one or more overlay rectangles to be
blended on top of a video buffer.
This API serves two main purposes:
* it can be used to attach overlay information (subtitles or logos)
to non-raw video buffers such as GL/VAAPI/VDPAU surfaces. The actual
blending of the overlay can then be done by e.g. the video sink that
processes these non-raw buffers.
* it can also be used to blend overlay rectangles on top of raw video
buffers, thus consolidating blending functionality for raw video in
one place.
Together, this allows existing overlay elements to easily handle raw
and non-raw video as input without major changes (once the overlays
have been put into a #GstVideoOverlayComposition object anyway) - for raw
video the overlay can just use the blending function to blend the data
on top of the video, and for surface buffers it can just attach them to
the buffer and let the sink render the overlays.
Creates a new video overlay composition object to hold one or more
overlay rectangles.
a new #GstVideoOverlayComposition. Unref with
gst_video_overlay_composition_unref() when no longer needed.
a #GstVideoOverlayRectangle to add to the
composition
Adds an overlay rectangle to an existing overlay composition object. This
must be done right after creating the overlay composition.
a #GstVideoOverlayComposition
a #GstVideoOverlayRectangle to add to the
composition
Blends the overlay rectangles in @comp on top of the raw video data
contained in @video_buf. The data in @video_buf must be writable and
mapped appropriately.
Since the data in @video_buf is read and will be modified, it ought to be
mapped with the GST_MAP_READWRITE flag.
a #GstVideoOverlayComposition
a #GstVideoFrame containing raw video data in a
supported format. It should be mapped using GST_MAP_READWRITE
Makes a copy of @comp and all contained rectangles, so that it is possible
to modify the composition and contained rectangles (e.g. add additional
rectangles or change the render co-ordinates or render dimension). The
actual overlay pixel data buffers contained in the rectangles are not
copied.
a new #GstVideoOverlayComposition equivalent
to @comp.
a #GstVideoOverlayComposition to copy
Returns the @n-th #GstVideoOverlayRectangle contained in @comp.
the @n-th rectangle, or %NULL if @n is out of
bounds. This function does not return a new reference; the caller
must obtain one with gst_video_overlay_rectangle_ref()
if needed.
a #GstVideoOverlayComposition
number of the rectangle to get
Returns the sequence number of this composition. Sequence numbers are
monotonically increasing and unique for overlay compositions and rectangles
(meaning there will never be a rectangle with the same sequence number as
a composition).
the sequence number of @comp
a #GstVideoOverlayComposition
Takes ownership of @comp and returns a version of @comp that is writable
(i.e. can be modified). Will either return @comp right away, or create a
new writable copy of @comp and unref @comp itself. All the contained
rectangles will also be copied, but the actual overlay pixel data buffers
contained in the rectangles are not copied.
a writable #GstVideoOverlayComposition
equivalent to @comp.
a #GstVideoOverlayComposition to copy
Returns the number of #GstVideoOverlayRectangle<!-- -->s contained in @comp.
the number of rectangles
a #GstVideoOverlayComposition
Extra buffer metadata describing image overlay data.
parent #GstMeta
the attached #GstVideoOverlayComposition
Overlay format flags.
no flags
RGB components are premultiplied by A/255.
a global-alpha value != 1 is set.
#GstVideoOverlay interface
parent interface type.
a #GstVideoOverlay to expose.
a #GstVideoOverlay to expose.
a #gboolean indicating if events should be handled or not.
a #GstVideoOverlay to set the window on.
a handle referencing the window.
An opaque video overlay rectangle object. It describes a single
overlay rectangle which can be added to a composition.
Creates a new video overlay rectangle with ARGB or AYUV pixel data.
The layout in case of ARGB of the components in memory is B-G-R-A
on little-endian platforms
(corresponding to #GST_VIDEO_FORMAT_BGRA) and A-R-G-B on big-endian
platforms (corresponding to #GST_VIDEO_FORMAT_ARGB). In other words,
pixels are treated as 32-bit words and the lowest 8 bits then contain
the blue component value and the highest 8 bits contain the alpha
component value. Unless specified in the flags, the RGB values are
non-premultiplied. This is the format that is used by most hardware,
and also many rendering libraries such as Cairo, for example.
The pixel data buffer must have #GstVideoMeta set.
a new #GstVideoOverlayRectangle. Unref with
gst_video_overlay_rectangle_unref() when no longer needed.
a #GstBuffer pointing to the pixel memory
the X co-ordinate on the video where the top-left corner of this
overlay rectangle should be rendered to
the Y co-ordinate on the video where the top-left corner of this
overlay rectangle should be rendered to
the render width of this rectangle on the video
the render height of this rectangle on the video
flags
Makes a copy of @rectangle, so that it is possible to modify it
(e.g. to change the render co-ordinates or render dimension). The
actual overlay pixel data buffers contained in the rectangle are not
copied.
a new #GstVideoOverlayRectangle equivalent
to @rectangle.
a #GstVideoOverlayRectangle to copy
Retrieves the flags associated with a #GstVideoOverlayRectangle.
This is useful if the caller can handle both premultiplied and
non-premultiplied alpha, for example. By knowing whether the rectangle
uses premultiplied alpha or not, it can request the pixel data in the
format it is stored in, to avoid unnecessary conversion.
the #GstVideoOverlayFormatFlags associated with the rectangle.
a #GstVideoOverlayRectangle
Retrieves the global-alpha value associated with a #GstVideoOverlayRectangle.
the global-alpha value associated with the rectangle.
a #GstVideoOverlayRectangle
a #GstBuffer holding the ARGB pixel data with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference; the caller should obtain its own reference
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
a #GstBuffer holding the AYUV pixel data with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference; the caller should obtain its own reference
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
a #GstBuffer holding the pixel data with
format as originally provided and specified in video meta with
width and height of the render dimensions as per
gst_video_overlay_rectangle_get_render_rectangle(). This function does
not return a reference; the caller should obtain its own reference
with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the ARGB pixel data with
#GstVideoMeta set. This function does not return a reference; the caller
should obtain its own reference with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the AYUV pixel data with
#GstVideoMeta set. This function does not return a reference; the caller
should obtain its own reference with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
Retrieves the pixel data as it is. This is useful if the caller can
do the scaling itself when handling the overlaying. The rectangle will
need to be scaled to the render dimensions, which can be retrieved using
gst_video_overlay_rectangle_get_render_rectangle().
a #GstBuffer holding the pixel data with
#GstVideoMeta set. This function does not return a reference; the caller
should obtain its own reference with gst_buffer_ref() if needed.
a #GstVideoOverlayRectangle
flags.
If a global_alpha value != 1 is set for the rectangle, the caller
should set the #GST_VIDEO_OVERLAY_FORMAT_FLAG_GLOBAL_ALPHA flag
if it wants to apply global-alpha itself. If the flag is not set,
global_alpha is applied internally before returning the pixel data.
Retrieves the render position and render dimension of the overlay
rectangle on the video.
TRUE if valid render dimensions were retrieved.
a #GstVideoOverlayRectangle
address where to store the X render offset
address where to store the Y render offset
address where to store the render width
address where to store the render height
Returns the sequence number of this rectangle. Sequence numbers are
monotonically increasing and unique for overlay compositions and rectangles
(meaning there will never be a rectangle with the same sequence number as
a composition).
Using the sequence number of a rectangle as an indicator for changed
pixel data of a rectangle is dangerous. Some API calls, like e.g.
gst_video_overlay_rectangle_set_global_alpha(), automatically update
the per-rectangle sequence number, which is misleading for renderers/
consumers that handle global-alpha themselves. For them the
pixel data returned by gst_video_overlay_rectangle_get_pixels_*()
won't differ for different global-alpha values. In this case a
renderer could also use the GstBuffer pointers as a hint for changed
pixel data.
the sequence number of @rectangle
a #GstVideoOverlayRectangle
Sets the global alpha value associated with a #GstVideoOverlayRectangle. Per-
pixel alpha values are multiplied with this value. Valid
values: 0 <= global_alpha <= 1; 1 to deactivate.
@rectangle must be writable, meaning its refcount must be 1. You can
make the rectangles inside a #GstVideoOverlayComposition writable using
gst_video_overlay_composition_make_writable() or
gst_video_overlay_composition_copy().
a #GstVideoOverlayRectangle
Global alpha value (0 to 1.0)
Sets the render position and dimensions of the rectangle on the video.
This function is mainly for elements that modify the size of the video
in some way (e.g. through scaling or cropping) and need to adjust the
details of any overlays to match the operation that changed the size.
@rectangle must be writable, meaning its refcount must be 1. You can
make the rectangles inside a #GstVideoOverlayComposition writable using
gst_video_overlay_composition_make_writable() or
gst_video_overlay_composition_copy().
a #GstVideoOverlayRectangle
render X position of rectangle on video
render Y position of rectangle on video
render width of rectangle
render height of rectangle
The different flags that can be used when packing and unpacking.
No flag
When the source has a smaller depth
than the target format, set the least significant bits of the target
to 0. This is likely slightly faster but less accurate. When this flag
is not specified, the most significant bits of the source are duplicated
in the least significant bits of the destination.
The source is interlaced. The unpacked
format will be interlaced as well with each line containing
information from alternating fields. (Since 1.2)
Different primaries conversion modes
disable conversion between primaries
do conversion between primaries only
when it can be merged with color matrix conversion.
fast conversion between primaries
Helper structure representing a rectangular area.
X coordinate of rectangle's top-left point
Y coordinate of rectangle's top-left point
width of the rectangle
height of the rectangle
Extra buffer metadata describing an image region of interest
parent #GstMeta
GQuark describing the semantics of the ROI (e.g. a face, a pedestrian)
identifier of this particular ROI
identifier of its parent ROI, used e.g. for ROI hierarchisation.
x component of upper-left corner
y component of upper-left corner
bounding box width
bounding box height
list of #GstStructure containing element-specific params for downstream, see gst_video_region_of_interest_meta_add_param(). (Since: 1.14)
Attach element-specific parameters to @meta meant to be used by downstream
elements which may handle this ROI.
The name of @s is used to identify the element these parameters are meant for.
This is typically used to tell encoders how they should encode this specific region.
For example, a structure named "roi/x264enc" could be used to give the
QP offsets this encoder should use when encoding the region described in @meta.
Multiple parameters can be defined for the same meta so different encoders
can be supported by cross-platform applications.
a #GstVideoRegionOfInterestMeta
a #GstStructure
Retrieve the parameter for @meta having @name as structure name,
or %NULL if there is none.
See also: gst_video_region_of_interest_meta_add_param()
a #GstStructure
a #GstVideoRegionOfInterestMeta
#GstVideoResampler is a structure which holds the information
required to perform various kinds of resampling filtering.
the input size
the output size
the maximum number of taps
the number of phases
array with the source offset for each output element
array with the phase to use for each output element
array with new number of taps for each phase
the taps for all phases
Clear a previously initialized #GstVideoResampler @resampler.
a #GstVideoResampler
Different resampler flags.
no flags
when no taps are given, half the
number of calculated taps. This can be used when making scalers
for the different fields of an interlaced picture. Since 1.10
Different subsampling and upsampling methods
Duplicates the samples when
upsampling and drops when downsampling
Uses linear interpolation to reconstruct
missing samples and averaging to downsample
Uses cubic interpolation
Uses sinc interpolation
Uses lanczos interpolation
#GstVideoScaler is a utility object for rescaling and resampling
video frames using various interpolation / sampling methods.
Scale a rectangle of pixels in @src with @src_stride to @dest with
@dest_stride using the horizontal scaler @hscale and the vertical
scaler @vscale.
One or both of @hscale and @vscale can be NULL to only perform scaling in
one dimension or do a copy without scaling.
@x and @y are the coordinates in the destination image to process.
a horizontal #GstVideoScaler
a vertical #GstVideoScaler
a #GstVideoFormat for @srcs and @dest
source pixels
source pixels stride
destination pixels
destination pixels stride
the horizontal destination offset
the vertical destination offset
the number of output pixels to scale
the number of output lines to scale
Combine a scaler for Y and UV into one scaler for the packed @format.
a new horizontal #GstVideoScaler for @format.
a scaler for the Y component
a scaler for the U and V components
the input video format
the output video format
Free a previously allocated #GstVideoScaler @scale.
a #GstVideoScaler
For a given pixel at @out_offset, get the first required input pixel at
@in_offset and the @n_taps filter coefficients.
Note that for interlaced content, @in_offset needs to be incremented with
2 to get the next input line.
an array of @n_taps gdouble values with filter coefficients.
a #GstVideoScaler
an output offset
result input offset
result n_taps
Get the maximum number of taps for @scale.
the maximum number of taps
a #GstVideoScaler
Horizontally scale the pixels in @src to @dest, starting from @dest_offset
for @width samples.
a #GstVideoScaler
a #GstVideoFormat for @src and @dest
source pixels
destination pixels
the horizontal destination offset
the number of pixels to scale
Vertically combine @width pixels in the lines in @src_lines to @dest.
@dest is the location of the target line at @dest_offset and
@srcs are the input lines for @dest_offset, as obtained with
gst_video_scaler_get_info().
a #GstVideoScaler
a #GstVideoFormat for @srcs and @dest
source pixels lines
destination pixels
the vertical destination offset
the number of pixels to scale
Make a new @method video scaler. @in_size source lines/pixels will
be scaled to @out_size destination lines/pixels.
@n_taps specifies the amount of pixels to use from the source for one output
pixel. If n_taps is 0, this function chooses a good value automatically based
on the @method and @in_size/@out_size.
a new #GstVideoScaler
a #GstVideoResamplerMethod
#GstVideoScalerFlags
number of taps to use
number of source elements
number of destination elements
extra options
Different scale flags.
no flags
Set up a scaler for interlaced content
Provides useful functions and a base class for video sinks.
GstVideoSink will configure the default base sink to drop frames that
arrive later than 20ms as this is considered the default threshold for
observing out-of-sync frames.
Takes the @src rectangle and positions it at the center of the @dst rectangle,
with or without @scaling. It handles clipping if the @src rectangle is bigger
than the @dst one and @scaling is set to FALSE.
the #GstVideoRectangle describing the source area
the #GstVideoRectangle describing the destination area
a pointer to a #GstVideoRectangle which will receive the result area
a #gboolean indicating if scaling should be applied or not
Whether to show video frames during preroll. If set to %FALSE, video
frames will only be rendered in PLAYING state.
video width (derived class needs to set this)
video height (derived class needs to set this)
The video sink class structure. Derived classes should override the
@show_frame virtual function.
the parent class structure
Enum value describing the available tiling modes.
Unknown or unset tile mode
Every four adjacent blocks - two
horizontally and two vertically are grouped together and are located
in memory in Z or flipped Z order. In case of odd rows, the last row
of blocks is arranged in linear order.
Enum value describing the most common tiling types.
Tiles are indexed. Use
gst_video_tile_get_index () to retrieve the tile at the requested
coordinates.
@field_count must be 0 for progressive video and 1 or 2 for interlaced.
A representation of a SMPTE time code.
@hours must be positive and less than 24. Will wrap around otherwise.
@minutes and @seconds must be positive and less than 60.
@frames must be less than or equal to @config.fps_n / @config.fps_d.
These values are *NOT* automatically normalized.
the corresponding #GstVideoTimeCodeConfig
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
@field_count is 0 for progressive, 1 or 2 for interlaced.
@latest_daily_jam reference is stolen from the caller.
a new #GstVideoTimeCode with the given values.
The values are not checked for being in a valid range. To see if your
timecode actually has valid content, use #gst_video_time_code_is_valid.
Numerator of the frame rate
Denominator of the frame rate
The latest daily jam of the #GstVideoTimeCode
#GstVideoTimeCodeFlags
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
a new empty #GstVideoTimeCode
The resulting config->latest_daily_jam is set to
midnight, and timecode is set to the given time.
the #GstVideoTimeCode representation of @dt.
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
a new #GstVideoTimeCode from the given string
The string that represents the #GstVideoTimeCode
Adds or subtracts @frames frames to @tc. @tc needs to
contain valid data, as verified by #gst_video_time_code_is_valid.
a valid #GstVideoTimeCode
How many frames to add or subtract
This makes a component-wise addition of @tc_inter to @tc. For example,
adding ("01:02:03:04", "00:01:00:00") will return "01:03:03:04".
When it comes to drop-frame timecodes,
adding ("00:00:00;00", "00:01:00:00") will return "00:01:00;02"
because of drop-frame oddities. However,
adding ("00:09:00;02", "00:01:00:00") will return "00:10:00;00"
because this time we can have an exact minute.
A new #GstVideoTimeCode with @tc_inter added.
The #GstVideoTimeCode where the diff should be added. This
must contain valid timecode values.
The #GstVideoTimeCodeInterval to add to @tc.
The interval must contain valid values, except that for drop-frame
timecode, it may also contain timecodes which would normally
be dropped. These are then corrected to the next reasonable timecode.
Initializes @tc with empty/zero/NULL values.
a #GstVideoTimeCode
Compares @tc1 and @tc2. If both have latest daily jam information, it is
taken into account. Otherwise, it is assumed that the daily jam of both
@tc1 and @tc2 was at the same time. Both time codes must be valid.
1 if @tc1 is after @tc2, -1 if @tc1 is before @tc2, 0 otherwise.
a #GstVideoTimeCode
another #GstVideoTimeCode
a new #GstVideoTimeCode with the same values as @tc.
a #GstVideoTimeCode
how many frames have passed since the daily jam of @tc.
a valid #GstVideoTimeCode
Frees @tc.
a #GstVideoTimeCode
Adds one frame to @tc.
a valid #GstVideoTimeCode
@field_count is 0 for progressive, 1 or 2 for interlaced.
@latest_daily_jam reference is stolen from the caller.
Initializes @tc with the given values.
The values are not checked for being in a valid range. To see if your
timecode actually has valid content, use #gst_video_time_code_is_valid.
a #GstVideoTimeCode
Numerator of the frame rate
Denominator of the frame rate
The latest daily jam of the #GstVideoTimeCode
#GstVideoTimeCodeFlags
the hours field of #GstVideoTimeCode
the minutes field of #GstVideoTimeCode
the seconds field of #GstVideoTimeCode
the frames field of #GstVideoTimeCode
Interlaced video field count
The resulting config->latest_daily_jam is set to
midnight, and timecode is set to the given time.
a #GstVideoTimeCode
Numerator of the frame rate
Denominator of the frame rate
#GDateTime to convert
#GstVideoTimeCodeFlags
Interlaced video field count
whether @tc is a valid timecode (supported frame rate,
hours/minutes/seconds/frames not overflowing)
#GstVideoTimeCode to check
how many nsec have passed since the daily jam of @tc.
a valid #GstVideoTimeCode
The @tc.config->latest_daily_jam is required to be non-NULL.
the #GDateTime representation of @tc.
A valid #GstVideoTimeCode to convert
the SMPTE ST 2059-1:2015 string representation of @tc. That will
take the form hh:mm:ss:ff. The last separator (between seconds and frames)
may vary:
';' for drop-frame, non-interlaced content and for drop-frame interlaced
field 2
',' for drop-frame interlaced field 1
':' for non-drop-frame, non-interlaced content and for non-drop-frame
interlaced field 2
'.' for non-drop-frame interlaced field 1
#GstVideoTimeCode to convert
Supported frame rates: 30000/1001, 60000/1001 (both with and without drop
frame), and integer frame rates e.g. 25/1, 30/1, 50/1, 60/1.
The configuration of the time code.
Numerator of the frame rate
Denominator of the frame rate
the corresponding #GstVideoTimeCodeFlags
The latest daily jam information, if present, or NULL
Flags related to the time code information.
For drop frame, only 30000/1001 and 60000/1001 frame rates are supported.
No flags
Whether we have drop frame rate
Whether we have interlaced video
A representation of a difference between two #GstVideoTimeCode instances.
Will not necessarily correspond to a real timecode (e.g. 00:00:10;00)
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
a new #GstVideoTimeCodeInterval with the given values.
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
@tc_inter_str must only have ":" as separators.
a new #GstVideoTimeCodeInterval from the given string
The string that represents the #GstVideoTimeCodeInterval
Initializes @tc with empty/zero/NULL values.
a #GstVideoTimeCodeInterval
a new #GstVideoTimeCodeInterval with the same values as @tc.
a #GstVideoTimeCodeInterval
Frees @tc.
a #GstVideoTimeCodeInterval
Initializes @tc with the given values.
a #GstVideoTimeCodeInterval
the hours field of #GstVideoTimeCodeInterval
the minutes field of #GstVideoTimeCodeInterval
the seconds field of #GstVideoTimeCodeInterval
the frames field of #GstVideoTimeCodeInterval
Extra buffer metadata describing the GstVideoTimeCode of the frame.
Each frame is assumed to have its own timecode, i.e. they are not
automatically incremented/interpolated.
parent #GstMeta
the GstVideoTimeCode to attach
The video transfer function defines the formula for converting between
non-linear RGB (R'G'B') and linear RGB.
unknown transfer function
linear RGB, gamma 1.0 curve
Gamma 1.8 curve
Gamma 2.0 curve
Gamma 2.2 curve
Gamma 2.2 curve with a linear segment in the lower
range
Gamma 2.2 curve with a linear segment in the
lower range
Gamma 2.4 curve with a linear segment in the lower
range
Gamma 2.8 curve
Logarithmic transfer characteristic
100:1 range
Logarithmic transfer characteristic
316.22777:1 range
Gamma 2.2 curve with a linear segment in the lower
range. Used for BT.2020 with 12 bits per
component. Since: 1.6
Gamma 2.19921875. Since: 1.8
Attaches GstVideoAffineTransformationMeta metadata to @buffer with
the given parameters.
the #GstVideoAffineTransformationMeta on @buffer.
a #GstBuffer
Attaches GstVideoGLTextureUploadMeta metadata to @buffer with the given
parameters.
the #GstVideoGLTextureUploadMeta on @buffer.
a #GstBuffer
the #GstVideoGLTextureOrientation
the number of textures
array of #GstVideoGLTextureType
the function to upload the buffer to a specific texture ID
user data for the implementor of @upload
function to copy @user_data
function to free @user_data
Attaches GstVideoMeta metadata to @buffer with the given parameters and the
default offsets and strides for @format and @width x @height.
This function calculates the default offsets and strides and then calls
gst_buffer_add_video_meta_full() with them.
the #GstVideoMeta on @buffer.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
Attaches GstVideoMeta metadata to @buffer with the given parameters.
the #GstVideoMeta on @buffer.
a #GstBuffer
#GstVideoFrameFlags
a #GstVideoFormat
the width
the height
number of planes
offset of each plane
stride of each plane
Sets an overlay composition on a buffer. The buffer will obtain its own
reference to the composition, meaning this function does not take ownership
of @comp.
a #GstVideoOverlayCompositionMeta
a #GstBuffer
a #GstVideoOverlayComposition
Attaches #GstVideoRegionOfInterestMeta metadata to @buffer with the given
parameters.
the #GstVideoRegionOfInterestMeta on @buffer.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
Attaches #GstVideoRegionOfInterestMeta metadata to @buffer with the given
parameters.
the #GstVideoRegionOfInterestMeta on @buffer.
a #GstBuffer
Type of the region of interest (e.g. "face")
X position
Y position
width
height
Attaches #GstVideoTimeCodeMeta metadata to @buffer with the given
parameters.
the #GstVideoTimeCodeMeta on @buffer.
a #GstBuffer
a #GstVideoTimeCode
Attaches #GstVideoTimeCodeMeta metadata to @buffer with the given
parameters.
the #GstVideoTimeCodeMeta on @buffer.
a #GstBuffer
framerate numerator
framerate denominator
a #GDateTime for the latest daily jam
a #GstVideoTimeCodeFlags
hours since the daily jam
minutes since the daily jam
seconds since the daily jam
frames since the daily jam
fields since the daily jam
Find the #GstVideoMeta on @buffer with the lowest @id.
Buffers can contain multiple #GstVideoMeta metadata items when dealing with
multiview buffers.
the #GstVideoMeta with lowest id (usually 0) or %NULL when there
is no such metadata on @buffer.
a #GstBuffer
Find the #GstVideoMeta on @buffer with the given @id.
Buffers can contain multiple #GstVideoMeta metadata items when dealing with
multiview buffers.
the #GstVideoMeta with @id or %NULL when there is no such metadata
on @buffer.
a #GstBuffer
a metadata id
Find the #GstVideoRegionOfInterestMeta on @buffer with the given @id.
Buffers can contain multiple #GstVideoRegionOfInterestMeta metadata items if
multiple regions of interests are marked on a frame.
the #GstVideoRegionOfInterestMeta with @id or %NULL when there is
no such metadata on @buffer.
a #GstBuffer
a metadata id
Get the video alignment from the bufferpool configuration @config
in @align.
%TRUE if @config could be parsed correctly.
a #GstStructure
a #GstVideoAlignment
Set the video alignment @align on the bufferpool configuration
@config.
a #GstStructure
a #GstVideoAlignment
Convenience function to check if the given message is a
"prepare-window-handle" message from a #GstVideoOverlay.
whether @msg is a "prepare-window-handle" message
a #GstMessage
Inspect a #GstEvent and return the #GstNavigationEventType of the event, or
#GST_NAVIGATION_EVENT_INVALID if the event is not a #GstNavigation event.
A #GstEvent to inspect.
Inspect a #GstNavigation command event and retrieve the enum value of the
associated command.
TRUE if the navigation command could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a #GstNavigationCommand to receive the
command associated with the event.
A #GstEvent to inspect.
A pointer to a location to receive
the string identifying the key press. The returned string is owned by the
event, and valid only until the event is unreffed.
Retrieve the details of either a #GstNavigation mouse button press event or
a mouse button release event. Determine which type the event is using
gst_navigation_event_get_type() to retrieve the #GstNavigationEventType.
TRUE if the button number and both coordinates could be extracted,
otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gint that will receive the button
number associated with the event.
Pointer to a gdouble to receive the x coordinate of the
mouse button event.
Pointer to a gdouble to receive the y coordinate of the
mouse button event.
Inspect a #GstNavigation mouse movement event and extract the coordinates
of the event.
TRUE if both coordinates could be extracted, otherwise FALSE.
A #GstEvent to inspect.
Pointer to a gdouble to receive the x coordinate of the
mouse movement.
Pointer to a gdouble to receive the y coordinate of the
mouse movement.
Check a bus message to see if it is a #GstNavigation event, and return
the #GstNavigationMessageType identifying the type of the message if so.
The type of the #GstMessage, or
#GST_NAVIGATION_MESSAGE_INVALID if the message is not a #GstNavigation
notification.
A #GstMessage to inspect.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_ANGLES_CHANGED for notifying an application
that the current angle, or current number of angles available in a
multiangle video has changed.
The new #GstMessage.
A #GstObject to set as source of the new message.
The currently selected angle.
The number of viewing angles now available.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_COMMANDS_CHANGED
The new #GstMessage.
A #GstObject to set as source of the new message.
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_EVENT.
The new #GstMessage.
A #GstObject to set as source of the new message.
A navigation #GstEvent
Creates a new #GstNavigation message with type
#GST_NAVIGATION_MESSAGE_MOUSE_OVER.
The new #GstMessage.
A #GstObject to set as source of the new message.
%TRUE if the mouse has entered a clickable area of the display.
%FALSE if it is over a non-clickable area.
Parse a #GstNavigation message of type GST_NAVIGATION_MESSAGE_ANGLES_CHANGED
and extract the @cur_angle and @n_angles parameters.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a #guint to receive the new
current angle number, or NULL
A pointer to a #guint to receive the new angle
count, or NULL.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_EVENT
and extract contained #GstEvent. The caller must unref the @event when done
with it.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
a pointer to a #GstEvent to receive
the contained navigation event.
Parse a #GstNavigation message of type #GST_NAVIGATION_MESSAGE_MOUSE_OVER
and extract the active/inactive flag. If the mouse over event is marked
active, it indicates that the mouse is over a clickable area.
%TRUE if the message could be successfully parsed. %FALSE if not.
A #GstMessage to inspect.
A pointer to a gboolean to receive the
active/inactive state, or NULL.
Inspect a #GstQuery and return the #GstNavigationQueryType associated with
it if it is a #GstNavigation query.
The #GstNavigationQueryType of the query, or
#GST_NAVIGATION_QUERY_INVALID
The query to inspect
Create a new #GstNavigation angles query. When executed, it will
query the pipeline for the set of currently available angles, which may be
greater than one in a multiangle video.
The new query.
Create a new #GstNavigation commands query. When executed, it will
query the pipeline for the set of currently available commands.
The new query.
Parse the current angle number in the #GstNavigation angles @query into the
#guint pointed to by the @cur_angle variable, and the number of available
angles into the #guint pointed to by the @n_angles variable.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
Pointer to a #guint into which to store the
currently selected angle value from the query, or NULL
Pointer to a #guint into which to store the
number of angles value from the query, or NULL
Parse the number of commands in the #GstNavigation commands @query.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the number of commands in this query.
Parse the #GstNavigation command query and retrieve the @nth command from
it into @cmd. If the list contains fewer elements than @nth, @cmd will be
set to #GST_NAVIGATION_COMMAND_INVALID.
%TRUE if the query could be successfully parsed. %FALSE if not.
a #GstQuery
the nth command to retrieve.
a pointer to store the nth command into.
Set the #GstNavigation angles query result field in @query.
a #GstQuery
the current viewing angle to set.
the number of viewing angles to set.
Set the #GstNavigation command query result fields in @query. The number
of commands passed must be equal to @n_commands.
a #GstQuery
the number of commands to set.
An array containing @n_commands
#GstNavigationCommand values.
Lets you blend the @src image into the @dest image
The #GstVideoFrame into which to blend @src
the #GstVideoFrame that we want to blend into @dest
The x offset in pixel where the @src image should be blended
the y offset in pixel where the @src image should be blended
the global alpha value that each per-pixel alpha value is multiplied
with
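To make the alpha arithmetic concrete, a single RGBA pixel could be composited like this (a simplified sketch only; `blend_pixel_over` is a made-up helper, and the real gst_video_blend() operates on whole frames and handles formats and clipping):

```c
#include <stdint.h>

/* Hypothetical helper: blend one straight-alpha source pixel over a
 * destination pixel. global_alpha scales the per-pixel source alpha,
 * as described above. */
static void
blend_pixel_over (uint8_t dest[4], const uint8_t src[4], float global_alpha)
{
  float a = (src[3] / 255.0f) * global_alpha;   /* effective source alpha */
  for (int i = 0; i < 3; i++)
    dest[i] = (uint8_t) (src[i] * a + dest[i] * (1.0f - a) + 0.5f);
  /* destination alpha is left untouched in this sketch */
}
```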
Scales a buffer containing RGBA (or AYUV) video. This is an internal
helper function which is used to scale subtitle overlays, and may be
deprecated in the near future. Use #GstVideoScaler to scale video buffers
instead.
the #GstVideoInfo describing the video data in @src_buffer
the source buffer containing video pixels to scale
the height in pixels to scale the video data in @src_buffer to
the width in pixels to scale the video data in @src_buffer to
pointer to a #GstVideoInfo structure that will be filled in
with the details for @dest_buffer
a pointer to a #GstBuffer variable, which will be
set to a newly-allocated buffer containing the scaled pixels.
Given the Pixel Aspect Ratio and size of an input video frame, and the
pixel aspect ratio of the intended display device, calculates the actual
display ratio the video will be rendered with.
A boolean indicating success and a calculated Display Ratio in the
dar_n and dar_d parameters.
The return value is %FALSE in the case of integer overflow or other error.
Numerator of the calculated display_ratio
Denominator of the calculated display_ratio
Width of the video frame in pixels
Height of the video frame in pixels
Numerator of the pixel aspect ratio of the input video.
Denominator of the pixel aspect ratio of the input video.
Numerator of the pixel aspect ratio of the display device
Denominator of the pixel aspect ratio of the display device
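The underlying arithmetic is display ratio = (width / height) × (video PAR) / (display PAR), reduced to lowest terms. A plain-C sketch, without the 64-bit overflow checks the real function performs (`calc_display_ratio` is a made-up name):

```c
static unsigned
gcd_u (unsigned a, unsigned b)
{
  while (b) {
    unsigned t = a % b;
    a = b;
    b = t;
  }
  return a;
}

/* display ratio = (w * par_n * display_par_d) : (h * par_d * display_par_n) */
static void
calc_display_ratio (unsigned *dar_n, unsigned *dar_d,
    unsigned w, unsigned h, unsigned par_n, unsigned par_d,
    unsigned display_par_n, unsigned display_par_d)
{
  unsigned n = w * par_n * display_par_d;
  unsigned d = h * par_d * display_par_n;
  unsigned g = gcd_u (n, d);
  *dar_n = n / g;
  *dar_d = d / g;
}
```

For example, 720x576 video with a 16:15 pixel aspect ratio shown on square pixels comes out as a 4:3 display ratio.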
Convert @s to a #GstVideoChromaSite
a #GstVideoChromaSite or %GST_VIDEO_CHROMA_SITE_UNKNOWN when @s does
not contain a valid chroma description.
a chromasite string
Perform resampling of @width chroma pixels in @lines.
a #GstVideoChromaResample
pixel lines
the number of pixels on one line
Create a new resampler object for the given parameters. When @h_factor or
@v_factor is > 0, upsampling will be used, otherwise subsampling is
performed.
a new #GstVideoChromaResample that should be freed with
gst_video_chroma_resample_free() after usage.
a #GstVideoChromaMethod
a #GstVideoChromaSite
#GstVideoChromaFlags
the #GstVideoFormat
horizontal resampling factor
vertical resampling factor
Converts @site to its string representation.
a string describing @site.
a #GstVideoChromaSite
Get the coefficients used to convert between Y'PbPr and R'G'B' using @matrix.
When:
|[
(0.0 <= [Y',R',G',B'] <= 1.0)
(-0.5 <= [Pb,Pr] <= 0.5)
]|
the general conversion is given by:
|[
Y' = Kr*R' + (1-Kr-Kb)*G' + Kb*B'
Pb = (B'-Y')/(2*(1-Kb))
Pr = (R'-Y')/(2*(1-Kr))
]|
and the other way around:
|[
R' = Y' + Cr*2*(1-Kr)
G' = Y' - Cb*2*(1-Kb)*Kb/(1-Kr-Kb) - Cr*2*(1-Kr)*Kr/(1-Kr-Kb)
B' = Y' + Cb*2*(1-Kb)
]|
%TRUE if @matrix was a YUV color format and @Kr and @Kb contain valid
values.
a #GstVideoColorMatrix
result red channel coefficient
result blue channel coefficient
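Plugging in concrete coefficients makes the formulas above tangible. This sketch hard-codes the BT.709 values Kr = 0.2126 and Kb = 0.0722 (the constants gst_video_color_matrix_get_Kr_Kb() would yield for that matrix; other matrices give other values):

```c
#include <math.h>

#define KR 0.2126  /* BT.709 red coefficient */
#define KB 0.0722  /* BT.709 blue coefficient */

/* R'G'B' -> Y'PbPr using the general conversion given above */
static void
rgb_to_ypbpr (double R, double G, double B, double *Y, double *Pb, double *Pr)
{
  *Y = KR * R + (1 - KR - KB) * G + KB * B;
  *Pb = (B - *Y) / (2 * (1 - KB));
  *Pr = (R - *Y) / (2 * (1 - KR));
}
```

White (1,1,1) maps to Y' = 1 with zero chroma, and pure red maps to Pr = 0.5, matching the value ranges stated above.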
Get information about the chromaticity coordinates of @primaries.
a #GstVideoColorPrimariesInfo for @primaries.
a #GstVideoColorPrimaries
Compute the offset and scale values for each component of @info. For each
component, (c[i] - offset[i]) / scale[i] will scale the component c[i] to the
range [0.0 .. 1.0].
The reverse operation (c[i] * scale[i]) + offset[i] can be used to convert
the component values in range [0.0 .. 1.0] back to their representation in
@info and @range.
a #GstVideoColorRange
a #GstVideoFormatInfo
output offsets
output scale
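As an illustration, for 8-bit video-range (limited-range) luma the well-known values are offset 16 and scale 219, so the coded range [16..235] maps onto [0.0..1.0]. The helper names below are made up; the real function fills per-component arrays:

```c
/* Map a coded component value to [0.0 .. 1.0] and back, as described above. */
static double
normalize_component (int c, int offset, int scale)
{
  return (double) (c - offset) / scale;
}

static int
denormalize_component (double v, int offset, int scale)
{
  return (int) (v * scale + offset + 0.5);  /* round to nearest */
}
```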
Convert @val to its gamma decoded value. This is the inverse operation of
gst_video_color_transfer_encode().
For a non-linear value L' in the range [0..1], conversion to the linear
L is in general performed with a power function like:
|[
L = L' ^ gamma
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma-decoded value of @val
a #GstVideoTransferFunction
a value
Convert @val to its gamma encoded value.
For a linear value L in the range [0..1], conversion to the non-linear
(gamma encoded) L' is in general performed with a power function like:
|[
L' = L ^ (1 / gamma)
]|
Depending on @func, different formulas might be applied. Some formulas
encode a linear segment in the lower range.
the gamma-encoded value of @val
a #GstVideoTransferFunction
a value
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video format or any image format (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
The converted #GstSample, or %NULL if an error happened (in which case @err
will point to the #GError).
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
Converts a raw video buffer into the specified output caps.
The output caps can be any raw video format or any image format (jpeg, png, ...).
The width, height and pixel-aspect-ratio can also be specified in the output caps.
@callback will be called after conversion, when an error occurred or if conversion didn't
finish after @timeout. @callback will always be called from the thread default
#GMainContext, see g_main_context_get_thread_default(). If GLib before 2.22 is used,
this will always be the global default main context.
@destroy_notify will be called after the callback was called and @user_data is not needed
anymore.
a #GstSample
the #GstCaps to convert to
the maximum amount of time allowed for the processing.
#GstVideoConvertSampleCallback that will be called after conversion.
extra data that will be passed to the @callback
#GDestroyNotify to be called after @user_data is not needed anymore
Create a new converter object to convert between @in_info and @out_info
with @config.
a #GstVideoConverter or %NULL if conversion is not possible.
a #GstVideoInfo
a #GstVideoInfo
a #GstStructure with configuration options
Make a new dither object for dithering lines of @format using the
algorithm described by @method.
Each component will be quantized to a multiple of @quantizer. Better
performance is achieved when @quantizer is a power of 2.
@width is the width of the lines that this ditherer will handle.
a new #GstVideoDither
a #GstVideoDitherMethod
a #GstVideoDitherFlags
a #GstVideoFormat
quantizer
the width of the lines
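The quantization step itself is simple; the power-of-2 remark above comes from the fact that dividing by a power of two reduces to bit masking (a sketch only, unrelated to the internal #GstVideoDither state):

```c
#include <stdint.h>

/* Quantize v down to a multiple of q. */
static uint32_t
quantize (uint32_t v, uint32_t q)
{
  return (v / q) * q;
}

/* Same result when q is a power of two, using a mask instead of a divide,
 * which is why power-of-2 quantizers are faster. */
static uint32_t
quantize_pow2 (uint32_t v, uint32_t q)
{
  return v & ~(q - 1);
}
```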
Checks if an event is a force key unit event. Returns true for both upstream
and downstream force key unit events.
%TRUE if the event is a valid force key unit event
A #GstEvent to check
Creates a new downstream force key unit event. A downstream force key unit
event can be sent down the pipeline to request downstream elements to produce
a key unit. A downstream force key unit event must also be sent when handling
an upstream force key unit event to notify downstream that the latter has been
handled.
To parse an event created by gst_video_event_new_downstream_force_key_unit() use
gst_video_event_parse_downstream_force_key_unit().
The new GstEvent
the timestamp of the buffer that starts a new key unit
the stream_time of the buffer that starts a new key unit
the running_time of the buffer that starts a new key unit
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
Creates a new Still Frame event. If @in_still is %TRUE, then the event
represents the start of a still frame sequence. If it is %FALSE, then
the event ends a still frame sequence.
To parse an event created by gst_video_event_new_still_frame() use
gst_video_event_parse_still_frame().
The new GstEvent
boolean value for the still-frame state of the event.
Creates a new upstream force key unit event. An upstream force key unit event
can be sent to request upstream elements to produce a key unit.
@running_time can be set to request a new key unit at a specific
running_time. If set to #GST_CLOCK_TIME_NONE, upstream elements will produce a
new key unit as soon as possible.
To parse an event created by gst_video_event_new_upstream_force_key_unit() use
gst_video_event_parse_upstream_force_key_unit().
The new GstEvent
the running_time at which a new key unit should be produced
%TRUE to produce headers when starting a new key unit
integer that can be used to number key units
Get timestamp, stream-time, running-time, all-headers and count in the force
key unit event. See gst_video_event_new_downstream_force_key_unit() for a
full description of the downstream force key unit event.
@running_time will be adjusted for any pad offsets of pads it was passing through.
%TRUE if the event is a valid downstream force key unit event.
A #GstEvent to parse
A pointer to the timestamp in the event
A pointer to the stream-time in the event
A pointer to the running-time in the event
A pointer to the all_headers flag in the event
A pointer to the count field of the event
Parse a #GstEvent, identify if it is a Still Frame event, and
return the still-frame state from the event if it is.
If the event represents the start of a still frame, the @in_still
variable will be set to %TRUE, otherwise %FALSE. It is OK to pass %NULL for the
@in_still variable in order to just check whether the event is a valid still-frame
event.
Create a still frame event using gst_video_event_new_still_frame()
%TRUE if the event is a valid still-frame event. %FALSE if not
A #GstEvent to parse
A boolean to receive the still-frame status from the event, or NULL
Get running-time, all-headers and count in the force key unit event. See
gst_video_event_new_upstream_force_key_unit() for a full description of the
upstream force key unit event.
Create an upstream force key unit event using gst_video_event_new_upstream_force_key_unit()
@running_time will be adjusted for any pad offsets of pads it was passing through.
%TRUE if the event is a valid upstream force-key-unit event. %FALSE if not
A #GstEvent to parse
A pointer to the running_time in the event
A pointer to the all_headers flag in the event
A pointer to the count field in the event
Convert @order to a #GstVideoFieldOrder
the #GstVideoFieldOrder of @order or
#GST_VIDEO_FIELD_ORDER_UNKNOWN when @order is not a valid
string representation for a #GstVideoFieldOrder.
a field order
Convert @order to its string representation.
@order as a string or %NULL if @order is invalid.
a #GstVideoFieldOrder
Converts a FOURCC value into the corresponding #GstVideoFormat.
If the FOURCC cannot be represented by #GstVideoFormat,
#GST_VIDEO_FORMAT_UNKNOWN is returned.
the #GstVideoFormat describing the FOURCC value
a FOURCC value representing raw YUV video
Find the #GstVideoFormat for the given parameters.
a #GstVideoFormat or #GST_VIDEO_FORMAT_UNKNOWN when the parameters do
not specify a known format.
the amount of bits used for a pixel
the amount of bits used to store a pixel. This value is at least as big as
@depth
the endianness of the masks, %G_LITTLE_ENDIAN or %G_BIG_ENDIAN
the red mask
the green mask
the blue mask
the alpha mask, or 0 if no alpha mask
Convert the @format string to its #GstVideoFormat.
the #GstVideoFormat for @format or #GST_VIDEO_FORMAT_UNKNOWN when the
string is not a known format.
a format string
Get the #GstVideoFormatInfo for @format
The #GstVideoFormatInfo for @format.
a #GstVideoFormat
Get the default palette of @format. This is the palette used in the pack
function for paletted formats.
the default palette of @format or %NULL when
@format does not have a palette.
a #GstVideoFormat
size of the palette in bytes
Converts a #GstVideoFormat value into the corresponding FOURCC. Only
a few YUV formats have corresponding FOURCC values. If @format has
no corresponding FOURCC value, 0 is returned.
the FOURCC corresponding to @format
a #GstVideoFormat video format
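A FOURCC is simply four ASCII characters packed into a 32-bit value, first character in the least significant byte; GStreamer's GST_MAKE_FOURCC macro uses this layout. A sketch of the packing:

```c
#include <stdint.h>

/* Pack four characters into a FOURCC code, first character in the
 * lowest byte (the same layout as GST_MAKE_FOURCC). */
static uint32_t
make_fourcc (char a, char b, char c, char d)
{
  return (uint32_t) (uint8_t) a |
      ((uint32_t) (uint8_t) b << 8) |
      ((uint32_t) (uint8_t) c << 16) |
      ((uint32_t) (uint8_t) d << 24);
}
```

'I420' packs to 0x30323449, for example.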
Returns a string containing a descriptive name for
the #GstVideoFormat if there is one, or NULL otherwise.
the name corresponding to @format
a #GstVideoFormat video format
Given the nominal duration of one video frame,
this function will check some standard framerates for
a close match (within 0.1%) and return one if possible.
It will calculate an arbitrary framerate if no close
match was found, and return %FALSE.
It returns %FALSE if a duration of 0 is passed.
%TRUE if a close "standard" framerate was
recognised, and %FALSE otherwise.
Nominal duration of one frame
Numerator of the calculated framerate
Denominator of the calculated framerate
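The matching logic can be sketched as follows (a simplified stand-in: `guess_framerate` is a made-up name, the table lists only a few common rates, and the real function works with GStreamer fractions rather than doubles):

```c
/* Guess a framerate from a frame duration in nanoseconds. Returns 1 when a
 * standard rate matched within 0.1%, 0 otherwise (including duration == 0). */
static int
guess_framerate (unsigned long long duration_ns, int *fps_n, int *fps_d)
{
  static const int std_rates[][2] = {
    {24, 1}, {25, 1}, {30, 1}, {50, 1}, {60, 1},
    {24000, 1001}, {30000, 1001}, {60000, 1001},
  };
  if (duration_ns == 0)
    return 0;
  for (int i = 0; i < (int) (sizeof (std_rates) / sizeof (std_rates[0])); i++) {
    double d = 1e9 * std_rates[i][1] / std_rates[i][0];
    if (d >= duration_ns * 0.999 && d <= duration_ns * 1.001) {
      *fps_n = std_rates[i][0];
      *fps_d = std_rates[i][1];
      return 1;
    }
  }
  *fps_n = (int) (1e9 / duration_ns);  /* arbitrary fallback framerate */
  *fps_d = 1;
  return 0;
}
```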
Convert @mode to a #GstVideoInterlaceMode
the #GstVideoInterlaceMode of @mode or
#GST_VIDEO_INTERLACE_MODE_PROGRESSIVE when @mode is not a valid
string representation for a #GstVideoInterlaceMode.
a mode
Convert @mode to its string representation.
@mode as a string or %NULL if @mode is invalid.
a #GstVideoInterlaceMode
Get the #GQuark for the "gst-video-scale" metadata transform operation.
a #GQuark
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed stereo
video modes with double the height of a single view for use in
caps negotiations. Currently this is top-bottom and row-interleaved.
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed
stereo video modes that have double the width/height of a single
view for use in caps negotiation. Currently this is just
'checkerboard' layout.
A const #GValue containing a list of stereo video modes
Utility function that returns a #GValue with a GstList of packed stereo
video modes with double the width of a single view for use in
caps negotiations. Currently this is side-by-side, side-by-side-quincunx
and column-interleaved.
A const #GValue containing a list of mono video modes
Utility function that returns a #GValue with a GstList of mono video
modes (mono/left/right) for use in caps negotiations.
A const #GValue containing a list of 'unpacked' stereo video modes
Utility function that returns a #GValue with a GstList of unpacked
stereo video modes (separated/frame-by-frame/frame-by-frame-multiview)
for use in caps negotiations.
A boolean indicating whether the
#GST_VIDEO_MULTIVIEW_FLAG_HALF_ASPECT flag should be set.
Utility function that heuristically guesses whether a
frame-packed stereoscopic video contains half width/height
encoded views, or full-frame views by looking at the
overall display aspect ratio.
A #GstVideoMultiviewMode
Video frame width in pixels
Video frame height in pixels
Numerator of the video pixel-aspect-ratio
Denominator of the video pixel-aspect-ratio
The #GstVideoMultiviewMode value
Given a string from a caps multiview-mode field,
output the corresponding #GstVideoMultiviewMode
or #GST_VIDEO_MULTIVIEW_MODE_NONE
multiview-mode field string from caps
The caps string representation of the mode, or NULL if invalid.
Given a #GstVideoMultiviewMode returns the multiview-mode caps string
for insertion into a caps structure
A #GstVideoMultiviewMode value
Utility function that transforms the width/height/PAR
and multiview mode and flags of a #GstVideoInfo into
the requested mode.
A #GstVideoInfo structure to operate on
A #GstVideoMultiviewMode value
A set of #GstVideoMultiviewFlags
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will install "render-rectangle" property into the
class.
Since: 1.14
The class on which the properties will be installed
The first free property ID to use
This helper shall be used by classes implementing the #GstVideoOverlay
interface that want the render rectangle to be controllable using
properties. This helper will parse and set the render rectangle calling
gst_video_overlay_set_render_rectangle().
%TRUE if the @property_id matches the GstVideoOverlay property
Since: 1.14
The instance on which the property is set
The highest property ID.
The property ID
The #GValue to be set
Make a new @method video scaler. @in_size source lines/pixels will
be scaled to @out_size destination lines/pixels.
@n_taps specifies the amount of pixels to use from the source for one output
pixel. If n_taps is 0, this function chooses a good value automatically based
on the @method and @in_size/@out_size.
a #GstVideoScaler
a #GstVideoResamplerMethod
#GstVideoScalerFlags
number of taps to use
number of source elements
number of destination elements
extra options
Get the tile index of the tile at coordinates @x and @y in the tiled
image of @x_tiles by @y_tiles.
Use this method when @mode is of type %GST_VIDEO_TILE_MODE_INDEXED.
the index of the tile at @x and @y in the tiled image of
@x_tiles by @y_tiles.
a #GstVideoTileMode
x coordinate
y coordinate
number of horizontal tiles
number of vertical tiles
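For a plain row-major indexed layout the computation reduces to the following (a sketch only; other tile modes such as the 2x2 Z-flipZ layout interleave tiles differently, which is why the @mode parameter exists):

```c
/* Index of the tile at (x, y), assuming simple row-major ordering of
 * x_tiles by y_tiles tiles. */
static unsigned
tile_index_linear (unsigned x, unsigned y, unsigned x_tiles)
{
  return y * x_tiles + x;
}
```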