The color matrix is used to convert between Y'PbPr and non-linear RGB (R'G'B').
unknown matrix
identity matrix
FCC color matrix
ITU-R BT.709 color matrix
ITU-R BT.601 color matrix
SMPTE 240M color matrix
ITU-R BT.2020 color matrix. Since: 1.6
The color primaries define how to transform linear RGB values to and from the CIE XYZ colorspace.
unknown color primaries
BT709 primaries
BT470M primaries
BT470BG primaries
SMPTE170M primaries
SMPTE240M primaries
Generic film
BT2020 primaries. Since: 1.6
Adobe RGB primaries. Since: 1.8
Possible color range values. These constants are defined for 8 bit color values and can be scaled for other bit depths.
unknown range
[0..255] for 8 bit components
[16..235] for 8 bit components. Chroma has [16..240] range.
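As a rough illustration, the 8-bit constants scale by 2^(n-8) for an n-bit depth, so [16..235] becomes [64..940] at 10 bits. A minimal C sketch (using the C API, as the other examples in this document do) that obtains the scaled offsets for a 10-bit format with gst_video_color_range_offsets():
  gint offset[GST_VIDEO_MAX_COMPONENTS];
  gint scale[GST_VIDEO_MAX_COMPONENTS];
  const GstVideoFormatInfo *finfo =
      gst_video_format_get_info (GST_VIDEO_FORMAT_I420_10LE);

  // fills 'offset' and 'scale' with the component offsets and value ranges
  // for the given range, already scaled to the format's bit depth
  gst_video_color_range_offsets (GST_VIDEO_COLOR_RANGE_16_235, finfo,
      offset, scale);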
Structure describing the color info.
Parse the colorimetry string and update self with the parsed values.
color
a colorimetry string
Returns
true if color points to valid colorimetry info.
Compare the two colorimetry sets for equality.
other
another VideoColorimetry
Returns
true if self and other are equal.
Check if the colorimetry information in info matches that of the string color.
color
a colorimetry string
Returns
true if color conveys the same colorimetry info as the color information in info.
Make a string representation of self.
Returns
a string representation of self.
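To illustrate the methods above, here is a minimal C sketch that parses a colorimetry string, compares it against another string and serializes it back (error handling elided):
  GstVideoColorimetry cinfo;

  if (gst_video_colorimetry_from_string (&cinfo, "bt709")) {
    // matches() compares against a string without a second parse step
    if (gst_video_colorimetry_matches (&cinfo, "bt709")) {
      gchar *s = gst_video_colorimetry_to_string (&cinfo);
      g_print ("parsed colorimetry: %s\n", s);
      g_free (s);
    }
  }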
Field order of interlaced content. This is only valid for interlace-mode=interleaved and not interlace-mode=mixed. In the case of mixed or GST_VIDEO_FIELD_ORDER_UNKNOWN, the field order is signalled via buffer flags.
unknown field order for interlaced content. The actual field order is signalled via buffer flags.
top field is first
bottom field is first
Feature: v1_12
Provides useful functions and a base class for video filters.
The videofilter will by default enable QoS on the parent GstBaseTransform to implement frame dropping.
Implements
gst_base::BaseTransformExt, gst::ElementExt, gst::ObjectExt, glib::object::ObjectExt
Enum value describing the most common video formats.
Unknown or unset video format id
Encoded video format. Only ever use this in caps for special video formats in combination with non-system memory GstCapsFeatures where it does not make sense to specify a real video format.
planar 4:2:0 YUV
planar 4:2:0 YVU (like I420 but UV planes swapped)
packed 4:2:2 YUV (Y0-U0-Y1-V0 Y2-U2-Y3-V2 Y4 ...)
packed 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
packed 4:4:4 YUV with alpha channel (A0-Y0-U0-V0 ...)
sparse rgb packed into 32 bit, space last
sparse reverse rgb packed into 32 bit, space last
sparse rgb packed into 32 bit, space first
sparse reverse rgb packed into 32 bit, space first
rgb with alpha channel last
reverse rgb with alpha channel last
rgb with alpha channel first
reverse rgb with alpha channel first
rgb
reverse rgb
planar 4:1:1 YUV
planar 4:2:2 YUV
packed 4:2:2 YUV (Y0-V0-Y1-U0 Y2-V2-Y3-U2 Y4 ...)
planar 4:4:4 YUV
packed 4:2:2 10-bit YUV, complex format
packed 4:2:2 16-bit YUV, Y0-U0-Y1-V1 order
planar 4:2:0 YUV with interleaved UV plane
planar 4:2:0 YUV with interleaved VU plane
8-bit grayscale
16-bit grayscale, most significant byte first
16-bit grayscale, least significant byte first
packed 4:4:4 YUV (Y-U-V ...)
rgb 5-6-5 bits per component
reverse rgb 5-6-5 bits per component
rgb 5-5-5 bits per component
reverse rgb 5-5-5 bits per component
packed 10-bit 4:2:2 YUV (U0-Y0-V0-Y1 U2-Y2-V2-Y3 U4 ...)
planar 4:4:2:0 AYUV
8-bit paletted RGB
planar 4:1:0 YUV
planar 4:1:0 YUV (like YUV9 but UV planes swapped)
packed 4:1:1 YUV (Cb-Y0-Y1-Cr-Y2-Y3 ...)
rgb with alpha channel first, 16 bits per channel
packed 4:4:4 YUV with alpha channel, 16 bits per channel (A0-Y0-U0-V0 ...)
packed 4:4:4 RGB, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:0 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:2:2 YUV, 10 bits per channel
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 YUV, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 8 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:4:4 RGB, 10 bits per channel (Since: 1.2)
planar 4:2:2 YUV with interleaved UV plane (Since: 1.2)
planar 4:4:4 YUV with interleaved UV plane (Since: 1.2)
NV12 with 64x32 tiling in zigzag pattern (Since: 1.4)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:0 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:2:2 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:4:4:4 YUV, 10 bits per channel (Since: 1.6)
planar 4:2:2 YUV with interleaved VU plane (Since: 1.6)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
planar 4:2:0 YUV with interleaved UV plane, 10 bits per channel (Since: 1.10)
packed 4:4:4 YUV (U-Y-V ...) (Since: 1.10)
packed 4:2:2 YUV (V0-Y0-U0-Y1 V2-Y2-U2-Y3 V4 ...)
planar 4:4:4:4 ARGB, 8 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 10 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4 RGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:4:4:4 ARGB, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:0 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:2:2 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
planar 4:4:4 YUV, 12 bits per channel (Since: 1.12)
10-bit grayscale, packed into 32bit words (2 bits padding) (Since: 1.14)
10-bit variant of VideoFormat::Nv12, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
10-bit variant of VideoFormat::Nv16, packed into 32bit words (MSB 2 bits padding) (Since: 1.14)
Information for a video format.
A video frame obtained from VideoFrame::map.
Copy the contents from src to self.
src
a VideoFrame
Returns
true if the contents could be copied.
Copy the plane with index plane from src to self.
src
a VideoFrame
plane
a plane
Returns
true if the contents could be copied.
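A minimal C sketch of copying one mapped frame into another; src_buffer, dest_buffer and info are assumed to describe compatible frames:
  GstVideoFrame src_frame, dest_frame;

  if (gst_video_frame_map (&src_frame, &info, src_buffer, GST_MAP_READ)) {
    if (gst_video_frame_map (&dest_frame, &info, dest_buffer, GST_MAP_WRITE)) {
      // copies all planes; gst_video_frame_copy_plane() would copy just one
      gst_video_frame_copy (&dest_frame, &src_frame);
      gst_video_frame_unmap (&dest_frame);
    }
    gst_video_frame_unmap (&src_frame);
  }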
Use info and buffer to fill in the values of self. self is usually allocated on the stack, and you will pass the address of the VideoFrame structure allocated on the stack; VideoFrame::map will then fill in the structure with the various video-specific information you need to access the pixels of the video buffer. You can then use accessor macros such as GST_VIDEO_FRAME_COMP_DATA(), GST_VIDEO_FRAME_PLANE_DATA(), GST_VIDEO_FRAME_COMP_STRIDE(), GST_VIDEO_FRAME_PLANE_STRIDE() etc. to get to the pixels.
GstVideoFrame vframe;
...
// set RGB pixels to black one at a time
if (gst_video_frame_map (&vframe, video_info, video_buffer, GST_MAP_WRITE)) {
  guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&vframe, 0);
  guint stride = GST_VIDEO_FRAME_PLANE_STRIDE (&vframe, 0);
  guint pixel_stride = GST_VIDEO_FRAME_COMP_PSTRIDE (&vframe, 0);
  guint h, w;

  for (h = 0; h < GST_VIDEO_FRAME_HEIGHT (&vframe); ++h) {
    for (w = 0; w < GST_VIDEO_FRAME_WIDTH (&vframe); ++w) {
      guint8 *pixel = pixels + h * stride + w * pixel_stride;

      memset (pixel, 0, pixel_stride);
    }
  }

  gst_video_frame_unmap (&vframe);
}
...
All video planes of buffer will be mapped and the pointers will be set in self->data.
The purpose of this function is to make it easy for you to get to the video pixels in a generic way, without you having to worry too much about details such as whether the video data is allocated in one contiguous memory chunk or multiple memory chunks (e.g. one for each plane); or if custom strides and custom plane offsets are used or not (as signalled by GstVideoMeta on each buffer). This function will just fill the VideoFrame structure with the right values and if you use the accessor macros everything will just work and you can access the data easily. It also maps the underlying memory chunks for you.
info
a VideoInfo
buffer
the buffer to map
flags
gst::MapFlags
Returns
true on success.
Use info and buffer to fill in the values of self with the video frame information of frame id.
When id is -1, the default frame is mapped. When id != -1, this function will return false when there is no GstVideoMeta with that id.
All video planes of buffer will be mapped and the pointers will be set in self->data.
info
a VideoInfo
buffer
the buffer to map
id
the frame id to map
flags
gst::MapFlags
Returns
true on success.
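As a short sketch, mapping the frame with id 1 from a buffer that carries several GstVideoMeta; info and buffer are assumed to be set up as for VideoFrame::map above:
  GstVideoFrame frame;

  // map the frame with id 1; fails if no GstVideoMeta with that id exists
  if (gst_video_frame_map_id (&frame, &info, buffer, 1, GST_MAP_READ)) {
    guint8 *pixels = GST_VIDEO_FRAME_PLANE_DATA (&frame, 0);
    // ... read the pixels ...
    gst_video_frame_unmap (&frame);
  }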
Unmap the memory previously mapped with gst_video_frame_map.
Information describing image properties. This information can be filled in from GstCaps with VideoInfo::from_caps. The information is also used to store the specific video info when mapping a video frame with VideoFrame::map.
Use the provided macros to access the info in this structure.
Allocate a new VideoInfo that is also initialized with VideoInfo::init.
Returns
a new VideoInfo. Free with VideoInfo::free.
Adjust the offset and stride fields in self so that the padding and stride alignment in align is respected.
Extra padding will be added to the right side when stride alignment padding is required and align will be updated with the new padding values.
align
alignment parameters
Returns
false if alignment could not be applied, e.g. because the size of a frame can't be represented as a 32 bit integer (Since: 1.12)
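A minimal C sketch requesting 32-byte stride alignment on an already-initialized info:
  GstVideoAlignment align;

  gst_video_alignment_reset (&align);
  // stride_align values are masks: 31 requests 32-byte aligned strides
  align.stride_align[0] = 31;

  if (!gst_video_info_align (&info, &align))
    g_warning ("alignment could not be applied");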
Converts among various gst::Format types. This function handles GST_FORMAT_BYTES, GST_FORMAT_TIME, and GST_FORMAT_DEFAULT. For raw video, GST_FORMAT_DEFAULT corresponds to video frames. This function can be used to handle pad queries of the type GST_QUERY_CONVERT.
src_format
gst::Format of the src_value
src_value
value to convert
dest_format
gst::Format of the dest_value
dest_value
pointer to destination value
Returns
true if the conversion was successful.
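For instance, converting a frame count to a duration; a C sketch assuming a 30 fps I420 info:
  GstVideoInfo info;
  gint64 duration = 0;

  gst_video_info_set_format (&info, GST_VIDEO_FORMAT_I420, 320, 240);
  info.fps_n = 30;
  info.fps_d = 1;

  // 90 frames (GST_FORMAT_DEFAULT) at 30 fps should be 3 seconds
  if (gst_video_info_convert (&info, GST_FORMAT_DEFAULT, 90,
          GST_FORMAT_TIME, &duration))
    g_print ("duration: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS ((GstClockTime) duration));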
Copy a GstVideoInfo structure.
Returns
a new VideoInfo. Free with gst_video_info_free.
Free a GstVideoInfo structure previously allocated with VideoInfo::new or VideoInfo::copy.
Parse caps and update self.
caps
a gst::Caps
Returns
true if caps could be parsed.
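A short C sketch parsing fixed caps into a VideoInfo:
  GstVideoInfo info;
  GstCaps *caps = gst_caps_from_string (
      "video/x-raw, format=I420, width=320, height=240, framerate=30/1");

  if (gst_video_info_from_caps (&info, caps))
    g_print ("%dx%d\n", GST_VIDEO_INFO_WIDTH (&info),
        GST_VIDEO_INFO_HEIGHT (&info));

  gst_caps_unref (caps);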
Initialize self with default values.
Compares two VideoInfo and returns whether they are equal or not.
other
a VideoInfo
Returns
true if self and other are equal, else false.
Set the default info for a video frame of format and width and height.
Note: This initializes self first, no values are preserved. This function does not set the offsets correctly for interlaced vertically subsampled formats.
format
the format
width
a width
height
a height
Returns
false if the returned video info is invalid, e.g. because the size of a frame can't be represented as a 32 bit integer (Since: 1.12)
Convert the values of self into a gst::Caps.
Returns
a new gst::Caps containing the info of self.
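Putting the two together, a C sketch that initializes an info with set_format and converts it back to caps:
  GstVideoInfo info;
  GstCaps *caps;

  if (gst_video_info_set_format (&info, GST_VIDEO_FORMAT_RGB, 640, 480)) {
    caps = gst_video_info_to_caps (&info);
    // caps now describe video/x-raw, format=RGB, width=640, height=480, ...
    gst_caps_unref (caps);
  }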
The possible values of the VideoInterlaceMode describing the interlace mode of the stream.
all frames are progressive
2 fields are interleaved in one video frame. Extra buffer flags describe the field order.
frames contain both interlaced and progressive video; the buffer flags describe the frame and fields.
2 fields are stored in one buffer, use the frame ID to get access to the required field. For multiview (the 'views' property > 1) the fields of view N can be found at frame ID (N * 2) and (N * 2) + 1. Each field has only half the amount of lines as noted in the height property. This mode requires multiple GstVideoMeta metadata to describe the fields.
VideoMultiviewFramePacking represents the subset of VideoMultiviewMode values that can be applied to any video frame without needing extra metadata. It can be used by elements that provide a property to override the multiview interpretation of a video stream when the video doesn't contain any markers.
This enum is used (for example) on playbin, to re-interpret a played video stream as a stereoscopic video. The individual enum values are equivalent to and have the same value as the matching VideoMultiviewMode.
A special value indicating no frame packing info.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are provided in the left and right half of the frame respectively.
Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views.
Alternating vertical columns of pixels represent the left and right eye view respectively.
Alternating horizontal rows of pixels represent the left and right eye view respectively.
The top half of the frame contains the left eye, and the bottom half the right eye.
Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion.
All possible stereoscopic 3D and multiview representations. In conjunction with VideoMultiviewFlags, describes how multiview content is being transported in the stream.
A special value indicating no multiview information. Used in GstVideoInfo and other places to indicate that no specific multiview handling has been requested or provided. This value is never carried on caps.
All frames are monoscopic.
All frames represent a left-eye view.
All frames represent a right-eye view.
Left and right eye views are provided in the left and right half of the frame respectively.
Left and right eye views are provided in the left and right half of the frame, but have been sampled using quincunx method, with half-pixel offset between the 2 views.
Alternating vertical columns of pixels represent the left and right eye view respectively.
Alternating horizontal rows of pixels represent the left and right eye view respectively.
The top half of the frame contains the left eye, and the bottom half the right eye.
Pixels are arranged with alternating pixels representing left and right eye views in a checkerboard fashion.
Left and right eye views are provided in separate frames alternately.
Multiple independent views are provided in separate frames in sequence. This method only applies to raw video buffers at the moment. Specific view identification is via the GstVideoMultiviewMeta and VideoMeta(s) on raw video buffers.
Multiple views are provided as separate gst::Memory framebuffers attached to each gst::Buffer, described by the GstVideoMultiviewMeta and VideoMeta(s).
The VideoOverlay interface is used for two main purposes:
- To get a grab on the Window where the video sink element is going to render. This is achieved by either being informed about the Window identifier that the video sink element generated, or by forcing the video sink element to use a specific Window identifier for rendering.
- To force a redrawing of the latest video frame the video sink element displayed on the Window. Indeed if the gst::Pipeline is in gst::State::Paused state, moving the Window around will damage its content. Application developers will want to handle the Expose events themselves and force the video sink element to refresh the Window's content.
Using the Window created by the video sink is probably the simplest scenario; in some cases, though, it might not be flexible enough for application developers if they need to catch events such as mouse moves and button clicks.
Setting a specific Window identifier on the video sink element is the most flexible solution but it has some issues. Indeed the application needs to set its Window identifier at the right time to avoid internal Window creation from the video sink element. To solve this issue a gst::Message is posted on the bus to inform the application that it should set the Window identifier immediately. Here is an example on how to do that correctly:
static GstBusSyncReply
create_window (GstBus * bus, GstMessage * message, GstPipeline * pipeline)
{
  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  // 'disp', 'root' and 'win' are the X11 display, root window and window
  // variables, set up elsewhere (elided here)
  win = XCreateSimpleWindow (disp, root, 0, 0, 320, 240, 0, 0, 0);

  XSetWindowBackgroundPixmap (disp, win, None);
  XMapRaised (disp, win);
  XSync (disp, FALSE);

  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message)),
      win);

  gst_message_unref (message);
  return GST_BUS_DROP;
}
...
int
main (int argc, char **argv)
{
  ...
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) create_window, pipeline,
      NULL);
  ...
}
Two basic usage scenarios
There are two basic usage scenarios: in the simplest case, the application uses playbin or playsink or knows exactly what particular element is used for video output, which is usually the case when the application creates the videosink to use (e.g. xvimagesink, ximagesink, etc.) itself; in this case, the application can just create the videosink element, create and realize the window to render the video on and then call VideoOverlay::set_window_handle directly with the XID or native window handle, before starting up the pipeline.
As playbin and playsink implement the video overlay interface and proxy it transparently to the actual video sink even if it is created later, this case also applies when using these elements.
In the other and more common case, the application does not know in advance what GStreamer video sink element will be used for video output. This is usually the case when an element such as autovideosink is used. In this case, the video sink element itself is created asynchronously from a GStreamer streaming thread some time after the pipeline has been started up. When that happens, however, the video sink will need to know right then whether to render onto an already existing application window or whether to create its own window. This is when it posts a prepare-window-handle message, and that is also why this message needs to be handled in a sync bus handler which will be called from the streaming thread directly (because the video sink will need an answer right then).
As response to the prepare-window-handle element message in the bus sync handler, the application may use VideoOverlay::set_window_handle to tell the video sink to render onto an existing window surface. At this point the application should already have obtained the window handle / XID, so it just needs to set it. It is generally not advisable to call any GUI toolkit functions or window system functions from the streaming thread in which the prepare-window-handle message is handled, because most GUI toolkits and windowing systems are not thread-safe at all and a lot of care would be required to co-ordinate the toolkit and window system calls of the different threads (Gtk+ users please note: prior to Gtk+ 2.18, GDK_WINDOW_XID() was just a simple structure access, so generally fine to do within the bus sync handler; this macro was changed to a function call in Gtk+ 2.18 and later, which is likely to cause problems when called from a sync handler; see below for a better approach without GDK_WINDOW_XID() used in the callback).
GstVideoOverlay and Gtk+
#include <gst/video/videooverlay.h>
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>  // for GDK_WINDOW_XID
#endif
#ifdef GDK_WINDOWING_WIN32
#include <gdk/gdkwin32.h>  // for GDK_WINDOW_HWND
#endif
...
static guintptr video_window_handle = 0;
...
static GstBusSyncReply
bus_sync_handler (GstBus * bus, GstMessage * message, gpointer user_data)
{
  // ignore anything but 'prepare-window-handle' element messages
  if (!gst_is_video_overlay_prepare_window_handle_message (message))
    return GST_BUS_PASS;

  if (video_window_handle != 0) {
    GstVideoOverlay *overlay;

    // GST_MESSAGE_SRC (message) will be the video sink element
    overlay = GST_VIDEO_OVERLAY (GST_MESSAGE_SRC (message));
    gst_video_overlay_set_window_handle (overlay, video_window_handle);
  } else {
    g_warning ("Should have obtained video_window_handle by now!");
  }

  gst_message_unref (message);
  return GST_BUS_DROP;
}
...
static void
video_widget_realize_cb (GtkWidget * widget, gpointer data)
{
#if GTK_CHECK_VERSION(2,18,0)
  // Tell Gtk+/Gdk to create a native window for this widget instead of
  // drawing onto the parent widget.
  // This is here just for pedagogical purposes, GDK_WINDOW_XID will call
  // it as well in newer Gtk versions
  if (!gdk_window_ensure_native (widget->window))
    g_error ("Couldn't create native window needed for GstVideoOverlay!");
#endif

#ifdef GDK_WINDOWING_X11
  {
    gulong xid = GDK_WINDOW_XID (gtk_widget_get_window (widget));
    video_window_handle = xid;
  }
#endif
#ifdef GDK_WINDOWING_WIN32
  {
    HWND wnd = GDK_WINDOW_HWND (gtk_widget_get_window (widget));
    video_window_handle = (guintptr) wnd;
  }
#endif
}
...
int
main (int argc, char **argv)
{
  GtkWidget *video_window;
  GtkWidget *app_window;
  ...
  app_window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
  ...
  video_window = gtk_drawing_area_new ();
  g_signal_connect (video_window, "realize",
      G_CALLBACK (video_widget_realize_cb), NULL);
  gtk_widget_set_double_buffered (video_window, FALSE);
  ...
  // usually the video_window will not be directly embedded into the
  // application window like this, but there will be many other widgets
  // and the video window will be embedded in one of them instead
  gtk_container_add (GTK_CONTAINER (app_window), video_window);
  ...
  // show the GUI
  gtk_widget_show_all (app_window);

  // realize window now so that the video window gets created and we can
  // obtain its XID/HWND before the pipeline is started up and the videosink
  // asks for the XID/HWND of the window to render onto
  gtk_widget_realize (video_window);

  // we should have the XID/HWND now
  g_assert (video_window_handle != 0);
  ...
  // set up sync handler for setting the xid once the pipeline is started
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_set_sync_handler (bus, (GstBusSyncHandler) bus_sync_handler, NULL,
      NULL);
  gst_object_unref (bus);
  ...
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  ...
}
GstVideoOverlay and Qt
#include <glib.h>
#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <QApplication>
#include <QTimer>
#include <QWidget>

int main(int argc, char *argv[])
{
  if (!g_thread_supported ())
    g_thread_init (NULL);

  gst_init (&argc, &argv);
  QApplication app(argc, argv);
  app.connect(&app, SIGNAL(lastWindowClosed()), &app, SLOT(quit()));

  // prepare the pipeline
  GstElement *pipeline = gst_pipeline_new ("xvoverlay");
  GstElement *src = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *sink = gst_element_factory_make ("xvimagesink", NULL);
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);
  gst_element_link (src, sink);

  // prepare the ui
  QWidget window;
  window.resize(320, 240);
  window.show();

  WId xwinid = window.winId();
  gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (sink), xwinid);

  // run the pipeline
  GstStateChangeReturn sret = gst_element_set_state (pipeline,
      GST_STATE_PLAYING);
  if (sret == GST_STATE_CHANGE_FAILURE) {
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    // Exit application
    QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
  }

  int ret = app.exec();

  window.hide();
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);

  return ret;
}
Implements
Trait containing all VideoOverlay methods.
Implementors
This helper shall be used by classes implementing the VideoOverlay interface that want the render rectangle to be controllable using properties. This helper will install the "render-rectangle" property into the class.
Since 1.14
oclass
The class on which the properties will be installed
last_prop_id
The first free property ID to use
This helper shall be used by classes implementing the VideoOverlay interface that want the render rectangle to be controllable using properties. This helper will parse and set the render rectangle by calling VideoOverlay::set_render_rectangle.
object
The instance on which the property is set
last_prop_id
The highest property ID.
property_id
The property ID
value
The gobject::Value to be set
Returns
true if the property_id matches the GstVideoOverlay property
Since 1.14
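A sketch of how a video sink might wire up both helpers; my_sink and PROP_LAST are hypothetical names for the element's set_property implementation and its last own property ID:
  // in class_init, after the element's own properties (IDs below PROP_LAST)
  // have been installed:
  gst_video_overlay_install_properties (gobject_class, PROP_LAST);

  // in the instance's set_property implementation:
  static void
  my_sink_set_property (GObject * object, guint prop_id,
      const GValue * value, GParamSpec * pspec)
  {
    if (gst_video_overlay_set_property (object, PROP_LAST, prop_id, value))
      return;  // the helper handled "render-rectangle"

    switch (prop_id) {
      // ... the element's own properties ...
      default:
        G_OBJECT_WARN_INVALID_PROPERTY_ID (object, prop_id, pspec);
        break;
    }
  }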
Tell an overlay that it has been exposed. This will redraw the current frame in the drawable even if the pipeline is PAUSED.
This will post a "have-window-handle" element message on the bus.
This function should only be used by video overlay plugin developers.
handle
a platform-specific handle referencing the window
Tell an overlay that it should handle events from the window system. These events are forwarded upstream as navigation events. In some window systems, events are not propagated in the window hierarchy if a client is listening for them. This method allows you to disable event handling completely from the VideoOverlay.
handle_events
a gboolean indicating if events should be handled or not.
This will post a "prepare-window-handle" element message on the bus to give applications an opportunity to call VideoOverlay::set_window_handle before a plugin creates its own window.
This function should only be used by video overlay plugin developers.
Configure a subregion as a video target within the window set by VideoOverlay::set_window_handle. If this is not used or not supported, the video will fill the area of the window set as the overlay to 100%. By specifying the rectangle, the video can be overlaid onto a specific region of that window only. After setting the new rectangle one should call VideoOverlay::expose to force a redraw. To unset the region pass -1 for the width and height parameters.
This method is needed for non-fullscreen video overlay in UI toolkits that do not support subwindows.
x
the horizontal offset of the render area inside the window
y
the vertical offset of the render area inside the window
width
the width of the render area inside the window
height
the height of the render area inside the window
Returns
false if not supported by the sink.
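A short C sketch restricting rendering to a sub-rectangle of the window and forcing a redraw; overlay is assumed to be the video sink cast to GstVideoOverlay:
  // render into a 320x240 region at offset (10, 10) inside the window
  if (gst_video_overlay_set_render_rectangle (overlay, 10, 10, 320, 240))
    gst_video_overlay_expose (overlay);

  // later, pass -1 for width and height to unset the region again
  gst_video_overlay_set_render_rectangle (overlay, 0, 0, -1, -1);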
This will call the video overlay's set_window_handle method. You should use this method to tell an overlay to display video output to a specific window (e.g. an XWindow on X11). Passing 0 as the handle will tell the overlay to stop using that window and create an internal one.
handle
a handle referencing the window.
Enum value describing the available tiling modes.
Unknown or unset tile mode
Every four adjacent blocks (two horizontally and two vertically) are grouped together and are located in memory in Z or flipped-Z order. In case of odd rows, the last row of blocks is arranged in linear order.
The video transfer function defines the formula for converting between non-linear RGB (R'G'B') and linear RGB.
unknown transfer function
linear RGB, gamma 1.0 curve
Gamma 1.8 curve
Gamma 2.0 curve
Gamma 2.2 curve
Gamma 2.2 curve with a linear segment in the lower range
Gamma 2.2 curve with a linear segment in the lower range
Gamma 2.4 curve with a linear segment in the lower range
Gamma 2.8 curve
Logarithmic transfer characteristic 100:1 range
Logarithmic transfer characteristic 316.22777:1 range
Gamma 2.2 curve with a linear segment in the lower range. Used for BT.2020 with 12 bits per component. Since: 1.6
Gamma 2.19921875. Since: 1.8