Because we no longer have a custom buffer pool that holds a reference
to the display, a cyclic reference can no longer occur as it did
before, so we no longer need to explicitly call a function on the
display to release the wl_buffers.
However, the general mechanism of registering buffers to the display
and forcibly releasing them when the display is destroyed is still
needed to avoid potential memory leaks. The comment in wlbuffer.c
is updated to reflect the current situation.
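
For illustration only, here is a hypothetical sketch of that
register/force-release mechanism. The names and structure below are
placeholders and not the actual wlbuffer.c code; the point is only
that the display tracks every wl_buffer it has handed out and destroys
them before the wl_display goes away:

#include <glib.h>
#include <wayland-client.h>

typedef struct {
  struct wl_display *display;
  GHashTable *buffers;    /* set of registered buffer wrappers,
                           * e.g. created with g_hash_table_new (NULL, NULL) */
} GstWlDisplay;

typedef struct {
  struct wl_buffer *wlbuffer;
  GstWlDisplay *display;
} GstWlBuffer;

static void
gst_wl_display_register_buffer (GstWlDisplay * self, GstWlBuffer * buf)
{
  g_hash_table_add (self->buffers, buf);
}

static void
force_release (gpointer key, gpointer value, gpointer user_data)
{
  GstWlBuffer *buf = key;

  /* destroy the proxy now so that no wl_buffer outlives the display */
  wl_buffer_destroy (buf->wlbuffer);
  buf->wlbuffer = NULL;
}

static void
gst_wl_display_finalize (GstWlDisplay * self)
{
  /* forcibly release everything that is still registered, to avoid leaks */
  g_hash_table_foreach (self->buffers, force_release, NULL);
  g_hash_table_unref (self->buffers);
  wl_display_disconnect (self->display);
}
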
This also removes the GstWlMeta and adds a wrapper class for wl_buffer,
which is saved in the GstBuffer's qdata instead of being a GstMeta.
The motivation behind this is mainly to allow attaching wl_buffers to
GstBuffers that have not been allocated inside GstWaylandBufferPool.
For example, an upstream element may send us a buffer from a different
pool that does not need to be copied into a buffer from our pool,
because it is already a hardware buffer (hello dmabuf!). In that case
we can create a wl_buffer directly from it and, first, attach it to
the GstBuffer so that we don't have to re-create a wl_buffer every
time the same GstBuffer arrives, and second, reuse the whole mechanism
for keeping the buffer out of the pool until there is a
wl_buffer::release for that foreign GstBuffer.
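
A minimal sketch of this attachment follows, assuming a hypothetical
GstWlBuffer wrapper struct. Only the qdata calls
(gst_mini_object_set_qdata / gst_mini_object_get_qdata) are standard
GStreamer API; everything else here is a placeholder:

#include <gst/gst.h>
#include <wayland-client.h>

/* Hypothetical wrapper; in reality this would also track the display
 * and the release state of the wl_buffer. */
typedef struct {
  struct wl_buffer *wlbuffer;
} GstWlBuffer;

static GQuark
gst_wl_buffer_quark (void)
{
  static GQuark quark = 0;
  if (G_UNLIKELY (quark == 0))
    quark = g_quark_from_static_string ("GstWlBuffer");
  return quark;
}

static void
attach_wl_buffer (GstBuffer * gstbuffer, GstWlBuffer * wlbuf)
{
  /* The qdata keeps the wrapper attached for the lifetime of the
   * GstBuffer, even when the GstBuffer comes from a foreign pool. */
  gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (gstbuffer),
      gst_wl_buffer_quark (), wlbuf, (GDestroyNotify) g_free);
}

static GstWlBuffer *
get_wl_buffer (GstBuffer * gstbuffer)
{
  /* NULL means no wl_buffer has been created for this GstBuffer yet */
  return gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (gstbuffer),
      gst_wl_buffer_quark ());
}
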
This means that the surface given in set_window_handle can now be
the window's top-level surface, on top of which waylandsink creates
its own subsurface for rendering the video.
This has many advantages:
* We can maintain the aspect ratio by overlaying the subsurface in
the center of the given area and filling the parent surface's area
with black when we need to draw borders (instead of adding another
subsurface inside the subsurface given by the application, so,
fewer subsurfaces)
* We can more easily support toolkits without subsurfaces (see gtk)
* We can properly use gst_video_overlay_set_render_rectangle
as our API to set the video area size from the application and
therefore remove gst_wayland_video_set_surface_size (see the
sketch after this list).
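
A minimal sketch of the application side, assuming `sink` is a
waylandsink element and `toplevel_surface` is the window's top-level
wl_surface obtained elsewhere (the helper function name is made up;
the two gst_video_overlay_* calls are standard GStreamer API):

#include <gst/gst.h>
#include <gst/video/videooverlay.h>
#include <wayland-client.h>

static void
embed_video (GstElement * sink, struct wl_surface * toplevel_surface,
    gint x, gint y, gint width, gint height)
{
  GstVideoOverlay *overlay = GST_VIDEO_OVERLAY (sink);

  /* waylandsink creates its own subsurface on top of this surface */
  gst_video_overlay_set_window_handle (overlay, (guintptr) toplevel_surface);

  /* defines the area of the parent surface in which the video is
   * rendered; this replaces the old gst_wayland_video_set_surface_size */
  gst_video_overlay_set_render_rectangle (overlay, x, y, width, height);
}
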