DVB-T2 supports 5, 10 and 1.712 MHz bandwidths
The order of the enum values (new values placed after _AUTO) has been
kept consistent with the order used in the v4l API.
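A rough sketch of the resulting ordering (the enum and value names below
are illustrative, not the element's actual identifiers):

  typedef enum {
    BANDWIDTH_8M = 0,
    BANDWIDTH_7M,
    BANDWIDTH_6M,
    BANDWIDTH_AUTO,
    /* New values are placed after _AUTO, keeping the order consistent
     * with the v4l API. */
    BANDWIDTH_5M,
    BANDWIDTH_10M,
    BANDWIDTH_1_712M
  } Bandwidth;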
Do not try to render a buffer that is already being rendered.
This typically happens during the initial rendering stage, as the first
buffer is rendered twice: first by preroll(), then by render().
This commit avoids this assertion failure:
CRITICAL: gst_wayland_compositor_acquire_buffer: assertion
'meta->used_by_compositor == FALSE' failed
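A minimal sketch of such a guard, assuming the sink keeps a pointer to
the buffer it last handed to the compositor (the type and field names
below are illustrative):

  #include <gst/gst.h>

  typedef struct {
    GstBuffer *last_rendered;   /* buffer currently held by the compositor */
  } SinkState;

  static GstFlowReturn
  render_buffer (SinkState * sink, GstBuffer * buffer)
  {
    /* preroll() and render() can hand us the same buffer back to back;
     * attaching it to the compositor twice would trip the assertion. */
    if (buffer == sink->last_rendered)
      return GST_FLOW_OK;

    gst_buffer_replace (&sink->last_rendered, buffer);
    /* ... attach the wl_buffer and commit the surface here ... */
    return GST_FLOW_OK;
  }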
https://bugzilla.gnome.org/show_bug.cgi?id=738069
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
If waylandsink is the owner of the display then it is in charge
of catching input events on the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=733682
Signed-off-by: Tifaine Inguere <tifaine.inguere@st.com>
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
There are two cases covered here:
1) The GstWlDisplay forces the release of the last buffer and the pool
gets destroyed in this context, which means it also unregisters all
the other buffers from the GstWlDisplay, corrupting the
display->buffers hash table while it is being iterated.
2) The pool and its buffers get destroyed concurrently from another
thread while GstWlDisplay is finalizing and many things get corrupted.
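A rough sketch of one way to handle the first case (names are
illustrative): snapshot the registered buffers under the display lock
and iterate over the copy, so an entry unregistering itself during
teardown cannot corrupt the hash table mid-iteration.

  #include <glib.h>

  typedef struct {
    GMutex lock;
    GHashTable *buffers;        /* registered buffer wrappers */
  } Display;

  static void
  force_release_all (Display * display)
  {
    GList *snapshot, *l;

    g_mutex_lock (&display->lock);
    snapshot = g_hash_table_get_keys (display->buffers);
    g_hash_table_steal_all (display->buffers);
    g_mutex_unlock (&display->lock);

    for (l = snapshot; l != NULL; l = l->next) {
      /* ... force the release of each buffer wrapper here ... */
    }
    g_list_free (snapshot);
  }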
The main reason behind this is that when the video caps change and the video
subsurface needs to resize and change position, the wl_subsurface.set_position
call needs a commit in its parent in order to take effect. Previously,
the parent was the application's surface, over which there is no control.
Now, the parent is inside the sink, so we can commit it and change size smoothly.
As a side effect, this also allows the sink to draw its black borders
on its own, without the application having to do that. Another side
effect is that the sink can now be resized in top-level mode while
respecting the aspect ratio.
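A minimal illustration of the constraint, using the core Wayland client
API: the position set on the subsurface only takes effect once its
parent surface is committed, which the sink can only do if that parent
surface is its own.

  #include <wayland-client.h>

  static void
  reposition_video (struct wl_subsurface *video_subsurface,
      struct wl_surface *parent_surface, int x, int y)
  {
    wl_subsurface_set_position (video_subsurface, x, y);
    /* The new position is applied only when the parent surface is
     * committed, so the sink must be able to commit the parent. */
    wl_surface_commit (parent_surface);
  }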
Because we no longer have a custom buffer pool that holds a reference
to the display, there is no way for a cyclic reference to happen like
before, so we no longer need to explicitly call a function from the
display to release the wl_buffers.
However, the general mechanism of registering buffers to the display
and forcibly releasing them when the display is destroyed is still
needed to avoid potential memory leaks. The comment in wlbuffer.c
is updated to reflect the current situation.
This reduces the complexity of having a custom buffer pool, as
we don't really need it. We only need the custom allocation part.
And since the wl_buffer is no longer saved in a GstMeta, we can
create it and attach it to the buffers in the sink's render()
function, which removes the reference cycle caused by the pool
holding a reference to the display and also allows more generic
scenarios (the allocator being used in another pool, or buffers
being allocated without a pool [if anything stupid does that]).
This commit also simplifies the propose_allocation() function,
which doesn't really need to do all these complicated checks,
since there is always a correct buffer pool available, created
in set_caps().
The other side effect of this commit is that a new wl_shm_pool
is now created for every GstMemory, which means that we use
as much shm memory as we actually need and no more. Previously,
the created wl_shm_pool would allocate space for 15 buffers, no
matter if they were being used or not.
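For illustration, this is roughly what allocating a wl_buffer from a
per-memory wl_shm_pool looks like with the core Wayland client API; the
fd, size, stride and format parameters are assumed to come from the
allocator.

  #include <stdint.h>
  #include <wayland-client.h>

  static struct wl_buffer *
  create_wl_buffer (struct wl_shm *shm, int fd, int size,
      int width, int height, int stride, uint32_t format)
  {
    struct wl_shm_pool *pool;
    struct wl_buffer *wbuf;

    /* The pool is sized for exactly one memory block instead of being
     * pre-sized for a fixed number of buffers. */
    pool = wl_shm_create_pool (shm, fd, size);
    wbuf = wl_shm_pool_create_buffer (pool, 0, width, height, stride, format);
    /* The pool object is only needed to create the buffer. */
    wl_shm_pool_destroy (pool);

    return wbuf;
  }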
This also removes the GstWlMeta and adds a wrapper class for wl_buffer
which is saved in the GstBuffer qdata instead of being a GstMeta.
The motivation behind this is mainly to allow attaching wl_buffers to
GstBuffers that have not been allocated inside the
GstWaylandBufferPool. For example, an upstream element may send us a
buffer from a different pool that does not need to be copied into a
buffer from our pool, because it may be a hardware buffer
(hello dmabuf!). In that case we can create a wl_buffer directly from
it and, first, attach it to the GstBuffer so that we don't have to
re-create a wl_buffer every time the same GstBuffer arrives, and
second, apply the whole mechanism that keeps the buffer out of the
pool until there is a wl_buffer::release on that foreign GstBuffer.
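A rough sketch of the qdata mechanism used to attach a wrapper to an
arbitrary GstBuffer (the wrapper type, quark name and helper names
below are illustrative):

  #include <gst/gst.h>

  static GQuark
  wlbuffer_quark (void)
  {
    return g_quark_from_static_string ("illustrative-wlbuffer-qdata");
  }

  static gpointer
  get_attached_wlbuffer (GstBuffer * buffer)
  {
    return gst_mini_object_get_qdata (GST_MINI_OBJECT_CAST (buffer),
        wlbuffer_quark ());
  }

  static void
  attach_wlbuffer (GstBuffer * buffer, gpointer wlbuffer_wrapper,
      GDestroyNotify destroy)
  {
    /* The wrapper stays attached to this GstBuffer for its lifetime,
     * so the wl_buffer is created only once per buffer. */
    gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (buffer),
        wlbuffer_quark (), wlbuffer_wrapper, destroy);
  }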
The header was being read every time the parse function was called,
which is unnecessary: until we have complete data, there is no need to
parse the header again.
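A minimal sketch of caching the header so it is only parsed once (the
structure and field names are illustrative):

  #include <glib.h>

  typedef struct {
    gboolean header_parsed;
    /* ... parsed header fields ... */
  } Parser;

  static gboolean
  parse (Parser * parser, const guint8 * data, gsize size)
  {
    if (!parser->header_parsed) {
      /* Read and validate the header from data/size here, exactly once. */
      parser->header_parsed = TRUE;
    }

    /* Parse the rest of the payload as usual. */
    return TRUE;
  }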
https://bugzilla.gnome.org/show_bug.cgi?id=737984
The frame-layer header is represented as a sequence of 32-bit unsigned
integers serialized in little-endian byte order, so the frame size is
in the first 3 bytes.
See SMPTE 421M Annex L.
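For illustration, the frame size can be recovered by reading the first
little-endian 32-bit word and keeping its low 24 bits, i.e. the first
3 bytes:

  #include <gst/gst.h>

  static guint32
  read_framesize (const guint8 * data)
  {
    /* Low 24 bits of the first little-endian 32-bit word. */
    return GST_READ_UINT32_LE (data) & 0x00ffffff;
  }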
https://bugzilla.gnome.org/show_bug.cgi?id=738243
Also, strictly speaking, these numbers aren't DLT_*; they are LINKTYPE_* because
libpcap translates from internal OS-specific DLT_ numbering to the portable
LINKTYPE_ number space when writing files.
https://bugzilla.gnome.org/show_bug.cgi?id=738206
Determines the amount of time that a pad will wait for a buffer before
being marked unresponsive.
Network sources may fail to produce buffers for an extended period of
time, which currently causes the pipeline to stall, possibly
indefinitely, waiting for those buffers to appear.
Subclasses should render unresponsive pads with silence (audio), the
last frame (video), or whatever makes the most sense in the given
context.
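A usage sketch, assuming the property is exposed as "timeout" and
expressed in nanoseconds; both the property name and the unit are
assumptions here, not confirmed by this change.

  #include <gst/gst.h>

  static void
  configure_timeout (GstElement * aggregator)
  {
    /* Wait up to two seconds for a buffer before treating the pad as
     * unresponsive ("timeout" is an assumed property name). */
    g_object_set (aggregator, "timeout", (guint64) (2 * GST_SECOND), NULL);
  }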
In gst_hls_demux_get_next_fragment() the next fragment URI gets
stored in next_fragment_uri, but the gst_hls_demux_updates_loop()
can at any time update the playlist, rendering this string invalid.
Therefore, any data (like key, iv, URIs) that is taken from a
GstM3U8Client needs to be copied. In addition, accessing the
internals of a GstM3U8Client requires locking.
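A sketch of the copy-under-lock pattern, assuming the client exposes a
lock and the current fragment URI roughly as below (names are
illustrative):

  #include <glib.h>

  typedef struct {
    GMutex lock;
    gchar *next_fragment_uri;   /* may be replaced by the updates thread */
  } M3U8Client;

  static gchar *
  get_next_fragment_uri (M3U8Client * client)
  {
    gchar *uri;

    g_mutex_lock (&client->lock);
    /* Copy while holding the lock; the playlist update thread may free
     * and replace the internal string as soon as we release it. */
    uri = g_strdup (client->next_fragment_uri);
    g_mutex_unlock (&client->lock);

    return uri;
  }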
https://bugzilla.gnome.org/show_bug.cgi?id=737793
- The shader was outputting the wrong values compared with raw
  videotestsrc.
- Deal with the texture edge properly.
- Properly sample the 2x1 rectangle for the u and v values.
- Don't double-sample the y value.
The previous implementation kept accumulating GSources,
slowing down the iteration and leaking memory.
Instead of trying to fix the main context flushing, replace it with a
GAsyncQueue, which is simpler to flush and has less overhead.
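A minimal sketch of why the GAsyncQueue is easier to flush: pending
items can simply be popped and freed, with no sources or contexts
involved. The helper names and the use of g_free for items are
assumptions.

  #include <glib.h>

  static GAsyncQueue *
  make_queue (void)
  {
    /* Remaining items are freed automatically when the queue is destroyed. */
    return g_async_queue_new_full (g_free);
  }

  static void
  flush_queue (GAsyncQueue * queue)
  {
    gpointer item;

    /* Drain everything currently queued without blocking. */
    while ((item = g_async_queue_try_pop (queue)) != NULL)
      g_free (item);
  }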
https://bugzilla.gnome.org/show_bug.cgi?id=736782