In simple profile, a level value of 0 or 2 indicates low or medium
level respectively. In main profile, a level value of 0, 2 or 4
indicates low, medium or high level respectively.
Level values are defined in Annex J.1.2 of SMPTE 421M.
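A minimal sketch of that mapping (the function name is hypothetical,
not the actual vc1parse code):

    /* Illustrative sketch, not the actual vc1parse code: map SMPTE
     * 421M level values to caps level strings. */
    static const gchar *
    vc1_level_to_string (gboolean is_main_profile, guint level)
    {
      switch (level) {
        case 0: return "low";
        case 2: return "medium";
        case 4: return is_main_profile ? "high" : NULL;
        default: return NULL;
      }
    }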
https://bugzilla.gnome.org/show_bug.cgi?id=738230
Signal sparse streams properly in the stream-start event and force sending
of pending sticky events which have been stored on the pad already and
which otherwise would only be sent on the first buffer or serialized
event (which means very late in case of subtitle streams). Playsink in
playbin waits for stream-start or another serialized event, and if we
don't do this it will wait for the multiqueue to run full before
starting playback, which might take a couple of seconds.
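A minimal sketch of the sparse part, assuming a srcpad and stream_id
in the element:

    /* Sketch: announce the stream as sparse in its stream-start event. */
    GstEvent *event = gst_event_new_stream_start (stream_id);
    gst_event_set_stream_flags (event, GST_STREAM_FLAG_SPARSE);
    gst_pad_push_event (srcpad, event);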
https://bugzilla.gnome.org/show_bug.cgi?id=734040
All pads of a stream are now added at the beginning. In order to cope with
streams that don't get any data (forever or for a long time) we detect gaps
and push out GAP events when needed.
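A sketch of the GAP side, with timestamp and duration assumed to come
from the element's gap detection:

    /* Sketch: signal the absence of data so downstream does not keep
     * waiting for a buffer on this stream. */
    gst_pad_push_event (srcpad, gst_event_new_gap (timestamp, duration));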
Cleanups and commenting by Jan Schmidt <jan@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=734040
Some VC1 decoders can have different caps depending on the WMV format,
i.e. WMV3 or WVC1.
So instead of keeping the first available caps, we intersect them with
the current WMV format.
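A minimal sketch of the intersection, with variable names assumed:

    /* Sketch: narrow the candidate caps to the WMV format currently
     * being parsed (WMV3 here) instead of taking the first entry. */
    GstCaps *format_caps = gst_caps_new_simple ("video/x-wmv",
        "format", G_TYPE_STRING, "WMV3", NULL);
    GstCaps *result = gst_caps_intersect (available_caps, format_caps);
    gst_caps_unref (format_caps);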
https://bugzilla.gnome.org/show_bug.cgi?id=738532
Using NSApp directly seems to confuse something, as the compiler
was expecting an id<NSFileManagerDelegate>. Switched to using
[NSApplication sharedApplication], and specified the delegate
protocol on the window class as well.
https://bugzilla.gnome.org/show_bug.cgi?id=738740
Apparently it is not required on OSX, and was only added there in 10.9.6.
Calculating the correct level from the configuration is not trivial, so let's
just not set a level at all here.
When stream-format is ASF or sequence-layer-raw-frame, we basically have
a raw frame, so we can parse it to extract some information such as
the keyframe flag. The only requirement is to have a valid
sequence-header.
This commit parses the frame header and sets the DELTA_UNIT buffer
flag when the frame is not a keyframe.
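A sketch of the flagging, assuming the parsed header exposes a
keyframe bit:

    /* Sketch: non-keyframes get DELTA_UNIT so downstream can find
     * sync points; keyframes must not carry the flag. */
    if (is_keyframe)
      GST_BUFFER_FLAG_UNSET (buffer, GST_BUFFER_FLAG_DELTA_UNIT);
    else
      GST_BUFFER_FLAG_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT);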
https://bugzilla.gnome.org/show_bug.cgi?id=738519
DVB-T2 supports 5, 10 and 1.712 MHz bandwidths.
The order of the enum values (new values added after _AUTO)
has been kept congruent with the one in the v4l
API for consistency.
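An illustrative enum of that ordering (names are made up, not the
actual GStreamer or v4l identifiers):

    /* Appending after _AUTO keeps existing numeric values stable and
     * mirrors the v4l ordering. */
    typedef enum {
      BANDWIDTH_8M,
      BANDWIDTH_7M,
      BANDWIDTH_6M,
      BANDWIDTH_AUTO,
      BANDWIDTH_5M,       /* new */
      BANDWIDTH_10M,      /* new */
      BANDWIDTH_1_712M    /* new */
    } IllustrativeBandwidth;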
Do not try to render a buffer that is already being rendered.
This happens typically during the initial rendering stage as the first
buffer is rendered twice: first by preroll(), then by render().
This commit avoids this assertion failure:
CRITICAL: gst_wayland_compositor_acquire_buffer: assertion
'meta->used_by_compositor == FALSE' failed
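A sketch of the guard (the last_buffer field name is an assumption):

    /* Sketch: render() bails out early when the buffer is the one
     * preroll() already attached to the compositor. */
    if (G_UNLIKELY (buffer == sink->last_buffer))
      return GST_FLOW_OK;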
https://bugzilla.gnome.org/show_bug.cgi?id=738069
Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
If waylandsink is the owner of the display then it is in charge
of catching input events on the surface.
https://bugzilla.gnome.org/show_bug.cgi?id=733682
Signed-off-by: Tifaine Inguere <tifaine.inguere@st.com>
Reviewed-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
There are two cases covered here:
1) The GstWlDisplay forces the release of the last buffer and the pool
gets destroyed in this context, which means it unregisters all the
other buffers from the GstWlDisplay as well and the display->buffers
hash table gets corrupted because it is being iterated at that moment.
2) The pool and its buffers get destroyed concurrently from another
thread while GstWlDisplay is finalizing and many things get corrupted.
The main reason behind this is that when the video caps change and the video
subsurface needs to resize and change position, the wl_subsurface.set_position
call needs a commit in its parent in order to take effect. Previously,
the parent was the application's surface, over which there is no control.
Now, the parent is inside the sink, so we can commit it and change size smoothly.
As a side effect, this also allows the sink to draw its black borders
on its own, without the need for the application to do that. Another
side effect is that the sink can now be resized in top-level mode
while respecting the aspect ratio.
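A sketch of the reposition-plus-commit sequence (surface field names
are assumed):

    /* Sketch: set_position only latches when the parent surface is
     * committed; the parent now lives inside the sink, so we can do
     * both steps ourselves. */
    wl_subsurface_set_position (sink->video_subsurface, x, y);
    wl_surface_commit (sink->parent_surface);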
Because we no longer have a custom buffer pool that holds a reference
to the display, there is no way for a cyclic reference to happen like
before, so we no longer need to explicitly call a function from the
display to release the wl_buffers.
However, the general mechanism of registering buffers to the display
and forcibly releasing them when the display is destroyed is still
needed to avoid potential memory leaks. The comment in wlbuffer.c
is updated to reflect the current situation.
This reduces the complexity of having a custom buffer pool, as
we don't really need it. We only need the custom allocation part.
And since the wl_buffer is no longer saved in a GstMeta, we can
create it and add it on the buffers in the sink's render()
function, which removes the reference cycle caused by the pool
holding a reference to the display and also allows more generic
scenarios (the allocator being used in another pool, or buffers
being allocated without a pool [if anything stupid does that]).
This commit also simplifies the propose_allocation() function,
which doesn't really need to do all these complicated checks,
since there is always a correct buffer pool available, created
in set_caps().
The other side effect of this commit is that a new wl_shm_pool
is now created for every GstMemory, which means that we use
as much shm memory as we actually need and no more. Previously,
the created wl_shm_pool would allocate space for 15 buffers, whether
they were being used or not.
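A sketch of the per-memory pool (fd, size, dimensions and stride are
assumed to describe a single GstMemory):

    /* Sketch: one wl_shm_pool sized exactly for one GstMemory. */
    struct wl_shm_pool *pool = wl_shm_create_pool (shm, fd, size);
    struct wl_buffer *wbuf = wl_shm_pool_create_buffer (pool, 0,
        width, height, stride, WL_SHM_FORMAT_XRGB8888);
    /* The wl_buffer keeps the backing storage alive. */
    wl_shm_pool_destroy (pool);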