1. Propagate the GstGLDisplay we create
2. Add the created GstGLContext to the propagated GstGLDisplay
Otherwise with multi-branch GL pipelines involving gtkglsink, things
will fall apart and errors will be generated somewhere.
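A minimal sketch of the second point, assuming we already hold the display we
propagate and the context we created (gst_gl_display_add_context() is the
relevant API; the function name below is illustrative):

  #include <gst/gl/gl.h>

  /* Sketch only: register our freshly created context with the propagated
   * display so other branches (e.g. gtkglsink) can reuse it instead of
   * creating a conflicting one. */
  static void
  share_context_with_display (GstGLDisplay * display, GstGLContext * context)
  {
    if (!gst_gl_display_add_context (display, context))
      GST_WARNING ("could not add the GL context to the shared GstGLDisplay");
  }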
Except for gst/gl/gstglfuncs.h
It is up to the client app to include these headers.
This is consistent with the fact that gstreamer-gl.pc does not
require any egl.pc/gles.pc, i.e. it is the responsibility
of the app to find these headers within its build setup.
For example, gstreamer-vaapi explicitly includes EGL/egl.h
and searches for it in its configure.ac.
For example with this patch, if an app includes the headers
gst/gl/egl/gstglcontext_egl.h
gst/gl/egl/gstgldisplay_egl.h
gst/gl/egl/gstglmemoryegl.h
they will *no longer* automatically pull in EGL/egl.h and GLES2/gl2.h.
This is good because the app might want to use the gstgl API only,
without having to bother about GL headers.
Also added a test: cd tests/check && make libs/gstglheaders.check
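An illustrative snippet of what an app using the platform-specific GstGL EGL
headers now has to do itself:

  /* The app pulls in the EGL/GLES headers it needs on its own ... */
  #include <EGL/egl.h>
  #include <GLES2/gl2.h>

  /* ... before using the platform-specific GstGL headers. */
  #include <gst/gl/egl/gstgldisplay_egl.h>
  #include <gst/gl/egl/gstglmemoryegl.h>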
https://bugzilla.gnome.org/show_bug.cgi?id=784779
Scenario:
A manifest starts out in live mode but then the recording is finalized
and a subsequent update changes the state to a non-live manifest when
the server has finished recording/transcoding/whatever with the full
list of fragments.
Without this patch, the manifest update task is never stopped on the
live->non-live transition and will busy loop, burning through one CPU
core.
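A hedged sketch of the intended behaviour after a manifest update (the
updates_task field name is illustrative):

  /* After re-parsing the manifest, stop the update task if the stream
   * transitioned from live to non-live. */
  if (!gst_adaptive_demux_is_live (demux))
    gst_task_stop (demux->priv->updates_task);  /* illustrative field name */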
https://bugzilla.gnome.org/show_bug.cgi?id=786275
Make a bunch of symbols private that are currently leaked
accidentally because they have a gst_* prefix and are used
internally. We mark those we can't make static with
G_GNUC_INTERNAL so that they get hidden with the autotools
build as well (although we could just pass -fvisibility=hidden
there too).
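For illustration, a hypothetical internal declaration that keeps its gst_
prefix but is hidden from the installed library:

  /* Hypothetical example: used across compilation units inside the plugin,
   * so it cannot be static, but it must not leak into the public ABI. */
  G_GNUC_INTERNAL
  void gst_my_private_helper (GstElement * element);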
The goal here is to minimize the work needed to bring all images
to a common format. A better criterion than the number of pads
with a given format is the number of pixels with a given format.
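A rough sketch of the criterion (types and names are illustrative, not the
videoaggregator code):

  #include <gst/video/video.h>

  typedef struct { GstVideoFormat format; guint width, height; } PadInfo;

  /* Pick the format that already covers the most pixels across all pads,
   * rather than the format used by the most pads. */
  static GstVideoFormat
  pick_best_format (const PadInfo * pads, guint n_pads)
  {
    GstVideoFormat best = GST_VIDEO_FORMAT_UNKNOWN;
    guint64 best_pixels = 0;
    guint i, j;

    for (i = 0; i < n_pads; i++) {
      guint64 pixels = 0;

      for (j = 0; j < n_pads; j++)
        if (pads[j].format == pads[i].format)
          pixels += (guint64) pads[j].width * pads[j].height;

      if (pixels > best_pixels) {
        best_pixels = pixels;
        best = pads[i].format;
      }
    }
    return best;
  }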
https://bugzilla.gnome.org/show_bug.cgi?id=786078
This commit ensures that the idle probe which GstAdaptiveDemuxStream
adds to the upstream source pad is removed after use. Previously a new
probe was added to the pad whenever a fragment was downloaded, meaning
the number of pad probe callbacks being executed increased continually.
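A minimal sketch of the pattern (not the exact adaptivedemux code): the idle
probe removes itself once its work is done, so the number of installed probes
stays bounded.

  static GstPadProbeReturn
  on_src_pad_idle (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
  {
    /* ... do the work that must happen while the pad is idle ... */

    /* Returning REMOVE uninstalls this probe after use. */
    return GST_PAD_PROBE_REMOVE;
  }

  static void
  install_idle_probe (GstPad * src_pad)
  {
    /* installed once per fragment download */
    gst_pad_add_probe (src_pad, GST_PAD_PROBE_TYPE_IDLE, on_src_pad_idle,
        NULL, NULL);
  }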
https://bugzilla.gnome.org/show_bug.cgi?id=785957
There can be twice as many stream tasks running as there are output
pads for playback of variant HLS playlists. Half of them are the
current pads, and the other half are the pads that are about to be
switched to due to a bitrate change.
The old code only stopped the current streams, which could result
in a deadlock when stopping the pipeline. The changes force stopping
and joining of any prepared streams too.
https://bugzilla.gnome.org/show_bug.cgi?id=785987
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by both the input pad's alpha
property and the crossfading ratio), basically so that the crossfade
result of two opaque frames is also fully opaque at any point in the
crossfading process, avoiding bleed-through during the layer blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
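A hedged sketch of the property we are after (illustrative math, not the exact
compositor code): for any ratio, two opaque inputs blend to an opaque output.

  #include <glib.h>

  /* Illustrative only: additive combination of destination and source alpha,
   * each factored by the crossfade ratio. */
  static gdouble
  crossfade_alpha (gdouble src_alpha, gdouble dst_alpha, gdouble ratio)
  {
    gdouble a = dst_alpha * (1.0 - ratio) + src_alpha * ratio;

    /* 1.0 * (1 - r) + 1.0 * r == 1.0, so opaque over opaque stays opaque */
    return MIN (a, 1.0);
  }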
https://bugzilla.gnome.org/show_bug.cgi?id=784827
Found on the Raspberry Pi when gpu_mem is too low, so there is not enough
memory to create the EGLImage, yet gst_buffer_pool_acquire_buffer() still
succeeds. This leads to a CRITICAL assertion:
gst_egl_image_get_image: assertion 'GST_IS_EGL_IMAGE (image)' failed
https://bugzilla.gnome.org/show_bug.cgi?id=785518
Otherwise check_events() will not remove the GAP event (as the queue
tail is not the event anymore but the GAP buffer), then the GAP buffer
is handled, then the GAP event is handled again, ... forever.
Avoids dereferencing dead objects
What happens in the autovideosink case is that context 1 is created and
destroyed before all the async operations have executed on the associated
window. When the delayed operations execute, they then reference dead
objects and crash.
We fix this by keeping refs over all async operations so the object
cannot be deleted while async operations are in flight.
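A minimal sketch of the pattern with illustrative names: the ref taken when
scheduling keeps the window alive until the async operation has run.

  static void
  delayed_operation (gpointer data)
  {
    GstGLWindow *window = data;

    GST_TRACE_OBJECT (window, "running delayed operation");
    /* ... the async work; the object is guaranteed to still be alive ... */
  }

  static void
  schedule_on_window (GstGLWindow * window)
  {
    /* Take a ref when scheduling; the destroy notify drops it once the
     * callback (or a flush) has consumed the data. */
    gst_gl_window_send_message_async (window, delayed_operation,
        gst_object_ref (window), (GDestroyNotify) gst_object_unref);
  }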
https://bugzilla.gnome.org/show_bug.cgi?id=782379
This is a workaround for a regression introduced by
f4190a49c0
( adaptivedemux: Check live seeking range more often )
The goal of the previous commit was to be able to cope with non-1.0
rates on live streams which have a "seeking window" (i.e. the server
keeps around quite a bit of the live stream so you can seek back into
it).
Without that commit, two different kinds of issues would happen:
* When doing reverse playback, you would never check whether you
are outside of the seekable region, and would then continuously
try to download fragments that are no longer present.
* When doing fast forward, you would end up requesting fragments
which are not present yet.
In order to determine whether one was *really* outside of the seekable
window, we check whether the current stream position is still
within the seekable region.
The *problem* though with that commit is that it assumes that subclasses
will return continuously updated seeking ranges (i.e. dependent on the
current time), which is *NOT* the case.
For example:
* dashdemux does use the current UTC to determine the seekable region
* hlsdemux uses the values from the last updated manifest
Therefore if one downloads fragments faster than realtime, for HLS
we would end up at the end of the last manifest seekable range, and
the previous commit would consider the stream as being ended... which
is not the case.
In the long run, we need to figure out a way to cope with non-1.0
rates on live streams for all types of stream (including HLS).
https://bugzilla.gnome.org/show_bug.cgi?id=783075
This is a race that was exposed by the {hls|dash}.scrub_forward_seeking
validate test.
The "race" is that a subclass might want to change format, causing
a new stream to be created (but not exposed/switched yet) and put on the
prepared_streams list. That stream will have values (including pending
segment) from the pre-seek state.
Before the stream is exposed/switched, a new seek comes in and the current
stream values get updated ... but the prepared streams that are about to be
switched to don't get updated, causing them to push out wrong segments once
they are exposed.
https://bugzilla.gnome.org/show_bug.cgi?id=773159
Add a function to install the default RGBA pad templates,
but don't make them required so that there can be
GstGLFilter sub-classes with different input/output
caps if they want. Remove the hard-coded RGBA restriction in
the set_caps_features call, as it will be taken care
of by intersecting with the pad templates.
Update all the sub-classes to match.
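A hedged usage sketch, assuming the new helper is exposed roughly as
gst_gl_filter_add_rgba_pad_templates() and using an illustrative sub-class
name: a sub-class happy with the default RGBA caps installs them from its
class_init, while one with different caps adds its own templates instead.

  static void
  gst_my_gl_effect_class_init (GstMyGLEffectClass * klass)
  {
    GstGLFilterClass *filter_class = GST_GL_FILTER_CLASS (klass);

    /* install the (no longer mandatory) default RGBA pad templates */
    gst_gl_filter_add_rgba_pad_templates (filter_class);
  }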
Adaptive demuxers are special demuxers that run their own sources
internally. In this patch we flag the demuxer as being a source in order
to receive the downstream events. We then handle the EOS event by
resetting the internal state and pushing EOS on all pads. This handling
is done asynchronously to avoid blocking the user thread.
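A minimal sketch of the flagging part (the EOS handling itself is elided):

  static void
  gst_adaptive_demux_init (GstAdaptiveDemux * demux)
  {
    /* ... */

    /* Bins dispatch downstream events (such as EOS) to their source
     * elements, so mark the demuxer as a source to receive them. */
    GST_OBJECT_FLAG_SET (demux, GST_ELEMENT_FLAG_SOURCE);
  }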
https://bugzilla.gnome.org/show_bug.cgi?id=723868
When the pad has received EOS, its buffer may still be mixed
any number of times when the pad's framerate is lower
than the output framerate.
This was introduced by my patch in
https://bugzilla.gnome.org/show_bug.cgi?id=782962, this patch
also correctly addresses the initial issue.
On the Raspberry Pi, no pkg-config file is provided for the bcm_host
library. We use AC_CHECK_LIB to detect this lib with autotools;
cc.find_library() is the closest meson equivalent.
https://bugzilla.gnome.org/show_bug.cgi?id=784026
We have to pass the "height" as height = vmeta->offset[1] / width to the
API, which of course does not work well for formats with only a single
plane. Use the whole memory size instead of the offset in that case.
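A hedged sketch of the fallback (names are illustrative):

  static guint
  effective_height (const GstVideoInfo * vinfo, const GstVideoMeta * vmeta,
      GstMemory * mem, guint width)
  {
    if (GST_VIDEO_INFO_N_PLANES (vinfo) > 1)
      return vmeta->offset[1] / width;  /* height implied by start of plane 1 */

    /* single-plane formats: derive the height from the whole memory size */
    return gst_memory_get_sizes (mem, NULL, NULL) / width;
  }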
The previous commit let demux call gst_uri_downloader_cancel() in _demux_reset().
Note that _demux_reset() is called during the PAUSED_TO_READY and READY_TO_PAUSED
transitions, and that it sets "cancelled" on the uridownloader, which blocks
further use of it. The issue is that subclasses can use the uridownloader not
only for manifest updates during live streaming, but also for fetching other
manifests, such as the variant and rendition m3u8 playlists of an HLS stream.
So to unblock it, demux should clear "cancelled" before processing the initial manifest.
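A hedged sketch of the unblocking step, assuming the uridownloader exposes a
reset that clears the cancelled flag (gst_uri_downloader_reset()) and using
illustrative function/field names:

  static gboolean
  handle_initial_manifest (GstAdaptiveDemux * demux, GstBuffer * manifest)
  {
    /* make the downloader usable again in case a previous state change
     * left it cancelled */
    gst_uri_downloader_reset (demux->downloader);

    /* ... parse the manifest, possibly fetching variant/rendition
     * playlists through the same downloader ... */
    return TRUE;
  }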
https://bugzilla.gnome.org/show_bug.cgi?id=783401
before broadcasting preroll.
The deadlock was as follows:
-> The subclass pushes a buffer on a newly-created stream in T1
-> We take the preroll lock in T1, to handle_preroll
-> The demuxer is stopped in T2, we take the MANIFEST_LOCK
-> T1 starts blocking because it received a reconfigure event
and needs to take the MANIFEST_LOCK
-> T2 deadlocks because it now wants the preroll_lock.
https://bugzilla.gnome.org/show_bug.cgi?id=783255
Make sure the manifest update loop is stopped before proceeding with the
resetting of the manifest data. Otherwise, the update loop will try to
use it, which leads to a segfault.
https://bugzilla.gnome.org/show_bug.cgi?id=783028
As we release the MANIFEST_LOCK in stop_tasks,
demux->priv->old_streams can be set; we need to free these,
otherwise we may end up trying to dispose of elements in the
READY state.
https://bugzilla.gnome.org/show_bug.cgi?id=783256
When an accurate seek is requested on a live stream, only request the
exact value for the "starting position" (i.e. start in forward playback
and stop in reverse playback).
https://bugzilla.gnome.org/show_bug.cgi?id=782698
And to make that 100% obvious, only use variables declared within the
switch cases instead of function-wide ones.
Also remove a useless one-time-use-only variable.
CID #1409857
The live seeking range was only checked when doing actual seeks. This was
assuming that the rate would always be 1.0 (i.e. the playback would
advance in realtime, and therefore fragments would always be available
since the seeking window moves at the same rate).
With non-1.0 rates, this is no longer valid, and therefore we need
to check whether we are still within the live seeking range when advancing.
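A hedged sketch of the added check (helper and field names are illustrative):

  static GstFlowReturn
  check_position_in_live_window (GstAdaptiveDemux * demux,
      GstAdaptiveDemuxStream * stream)
  {
    gint64 range_start, range_stop;

    /* illustrative helper: ask the subclass for the current seekable range */
    if (gst_adaptive_demux_get_live_seek_range (demux, &range_start,
            &range_stop)) {
      if ((gint64) stream->segment.position < range_start ||
          (gint64) stream->segment.position > range_stop)
        return GST_FLOW_EOS;    /* fragment not available (anymore/yet) */
    }
    return GST_FLOW_OK;
  }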
https://bugzilla.gnome.org/show_bug.cgi?id=783075
What we want is to retry downloading the fragment on 4xx/5xx errors;
however, returning EOS will cause waiting for a manifest update for live
streams (which may take a really long time) or stop everything for non-live.
Change that to only return EOS/ERROR once we've reached the error limit.
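Roughly (counter and limit names are illustrative):

  static GstFlowReturn
  handle_fragment_download_result (GstAdaptiveDemuxStream * stream,
      GstFlowReturn ret)
  {
    if (ret == GST_FLOW_ERROR) {
      /* swallow the failure and retry until the limit is reached */
      if (++stream->download_error_count <= MAX_DOWNLOAD_ERROR_COUNT)
        return GST_FLOW_OK;
    }
    return ret;                 /* only now surface EOS/ERROR */
  }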
https://bugzilla.gnome.org/show_bug.cgi?id=776609
GL_RGB565 is a sized internal format; the corresponding unsized format
should be GL_RGB and the type GL_UNSIGNED_SHORT_5_6_5. Otherwise
GL_INVALID_ENUM is returned when creating the texture.
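For example, with GLES3 the matching triplet looks like this:

  static void
  alloc_rgb565_texture (GLsizei width, GLsizei height)
  {
    /* sized internal format + unsized format + matching component type */
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB565, width, height, 0,
        GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
  }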
https://bugzilla.gnome.org/show_bug.cgi?id=783066
This ensures that they really get processed in order with
buffers. Just waiting for the queue to be empty is sometimes not
enough as the buffers are dropped from the pad before the result is
pushed to the next element, sometimes resulting in surprising
re-ordering.
In the case where an aggregator is created and pads are requested but only
linked later, we end up never updating the upstream latency.
This is because latency queries on pads that are not linked succeed,
so we never did a new query once a live source had been linked, and
thus the thread was never started.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
When caps change while streaming, the new caps were getting processed
immediately in videoaggregator, but the next buffer in the queue that
corresponds to these new caps was not necessarily used immediately,
which sometimes resulted in using an old buffer with the new caps. Of course
there used to be a separate buffer_vinfo for mapping the buffer with its
own caps, but in compositor the GstVideoConverter was still using the wrong
info, which resulted in invalid reads and corrupt output.
This approach is safer: we delay using the new caps
until we actually select the next buffer in the queue for use.
This way we also eliminate the need for buffer_vinfo, since the
pad->info is always in sync with the format of the selected buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=780682
The function needs to be unlocked if any data is received, but the
first-buffer processing should only end on an actual buffer; synchronized
events don't matter for the first-buffer processing.
https://bugzilla.gnome.org/show_bug.cgi?id=781673
With the macOS/iOS implementations, the active thread can change
multiple times over the life of a pipeline which would expose a race in
the thread tracking.
Fix by taking a ref on the active thread while the context is active.
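A hedged sketch of the fix (the private field name is illustrative): the
context keeps its own reference on the thread for as long as it is active.

  static void
  context_activated (GstGLContext * context)
  {
    /* hold a ref so the pointer can never dangle while we are active */
    context->priv->active_thread = g_thread_ref (g_thread_self ());
  }

  static void
  context_deactivated (GstGLContext * context)
  {
    g_thread_unref (context->priv->active_thread);
    context->priv->active_thread = NULL;
  }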
https://bugzilla.gnome.org/show_bug.cgi?id=779202