Create a single, global GDK GL context and the corresponding GStreamer
GL display and wrapped GStreamer GL context when initializing the first
sink, and reuse them for all further sinks.
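
Roughly, the process-wide state could look like this (a sketch using
once_cell; names are hypothetical and the platform-specific wrapping of
the GDK context's native handle is omitted):

    use once_cell::sync::OnceCell;

    // Hypothetical sketch: created once when the first sink
    // initializes, reused by every later sink.
    static GL_DISPLAY: OnceCell<gst_gl::GLDisplay> = OnceCell::new();
    static WRAPPED_CONTEXT: OnceCell<gst_gl::GLContext> = OnceCell::new();

    fn ensure_gl_state(display: gst_gl::GLDisplay, wrapped: gst_gl::GLContext) {
        GL_DISPLAY.get_or_init(|| display);
        WRAPPED_CONTEXT.get_or_init(|| wrapped);
    }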
Additionally, don't create a full GStreamer GL context inside the sink
but only distribute the wrapped GL context in the pipeline, so that
elements that actually need a full GL context can create one that
shares with it. The sink itself does not need a full GStreamer GL
context.
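
The distribution part could look roughly like this (a sketch assuming
the usual "gst.gl.app_context" convention with a "context" field in the
gst::Context structure):

    use gst::prelude::*;

    // Hypothetical sketch: answer GL context queries with the wrapped
    // GDK GL context so that other elements can create a full context
    // sharing with it.
    fn answer_context_query(
        query: &mut gst::query::Context,
        wrapped_context: &gst_gl::GLContext,
    ) -> bool {
        if query.context_type() != "gst.gl.app_context" {
            return false;
        }
        let mut context = gst::Context::new("gst.gl.app_context", true);
        {
            let context = context.get_mut().unwrap();
            context.structure_mut().set("context", wrapped_context);
        }
        query.set_context(&context);
        true
    }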
Then, inside the sink, check that any GL memory that arrives was
created by a GL context that can share with the wrapped GDK GL context,
and only use it in that case.
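
The check could be sketched like this (assuming the downcast helpers of
the Rust bindings; error handling omitted):

    use gst_gl::prelude::*;

    // Hypothetical sketch: only use GL memory whose originating
    // context can share with the wrapped GDK GL context.
    fn gl_memory_is_usable(
        buffer: &gst::BufferRef,
        wrapped_context: &gst_gl::GLContext,
    ) -> bool {
        match buffer
            .peek_memory(0)
            .downcast_memory_ref::<gst_gl::GLBaseMemory>()
        {
            Some(mem) => wrapped_context.can_share(mem.context()),
            None => false,
        }
    }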
And lastly, use the correct GL contexts for a) creating a sync point and
b) actually waiting on it.
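
In sketch form (GLSyncMeta method names as in the bindings, usage
assumed):

    // Hypothetical sketch: the sync point is created with the context
    // that produced the memory, but waited on with the wrapped GDK GL
    // context that will consume it.
    fn sync_to_gdk_context(
        buffer: &mut gst::BufferRef,
        producer_context: &gst_gl::GLContext,
        wrapped_context: &gst_gl::GLContext,
    ) {
        if buffer.meta::<gst_gl::GLSyncMeta>().is_none() {
            gst_gl::GLSyncMeta::add(buffer, producer_context);
        }
        let sync_meta = buffer.meta::<gst_gl::GLSyncMeta>().unwrap();
        sync_meta.set_sync_point(producer_context);
        sync_meta.wait(wrapped_context);
    }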
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/318
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1100>
If creating a playlist or fragment stream fails (the disk is full, the
directory was removed, ...), we currently crash because the signal
handler expects a non-None GIOStream. The actual callback is allowed to
return None and we handle that in the caller, so let's not impose this
restriction on the signal handler.
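
For illustration, the calling side could look like this (signal name
and stream type assumed from the element's API):

    use gst::prelude::*;

    // Hypothetical sketch: a None return (e.g. disk full) becomes an
    // error message instead of a panic in the signal marshalling.
    fn open_fragment_stream(
        sink: &gst::Element,
        location: &str,
    ) -> Result<gio::OutputStream, gst::ErrorMessage> {
        sink.emit_by_name::<Option<gio::OutputStream>>("get-fragment-stream", &[&location])
            .ok_or_else(|| {
                gst::error_msg!(
                    gst::ResourceError::OpenWrite,
                    ["Could not open fragment stream for {location}"]
                )
            })
    }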
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1097>
For uploaded objects the default content-type is set to
binary/octet-stream, which is correct.
Metadata cannot be used to set content-type and content-disposition, as
setting metadata adds an x-amz-meta prefix to the key: e.g. setting the
metadata "content-type=video/mp4" actually sets
x-amz-meta-content-type. So these have to be separate properties.
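
For example (a usage sketch; the exact property names are assumed):

    // Hypothetical sketch: content-type and content-disposition as
    // dedicated properties instead of "metadata" entries, which would
    // get the x-amz-meta- prefix added to their keys.
    fn make_s3_sink() -> Result<gst::Element, glib::BoolError> {
        gst::ElementFactory::make("awss3sink")
            .property("uri", "s3://my-bucket/video.mp4")
            .property("content-type", "video/mp4")
            .property("content-disposition", "inline")
            .build()
    }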
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1085>
This works around non-determinism in aggregator where, depending on
timing, it either consumes all buffers from both pads or waits for
another buffer on one pad while the other pad already has one.
The effect in this test was that it sometimes timed out. By providing
one more buffer it is now guaranteed that at this point the muxer is
beyond the end of the first fragment.
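
In terms of gst_check's harness, the extra buffer amounts to something
like this (timing values hypothetical):

    // Hypothetical test sketch: one more buffer past the fragment
    // boundary, so the muxer cannot still be waiting on this pad.
    fn push_one_more(harness: &mut gst_check::Harness, pts: gst::ClockTime) {
        let mut buffer = gst::Buffer::with_size(1).unwrap();
        {
            let buffer = buffer.get_mut().unwrap();
            buffer.set_pts(pts);
            buffer.set_duration(gst::ClockTime::SECOND);
        }
        harness.push(buffer).unwrap();
    }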
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1081>
The logic of the element requires the next buffer to be available
immediately after we are done pushing the previous one; otherwise we
insert a repeat.
Making the src loop handle events and queries broke this, as upstream is
almost guaranteed not to deliver a buffer in time if we allow non-buffer
items to block upstream's push.
To fix this, replace our single-item `Option` with a `VecDeque` that we
allow to hold an unlimited number of events or queries, but only one
buffer at a time.
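
A sketch of the resulting queue (types simplified):

    use std::collections::VecDeque;

    // Hypothetical sketch: events and queries are unbounded, but at
    // most one buffer may be queued at a time.
    enum Item {
        Buffer(gst::Buffer),
        Event(gst::Event),
        // queries are serviced from the src loop; details omitted
    }

    struct Queue {
        items: VecDeque<Item>,
    }

    impl Queue {
        // The sinkpad only blocks when a buffer is already queued, so
        // events and queries can never delay the next buffer.
        fn can_push_buffer(&self) -> bool {
            !self.items.iter().any(|i| matches!(i, Item::Buffer(_)))
        }

        fn push_event(&mut self, event: gst::Event) {
            self.items.push_back(Item::Event(event));
        }
    }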
In addition, the code was confused about the current caps and segment.
This wasn't an issue before making the src loop handle events and
queries, as only the sinkpad cared about the current segment, applying
it to the buffers it received, and only the srcpad cared about the
current caps, sending them just before the next received buffer.
Now the sinkpad cares about caps (through `update_fallback_duration`)
and the srcpad cares about the segment (when not in single-segment
mode).
Fix this by the following changes (sketched in code after the list):
- making `in_caps` always hold the current caps of the sinkpad,
- adding `pending_caps`, which is used by the srcpad to store
caps to be sent with the next received buffer,
- adding `in_segment`, holding the current segment of the sinkpad,
- adding `pending_segment`, which is used by the srcpad to store
the segment to be sent with the next received buffer,
- adding `out_segment`, holding the current segment of the srcpad.
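
A sketch of the resulting state fields (field names from the list
above, types assumed):

    // Hypothetical sketch of the state after this change.
    struct State {
        // Always the current caps of the sinkpad, used by
        // update_fallback_duration().
        in_caps: Option<gst::Caps>,
        // Caps the srcpad still has to send before the next received
        // buffer.
        pending_caps: Option<gst::Caps>,
        // Current segment of the sinkpad.
        in_segment: Option<gst::Segment>,
        // Segment the srcpad still has to send before the next
        // received buffer.
        pending_segment: Option<gst::Segment>,
        // Current segment of the srcpad.
        out_segment: Option<gst::Segment>,
    }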
Maybe a fix for
https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/298.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1082>