I didn't find the behavior and purpose of streamsynchronizer documented
or intuitive. Eventually I got Edward to explain it to me, which was
very helpful. Now I'm contributing some docs so that the next person
doesn't have to figure it out by asking around and hoping for an answer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7084>
When glupload generates sink caps based on src caps after determining the
upload method, the src caps may only contain the RGBA format.
In this case, the raw caps on the sink pad generated by glupload will only
contain the RGBA format, which will cause caps negotiation to fail, because
the filter caps used for negotiation by the upstream element may only contain
other formats, such as xBGR.
Add the formats supported by #GstGLMemory to raw caps to ensure that caps negotiation
succeeds.
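A toy illustration of the failure mode (hypothetical caps values, runnable against gstreamer-1.0):

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      gst_init (&argc, &argv);

      /* What glupload's sink pad used to offer (RGBA only)... */
      GstCaps *sink_caps = gst_caps_from_string ("video/x-raw, format=RGBA");
      /* ...versus an upstream element that can only produce xBGR. */
      GstCaps *filter_caps = gst_caps_from_string ("video/x-raw, format=xBGR");
      GstCaps *result = gst_caps_intersect (sink_caps, filter_caps);

      /* An empty intersection means negotiation fails. */
      g_print ("intersection empty: %d\n", gst_caps_is_empty (result));

      gst_caps_unref (result);
      gst_caps_unref (filter_caps);
      gst_caps_unref (sink_caps);
      return 0;
    }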
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7061>
release_frame() can be useful for manually dropping frames without posting QoS messages like finish_frame() would.
Matches the same kind of API on the decoder side of things.
Modifies the behaviour of release_frame() to make sure events from released frames are stored as 'pending'
and pushed before the next non-dropped frame. This is needed because release_frame() can now be called outside of
finish_frame(), so we would otherwise potentially lose events and bad things would happen.
drop_frame() was also added to match the decoder API. It functions almost identically to finish_frame() without a buffer
attached to the frame, except instead of immediately pushing the frame's events, it will store them as pending.
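A rough sketch of how an encoder subclass might use this (frame_should_be_skipped() is a hypothetical helper, and the drop_frame() signature is assumed to mirror the decoder-side GstFlowReturn-returning variant):

    static GstFlowReturn
    my_encoder_handle_frame (GstVideoEncoder * enc, GstVideoCodecFrame * frame)
    {
      if (frame_should_be_skipped (frame)) {
        /* Drop the frame; its events are stored as pending and pushed
         * before the next non-dropped frame. */
        return gst_video_encoder_drop_frame (enc, frame);
      }

      /* ... encode, attach the output buffer to the frame ... */
      return gst_video_encoder_finish_frame (enc, frame);
    }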
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7190>
In the case where conn->input_stream is NULL and GLib was built with
"glib_checks" enabled, g_pollable_input_stream_read_nonblocking()
returns -1 but does not set "err".
The call stack:
read_bytes() ->
fill_bytes() ->
fill_raw_bytes()
The return value of -1 is passed up to read_bytes() and incorrectly
processed there after the "error:" label.
This changes the return value to EINVAL.
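A loose sketch of the idea (the actual patch makes the helper report EINVAL; here the same guard is shown with a GError for illustration, function name hypothetical):

    static gssize
    fill_raw_bytes_sketch (GstRtmpConnection * conn, guint8 * data, gsize size,
        GError ** error)
    {
      if (conn->input_stream == NULL) {
        /* Fail explicitly instead of returning -1 with "err" left unset,
         * which the "error:" path in read_bytes() cannot interpret. */
        g_set_error_literal (error, G_IO_ERROR, G_IO_ERROR_INVALID_ARGUMENT,
            "connection has no input stream");
        return -1;
      }

      return g_pollable_input_stream_read_nonblocking (
          G_POLLABLE_INPUT_STREAM (conn->input_stream), data, size, NULL, error);
    }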
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7210>
glCheckFramebufferStatus() can fail by returning 0, or otherwise return a
status. Fix the trace to make it clear when we get an unknown status as
opposed to an error, in which case we also trace the error code.
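The distinction, roughly (sketch in GStreamer GL function-table style, not the exact trace code):

    GLenum status = gl->CheckFramebufferStatus (GL_FRAMEBUFFER);

    if (status == 0) {
      /* The call itself failed; the reason is in the GL error. */
      GST_WARNING ("glCheckFramebufferStatus error: 0x%x", gl->GetError ());
    } else if (status != GL_FRAMEBUFFER_COMPLETE) {
      /* We did get a status back, just not one we recognise as complete. */
      GST_WARNING ("framebuffer status: 0x%x", status);
    }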
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7291>
videoscale does not have a convert function, so remove the convert
description from its classification. Otherwise, if we want to use
autovideoconvert to convert colorspace, autovideoconvert will select
videoscale to do the conversion, which will fail.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7215>
${libdir}/gstreamer-1.0/include is only valid after installation, but
extra_cflags are added unconditionally, so we can't use that for
include flags.
Instead, let's add the include flag via variables, which are different
for installed and uninstalled pc files.
This is particularly bad for consuming GStreamer via CMake which barfs
on non-existent include paths.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7130>
The video meta API is now a mandatory request for DMA kind negotiation. In
the current code, the raw method always adds that video meta API to the query
in propose_allocation(), so we do not need to duplicate that work here.
Just add a comment stating that.
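The request in question is the usual allocation-meta line in propose_allocation() (sketch, for context):

    /* Advertise GstVideoMeta support on the allocation query; for DMA kind
     * negotiation this is mandatory, and the raw method already adds it. */
    gst_query_add_allocation_meta (query, GST_VIDEO_META_API_TYPE, NULL);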
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6698>
- Align `glib_debug`, `glib_assert` and `glib_checks` options with GLib,
otherwise glib subproject won't inherit their value. Previous names
and values are preserved using Meson's deprecation mechanism.
- Add `extra-checks` and `benchmarks` options in the main project so it
can be inherited in GStreamer subprojects.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1165>
We were storing the probe id in a different structure (DecodebinOutputStream)
than the pad it is targeting (which is in MultiQueueSlot).
The problem is that when re-targeting outputs (to a different slot) we would
end up with an invalid probe id, or without a reference to an existing one.
Instead, store the probe id in the same structure as the pad it is targeting.
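In other words, keep the id next to the pad it refers to (sketch; slot_event_probe_cb() is a hypothetical callback name):

    /* Both the pad and its probe id now live in MultiQueueSlot, so
     * re-targeting an output can no longer orphan or lose the probe id. */
    slot->probe_id = gst_pad_add_probe (slot->src_pad,
        GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, slot_event_probe_cb, slot, NULL);

    /* ... and when the slot goes away ... */
    if (slot->probe_id)
      gst_pad_remove_probe (slot->src_pad, slot->probe_id);
    slot->probe_id = 0;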
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7069>
ensure_input_parsebin() has a top comment saying it must be called with
INPUT_LOCK taken, but 2 out of 3 usages of the function call it without
taking that mutex.
This patch adds locking in these two remaining usages.
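The pattern applied at those call sites (sketch; INPUT_LOCK/INPUT_UNLOCK are decodebin3's input mutex helpers, argument list abbreviated):

    INPUT_LOCK (dbin);
    ensure_input_parsebin (dbin, input);
    INPUT_UNLOCK (dbin);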
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5279>
This fixes a regression introduced by 6c4f52ea20
There are cases where the input stream will be push-based, use a time segment,
and have neither a collection nor caps. This means the event-based checks are
not sufficient to decide when/where to plug in an identity or parsebin to
process the input.
For those corner cases we set up a buffer probe to ensure we always end up with
at least a parsebin.
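Roughly, the fallback is a buffer probe on the input pad (sketch; input_buffer_probe_cb() is a hypothetical callback name):

    /* If no collection/caps event told us what to plug before data arrives,
     * decide when the first buffer shows up. */
    gst_pad_add_probe (input->src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        input_buffer_probe_cb, input, NULL);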
Fixes #3609
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/7010>
The typefind code was rejecting content smaller than 128 bytes, making it
impossible to play very small SRT files.
But those can actually be properly detected, so fix typefind to allow
smaller content and try its best with it.
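For illustration, a complete SRT cue fits comfortably below 128 bytes, so a typefinder can still recognise it from the first few bytes (sketch, not the actual implementation):

    static void
    srt_type_find_sketch (GstTypeFind * tf, gpointer user_data)
    {
      /* "1\n00:00:00,000 --> 00:00:01,000\nHi\n" is a valid SRT file and is
       * well under 128 bytes. */
      const guint8 *data = gst_type_find_peek (tf, 0, 16);

      if (data == NULL)
        return;

      if (data[0] == '1' && (data[1] == '\n' || data[1] == '\r'))
        gst_type_find_suggest_simple (tf, GST_TYPE_FIND_POSSIBLE,
            "application/x-subtitle", NULL);
    }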
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6937>
When measuring video latency, one mechanism involves taking a photo
with a camera of two screens showing the test video overlayed with
timeoverlay or clockoverlay. In these cases, if the display's pixel
response time is crappy, you will see ghosting which can make it
quite difficult to discern what the current timestamp being shown is.
This commit adds a property that *also* shows the timestamp in
a different (sequentially predictable) location every frame, which
makes it easy to tell what the latest rendered timestamp is.
For bonus points, you can also use the fade-time of the previous frame
to measure with sub-framerate accuracy when the photo was taken, not
just clamped to the framerate, giving you a higher precision latency
value.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6935>
When dealing with push-based inputs, we are now delaying the creation of
parsebin/identity until we get all pre-buffer events.
We therefore can simplify the handling of new pads being linked and only have to
check if upstream can handle pull-based or not.
Avoids creating parsebin for parsed upstream data altogether
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6953>
A simple gst_gl_sync_meta_wait() is not sufficient to ensure GL commands
are executed before dma-buf devices get to see the buffer.
This is the first step that should make the code behave correctly for
everybody, although there may be a performance penalty. In the future we
should introduce a more general sync meta that would allow moving the
waiting from gldownload (the producer) to the sink elements (the
consumers).
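Conceptually, the producer-side wait has to guarantee the commands actually completed, not just that later GL commands are ordered after them (sketch; gst_gl_sync_meta_wait_cpu() is the existing CPU-side wait):

    GstGLSyncMeta *sync = gst_buffer_get_gl_sync_meta (buffer);

    if (sync) {
      /* gst_gl_sync_meta_wait() only orders later GL commands after the
       * fence; dma-buf consumers are not GL clients, so wait until the
       * commands have actually executed before handing the buffer over. */
      gst_gl_sync_meta_wait_cpu (sync, context);
    }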
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6968>
When we are dealing with parsed inputs (i.e. using identity), we need to ensure
that we have a valid stream collection (and therefore DBCollection) before
anything flows downstream.
In those cases, we hold onto those events until we get such a collection.
Fixes #3356
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
This commit separates collection and selections into a new separate structure:
DecodebinCollection.
This provides a much cleaner/saner way of dealing with collections being
updated, gapless playback, etc...
There is now a list of DecodebinCollection in flight, of which two are special:
* input_collection, the currently inputted/merged collection
* output_collection, the currently active collection on the output of multiqueue
Handling GST_EVENT_SELECT_STREAMS is split by looking for the collection to
which it applies, and storing the requested streams in it. If and only if that
collection is output_collection can we do the switch immediately; otherwise it
will be applied when that collection becomes active.
Detecting which collection/selection is active is done by looking at the
GST_EVENT_STREAM_START on the output of the multiqueue.
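A simplified view of the new structure (field names approximate, for orientation only):

    typedef struct
    {
      GstStreamCollection *collection;  /* the collection this wraps */
      GList *requested_selection;       /* stream-ids requested for it */
      gboolean posted;                  /* collection message posted yet ? */
      /* ... */
    } DecodebinCollection;

    /* dbin->input_collection  : collection currently merged on the input side
     * dbin->output_collection : collection active on the multiqueue output */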
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Move the handling of GST_EVENT_STREAM_START on a slot to a separate function
* There was a lot of usage of `gst_stream_get_stream_id()` for the slot
active_stream. Cache that instead of constantly querying it.
* Rename the variables in `handle_stream_switch()` to be clearer
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Centralize associating an output to a slot in one function, including properly
resetting those fields
* Rename functions to be more explicit
* Move code to "reset" an output stream into a dedicated function (will be used
later)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
* Rename the function names to be clearer, with prefixes
* Pass the input (or stream) directly where appropriate
* Document usage, inputs, ownership
* Rename variables for clarity where applicable
* Avoid double lock/unlock if callee can handle it directly
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
Simplify its usage by having it directly create the message if the collection
changed. This is what callers were always doing and avoids releasing the
selection lock yet another time.
Also use it in more places to avoid code repetition
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6774>
To simplify the description, I'm assuming we only have two streams: video and audio.
For the video stream, we have the following events :
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(1) => blocked waiting in gst_stream_synchronizer_wait
- FLUSH_START => unblocked
- FLUSH_STOP => stream->wait reset to false
- NEW_SEGMENT(2) => not waiting, since stream->wait is false
Then for the audio stream, we have the following events :
- STREAM_START => stream->wait set to true
- NEW_SEGMENT(2) => blocked waiting in gst_stream_synchronizer_wait forever.
Note: The first NEW_SEGMENT event and the FLUSH_START, FLUSH_STOP events of the audio stream
are dropped before being received by the streamsynchronizer element, because the decodebin audio pad src
is not yet linked to the playsink audio pad sink.
To fix this deadlock, we don't reset stream->wait to false on FLUSH_STOP when
the stream is not waiting for the EOS of the other streams.
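In code terms, the FLUSH_STOP handling becomes roughly (sketch; the lock macro and the waiting-for-EOS flag name are approximate):

    case GST_EVENT_FLUSH_STOP:
      GST_STREAM_SYNCHRONIZER_LOCK (self);
      /* Only clear the wait flag if this stream was actually waiting for the
       * other streams' EOS; otherwise the next NEW_SEGMENT skips the wait
       * while the other stream blocks forever. */
      if (stream->waiting_for_eos)
        stream->wait = FALSE;
      GST_STREAM_SYNCHRONIZER_UNLOCK (self);
      break;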
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6763>
In some cases you want to ensure that a specific element factory is used
while requiring some specific caps but this was not possible. You can
now do `qtmux:video/x-prores,variant=standard|factory-name=avenc_prores_ks`
to ensure that the `avenc_prores_ks` factory is used to produce the
'standard' variant of prores video stream.
This also slightly enhances the documentation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6875>
This counter is incremented once for every segment, meaning it would
e.g. overflow after 24 days when using 1ms segments. Once that happens,
completely wrong positions are reported and invalid memory is handed out
for writing/reading the next segments.
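For a signed 32-bit counter that works out to roughly:

    2^31 segments * 1 ms/segment = 2,147,483.648 s ≈ 24.8 days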
As the affected variables are unfortunately part of the public API of
the struct, a second set of variables is added together with accessor
functions and both variables are kept in sync for backwards
compatibility.
All existing users of the two variables are moved to the new ones but
external code might still run into the overflow.
This also slightly breaks API as external code updating the variables
will have no effect anymore but the only known user of this is
pulsesink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6740>
There's nothing requiring <= 64 channels except for getting the reorder
map and creating a channel mixing matrix, but those won't be possible to
call anyway as channel positions can only express up to 64 channels.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6819>
When the pipeline is
glvideomixerelement->glcolorconvertelement->gldownloadelement and
glcolorconvertelement is not in passthrough, the GL buffer pool between
glvideomixerelement and glcolorconvertelement will not add a GL sync meta
when allocating buffers. As a result, glcolorconvert's input buffer
has no sync meta to wait on.
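For reference, the sync meta is normally requested through the GL buffer pool configuration (sketch, assuming the standard pool option):

    GstStructure *config = gst_buffer_pool_get_config (pool);

    /* Ask the GL buffer pool to attach a GstGLSyncMeta to every buffer it
     * hands out, so downstream has a fence to wait on. */
    gst_buffer_pool_config_add_option (config,
        GST_BUFFER_POOL_OPTION_GL_SYNC_META);
    gst_buffer_pool_set_config (pool, config);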
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6756>
The assertion that was present before is a bit too harsh, since there is now
an (understandable) use-case where this could happen.
In the gapless use-case, with two files containing the same stream type
(e.g. audio), the first one *does* expose a collection with an audio stream,
but decoding fails (for whatever reason).
That would cause us to have configured an audio combiner which was never
used (i.e. not active).
Then the second file plays and we (wrongly) assume it should be activated,
whereas the combiner was indeed already present.
Demote the assertion to a warning and properly handle it
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3389
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6737>
Clarify the fact that `encodebasebin->src_pad` is set when using a static source
pad (`encodebin`), and that when it is not set, dynamically added source pads
are used instead (`encodebin2`).
Fixes usage of encodebin2 when profiles are updated
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6667>
Since https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6153,
subtitle "decoders" (i.e. which decode to raw text) are no longer auto-plugged
by parsebin.
But if a given format does not have a parser at all, we would end up outputting
non-time/non-parsed outputs.
In order to mitigate the issue, until such parsers are available, we check if
the subtitle stream is in TIME format or not (i.e. whether it comes from a
parser or demuxer). If not, we attempt to plug in a subtitle "decoder".
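The gist of the check (sketch; plug_subtitle_decoder() is a hypothetical name for the existing decoder-plugging path):

    /* A parser or demuxer would have produced a TIME segment; anything else
     * means the stream is unparsed, so plug a subtitle "decoder" for it. */
    if (segment->format != GST_FORMAT_TIME)
      plug_subtitle_decoder (dbin, stream);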
Fixes #3463
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6592>
Reset the waiting thread counter in all places, to be consistent
when sending the signal for the audio ring buffer. This fix applies it to
pause, stop and release, which are states that will go into a callback
of the subclass. Resetting the waiting counter avoids having an
executing thread of the same subclass trying to take the mutex when
calling gst_audio_ring_buffer_advance.
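The pattern now applied consistently (sketch; buf->waiting and GST_AUDIO_RING_BUFFER_SIGNAL are the ring buffer's waiter counter and signalling macro, locking shown for illustration):

    GST_OBJECT_LOCK (buf);
    /* Nothing should still be counted as waiting across pause/stop/release,
     * otherwise a subclass thread calling gst_audio_ring_buffer_advance()
     * from its callback may be woken up and grab the mutex at the wrong time. */
    g_atomic_int_set (&buf->waiting, 0);
    GST_AUDIO_RING_BUFFER_SIGNAL (buf);
    GST_OBJECT_UNLOCK (buf);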
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6195>
In practice the failure happened due to a race condition (the instance
had already been freed), but it could also happen if the instance
were NULL.
Instead of crashing, this sanity check is a more suitable option,
since with G_DEBUG=fatal-warnings it will still crash.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6475>
This reverts commit 8e923a8e2d.
This caused regressions, see #3303.
Without this commit, osxaudiosrc ! osxaudiosink won't work
right, but since that hasn't really been a huge problem
for years it's probably best to revert this until a proper
solution can be figured out.
(cherry picked from commit f04f86f3ee)
(cherry picked from commit 93255efece)
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6405>
None of the GL allocators actually offer a generic alloc() implementation. As a
side effect, they cannot be offered, as they don't work with the generic video
buffer pool.
Our specialized buffer pool can be dropped by tee or alphacombine, as sharing the
same buffer pool over two branches is not supported by the pool API.
Fixes #3372
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6327>
Memory from gst_adapter_map() could live shorter than the GstMemory that the GstBuffer wraps around it, which in lucky
cases 'just' caused re-use of the same memory for multiple (potentially still in use!) input buffers, but could easily
end up pointing to already-freed memory.
This manifested as an AudioToolbox encoder getting silence inserted in seemingly random circumstances; it turned out
to be the memory being re-used by GStreamer at the same time that the AT API was processing it.
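The safe pattern is to take a buffer from the adapter instead of wrapping the pointer returned by gst_adapter_map() (sketch):

    /* Wrong: the pointer from gst_adapter_map() is only valid until the next
     * call on the adapter, so a GstBuffer wrapping it can outlive the data. */
    /* const guint8 *data = gst_adapter_map (adapter, size); */

    /* Right: take ownership of a GstBuffer whose memory the adapter will not
     * reuse behind our back. */
    GstBuffer *inbuf = gst_adapter_take_buffer (adapter, size);
    /* ... hand inbuf to the encoder ... */
    gst_buffer_unref (inbuf);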
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6320>
This will mimic the playbin2 behaviour, which sets the "next" entry to be
NULL.
The biggest impact this has is that when going back to READY the current play
entry will be discarded (instead of being kept around for when you go back to
PAUSED/PLAYING).
Fixes #3371
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6324>
find_slot_for_stream_id() will return a slot which has the requested stream-id as
active_stream *or* pending_stream (i.e. the slot on which that stream is
currently being outputted or will be outputted).
When figuring out which slot to use (if any), we want to treat a stream-id
which *will* appear on a given slot that isn't outputting anything yet the same
way as if we hadn't found a slot at all.
Fixes races when doing intensive state changes
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/6270>