Usually the latency message is only posted whenever the latency of an
element changes, but that might be too early, as the sinks might not be
able to query the latency at that point yet.
Similarly, adding a new sink should cause a latency reconfiguration once
that new sink is able to report its latency.
This fixes latency configuration in pipelines where webrtcbin is the
only "sink", i.e. it is used in a sendonly session. Before, the latency
would always be configured to 0.
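For reference, the application-side handling of latency messages stays
the same; a minimal sketch of reacting to GST_MESSAGE_LATENCY on the bus
with the existing core API:

  static gboolean
  bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
  {
    GstPipeline *pipeline = user_data;

    /* ask the pipeline to redistribute the (possibly changed) latency */
    if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_LATENCY)
      gst_bin_recalculate_latency (GST_BIN (pipeline));

    return TRUE;
  }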
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/843>
The peeked buffer was always reset after calling ::aggregate(), but under
no other circumstances. If a pad was removed after peeking and before
::aggregate() returned, the peeked buffer would be leaked.
This can easily happen if pads are removed from the aggregator from a
pad probe downstream of the source pad but still in the source pad's
streaming thread.
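One way to end up in that situation, sketched with the standard pad-probe
API (the "sink_0" pad name and the probe setup are illustrative, not taken
from this change):

  static GstPadProbeReturn
  remove_pad_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
  {
    GstElement *agg = user_data;
    /* release an aggregator sink pad from a probe that runs downstream of
     * the source pad, i.e. still in the source pad's streaming thread */
    GstPad *sinkpad = gst_element_get_static_pad (agg, "sink_0");

    if (sinkpad != NULL) {
      gst_element_release_request_pad (agg, sinkpad);
      gst_object_unref (sinkpad);
    }
    return GST_PAD_PROBE_OK;
  }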
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/784>
When parsing interlaced video streams, ignoring incoming DTS could
cause the parser to end up with PTS < DTS output buffers, for example
when increasing next_dts using the duration of the last pushed
buffer.
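A numeric illustration of how that happens (the values are made up for
this example):

  /* the last pushed buffer had DTS = 100ms and a duration of 20ms, so
   * next_dts gets bumped to 120ms; if the incoming DTS is ignored and the
   * next buffer's PTS is only 110ms, the output ends up with PTS < DTS */
  GstClockTime next_dts = 100 * GST_MSECOND + 20 * GST_MSECOND;
  GstClockTime pts = 110 * GST_MSECOND;

  g_assert (pts < next_dts);    /* the invalid situation this change avoids */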
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/681>
While we can fix the upstream latency using the min-upstream-latency
property, we are then forced to use queues (hence more threads) in order
to store the pending data whenever an upstream source has a lower latency.
This fixes the issue by allowing the fixed upstream latency to be buffered.
This is particularly handy on single-core systems, where having too many
threads can cause serious performance issues.
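For reference, the property mentioned above is set like any other
aggregator property; a sketch with an illustrative 30ms value:

  /* fix the upstream latency on the aggregator; with this change the
   * corresponding amount of data is buffered internally instead of
   * requiring extra queue elements */
  g_object_set (aggregator, "min-upstream-latency", 30 * GST_MSECOND, NULL);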
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/677>
e.g. h264parse ! video/x-h264,stream-format=avc receives the following:
- caps: video/x-h264,stream-format=byte-stream
- gap event: baseparse tries to choose some default caps but would
override the downstream-chosen caps field with the upstream value.
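The downstream constraint in that example corresponds to a capsfilter; a
minimal C sketch of the relevant setup (standard API, the pipeline itself
is just the example above):

  GstElement *parse = gst_element_factory_make ("h264parse", NULL);
  GstElement *capsfilter = gst_element_factory_make ("capsfilter", NULL);
  GstCaps *avc_caps = gst_caps_from_string ("video/x-h264,stream-format=avc");

  g_object_set (capsfilter, "caps", avc_caps, NULL);
  gst_caps_unref (avc_caps);
  /* h264parse must honour the stream-format fixed here even when it has to
   * pick default caps because a gap event arrives before any buffer */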
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/581>
Some base classes like videoaggregator try retrieving the latency during
construction, which causes the latency values to already be set before
any reconfiguration happens.
By resetting them the same way as in stop() we ensure that we always
start cleanly.
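What "retrieving the latency" amounts to, sketched with the standard query
API (for context only; the fix itself is just resetting the values in
init() the same way stop() does):

  GstQuery *query = gst_query_new_latency ();

  if (gst_element_query (element, query)) {
    gboolean live;
    GstClockTime min_latency, max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    /* before this change, values cached at construction time could leak
     * into the answer until a reconfiguration happened */
  }
  gst_query_unref (query);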
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/599>
When going to PLAYING we will now have a clock and can stop waiting on
the condition variable and instead start waiting on the clock if
necessary for the current configuration.
In the other direction when going to PAUSED the clock might have
disappeared and we might need to wait on the condition variable again
instead.
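A rough sketch of the two waiting modes being switched between (priv->cond,
priv->lock and run_time are placeholder names for illustration, not the
element's actual fields):

  if (clock != NULL) {
    /* PLAYING: a clock is available, wait on it */
    GstClockID id = gst_clock_new_single_shot_id (clock, run_time);

    gst_clock_id_wait (id, NULL);
    gst_clock_id_unref (id);
  } else {
    /* PAUSED: no clock, block on the condition variable instead */
    g_cond_wait (&priv->cond, &priv->lock);
  }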
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/601>
On the first buffer the base class would update the segment position
based on the start-time-selection. If the subclass provides its own
segment, this would cause unexpected behaviour and override segment
information that was explicitly set by the subclass.
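A sketch of a subclass installing its own segment via the existing
gst_aggregator_update_segment() API (the start value is illustrative):

  GstSegment segment;

  gst_segment_init (&segment, GST_FORMAT_TIME);
  segment.start = 5 * GST_SECOND;
  segment.position = segment.start;
  /* with this change, the position set here is no longer overridden by the
   * base class's start-time-selection handling on the first buffer */
  gst_aggregator_update_segment (GST_AGGREGATOR (self), &segment);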
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/600>
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/issues/771
for context.
This exposes new API that subclasses must call from their
aggregate() implementation to signal that they have selected
the next samples they will aggregate: gst_aggregator_selected_samples().
GstAggregator will emit a new signal there, `samples-selected`;
handlers can then look up samples per pad with the newly-added
gst_aggregator_peek_next_sample().
In addition, a new FIXME is logged when subclasses haven't actually
called `selected_samples` from their aggregate() implementation.
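An application-side sketch of the new API described above (the callback
signature follows the `samples-selected` documentation; passing a pad of
interest as user data is just an assumption of this example):

  static void
  samples_selected_cb (GstAggregator * agg, GstSegment * segment,
      GstClockTime pts, GstClockTime dts, GstClockTime duration,
      GstStructure * info, gpointer user_data)
  {
    GstAggregatorPad *pad = user_data;
    GstSample *sample = gst_aggregator_peek_next_sample (agg, pad);

    if (sample != NULL) {
      /* inspect the sample that will go into the next output buffer */
      gst_sample_unref (sample);
    }
  }

  /* ... */
  g_signal_connect (agg, "samples-selected",
      G_CALLBACK (samples_selected_cb), sinkpad);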
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/549>
In reverse playback, buffers have to be displayed at their buffer.stop
running time, otherwise the same set of buffers can't be displayed in the
exact opposite order to forward playback.
For example, seeking a video stream at 1fps with start=0, stop=5s, rate=1.0
will display the following buffers:
b0.pts = 0s, b0.duration = 1s - at running time = 0s
b1.pts = 1s, b1.duration = 1s - at running time = 1s
b2.pts = 2s, b2.duration = 1s - at running time = 2s
b3.pts = 3s, b3.duration = 1s - at running time = 3s
b4.pts = 4s, b4.duration = 1s - at running time = 4s
<wait at EOS for 1 second>
Now, playing that in reverse with start=0, stop=5s, rate=-1.0 has to display
the following buffers:
b0.pts = 4s, b0.duration = 1s - at running time = 0s
b1.pts = 3s, b1.duration = 1s - at running time = 1s
b2.pts = 2s, b2.duration = 1s - at running time = 2s
b3.pts = 1s, b3.duration = 1s - at running time = 3s
b4.pts = 0s, b4.duration = 1s - at running time = 4s
<wait at EOS for 1 second>
With the previous code, the following was produced instead:
b0.pts = 4s, b0.duration = 1s - at running time = 1s
b1.pts = 3s, b1.duration = 1s - at running time = 2s
b2.pts = 2s, b2.duration = 1s - at running time = 3s
b3.pts = 1s, b3.duration = 1s - at running time = 4s
b4.pts = 0s, b4.duration = 1s - at running time = 5s
<NO WAIT AT EOS AND POST EOS RIGHT AWAY>
This is being tested with the `validate.launch_pipeline.sink.reverse_playback_clock_waits.*`
set of tests.
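A sketch of the mapping being fixed, using the existing
gst_segment_to_running_time() API (only meant to show which timestamp gets
synchronised against):

  GstClockTime pts = GST_BUFFER_PTS (buf);
  GstClockTime duration = GST_BUFFER_DURATION (buf);
  GstClockTime stop = pts + duration;

  /* in reverse playback the buffer must be rendered at the running time of
   * its stop position, so that the buffer with pts=4s comes out at running
   * time 0s in the example above */
  GstClockTime run_time =
      gst_segment_to_running_time (segment, GST_FORMAT_TIME, stop);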
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/450>
In reverse playback, buffers are played back from buffer.stop
(buffer.pts + buffer.duration) to buffer.pts, which means that the
position after the buffer is consumed is buffer.pts, not buffer.pts -
buffer.duration.
Without that change, and when the `automatic_eos` feature is on,
we were dropping the last buffers, as we were marking the stream EOS one
buffer too soon.
This is now being tested extensively by GstValidate in the
`validate.test.clock_sync.*` set of tests.
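The position update described above, as a small sketch (not the literal
code of the element):

  GstClockTime position;

  if (segment->rate > 0.0) {
    /* forward: the buffer ends at pts + duration */
    position = GST_BUFFER_PTS (buf) + GST_BUFFER_DURATION (buf);
  } else {
    /* reverse: playback runs from buffer.stop down to buffer.pts, so the
     * position after consuming the buffer is buffer.pts */
    position = GST_BUFFER_PTS (buf);
  }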
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/450>
This allows the refcounting tracer to work better. In childproxy/iterator
these might be plain GObjects, but gst_object_unref() also works on them.
In other places, where the object is never a GstObject, g_object_unref()
is kept.
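The pattern in question, sketched with the child proxy API (the index is
illustrative):

  GObject *child =
      gst_child_proxy_get_child_by_index (GST_CHILD_PROXY (bin), 0);

  if (child != NULL) {
    /* child may be a plain GObject, but gst_object_unref() releases it
     * just the same and keeps the refcounting tracer informed */
    gst_object_unref (child);
  }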
Even when pulling a new 64KB buffer from upstream, don't return
more data than was asked for in the pull_range() method and then
return less later, as that confused subclasses like h264parse.
Add a unit test checking that, when a subclass asks for more data, it
always receives a larger buffer on the next iteration, never less.
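For context, this is how a baseparse subclass asks for more data (standard
API; the 4096-byte threshold is only an example):

  static GstFlowReturn
  my_parse_handle_frame (GstBaseParse * parse, GstBaseParseFrame * frame,
      gint * skipsize)
  {
    gsize avail = gst_buffer_get_size (frame->buffer);

    if (avail < 4096) {
      /* not enough data yet: ask for a larger buffer; with this change the
       * next call is guaranteed to see at least that much, never less */
      gst_base_parse_set_min_frame_size (parse, 4096);
      *skipsize = 0;
      return GST_FLOW_OK;
    }

    /* parse and output one 4096-byte frame (illustrative) */
    return gst_base_parse_finish_frame (parse, frame, 4096);
  }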
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/530
When running in pull mode (e.g. for mp3 reading),
baseparse currently reads 64KB from upstream, then mp3parse
typically consumes only around 417/418 bytes of it. Then
on the next loop, it will read a full fresh 64KB again,
which is a big waste.
Fix the read loop to use the available cache buffer first
before going for more data, until the cache drops to < 1KB.
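A rough shape of the fixed loop (a sketch with illustrative names, not the
actual baseparse code):

  if (cache != NULL && gst_buffer_get_size (cache) - cache_offset >= 1024) {
    /* serve the request from the cached 64KB buffer first */
    buffer = gst_buffer_copy_region (cache, GST_BUFFER_COPY_ALL,
        cache_offset, size);
    cache_offset += size;
  } else {
    /* cache (almost) exhausted: pull a fresh 64KB chunk from upstream */
    ret = gst_pad_pull_range (GST_BASE_PARSE_SINK_PAD (parse), offset,
        64 * 1024, &buffer);
  }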
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/518
During the course of processing a buffer, all of the pads in a flow
combiner may disappear. In this case, we would return NOT_LINKED.
Instead, return whatever the input flow return was.
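Typical usage of the combiner in a demuxer loop, for context (standard
API; demux->flowcombiner is a placeholder name):

  /* feed the per-pad push result into the combiner; with this change, if
   * every pad has been removed in the meantime the combiner now returns
   * the input flow return instead of GST_FLOW_NOT_LINKED */
  ret = gst_pad_push (srcpad, buffer);
  ret = gst_flow_combiner_update_pad_flow (demux->flowcombiner, srcpad, ret);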