The peeked buffer was only ever reset after calling ::aggregate(),
and under no other circumstances. If a pad was removed after peeking
and before ::aggregate() returned, the peeked buffer was leaked.
This can easily happen if pads are removed from the aggregator by a
pad probe that sits downstream of the source pad but still runs in the
source pad's streaming thread.
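A minimal sketch of the fix, assuming the peeked buffer is kept in a
peeked_buffer field on the pad's private struct (the release path shown
is illustrative):

    /* on pad removal, drop any buffer that was peeked for the ongoing
     * ::aggregate() call so it cannot be leaked */
    gst_buffer_replace (&pad->priv->peeked_buffer, NULL);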
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/784>
e.g. h264parse ! video/x-h264,stream-format=avc receives the following:
- caps: video/x-raw,stream-format=byte-stream
- gap event: baseparse tries to choose some default caps, but would
override the downstream-chosen caps field with the upstream value.
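A hedged sketch of the intended behaviour, with illustrative variable
names: only fill in a default for a caps field that downstream has not
already fixed:

    /* keep the stream-format downstream selected instead of
     * overriding it with the upstream value */
    if (!gst_structure_has_field (downstream_s, "stream-format"))
      gst_structure_set (s, "stream-format", G_TYPE_STRING,
          "byte-stream", NULL);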
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/581>
It was zero, which in some conditions meant that the control binding
values were ignored (as shown in the test). Set it to MAXDOUBLE so
that the first time we sync the values from a timestamp in the right
range, the proper value is computed.
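A minimal sketch of the initialization, assuming the cached timestamp
field is called last_value (name illustrative):

    /* use a sentinel outside any valid range so the first sync from a
     * real timestamp always recomputes the value */
    self->last_value = G_MAXDOUBLE;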
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/564>
Even when pulling a new 64KB buffer from upstream, don't return
more data than was asked for in the pull_range() method and then
return less later, as that confused subclasses like h264parse.
Add a unit test that checks that when a subclass asks for more data,
it always receives a larger buffer on the next iteration, never a
smaller one.
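A sketch of the clamping idea (not the exact baseparse code): never
hand back more bytes than the current request, even when a larger
buffer was pulled:

    /* trim the pulled buffer down to the requested size so a later,
     * smaller request can never look like the data shrank */
    if (gst_buffer_get_size (buf) > size)
      gst_buffer_resize (buf, 0, size);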
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/530
When running in pull mode (e.g. for mp3 reading),
baseparse currently reads 64KB from upstream, of which mp3parse
then typically consumes around 417/418 bytes. On the next loop,
it will read a full fresh 64KB again,
which is a big waste.
Fix the read loop to use the available cache buffer first
before going for more data, until the cache drops to < 1KB.
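A hedged sketch of the reworked loop, with illustrative names for the
cache bookkeeping:

    /* serve from the cached 64KB buffer while at least ~1KB remains;
     * only then pull a fresh chunk from upstream */
    if (parse->priv->cache != NULL &&
        gst_buffer_get_size (parse->priv->cache) - consumed >= 1024) {
      buffer = gst_buffer_copy_region (parse->priv->cache,
          GST_BUFFER_COPY_ALL, consumed, wanted);
    } else {
      ret = gst_pad_pull_range (parse->sinkpad, offset, 64 * 1024,
          &buffer);
    }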
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/518
During the course of processing a buffer, all of the pads in a
flow combiner may disappear. In this case, we would return
NOT_LINKED. Instead, return whatever the input flow return was.
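A minimal sketch of the fallback, assuming the caller's own flow
return is available as fret (names illustrative):

    /* no pads left to combine: fall back to the input flow return
     * rather than reporting GST_FLOW_NOT_LINKED */
    if (n_pads == 0)
      return fret;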
There was a case where we started waiting on the clock before setting
the clock time, leading to the wait succeeding instead of being late:
gsttestclock.c:1073:F:testclock:test_late_crank:0: '1 * GST_SECOND' (1000000000) is not equal to 'context.jitter' (-4000000000)
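A hedged sketch of the required ordering (not the exact crank code;
entry_time is illustrative): the clock time must be set before the
pending wait is processed, so the waiter sees the advanced time and
reports positive (late) jitter:

    /* advance the clock first, then process the pending entry, so
     * jitter is computed against the updated time */
    gst_test_clock_set_time (test_clock, entry_time);
    gst_test_clock_process_next_clock_id (test_clock);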
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/426
Co-authored-by: Mathieu Duponchelle <mathieu@centricular.com>
At the moment, we can only use crank if the pending entry is in the
future. This patch leaves the clock time at the same point if the
pending entry is in the past, while still executing that single entry.
This will be needed for the jitterbuffer: as soon as we stop waking up
the jitterbuffer when a timer is rescheduled to a later time, we may
end up with such a case in the unit tests.
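A minimal sketch of the new crank behaviour (internal names
illustrative):

    /* only advance the clock if the pending entry is in the future;
     * for an entry already in the past keep the current time, but
     * still process that single entry */
    if (GST_CLOCK_ENTRY_TIME (pending) > gst_clock_get_time (clock))
      gst_test_clock_set_time (test_clock,
          GST_CLOCK_ENTRY_TIME (pending));
    gst_test_clock_process_next_clock_id (test_clock);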
Related to #608
Instead of tracking "pending_flush_*" on the pads and the
aggregator, we now simply track the last seqnum for flush start
and flush stop events on the pads, and use it to determine whether
we should enter or exit our flushing state.
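A hedged sketch of the per-pad tracking (field names illustrative):

    /* remember the seqnum of the last flush event seen on this pad;
     * flushing is entered/exited by comparing these seqnums, not by
     * separate pending_flush_* flags */
    guint32 seqnum = gst_event_get_seqnum (event);
    if (GST_EVENT_TYPE (event) == GST_EVENT_FLUSH_START)
      pad->priv->last_flush_start_seqnum = seqnum;
    else
      pad->priv->last_flush_stop_seqnum = seqnum;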
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/977
We won't be able to do ASSERT_CRITICAL, but the main body of the tests
is still valid, and given that we ship GStreamer with this
configuration, it is important to be able to run some tests against it.
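A sketch of how such a guard might look in a test (the build flag here
is hypothetical):

    #ifdef HAVE_CRITICAL_ASSERTS    /* hypothetical configure flag */
      /* only meaningful when criticals can be trapped */
      ASSERT_CRITICAL (gst_buffer_unref (NULL));
    #endif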
Position queries with GST_FORMAT_TIME are supposed to return stream
time.
gst_base_sink_get_position() estimates the current stream time on its
own instead of using gst_segment_to_stream_time(), but the algorithm
used was not taking segment.offset into account, resulting in invalid
values when this field was set to a non-zero value.
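A minimal sketch of the corrected conversion, using the existing
segment helper:

    /* let GstSegment do the conversion; unlike the open-coded
     * estimate, this honours segment.offset */
    stream_time = gst_segment_to_stream_time (segment, GST_FORMAT_TIME,
        position);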
https://bugzilla.gnome.org/show_bug.cgi?id=792434
Add a gst_base_src_submit_buffer_list() function that allows subclasses
to produce a bufferlist containing multiple buffers in the ::create()
function. The buffers in the buffer list will then also be pushed out
in one go as a GstBufferList. This can reduce push overhead
significantly for sources with packetised inputs (such as udpsrc)
in high-throughput scenarios.
The _submit_buffer_list() approach was chosen because it is fairly
straightforward, backwards-compatible, bindings-friendly (as opposed
to e.g. making the create function return a mini object instead),
and it allows the subclass maximum control: the subclass can decide
dynamically at runtime whether to return a list or a single buffer
(which would be messier if we added a create_list virtual method).
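A usage sketch from a hypothetical push-based subclass ::create()
(error handling omitted):

    static GstFlowReturn
    my_src_create (GstPushSrc * src, GstBuffer ** buf)
    {
      GstBufferList *list = gst_buffer_list_new ();

      /* ... append one GstBuffer per received packet ... */

      /* hand the whole list to basesrc; it will be pushed downstream
       * in one go and *buf is left untouched */
      gst_base_src_submit_buffer_list (GST_BASE_SRC (src), list);
      return GST_FLOW_OK;
    }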
https://bugzilla.gnome.org/show_bug.cgi?id=750241
Convenience function to just grab all pending data
from the harness, e.g. if we just want to check if
it matches what we expect and we don't care about
the chunking or buffer metadata.
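A usage sketch, assuming the helper in question is
gst_harness_take_all_data() returning the raw bytes:

    gsize size;
    guint8 *data = gst_harness_take_all_data (h, &size);

    /* compare against the expected payload regardless of how it was
     * chunked into buffers */
    fail_unless (size == sizeof (expected));
    fail_unless (memcmp (data, expected, size) == 0);
    g_free (data);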
Based on patch by: Havard Graff <havard.graff@gmail.com>
Remove gst_init() from a few tests. Use _OBJECT variants in logging. Remove
arbitrary extra blank lines. Make push_event() more like push_buffer() - set
the event to NULL and add cleanup to _chain_data_clear().
Using two (or more) probes on the same pad where one of the probes
returns HANDLED or DROP is tricky, since the other probes might
not be called.
Instead, use regular probes and a proper pad (the sinkpad already
existed; it only needed to be activated and given a dummy chain
function for the events/buffers to be received/handled properly).
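A sketch of the dummy chain setup (standard GstPad API):

    static GstFlowReturn
    dummy_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer)
    {
      /* just consume the data so upstream pushes succeed */
      gst_buffer_unref (buffer);
      return GST_FLOW_OK;
    }

    gst_pad_set_chain_function (sinkpad, dummy_chain);
    gst_pad_set_active (sinkpad, TRUE);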
The failure was introduced by commit 2dfa548f36, which appends hooks
instead of prepending them. Because of this change, aggregated_cb is
not called, which leads to the failure.
Also, correct the test to check the flush-stop value instead of the
flush-start value.
https://bugzilla.gnome.org/show_bug.cgi?id=757801