We won't be able to do ASSERT_CRITICAL, but the main body of the tests
is still valid, and given that we ship GStreamer with this configuration,
it is important to be able to run some tests against it.
Allows determining from downstream what the expected bitrate of a stream
may be, which is useful in queue2 for setting time-based limits when
upstream does not provide timing information.
Implement bitrate query handling in queue2
https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/issues/60
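A rough sketch of how the query might be used; the function names below
(my_parser_src_query, query_downstream_bitrate) and the bitrate figure are
made up for illustration, only the gst_query_*_bitrate() calls are the
actual API:

    /* a downstream element (e.g. a parser) answering the query */
    static gboolean
    my_parser_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_BITRATE:
          /* report the nominal bitrate of the stream in bits per second */
          gst_query_set_bitrate (query, 128000);
          return TRUE;
        default:
          return gst_pad_query_default (pad, parent, query);
      }
    }

    /* how a buffering element such as queue2 might issue the query */
    static guint
    query_downstream_bitrate (GstPad * srcpad)
    {
      GstQuery *query = gst_query_new_bitrate ();
      guint bitrate = 0;

      if (gst_pad_peer_query (srcpad, query))
        gst_query_parse_bitrate (query, &bitrate);
      gst_query_unref (query);

      return bitrate;
    }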
A pointer to a hook in this list can easily not be unique, given both
the slice-allocator reusing memory, and the OS re-using freed blocks
in malloc.
Repeatedly adding and removing probes makes this very easy to reproduce.
Instead use hook_id, which *is* unique for an added GHook.
If a segment has stop == -1, then gst_segment_to_running_time()
would refuse to calculate a running time for negative rates,
but gst_segment_do_seek() allows this scenario and uses a
valid duration for calculations.
Make the 2 functions consistent by using any configured duration
to calculate a running time too in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=796559
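A minimal sketch of the scenario (values chosen purely for illustration):

    static void
    running_time_with_unset_stop (void)
    {
      GstSegment segment;
      guint64 running_time;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      segment.duration = 10 * GST_SECOND;

      /* reverse seek that leaves stop unset (-1) */
      gst_segment_do_seek (&segment, -1.0, GST_FORMAT_TIME,
          GST_SEEK_FLAG_NONE, GST_SEEK_TYPE_SET, 0,
          GST_SEEK_TYPE_NONE, -1, NULL);

      /* with this change the configured duration stands in for the
       * missing stop, so a valid running time is returned instead of
       * GST_CLOCK_TIME_NONE */
      running_time = gst_segment_to_running_time (&segment,
          GST_FORMAT_TIME, 5 * GST_SECOND);
      g_assert (GST_CLOCK_TIME_IS_VALID (running_time));
    }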
Using a rate of 1.1 in the test causes the test to
fail on 32-bit because ceil(1.1 * 10) can round to 12.
Instead use a rate of 2.0, which can be represented exactly as a
floating point number and doesn't trigger the problem.
https://bugzilla.gnome.org/show_bug.cgi?id=797154
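A small standalone illustration of the rounding issue (the exact result
depends on the FPU precision used):

    #include <math.h>
    #include <stdio.h>

    int
    main (void)
    {
      double rate = 1.1;   /* not exactly representable in binary */

      /* depending on precision, 1.1 * 10 can evaluate to slightly more
       * than 11, in which case ceil() rounds up to 12 */
      printf ("%.17g -> %g\n", rate * 10, ceil (rate * 10));
      return 0;
    }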
Fixes a configure error with gst-build:
subprojects/gst-plugins-base/meson.build:235:2: ERROR: Fetched variable 'gst_check_dep' in the subproject 'gstreamer' is not a dependency object.
Fixes for gst_segment_position_from_running_time_full() when
converting running_times that precede the segment start (or
stop in a negative rate segment)
The return value was incorrectly negated in those cases.
Add some more unit test checks for those cases, and especially
for segments with offsets.
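A rough sketch of the affected case; the values are illustrative and the
comments paraphrase the sign convention from the API documentation:

    static void
    position_before_segment (void)
    {
      GstSegment segment;
      guint64 position;
      gint ret;

      gst_segment_init (&segment, GST_FORMAT_TIME);
      segment.start = 0;
      segment.base = 2 * GST_SECOND;  /* segment starts at running time 2s */

      /* a running time of 1s precedes the segment, so the corresponding
       * segment position is negative; the function reports this through
       * the sign of the return value (the value that was incorrectly
       * negated), with the absolute position stored in 'position' */
      ret = gst_segment_position_from_running_time_full (&segment,
          GST_FORMAT_TIME, 1 * GST_SECOND, &position);
    }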
Otherwise it's not guaranteed that buffers are actually on disk after
pushing them, and reading the file via g_file_get_contents() might not
include them yet.
This reverts commit 11e0f451eb.
When pushing a sticky event out of a pad with a pad probe or pad offset,
those should not be applied to the event that is actually stored in the
pad, but only to the event sent downstream. Pad probes and pad
offsets are conceptually *after* the pad, added by external code, and
should not affect any internal state of pads/elements.
Storing the modified event also has the side effect that a re-sent event
would arrive inside the same pad probe with that probe's previous
modifications already applied, and the probe would have to check whether
its modifications are already in place or not.
For sink pads and generally for events arriving in a pad, some further
changes are still needed and those are tracked in
https://bugzilla.gnome.org/show_bug.cgi?id=765049
In addition, the commit also had a refcounting problem with events,
causing already destroyed events to be stored inside pads.
Previously gst_buffer_list_foreach() could modify (drop or replace)
buffers in non-writable lists, which could cause all kinds of problems
if other code also has a reference to the list and assumes that it stays
the same.
https://bugzilla.gnome.org/show_bug.cgi?id=796692
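A minimal sketch of the pattern this implies for callers (the callback
name and the size threshold are made up for illustration):

    /* drop every buffer smaller than 16 bytes; setting *buffer to NULL
     * removes it from the list, and the callback is responsible for
     * unreffing the old buffer */
    static gboolean
    drop_small_buffers (GstBuffer ** buffer, guint idx, gpointer user_data)
    {
      if (gst_buffer_get_size (*buffer) < 16) {
        gst_buffer_unref (*buffer);
        *buffer = NULL;
      }
      return TRUE;
    }

    static void
    filter_list (GstBufferList ** buffer_list)
    {
      /* with this change the list has to be writable before buffers in
       * it may be dropped or replaced from the foreach callback */
      *buffer_list = gst_buffer_list_make_writable (*buffer_list);
      gst_buffer_list_foreach (*buffer_list, drop_small_buffers, NULL);
    }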
gst_buffer_list_new_sized(0) will cause an underflow in a calculation
which then makes it try to allocate huge amounts of memory, which
may lead to aborts.
https://bugzilla.gnome.org/show_bug.cgi?id=795758
In the case where the user sets a new padprobeinfo->data in a probe
and that data is a sticky event, the new sticky event should automatically
be stored on the probed pad.
https://bugzilla.gnome.org/show_bug.cgi?id=795330
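For illustration, a probe along these lines (the probe name and the
replacement caps are made up) swaps the probed sticky event via
info->data; with this change the replacement event is also the one that
ends up stored on the pad:

    static GstPadProbeReturn
    rewrite_caps_probe (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

      if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
        /* hypothetical replacement caps, just for the sketch */
        GstCaps *caps = gst_caps_new_empty_simple ("audio/x-raw");
        GstEvent *new_event = gst_event_new_caps (caps);

        gst_caps_unref (caps);
        /* replace the probed data; gst_event_replace() unrefs the old
         * event and refs the new one */
        gst_event_replace ((GstEvent **) & info->data, new_event);
        gst_event_unref (new_event);
      }
      return GST_PAD_PROBE_OK;
    }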
Remove unneeded reapplication of patterns. Besides being
superfluous (gst_debug_reset_threshold already applies
patterns), it was also wrong and didn't stop checking patterns
after the first match (broken in 67e9d139).
Also fix up the unit test, which checked for the wrong order.
https://bugzilla.gnome.org/show_bug.cgi?id=794717
The queue gets filled from the tail, so a query will always be the tail
object, not the head object. Also add a _peek_tail_struct() method to the
GstQueueArray to enable looking at the tail.
With unit test to prevent future regression.
https://bugzilla.gnome.org/show_bug.cgi?id=762875
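A minimal sketch of the new tail peeking (QueuedItem is a made-up struct
type; the API lives in libgstbase, gst/base/gstqueuearray.h):

    typedef struct { guint type; } QueuedItem;

    static void
    peek_last_queued_item (void)
    {
      GstQueueArray *array;
      QueuedItem item = { 42 }, *tail;

      /* struct mode: elements are copied into the array by value */
      array = gst_queue_array_new_for_struct (sizeof (QueuedItem), 16);
      gst_queue_array_push_tail_struct (array, &item);

      /* the queue is filled from the tail, so the most recently queued
       * item is found with _peek_tail_struct(), without removing it */
      tail = gst_queue_array_peek_tail_struct (array);
      g_assert (tail->type == 42);

      gst_queue_array_free (array);
    }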
Position queries with GST_FORMAT_TIME are supposed to return stream
time.
gst_base_sink_get_position() estimates the current stream time on its
own instead of using gst_segment_to_stream_time(), but the algorithm
used was not taking segment.offset into account, resulting in invalid
values when this field was set to a non-zero value.
https://bugzilla.gnome.org/show_bug.cgi?id=792434
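For reference, a minimal sketch of the expectation (the sink argument
stands for any element based on GstBaseSink):

    static void
    print_position (GstElement * sink)
    {
      gint64 pos = -1;

      /* a GST_FORMAT_TIME position query is expected to report stream
       * time, i.e. what gst_segment_to_stream_time() yields for the
       * current position; with a non-zero segment.offset the old
       * internal estimate diverged from that */
      if (gst_element_query_position (sink, GST_FORMAT_TIME, &pos))
        g_print ("stream time: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS (pos));
    }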
Occasionally this test would fail, especially if the system is under load,
because the position query would pick up the last position from the
last buffer timestamp which has a lower timestamp than what we're
looking for. The sleep is long enough, however. It's unclear to me why
exactly this happens but there seems to be some kind of scheduling
issue going on as the streaming thread floods the sink with buffers.
Let's throttle the fakesrc to 100 buffers per second and make the sink
sync to the clock to restore some sanity. It should be totally sufficient
to test what we want to test, and seems to make things reliable here.
Set up all ten pipelines and preroll them first, and only set
them to PLAYING to run wild after they're all set up. If we set
them to PLAYING directly and let those threads run wild, then
it might take ages (many seconds) for the other pipelines to
even get up and running, especially on machines with only one
or two cores, and operating systems that suck at scheduling.
Now the fakesink test takes 19 secs instead of 71 secs on a
single-cpu windows machine.
Scale the number of threads used in the stress tests according to
the number of cores/cpus. We want some contention, but we also
don't want too much contention, as some operating systems are
better at handling 100 threads running wild on a single core
than others.
Add header with structure sizes for 64-bit windows as well.
They're almost the same as on Linux, but it looks like things
like padding unions get aligned slightly differently so there
are a handful of differences:
sizeof(GstGhostPad) is 528, expected 536
sizeof(GstPad) is 512, expected 520
sizeof(GstPadProbeInfo) is 64, expected 72
sizeof(GstProxyPad) is 520, expected 528
The test checks that categories not covered by the pattern in the
GST_DEBUG string have debug level GST_LEVEL_DEFAULT set, but previous
tests mess with the default threshold, which made this test fail on
Windows or when run with CK_FORK=no. Fix this by resetting everything
at the beginning, and then also do a sanity check afterwards.
Add a gst_base_src_submit_buffer_list() function that allows subclasses
to produce a bufferlist containing multiple buffers in the ::create()
function. The buffers in the buffer list will then also be pushed out
in one go as a GstBufferList. This can reduce push overhead
significantly for sources with packetised inputs (such as udpsrc)
in high-throughput scenarios.
The _submit_buffer_list() approach was chosen because it is fairly
straight-forward, backwards-compatible, bindings-friendly (as opposed
to e.g. making the create function return a mini object instead),
and it allows the subclass maximum control: the subclass can decide
dynamically at runtime whether to return a list or a single buffer
(which would be messier if we added a create_list virtual method).
https://bugzilla.gnome.org/show_bug.cgi?id=750241
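A minimal sketch of a ::create() implementation using this (the my_src_*
name, the packet count and the 1400-byte size are made up):

    static GstFlowReturn
    my_src_create (GstBaseSrc * src, guint64 offset, guint size,
        GstBuffer ** buf)
    {
      GstBufferList *list = gst_buffer_list_new ();
      guint i;

      /* collect a batch of packets into one buffer list */
      for (i = 0; i < 10; i++)
        gst_buffer_list_add (list, gst_buffer_new_allocate (NULL, 1400, NULL));

      /* hand the whole list to basesrc; it will be pushed downstream in
       * one go as a GstBufferList instead of buffer by buffer */
      gst_base_src_submit_buffer_list (src, list);

      /* no single buffer is returned in this case (assumption for this
       * sketch: the submitted list replaces the returned buffer) */
      *buf = NULL;
      return GST_FLOW_OK;
    }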
Convenience function to just grab all pending data
from the harness, e.g. if we just want to check if
it matches what we expect and we don't care about
the chunking or buffer metadata.
Based on patch by: Havard Graff <havard.graff@gmail.com>
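A rough sketch of how a test might use this, assuming the
gst_harness_take_all_data_as_buffer() variant; "identity" and the pushed
bytes are just placeholders for the element and data under test:

    static void
    check_all_output_data (void)
    {
      GstHarness *h = gst_harness_new ("identity");
      GstBuffer *all;

      gst_harness_set_src_caps_str (h, "application/x-test");
      gst_harness_push (h, gst_buffer_new_wrapped (g_strdup ("hello"), 5));

      /* grab everything the element produced as one buffer, regardless
       * of how the output was chunked */
      all = gst_harness_take_all_data_as_buffer (h);
      /* ... compare against the expected bytes here ... */

      gst_buffer_unref (all);
      gst_harness_teardown (h);
    }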
Remove gst_init() from a few tests. Use _OBJECT variants in logging. Remove
arbitrary extra blank lines. Make push_event() more like push_buffer() - set
the event to NULL and add cleanup to _chain_data_clear().
Using two (or more) probes on the same pad where one of the probes
returns HANDLED or DROP is tricky since the other probes might
not be called.
Instead use regular probes and a proper pad (the sinkpad already existed;
it only needed to be activated and given a dummy chain function for
the events/buffers to be received/handled properly).
This was broken by commit 2dfa548f36, which changed hooks to be
appended instead of prepended.
Because of that change, aggretated_cb is not called, which leads to failure.
Also correct the test to check the flush-stop value instead of the flush-start value.
https://bugzilla.gnome.org/show_bug.cgi?id=757801
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
When registering categories after gst_init() we would re-check *all*
categories against the existing GST_DEBUG patterns again, whereas
it's enough to just check the new category. Moreover, we would parse
the GST_DEBUG pattern string again and re-add that to the existing
pattern list for every newly-registered debug category, and then
check that against all categories of course. This made registering
categories after gst_init() very very slow.
Checking that the pad is in the correct mode before the parent is
checked makes the call always succeed if the mode is ok.
This fixes a race with ghostpad where gst_pad_activate_mode() could
trigger a g_critical() if the ghostpad is unparented while the
proxypad is deactivating, for instance if the ghostpad is released.
More specifically, gst_ghost_pad_internal_activate_push_default()'s
call to gst_pad_activate_mode() would fail if the ghostpad doesn't have a
parent. With this patch it will return TRUE if the mode is already
correct.
An object that can be waited on and asked for asynchronous values,
in much the same way as promises/futures in JS/Java/etc.
A callback can be installed for when the promise changes state.
Original idea by
Jan Schmidt <jan@centricular.com>
With contributions from
Nirbheek Chauhan <nirbheek@centricular.com>
Mathieu Duponchelle <mathieu@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=789843
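A minimal sketch of the API (the "result"/"answer" structure contents are
made up; normally the reply would come from another thread or an async
operation):

    static void
    promise_example (void)
    {
      GstPromise *promise = gst_promise_new ();
      /* gst_promise_new_with_change_func() can be used instead to get a
       * callback when the promise changes state */

      /* producer side */
      gst_promise_reply (promise,
          gst_structure_new ("result", "answer", G_TYPE_INT, 42, NULL));

      /* consumer side: wait for the state change and read the reply */
      if (gst_promise_wait (promise) == GST_PROMISE_RESULT_REPLIED) {
        const GstStructure *reply = gst_promise_get_reply (promise);
        /* use the reply ... */
      }

      gst_promise_unref (promise);
    }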
Add convenience API that iterates over all pads, sink pads or
source pads and makes sure that the foreach function is called
exactly once for each pad.
This is a KISS implementation. It doesn't use GstIterator and
doesn't try to do clever things like resync if pads are added
or removed while the function is executing. We can still do that
in future if we think it's needed, but in practice it will
likely make absolutely no difference whatsoever, since these
things will have to be handled properly elsewhere by the element
anyway if they're important.
After all, it's always possible that a pad is added or removed
just after the iterator finishes iterating, but before the
function returns.
This is also a replacement for gst_aggregator_iterate_sink_pads().
https://bugzilla.gnome.org/show_bug.cgi?id=785679
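A small sketch using the sink-pad variant (the callback and helper names
are made up for illustration):

    /* called exactly once per sink pad */
    static gboolean
    count_pad (GstElement * element, GstPad * pad, gpointer user_data)
    {
      guint *count = user_data;

      (*count)++;
      return TRUE;   /* returning FALSE would stop the iteration */
    }

    static guint
    count_sink_pads (GstElement * element)
    {
      guint n_sinks = 0;

      gst_element_foreach_sink_pad (element, count_pad, &n_sinks);
      return n_sinks;
    }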
Boy scout rule. Makes it a little less painful to debug the tests by using
fail_unless_equals_{uint64,int64,float} where appropriate. Ideally the large
tests would be split up to avoid guessing data dependencies.
For linked elements, the resulting gst_bin_iterate_sorted() will
properly return elements from sink to sources.
If we have some elements that are not linked, we *still* want to
ensure that we return:
* In priority any sinks
* Last of all any sources
* And in between any element which is neither source nor sink
For this to work, when looking for the next candidate element,
not only check the degree order, but if there are two candidates
with the same degree order, prefer the non-source one.
Amongst other things, this fixes the case where we activate a
bin containing unlinked sources and other elements. Without this
we could end up activating the sources (which might start adding pads
to be linked) before the other elements (to which those new source
pads might be linked) are activated.
https://bugzilla.gnome.org/show_bug.cgi?id=788434
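For reference, a minimal sketch of walking a bin in this order (the
function name is made up; resync/error handling is omitted for brevity):

    static void
    print_sorted (GstBin * bin)
    {
      GstIterator *it = gst_bin_iterate_sorted (bin);
      GValue item = G_VALUE_INIT;

      /* elements are returned sinks first, sources last */
      while (gst_iterator_next (it, &item) == GST_ITERATOR_OK) {
        GstElement *element = g_value_get_object (&item);

        g_print ("%s\n", GST_OBJECT_NAME (element));
        g_value_reset (&item);
      }
      g_value_unset (&item);
      gst_iterator_free (it);
    }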
The real use case is when downstream didn't set a pool or
allocation params, in which case we expect the tee to not
create a pool or params out of thin air. Downstream setting
a pool with size=0 was in fact testing a downstream element
bug. The fact that we handle that is accidental.
If the aggregated size is 0 and we create a pool, the pool would provide
buffers with no memory assigned. Handle that case and skip the pool.
This was the behaviour before cf803ea9f4.
Add a test for this scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=730758
In the unit test refactoring (commit e364d7944e), the unlinked pad
required to test the different behaviour induced by the
"allow-not-linked" property was removed.
Move all the code for this test into the proper function, and re-add
the missing unlinked pad. This makes the test useful again.