When performing a key unit trickmode seek, it may be useful to
specify a minimum interval between the output frames, either
in very high rate cases, or as a protection against streams
that may contain an overly large number of key frames.
One use case is ONVIF Section 6.5.3:
<https://www.onvif.org/specs/stream/ONVIF-Streaming-Spec.pdf>
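A minimal sketch of what such a seek could look like from application code,
assuming the trickmode-interval setter added here (the "pipeline" element is
a placeholder):

    static gboolean
    send_keyunit_seek (GstElement * pipeline)
    {
      GstEvent *seek;

      /* Key-unit trickmode seek at 2x, asking for at most one decoded
       * frame per second of stream time. */
      seek = gst_event_new_seek (2.0, GST_FORMAT_TIME,
          GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_TRICKMODE |
          GST_SEEK_FLAG_TRICKMODE_KEY_UNITS,
          GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
      gst_event_set_seek_trickmode_interval (seek, GST_SECOND);

      return gst_element_send_event (pipeline, seek);
    }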
The order of metas might be significant if multiple metas are
attached to the same buffer, so store a sequence number with the
meta when adding it to the buffer. This allows users of the meta
to make sure metas are processed in the right order.
We need a 64-bit integer for the sequence number here in the API,
since a 32-bit one might overflow too easily with high packet/buffer
rates. We could do it rtp-seqnum style of course, but that's a
bit of a pain.
We could also make it so that gst_buffer_add_meta() just keeps metas in
order or rely on the order we add the metas in, but that seems too
fragile overall, when buffers (incl. metas) get merged or split.
Also add a compare function for easier sorting.
We store the seqnum in the MetaItem struct here and not in the
GstMeta struct since there's no padding in the GstMeta struct.
We could add a private struct to GstMeta before the start of
GstMeta, but that's what MetaItem effectively is implementation-
wise. We can still change this later if we want, since it's all
private.
Fixes #262
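A sketch of how a user of the seqnum API described above might restore the
original order of a buffer's metas (assumes the getter and compare function
this adds):

    static void
    process_metas_in_order (GstBuffer * buf)
    {
      GList *metas = NULL, *l;
      GstMeta *meta;
      gpointer state = NULL;

      while ((meta = gst_buffer_iterate_meta (buf, &state)) != NULL)
        metas = g_list_prepend (metas, meta);

      /* Sort by the 64-bit sequence number stored when the meta was added */
      metas = g_list_sort (metas, (GCompareFunc) gst_meta_compare_seqnum);

      for (l = metas; l != NULL; l = l->next)
        g_print ("meta %s, seqnum %" G_GUINT64_FORMAT "\n",
            g_type_name (((GstMeta *) l->data)->info->api),
            gst_meta_get_seqnum ((GstMeta *) l->data));

      g_list_free (metas);
    }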
gstharness.c: Use G_GSIZE_FORMAT instead of hard-coding %zu
error: unknown conversion type character 'z' in format [-Werror=format]
gst-inspect.c: GPid is void* on non-UNIX, and we only use it on UNIX
error: initialization makes pointer from integer without a cast [-Werror]
gstmeta.c: Use and then discard value
error: value computed is not used [-Werror=unused-value]
With this, gstreamer builds with -Werror on MinGW
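For reference, the kind of substitution the first fix makes (a generic
sketch, not the actual gstharness.c line):

    gsize size = sizeof (GstBuffer);
    /* "%zu" is rejected by some MinGW/MSVCRT printf implementations,
     * so use GLib's portable format macro instead of hard-coding it: */
    g_print ("size: %" G_GSIZE_FORMAT "\n", size);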
While extremely rare, time() and gst_date_time_new_*() may return
different values and potentially trigger an assertion. Thus move
the calls as close together as possible to mitigate this.
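A sketch of the pattern (hypothetical test code, assumes <time.h>): take
the two snapshots back to back so that crossing a second boundary between
them becomes as unlikely as possible.

    time_t now;
    GstDateTime *dt;

    now = time (NULL);                          /* snapshot 1 */
    dt = gst_date_time_new_now_local_time ();   /* snapshot 2, right after */

    /* ... compare localtime (&now) fields against gst_date_time_get_*() ... */

    gst_date_time_unref (dt);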
We have to ensure that all background threads from thread pools are shut
down, or otherwise they might not have had a chance yet to drop their
last reference to the pipeline and then the assertion for a reference
count of 1 on the pipeline fails.
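One way to express that in the test teardown (a sketch; whether
gst_task_cleanup_all() covers all the pools involved here is an assumption,
and "pipeline" is the test's local):

    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_task_cleanup_all ();    /* wait for background task threads to stop */
    ASSERT_OBJECT_REFCOUNT (pipeline, "pipeline", 1);
    gst_object_unref (pipeline);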
Otherwise it can easily happen that the pad is destroyed before the
thread disappears, as happened sometimes in the test_pad_probe_block_add_remove
test where joining of the thread was done *after* the pad was unreffed
and destroyed.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/339
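The fix boils down to ordering the teardown like this ("thread" and "pad"
stand in for the test's locals):

    g_thread_join (thread);   /* wait for the probe thread to finish first */
    gst_object_unref (pad);   /* only then drop the last reference to the pad */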
The existing test for iterating/removing buffer meta data was insufficient
to detect linked list corruption when removing multiple items, and could
also suffer from such corruption when attempting to count the remaining
items. Modified that test and added several others to exercise multiple
scenarios.
Validates fix for issue #332.
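A sketch of the kind of removal-during-iteration these tests exercise (the
meta type used here is just an example):

    static gboolean
    remove_parent_buffer_metas (GstBuffer * buffer, GstMeta ** meta,
        gpointer user_data)
    {
      if ((*meta)->info->api == GST_PARENT_BUFFER_META_API_TYPE)
        *meta = NULL;           /* setting it to NULL removes this meta */
      return TRUE;              /* keep iterating over the remaining metas */
    }

    /* usage: gst_buffer_foreach_meta (buf, remove_parent_buffer_metas, NULL); */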
We won't be able to do ASSERT_CRITICAL, but the main body of the tests
is still valid, and given we ship GStreamer with this configuration, it
is important to be able to run some tests against it.
A pointer to a hook in this list can easily not be unique, given both
the slice-allocator reusing memory, and the OS re-using freed blocks
in malloc.
By repeatedly adding and removing probes, this becomes very easy
to reproduce.
Instead use hook_id, which *is* unique for an added GHook.
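A sketch of the id-based pattern with GLib's GHook API (the callback name
is hypothetical):

    GHookList hook_list;
    GHook *hook;
    gulong id;

    g_hook_list_init (&hook_list, sizeof (GHook));
    hook = g_hook_alloc (&hook_list);
    hook->func = (gpointer) my_callback;   /* hypothetical callback */
    g_hook_append (&hook_list, hook);
    id = hook->hook_id;    /* unique for the lifetime of the hook list */

    /* ... later: remove by id, never by the possibly-recycled pointer ... */
    g_hook_destroy (&hook_list, id);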
If a segment has stop == -1, then gst_segment_to_running_time()
would refuse to calculate a running time for negative rates,
but gst_segment_do_seek() allows this scenario and uses a
valid duration for calculations.
Make the two functions consistent by also using any configured duration
to calculate a running time in that case.
https://bugzilla.gnome.org/show_bug.cgi?id=796559
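A sketch of the case in question (the numbers are made up for illustration):

    GstSegment seg;
    gboolean update;
    guint64 rt;

    gst_segment_init (&seg, GST_FORMAT_TIME);
    seg.duration = 10 * GST_SECOND;

    /* Reverse seek with an open-ended stop: gst_segment_do_seek() accepts
     * this and falls back to the duration. */
    gst_segment_do_seek (&seg, -1.0, GST_FORMAT_TIME, GST_SEEK_FLAG_NONE,
        GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, -1, &update);

    /* With the change, the duration also substitutes for the missing stop
     * here, so a valid running time (5s under these assumptions) comes
     * back instead of GST_CLOCK_TIME_NONE. */
    rt = gst_segment_to_running_time (&seg, GST_FORMAT_TIME, 5 * GST_SECOND);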
Using a rate of 1.1 in the test causes it to fail on 32-bit
because 1.1 is not exactly representable in floating point and
ceil(1.1 * 10) can round up to 12. Instead use a rate of 2.0,
which can be represented exactly as a floating point number and
doesn't trigger the problem.
https://bugzilla.gnome.org/show_bug.cgi?id=797154
Fixes for gst_segment_position_from_running_time_full() when
converting running_times that precede the segment start (or
stop in a negative rate segment).
The return value was incorrectly negated in those cases.
Add some more unit test checks for those cases, and especially
for segments with offsets.
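A sketch of the pre-segment case (the numbers and the expected outcome are
illustrative assumptions based on the description above):

    GstSegment seg;
    guint64 pos;
    gint res;

    gst_segment_init (&seg, GST_FORMAT_TIME);
    seg.start = 1 * GST_SECOND;
    seg.base = 2 * GST_SECOND;   /* as if 2s of running time already elapsed */

    /* A running time of 0 precedes the segment; the sign of the resulting
     * position is reported via the return value, its magnitude via 'pos'
     * (expected here: res == -1, pos == 1 * GST_SECOND). */
    res = gst_segment_position_from_running_time_full (&seg, GST_FORMAT_TIME,
        0, &pos);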
This reverts commit 11e0f451eb.
When pushing a sticky event out of a pad with a pad probe or pad offset,
those should not be applied to the event that is actually stored in the
pad but only to the event sent downstream. The pad probe and pad
offsets are conceptually *after* the pad, added by external code and
should not affect any internal state of pads/elements.
Storing the modified event also has the side-effect that a re-sent event
would arrive inside that pad probe with the probe's previous modifications
already applied, so the probe would have to check whether its
modifications had already been applied or not.
For sink pads and generally for events arriving in a pad, some further
changes are still needed and those are tracked in
https://bugzilla.gnome.org/show_bug.cgi?id=765049
In addition, the commit also had a refcounting problem with events,
causing already destroyed events to be stored inside pads.
Previously gst_buffer_list_foreach() could modify (drop or replace)
buffers in non-writable lists, which could cause all kinds of problems
if other code also has a reference to the list and assumes that it stays
the same.
https://bugzilla.gnome.org/show_bug.cgi?id=796692
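With this change, callers that do want to drop or replace entries need a
writable list first; a sketch of that pattern (the callback name is made up):

    static gboolean
    drop_every_buffer (GstBuffer ** buffer, guint idx, gpointer user_data)
    {
      gst_buffer_unref (*buffer);   /* the callback owns the removed buffer */
      *buffer = NULL;               /* NULL removes the entry from the list */
      return TRUE;
    }

    /* usage:
     *   list = gst_buffer_list_make_writable (list);
     *   gst_buffer_list_foreach (list, drop_every_buffer, NULL);
     */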
gst_buffer_list_new_sized(0) will cause an underflow in a size
calculation, which then makes it try to allocate a huge amount of
memory and may lead to aborts.
https://bugzilla.gnome.org/show_bug.cgi?id=795758
In the case where the user sets a new padprobeinfo->data in a probe
and that data is a sticky event, the new sticky event should automatically
be stored as the sticky event on the probed pad.
https://bugzilla.gnome.org/show_bug.cgi?id=795330
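A sketch of such a probe (the caps used and the ownership handling of the
replaced event are illustrative assumptions):

    static GstPadProbeReturn
    replace_caps_event_probe (GstPad * pad, GstPadProbeInfo * info,
        gpointer user_data)
    {
      GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

      if (event != NULL && GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
        GstCaps *caps = gst_caps_new_empty_simple ("video/x-raw");

        gst_event_unref (event);    /* assumed: we take over the old event */
        GST_PAD_PROBE_INFO_DATA (info) = gst_event_new_caps (caps);
        gst_caps_unref (caps);
      }
      return GST_PAD_PROBE_OK;
    }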
Remove unneeded reapplication of patterns. Besides being
superfluous (gst_debug_reset_threshold already applies
patterns) it was also wrong and didn't stop checking patterns
after the first match (broken in 67e9d139).
Also fix up unit test which checked for the wrong order.
https://bugzilla.gnome.org/show_bug.cgi?id=794717
Occasionally this test would fail, especially if the system is under load,
because the position query would pick up the position of the last
buffer, whose timestamp is lower than the position we're looking for,
even though the sleep is long enough. It's unclear to me why
exactly this happens but there seems to be some kind of scheduling
issue going on as the streaming thread floods the sink with buffers.
Let's throttle the fakesrc to 100 buffers per second and make the sink
sync to the clock to restore some sanity. It should be totally sufficient
to test what we want to test, and seems to make things reliable here.
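One plausible way to express that (the exact fakesrc properties used are an
assumption, not necessarily the test's change): fixed-size buffers at a byte
rate that works out to roughly 100 buffers per second, feeding a sink that
syncs to the clock.

    GstElement *pipeline = gst_parse_launch (
        "fakesrc sizetype=fixed sizemax=4096 datarate=409600 ! "
        "fakesink sync=true", NULL);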
Scale the number of threads used in the stress tests according to
the number of cores/cpus. We want some contention, but we also
don't want too much contention, as some operating systems are
better at handling 100 threads running wild on a single core
than others.
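A sketch of the scaling idea (the factor and bounds are arbitrary
assumptions):

    guint n_cores = g_get_num_processors ();
    /* some contention, but not 100 threads fighting over a single core */
    guint n_threads = CLAMP (n_cores * 2, 2, 16);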
Add header with structure sizes for 64-bit Windows as well.
They're almost the same as on Linux, but it looks like things
like padding unions get aligned slightly differently so there
are a handful of differences:
sizeof(GstGhostPad) is 528, expected 536
sizeof(GstPad) is 512, expected 520
sizeof(GstPadProbeInfo) is 64, expected 72
sizeof(GstProxyPad) is 520, expected 528
The test checks that categories not covered by the pattern in the
GST_DEBUG string have debug level GST_LEVEL_DEFAULT set, but previous
tests mess with the default threshold, which made this test fail on
Windows or when run with CK_FORK=no. Fix this by resetting everything
at the beginning, and then also doing a sanity check afterwards.
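A sketch of the reset-then-check idea (whether these are the exact calls the
test uses is an assumption):

    /* wipe per-category overrides left behind by earlier tests */
    gst_debug_set_threshold_from_string ("", TRUE);
    gst_debug_set_default_threshold (GST_LEVEL_DEFAULT);

    /* sanity check: an untouched category is back at the default level */
    g_assert_cmpint (gst_debug_category_get_threshold (GST_CAT_DEFAULT), ==,
        GST_LEVEL_DEFAULT);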
When registering categories after gst_init() we would re-check *all*
categories against the existing GST_DEBUG patterns again, whereas
it's enough to just check the new category. Moreover, we would parse
the GST_DEBUG pattern string again and re-add that to the existing
pattern list for every newly-registered debug category, and then of
course check that against all categories again. This made registering
categories after gst_init() very, very slow.
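For context, this is the (cheap, common) operation that had become slow; a
sketch with a hypothetical category:

    GST_DEBUG_CATEGORY_STATIC (late_cat);

    static void
    register_late_category (void)
    {
      /* Registered after gst_init(); with this change only 'late_cat' is
       * matched against the already-parsed GST_DEBUG patterns. */
      GST_DEBUG_CATEGORY_INIT (late_cat, "latecat", 0,
          "category registered after gst_init()");
    }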