The queue gets filled by the tail, so a query will always be the tail
object, not the head object. Also add a _peek_tail_struct() method to
GstQueueArray to enable looking at the tail.
With a unit test to prevent future regressions.
https://bugzilla.gnome.org/show_bug.cgi?id=762875
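A minimal sketch of the intended usage (the payload struct and header path
are illustrative assumptions):

  #include <gst/gst.h>
  #include <gst/base/gstqueuearray.h>

  typedef struct { guint serial; } MyItem;   /* hypothetical payload */

  static void
  peek_newest_item (void)
  {
    GstQueueArray *array = gst_queue_array_new_for_struct (sizeof (MyItem), 8);
    MyItem item = { 42 };
    MyItem *tail;

    /* items are pushed at the tail ... */
    gst_queue_array_push_tail_struct (array, &item);

    /* ... so the most recently added item is found by peeking at the tail,
     * not at the head */
    tail = gst_queue_array_peek_tail_struct (array);
    g_assert (tail != NULL && tail->serial == 42);

    gst_queue_array_free (array);
  }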
Set up all ten pipelines and preroll them first, and only set
them to PLAYING to run wild after they're all set up. If we set
them to PLAYING directly and let those threads run wild, then
it might take ages (many seconds) for the other pipelines to
even get up and running, especially on machines with only one
or two cores, and operating systems that suck at scheduling.
Now the fakesink test takes 19 secs instead of 71 secs on a
single-CPU Windows machine.
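A rough sketch of the pattern (pipeline description and count are
placeholders, not the actual test code):

  #include <gst/gst.h>

  #define NUM_PIPELINES 10

  static void
  start_pipelines_prerolled_first (void)
  {
    GstElement *pipelines[NUM_PIPELINES];
    gint i;

    /* bring everything to PAUSED first so all pipelines preroll ... */
    for (i = 0; i < NUM_PIPELINES; i++) {
      pipelines[i] = gst_parse_launch ("fakesrc ! fakesink", NULL);
      gst_element_set_state (pipelines[i], GST_STATE_PAUSED);
    }

    /* ... wait until every preroll has completed ... */
    for (i = 0; i < NUM_PIPELINES; i++)
      gst_element_get_state (pipelines[i], NULL, NULL, GST_CLOCK_TIME_NONE);

    /* ... and only then let them run wild */
    for (i = 0; i < NUM_PIPELINES; i++)
      gst_element_set_state (pipelines[i], GST_STATE_PLAYING);
  }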
The real use case is when downstream didn't set a pool or
allocation params, in which case we expect the tee to not
create a pool or param from thin air. Downstream setting
a pool with size=0 was in fact testing a downstream element
bug. The fact that we handle that is accidental.
If the aggregated size is 0 and we create a pool, the pool would provide
buffers with no memory assigned. Handle that case and skip the pool.
This was the behaviour before cf803ea9f4.
Add a test for this scenario.
https://bugzilla.gnome.org/show_bug.cgi?id=730758
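An illustrative sketch of the decision (not tee's actual code; names are
made up):

  static void
  maybe_use_downstream_pool (GstQuery * query)
  {
    GstBufferPool *pool = NULL;
    guint size = 0, min = 0, max = 0;

    if (gst_query_get_n_allocation_pools (query) > 0)
      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);

    if (size == 0) {
      /* no usable buffer size was aggregated: such a pool would only hand
       * out buffers with no memory, so don't create or use one */
      if (pool)
        gst_object_unref (pool);
      return;
    }

    /* ... configure and use the pool ... */
    if (pool)
      gst_object_unref (pool);
  }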
In the unit test refactoring, the unlinked pad required to test
the different behaviour induced by the "allow-not-linked" property
was removed.
Commit e364d7944e
Move all the code for this test into the proper function, and re-add
the missing unlinked pad. This makes the test useful again.
Split the large allocation_query test into separate tests. Add a setup
helper to reduce code duplication. Fix the original test, which used
fail_unless instead of ck_assert_int_eq and therefore only passed by
accident.
This carries over code for a similar test from multiqueue to ensure full
control over the dataflow while testing. (The previous attempt was racy
since the fill level changed without any thread sync with the test code.)
https://bugzilla.gnome.org/show_bug.cgi?id=771210
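For illustration only (hypothetical assertion, not the actual test code):
fail_unless treats everything after its first argument as a message and only
checks that the first argument is non-zero, whereas ck_assert_int_eq really
compares the two values:

  fail_unless (gst_query_get_n_allocation_pools (query), 1);      /* wrong: only checks the count is non-zero */
  ck_assert_int_eq (gst_query_get_n_allocation_pools (query), 1); /* right: checks the count equals 1 */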
low/high-watermark are of type double, and given in range 0.0-1.0. This
makes it possible to set low/high watermarks with greater resolution,
which is useful with large multiqueue max sizes and watermarks like 0.5%.
Also adding a test to check the fill and watermark level behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=770628
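For example (element variable and values are illustrative):

  g_object_set (multiqueue,
      "low-watermark", 0.005,   /* 0.5% */
      "high-watermark", 0.99,
      NULL);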
low/high-watermark are of type double, and given in range 0.0-1.0. This
makes it possible to set low/high watermarks with greater resolution,
which is useful with large queue2 max sizes and watermarks like 0.5%.
Also adding a test to check the fill and watermark level behavior.
https://bugzilla.gnome.org/show_bug.cgi?id=769449
gst_check_setup_sink_pad() internally uses gst_check_chain_func(), so we
should call gst_check_drop_buffers() when tearing down tests to free the
buffers which have been exchanged through the pipeline.
https://bugzilla.gnome.org/show_bug.cgi?id=765903
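A minimal sketch of the setup/teardown pair this implies (element name and
pad template are placeholders):

  #include <gst/check/gstcheck.h>

  static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink",
      GST_PAD_SINK, GST_PAD_ALWAYS, GST_STATIC_CAPS_ANY);

  static GstElement *element;
  static GstPad *mysinkpad;

  static void
  my_setup (void)
  {
    element = gst_check_setup_element ("identity");
    mysinkpad = gst_check_setup_sink_pad (element, &sinktemplate);
    gst_pad_set_active (mysinkpad, TRUE);
  }

  static void
  my_teardown (void)
  {
    /* free the buffers collected by gst_check_chain_func() */
    gst_check_drop_buffers ();
    gst_pad_set_active (mysinkpad, FALSE);
    gst_check_teardown_sink_pad (element);
    gst_check_teardown_element (element);
  }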
This ensures the following special case is handled properly:
1. Queue is empty
2. Data is pushed, fill level is below the current high-threshold
3. high-threshold is set to a level that is below the current fill level
Since mq->percent wasn't being recalculated in step #3 properly, this
caused the multiqueue to switch off its buffering state when new data is
pushed in, and never post a 100% buffering message. The application will
have received a <100% buffering message from step #2, but will never see
100%.
Fix this by recalculating the current fill level percentage during
high-threshold property changes in the same manner as it is done when
use-buffering is modified.
https://bugzilla.gnome.org/show_bug.cgi?id=763757
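Without the recalculation, typical application code like this sketch
(illustrative, not from the test) would wait forever because the 100%
message never arrives:

  static void
  wait_for_buffering_done (GstBus * bus)
  {
    gint percent = 0;

    while (percent < 100) {
      GstMessage *msg = gst_bus_timed_pop_filtered (bus,
          GST_CLOCK_TIME_NONE, GST_MESSAGE_BUFFERING);

      gst_message_parse_buffering (msg, &percent);
      gst_message_unref (msg);
    }
  }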
Changing states up and down while buffers are being pushed is not
a valid use case. If a pad is deactivated and reactivated during
a buffer push, it races with the check for pushed sticky events
and the actual chain function call, as the chain function might be
called without noticing that the peer pad lost its previous sticky
events.
https://bugzilla.gnome.org/show_bug.cgi?id=758340
Iterator may need to be resynced, for instance if pads are released
during state change.
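The usual consumer pattern (a generic sketch, not the exact code in
question) handles GST_ITERATOR_RESYNC explicitly:

  GstIterator *it = gst_element_iterate_sink_pads (element);
  GValue item = G_VALUE_INIT;
  gboolean done = FALSE;

  while (!done) {
    switch (gst_iterator_next (it, &item)) {
      case GST_ITERATOR_OK:
        /* ... use GST_PAD (g_value_get_object (&item)) ... */
        g_value_reset (&item);
        break;
      case GST_ITERATOR_RESYNC:
        /* pads were added or released concurrently; start over */
        gst_iterator_resync (it);
        break;
      case GST_ITERATOR_ERROR:
      case GST_ITERATOR_DONE:
        done = TRUE;
        break;
    }
  }
  g_value_unset (&item);
  gst_iterator_free (it);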
got_eos should be protected by the object lock of the element, not of
the pad, as is the case throughout the rest of the funnel code.
https://bugzilla.gnome.org/show_bug.cgi?id=755343
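In other words (a sketch with illustrative variable names):

  GST_OBJECT_LOCK (funnel);    /* element lock, not GST_OBJECT_LOCK (pad) */
  fpad->got_eos = TRUE;
  GST_OBJECT_UNLOCK (funnel);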
While writing a test for unscheduling the gst_clock_id_wait inside the
identity element, an invalid read was found, caused by removing the
clock-id when calling _unschedule instead of letting the code calling
_wait remove the clock-id after it has been unscheduled.
https://bugzilla.gnome.org/show_bug.cgi?id=752055
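A sketch of the intended ownership (clock, time and the shared reference
are illustrative; locking around the shared id is omitted): the waiting
side keeps the clock-id and drops it itself once _wait returns, while the
other thread only aborts the wait:

  /* waiting side: owns the id; cleans it up itself after _wait returns */
  GstClockID id = gst_clock_new_single_shot_id (clock, time);
  GstClockReturn ret = gst_clock_id_wait (id, NULL);
  /* ret may be GST_CLOCK_UNSCHEDULED; the waiter still does the removal */
  gst_clock_id_unref (id);

  /* unscheduling side: only aborts the wait on its own ref of the id */
  gst_clock_id_unschedule (other_ref_of_id);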
Update the test_seeking test case to verify that the render and
render_list virtual methods correctly handle buffers and buffer lists
containing multiple memory blocks.
https://bugzilla.gnome.org/show_bug.cgi?id=747223
GstFileSink implements the render_list virtual method to render
a list of buffers. Update the test_seeking test case to also
check the render_list method implementation.
https://bugzilla.gnome.org/show_bug.cgi?id=747100
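A sketch of the kind of data such a test pushes (sizes and counts are
illustrative):

  static GstBufferList *
  make_multi_memory_buffer_list (void)
  {
    GstBufferList *list = gst_buffer_list_new ();
    gint i, j;

    for (i = 0; i < 2; i++) {
      GstBuffer *buf = gst_buffer_new ();

      /* several separate memory blocks per buffer */
      for (j = 0; j < 4; j++)
        gst_buffer_append_memory (buf, gst_allocator_alloc (NULL, 16, NULL));

      gst_buffer_list_add (list, buf);
    }
    return list;
  }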
This property avoids the not-linked error when all the pads are unlinked
or when there are no source pads. This is useful in dynamic pipelines,
where it can happen that for a short time there are no pads at all, or
all downstream pads are not linked yet.
https://bugzilla.gnome.org/show_bug.cgi?id=746436
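For example (the tee instance name is illustrative):

  g_object_set (tee, "allow-not-linked", TRUE, NULL);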
3 new tests:
1) Tests that a stream that is empty (just an EOS event)
on an inactive pad doesn't get through and tamper
with the active pad that still has data
2) Tests that a stream that is shorter than the active one
(pushes EOS earlier) doesn't get its EOS pushed
3) Tests that switching to an inactive stream that has received
EOS will make input-selector push EOS
https://bugzilla.gnome.org/show_bug.cgi?id=746518
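A sketch of switching the active stream from test or application code
(assumes the "sink_1" request pad was already requested and linked):

  GstPad *newpad = gst_element_get_static_pad (selector, "sink_1");

  g_object_set (selector, "active-pad", newpad, NULL);
  gst_object_unref (newpad);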