The problem is that multiqueue now operates somewhat similarly
to the flow aggregation logic of demuxers, and will therefore
stop whenever all downstream pads return NOT_LINKED and/or
UNEXPECTED and there are no more buffers to push.
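Roughly, the aggregation works like this (a minimal, self-contained
sketch using a simplified stand-in for GstFlowReturn, not the
element's actual code):

#include <stddef.h>

typedef enum {
  FLOW_OK,
  FLOW_NOT_LINKED,
  FLOW_UNEXPECTED,  /* EOS-like: this pad wants no more data */
  FLOW_ERROR
} FlowReturn;

/* Combine the last flow return seen on every source pad: keep
 * pushing as long as at least one pad still accepts data. */
static FlowReturn
aggregate_flow (const FlowReturn *pad_returns, size_t n_pads)
{
  int all_not_linked = 1;
  int all_done = 1;   /* every pad NOT_LINKED or UNEXPECTED */
  size_t i;

  for (i = 0; i < n_pads; i++) {
    if (pad_returns[i] == FLOW_ERROR)
      return FLOW_ERROR;              /* fatal errors win immediately */
    if (pad_returns[i] != FLOW_NOT_LINKED)
      all_not_linked = 0;
    if (pad_returns[i] == FLOW_OK)
      all_done = 0;
  }
  if (all_not_linked)
    return FLOW_NOT_LINKED;
  if (all_done)
    return FLOW_UNEXPECTED;           /* stop: nobody wants more buffers */
  return FLOW_OK;
}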
The latest commits should not affect any regular use-case, but the bug
report will be kept open so the previous behaviour can be re-established
if needed.
Fixes #609486
When we receive an UNEXPECTED flow return from downstream, we must not shut down
the pushing thread, because upstream will at some point push an EOS that we still
need to push further downstream.
To achieve this, convert the UNEXPECTED return value to OK. Add a FIXME so that
we implement the right logic to propagate the flow return upstream at some point.
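A minimal sketch of the conversion, with a hypothetical helper
wrapping the real gst_pad_push():

#include <gst/gst.h>

/* Hypothetical helper: push one buffer downstream, but keep the
 * pushing thread alive on UNEXPECTED so a later EOS still gets out. */
static GstFlowReturn
push_one_buffer (GstPad * srcpad, GstBuffer * buf)
{
  GstFlowReturn ret = gst_pad_push (srcpad, buf);

  if (ret == GST_FLOW_UNEXPECTED) {
    /* FIXME: remember this result and propagate it upstream instead
     * of masking it; for now convert to OK so the thread keeps
     * running until the EOS event has been pushed downstream. */
    ret = GST_FLOW_OK;
  }
  return ret;
}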
Also clean up the unit test a little.
Fixes #608136
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_multi_queue_set_property),
(gst_multi_queue_request_new_pad), (gst_single_queue_flush),
(gst_multi_queue_loop), (gst_multi_queue_sink_activate_push):
Make it so that pads are considered linked until a buffer is pushed
and we discover otherwise. This avoids problems with decodebin2 hanging
after a seek in the filesrc ! decodebin2 name=d ! fakesink d. ! fakesink
case.
Make sure we lock the multiqueue when updating the max-size properties.
Fix a crash on Solaris in a debug statement in get_request_pad that
passes a NULL string to GST_DEBUG.
* tests/check/elements/multiqueue.c: (mq_dummypad_chain),
(run_output_order_test):
Fix the test to allow the first buffer on not-linked pads to come out
of sequence while multiqueue discovers that they are not-linked.
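Sketched with invented names (the real change is in the multiqueue
loop and activation code listed above):

#include <gst/gst.h>

/* Sketch: assume a pad is linked until an actual push proves
 * otherwise, instead of deciding up front. 'is_linked' is a
 * hypothetical per-pad flag standing in for the element's state. */
static GstFlowReturn
push_and_discover (GstPad * srcpad, GstBuffer * buf, gboolean * is_linked)
{
  GstFlowReturn ret = gst_pad_push (srcpad, buf);

  /* only now do we learn whether the pad is really linked */
  *is_linked = (ret != GST_FLOW_NOT_LINKED);
  return ret;
}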
Original commit message from CVS:
* tests/check/elements/multiqueue.c: (mq_dummypad_chain),
(mq_dummypad_event), (run_output_order_test):
Use a GStaticMutex to protect all cases where libcheck
fail_if/fail_unless macros might be called from multiple threads
simultaneously to avoid errors like:
"check_pack.c:107: :-1081725400:Bad message type arg"
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_multi_queue_init),
(gst_single_queue_flush), (apply_segment), (apply_buffer),
(gst_single_queue_push_one), (gst_multi_queue_loop),
(gst_multi_queue_sink_activate_push), (gst_multi_queue_sink_event),
(gst_multi_queue_src_activate_push), (wake_up_next_non_linked),
(compute_high_id), (gst_single_queue_new):
* plugins/elements/gstmultiqueue.h:
Take the multiqueue lock when updating the fill level so we don't get
confused.
After applying a buffer or event on the src pad segment, make sure to
call gst_data_queue_limits_changed() to get the data queue to unblock
and check the filled state again.
Rework the not-linked pad handling so that not-linked pads can push
as fast as they like, but never get ahead of any linked pads.
* tests/check/elements/multiqueue.c: (mq_sinkpad_to_srcpad),
(mq_dummypad_getcaps), (mq_dummypad_chain), (mq_dummypad_event),
(run_output_order_test), (GST_START_TEST), (multiqueue_suite):
Add a test to check that not-linked pads always stay behind
linked pads.
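In sketch form (types and fields invented here; the real
compute_high_id() works on the element's internal pad list):

#include <glib.h>

typedef struct {
  gboolean linked;
  guint32 last_id;   /* id of the last item this pad pushed */
} SinglePad;

/* highest item id pushed so far by any linked pad */
static guint32
compute_high_id (SinglePad * pads, guint n_pads)
{
  guint32 high = 0;
  guint i;

  for (i = 0; i < n_pads; i++)
    if (pads[i].linked && pads[i].last_id > high)
      high = pads[i].last_id;
  return high;
}

/* a not-linked pad may push item 'id' only while it stays behind
 * the linked pads; otherwise it must sleep until woken up */
static gboolean
not_linked_may_push (SinglePad * pads, guint n_pads, guint32 id)
{
  return id <= compute_high_id (pads, n_pads);
}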
Original commit message from CVS:
* libs/gst/base/gstdataqueue.c:
* libs/gst/base/gstdataqueue.h:
* plugins/elements/gstmultiqueue.c: (gst_single_queue_push_one),
(gst_multi_queue_item_new), (gst_multi_queue_chain),
(gst_multi_queue_sink_event):
* tests/check/elements/multiqueue.c: (multiqueue_suite):
Fix multiqueue leaking buffers and events when downstream or the
queue is flushing. Make refcounting assumptions explicit and
document them (this shouldn't break existing code that uses it,
other than maybe leaking miniobjects, which already happens anyway).
Add a unit test for the most common flushing case. Fixes #423700.
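The ownership rule being made explicit, as a rough sketch (assuming
the libgstbase header location; GST_FLOW_WRONG_STATE is the 0.10
name for the flushing result):

#include <gst/gst.h>
#include <gst/base/gstdataqueue.h>

/* If the push fails because the queue is flushing, the queue did NOT
 * take the item, so the caller must free it (and the miniobject it
 * wraps) itself via the item's destroy function. */
static GstFlowReturn
queue_item (GstDataQueue * queue, GstDataQueueItem * item)
{
  if (!gst_data_queue_push (queue, item)) {
    item->destroy (item);            /* we still own it: avoid the leak */
    return GST_FLOW_WRONG_STATE;     /* queue is flushing */
  }
  return GST_FLOW_OK;                /* queue took ownership */
}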
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_single_queue_free):
Don't leak GCond.
* tests/check/Makefile.am:
* tests/check/elements/.cvsignore:
* tests/check/elements/multiqueue.c: (setup_multiqueue),
(GST_START_TEST), (multiqueue_suite):
Add some dead simple unit tests for the 'multiqueue' element
(some bits don't work yet and are disabled for now).
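In the same spirit, a dead-simple check might look like this (not the
actual test from the commit; pad template name as in 0.10):

#include <gst/check/gstcheck.h>

GST_START_TEST (test_create_and_request_pad)
{
  GstElement *mq = gst_check_setup_element ("multiqueue");
  GstPad *sink = gst_element_get_request_pad (mq, "sink%d");

  fail_unless (sink != NULL, "could not request a sink pad");

  gst_element_release_request_pad (mq, sink);
  gst_object_unref (sink);
  gst_check_teardown_element (mq);
}
GST_END_TEST;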