This ensures the following special case is handled properly:
1. Queue is empty
2. Data is pushed, fill level is below the current high-threshold
3. high-threshold is set to a level that is below the current fill level
Since mq->percent wasn't being properly recalculated in step #3, the
multiqueue would switch off its buffering state when new data was
pushed in, without ever posting a 100% buffering message. The
application will have received a <100% buffering message from step #2,
but will never see 100%.
Fix this by recalculating the current fill level percentage during
high-threshold property changes in the same manner as it is done when
use-buffering is modified.
https://bugzilla.gnome.org/show_bug.cgi?id=763757
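A minimal C sketch of the recalculation described above; the struct, field
and function names here are hypothetical stand-ins, not the actual
gstmultiqueue.c symbols.

#include <glib.h>

typedef struct {
  GMutex lock;
  gint high_threshold;   /* bytes */
  gint cur_level;        /* bytes currently queued */
  gint percent;          /* cached fill percentage */
  gboolean buffering;
} MqSketch;

static void
recalc_percent (MqSketch * mq)
{
  if (mq->high_threshold > 0)
    mq->percent = MIN (100, mq->cur_level * 100 / mq->high_threshold);
  else
    mq->percent = 100;
}

static void
set_high_threshold (MqSketch * mq, gint new_threshold)
{
  g_mutex_lock (&mq->lock);
  mq->high_threshold = new_threshold;
  /* recompute against the new threshold right away, as is already done when
   * use-buffering changes; without this, a threshold dropped below the
   * current fill level never produces a 100% buffering message */
  recalc_percent (mq);
  if (mq->percent >= 100)
    mq->buffering = FALSE;   /* the real element posts a 100% message here */
  g_mutex_unlock (&mq->lock);
}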
From the test case:
/* This test creates a multiqueue with 2 streams. One receives
* a constant flow of buffers, the other only gets one buffer, and then
* new-segment events, and returns not-linked. The multiqueue should not fill.
*/
If one of the queues goes EOS and the other returns NOT_LINKED, the stream
can be considered EOS, since NOT_LINKED means that branch has no sink
downstream that would block posting the EOS message.
https://bugzilla.gnome.org/show_bug.cgi?id=725917
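A sketch of the aggregation rule from the commit above, assuming a
hypothetical per-queue struct rather than the real GstSingleQueue:

#include <gst/gst.h>

typedef struct {
  gboolean is_eos;
  GstFlowReturn last_ret;
} SqSketch;

static gboolean
all_queues_effectively_eos (const SqSketch * queues, guint n_queues)
{
  guint i;

  for (i = 0; i < n_queues; i++) {
    /* a queue counts as done if it is EOS itself, or if nothing is linked
     * downstream that could hold back the EOS posting */
    if (!queues[i].is_eos && queues[i].last_ret != GST_FLOW_NOT_LINKED)
      return FALSE;
  }
  return TRUE;
}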
The check itself is racy.
(CK_FORK=no GST_CHECK=test_output_order make elements/multiqueue.forever).
The problem is indeed the test and not the actual element behaviour.
The objects to push are being pulled out of the single internal queues in the
right order and at the right time...
But between:
* the moment the global multiqueue lock is released (which is used to decide
whether we should pop the next buffer and push it downstream)
* and the moment that buffer is received by the source pad (which does the
check)
=> another single queue (such as the unlinked pad) might pop and push a buffer
downstream.
What should we do? Using a bigger margin of error (say 5 buffers) doesn't
help; it'll eventually fail.
I can't see how we can detect this reliably.
https://bugzilla.gnome.org/show_bug.cgi?id=708661
Remove the getcaps function on the pad and use the CAPS query for
the same effect.
Add PROXY_CAPS to the pad flags. This instructs the default caps event and query
handlers to pass on the CAPS related queries and events. This simplifies a lot
of elements that simply pass through caps negotiation.
Add two utility functions to proxy caps queries and aggregate the result. These
should later be changed to use the pad forward function instead.
Make the _query_peer_ utility functions use the gst_pad_peer_query() function to
make sure the probes are emitted properly.
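As an illustration of what the PROXY_CAPS flag buys a passthrough element: the
element and pad names below are hypothetical, while GST_PAD_SET_PROXY_CAPS and
gst_element_add_pad are the core helper APIs as they exist in 1.x.

#include <gst/gst.h>

static void
my_element_add_pads (GstElement * element, GstPad * sinkpad, GstPad * srcpad)
{
  /* with PROXY_CAPS set, the default CAPS event/query handlers forward caps
   * negotiation to the opposite pads, so no custom getcaps code is needed */
  GST_PAD_SET_PROXY_CAPS (sinkpad);
  GST_PAD_SET_PROXY_CAPS (srcpad);

  gst_element_add_pad (element, sinkpad);
  gst_element_add_pad (element, srcpad);
}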
Improve GstSegment, rename some fields. The idea is to have the GstSegment
structure represent the timing structure of the buffers as they are generated by
the source or demuxer element.
gst_segment_set_seek() -> gst_segment_do_seek()
Rename the NEWSEGMENT event to SEGMENT.
Parse the SEGMENT event into a GstSegment structure.
Pass a GstSegment structure when making a new SEGMENT event. This allows us to
pass the timing info directly to the next element. No accumulation is needed in
the receiving element; all the info is inside the event.
Remove gst_segment_set_newsegment(): this function was used to accumulate
segments received from upstream, which is no longer needed because the
segment event now contains the complete timing information.
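A sketch of the resulting flow with the regular core API: the sender fills a
GstSegment and wraps it in a SEGMENT event, and the receiver parses the
complete timing info back out, with no accumulation step.

#include <gst/gst.h>

static GstEvent *
make_segment_event (void)
{
  GstSegment segment;

  /* the sender fills a complete GstSegment and wraps it in a SEGMENT event */
  gst_segment_init (&segment, GST_FORMAT_TIME);
  segment.start = 0;
  segment.stop = 10 * GST_SECOND;

  return gst_event_new_segment (&segment);
}

static void
handle_segment_event (GstEvent * event)
{
  const GstSegment *segment;

  /* the receiver gets the full timing info back out; no accumulation needed */
  gst_event_parse_segment (event, &segment);
  g_print ("segment %" GST_TIME_FORMAT " -> %" GST_TIME_FORMAT "\n",
      GST_TIME_ARGS (segment->start), GST_TIME_ARGS (segment->stop));
}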
Remove code that isn't needed any longer, which sets the multiqueue
to PLAYING and back before unreffing, in order to avoid a deadlock
waiting for gstpad tasks that were never started. The problem seems
to have been fixed long ago.
The problem lies in the fact that multiqueue will now operate somewhat
similarly to the flow aggregation logic of demuxers and will therefore
stop whenever all downstream pads return NOT_LINKED and/or UNEXPECTED
and there are no more buffers to push.
The latest commits should not affect any regular use-case, but the bug
report will be kept open so the previous behaviour can be re-established
if needed.
Fixes #609486
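A sketch of the flow aggregation described above, using the 0.10-era flow
returns (GST_FLOW_UNEXPECTED became GST_FLOW_EOS in 1.0); the real logic lives
in gst_multi_queue_loop(), this is only an illustration.

#include <gst/gst.h>

static gboolean
should_stop_pushing (const GstFlowReturn * pad_results, guint n_pads,
    gboolean queues_empty)
{
  guint i;

  for (i = 0; i < n_pads; i++) {
    /* as long as one pad still accepts data, keep going */
    if (pad_results[i] != GST_FLOW_NOT_LINKED &&
        pad_results[i] != GST_FLOW_UNEXPECTED)
      return FALSE;
  }
  /* every pad is NOT_LINKED or UNEXPECTED: stop once nothing is left to push */
  return queues_empty;
}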
When we receive an UNEXPECTED flowreturn from downstream, we must not shut down
the pushing thread because upstream will at some point push an EOS that we still
need to push further downstream.
To achieve this, convert the UNEXPECTED return value to OK. Add a fixme so that
we implement the right logic to propagate the flowreturn upstream at some point.
Also clean up the unit test a little.
Fixes #608136
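A sketch of the conversion described above; the function is a hypothetical
stand-in for the push path in gst_single_queue_push_one(), again with the
0.10-era GST_FLOW_UNEXPECTED name.

#include <gst/gst.h>

static GstFlowReturn
push_one_sketch (GstPad * srcpad, GstBuffer * buffer)
{
  GstFlowReturn ret = gst_pad_push (srcpad, buffer);

  if (ret == GST_FLOW_UNEXPECTED) {
    /* keep the pushing thread alive so the EOS that upstream will eventually
     * send can still be pushed downstream.
     * FIXME: propagate the flow return upstream instead at some point. */
    ret = GST_FLOW_OK;
  }
  return ret;
}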
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_multi_queue_set_property),
(gst_multi_queue_request_new_pad), (gst_single_queue_flush),
(gst_multi_queue_loop), (gst_multi_queue_sink_activate_push):
Make it so that pads are considered linked until a buffer is pushed
and discovered otherwise. This avoids problems with decodebin2 hanging
after a seek in the filesrc ! decodebin2 name=d ! fakesink d. ! fakesink
case.
Make sure we lock the multiqueue when updating the max-size properties.
Fix a crash on Solaris in a debug statement in get_request_pad that
passes a NULL string to GST_DEBUG.
* tests/check/elements/multiqueue.c: (mq_dummypad_chain),
(run_output_order_test):
Fix the test to allow the first buffer on not-linked pads to come out
of sequence while multiqueue discovers that they are not-linked.
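A sketch of the "assume linked until proven otherwise" idea from the entry
above; the struct is a hypothetical stand-in, not the real GstSingleQueue.

#include <gst/gst.h>

typedef struct {
  GstPad *srcpad;
  gboolean linked;   /* starts out TRUE: assume linked until proven otherwise */
} SqLinkSketch;

static GstFlowReturn
sq_push_buffer (SqLinkSketch * sq, GstBuffer * buffer)
{
  GstFlowReturn ret = gst_pad_push (sq->srcpad, buffer);

  /* only mark the pad not-linked once a push has actually reported it */
  sq->linked = (ret != GST_FLOW_NOT_LINKED);
  return ret;
}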
Original commit message from CVS:
* tests/check/elements/multiqueue.c: (mq_dummypad_chain),
(mq_dummypad_event), (run_output_order_test):
Use a GStaticMutex to protect all cases where libcheck
fail_if/fail_unless macros might be called from multiple threads
simultaneously to avoid errors like:
"check_pack.c:107: :-1081725400:Bad message type arg"
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_multi_queue_init),
(gst_single_queue_flush), (apply_segment), (apply_buffer),
(gst_single_queue_push_one), (gst_multi_queue_loop),
(gst_multi_queue_sink_activate_push), (gst_multi_queue_sink_event),
(gst_multi_queue_src_activate_push), (wake_up_next_non_linked),
(compute_high_id), (gst_single_queue_new):
* plugins/elements/gstmultiqueue.h:
Take the multiqueue lock when updating the fill level so we don't get
confused.
After applying a buffer or event on the src pad segment, make sure to
call gst_data_queue_limits_changed() to get the data queue to unblock
and check the filled state again.
Rework the not-linked pad handling so the logic is that not-linked
pads can push as fast as they like, but only so they never get
ahead of any linked pads.
* tests/check/elements/multiqueue.c: (mq_sinkpad_to_srcpad),
(mq_dummypad_getcaps), (mq_dummypad_chain), (mq_dummypad_event),
(run_output_order_test), (GST_START_TEST), (multiqueue_suite):
Add a test to check that not-linked pads always stay behind
linked pads.
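A sketch of the ordering rule from the entry above: not-linked single queues
may drain freely, but never past the oldest item still pending on a linked
queue. The id scheme and names are illustrative, not the exact
compute_high_id() code.

#include <glib.h>

typedef struct {
  gboolean linked;
  guint32 next_id;   /* id of the next queued item this pad wants to push */
} SqOrderSketch;

/* lowest pending id across all linked queues; not-linked queues must not
 * push items newer than this, so they can never get ahead of linked pads */
static guint32
compute_high_id_sketch (const SqOrderSketch * queues, guint n_queues)
{
  guint32 high_id = G_MAXUINT32;
  guint i;

  for (i = 0; i < n_queues; i++) {
    if (queues[i].linked && queues[i].next_id < high_id)
      high_id = queues[i].next_id;
  }
  return high_id;
}

static gboolean
not_linked_may_push (const SqOrderSketch * sq, guint32 high_id)
{
  return sq->next_id <= high_id;
}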
Original commit message from CVS:
* libs/gst/base/gstdataqueue.c:
* libs/gst/base/gstdataqueue.h:
* plugins/elements/gstmultiqueue.c: (gst_single_queue_push_one),
(gst_multi_queue_item_new), (gst_multi_queue_chain),
(gst_multi_queue_sink_event):
* tests/check/elements/multiqueue.c: (multiqueue_suite):
Fix multiqueue leaking buffers and events when downstream or the
queue is flushing. Make refcounting assumptions explicit and
document them (shouldn't break existing code that uses it other than
maybe leak miniobjects, but that already happens anyway). Add a unit
test for the most common flushing case. Fixes #423700.
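A sketch of the ownership rule made explicit above, using the 0.10-era
GST_FLOW_WRONG_STATE (GST_FLOW_FLUSHING in 1.0); the function is a
hypothetical stand-in for the push path.

#include <gst/gst.h>

static GstFlowReturn
push_or_drop_sketch (GstPad * srcpad, GstBuffer * buffer, gboolean flushing)
{
  if (flushing) {
    /* the queue or downstream is flushing: we still own the reference that
     * was handed to us, so drop it here instead of leaking the buffer */
    gst_buffer_unref (buffer);
    return GST_FLOW_WRONG_STATE;
  }

  /* ownership of the buffer passes to the peer on a successful push */
  return gst_pad_push (srcpad, buffer);
}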
Original commit message from CVS:
* plugins/elements/gstmultiqueue.c: (gst_single_queue_free):
Don't leak GCond.
* tests/check/Makefile.am:
* tests/check/elements/.cvsignore:
* tests/check/elements/multiqueue.c: (setup_multiqueue),
(GST_START_TEST), (multiqueue_suite):
Add some dead simple unit tests for the 'multiqueue' element
(some bits don't work yet and are disabled for now).