Use the last result as the default when pushing an item from a single queue,
otherwise the status gets reset to _OK when pushing events.
This causes problems when a not-linked stream that upstream ignores
because it is unused (adaptive scenarios) is mistakenly activated:
multiqueue will post a buffering message on a pad that won't receive
buffers.
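A minimal sketch of the idea; the GstSingleQueue fields (srcpad,
srcresult) mirror multiqueue's internals and are shown illustratively:

    /* Default to the last flow result instead of GST_FLOW_OK so that
     * pushing a serialized event does not clear a NOT_LINKED status. */
    static GstFlowReturn
    push_one_item (GstSingleQueue * sq, GstMiniObject * object)
    {
      GstFlowReturn result = sq->srcresult;   /* last result as default */

      if (GST_IS_BUFFER (object)) {
        result = gst_pad_push (sq->srcpad, GST_BUFFER_CAST (object));
      } else if (GST_IS_EVENT (object)) {
        gst_pad_push_event (sq->srcpad, GST_EVENT_CAST (object));
        /* keep the previous result; do not reset it to OK */
      }
      return result;
    }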
https://bugzilla.gnome.org/show_bug.cgi?id=725917
When the single queue size was just bumped by 1 to allow more buffers to
be added, the buffer limit could be reduced back to the current level when
the max-size-buffers property was set. This would result in a stall since
the queue would not grow anymore at that point.
Prevent this by never reducing a single queue's size below its current
number of buffers + 1.
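A hedged sketch of the clamp in the property setter (field names follow
multiqueue's internal GstSingleQueue; the real setter iterates over all
single queues):

    /* Never shrink the buffer limit below what the queue already holds,
     * plus one, so the queue can still grow and does not stall. */
    static void
    set_max_size_buffers (GstSingleQueue * sq, guint new_limit)
    {
      sq->max_size.visible = MAX (new_limit, sq->cur_level.visible + 1);
    }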
https://bugzilla.gnome.org/show_bug.cgi?id=712597
When one singlequeue is full and all others are not linked, growing of
the full queue does not work correctly. The result depends on whether the
full queue is last in the queue list or not.
https://bugzilla.gnome.org/show_bug.cgi?id=722891
When prerolling/buffering, multiqueue has its buffer limit set to 0,
which means it can take an unlimited number of buffers. When
prerolling/buffering finishes, the limit is set back to 5, but only if
the current level is lower than 5. It (almost) never is, so
prerolling/buffering has to wait until the much higher hard bytes and
time limits are reached, which can lead to a very long startup time.
Fix this by setting the single queues to max(current, new_value) instead
of ignoring the new value and leaving the limit infinite (0).
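Roughly (sketch; internal field names are illustrative):

    /* When buffering ends, replace the unlimited (0) visible limit with
     * the new value, grown to at least the current fill level. */
    if (sq->max_size.visible == 0)
      sq->max_size.visible = MAX (sq->cur_level.visible, new_limit);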
https://bugzilla.gnome.org/show_bug.cgi?id=712597
This makes buffering stop when a stream switch happens. This is
important for adaptive streams, which can disable not-linked streams
to avoid consuming network bandwidth.
https://bugzilla.gnome.org/show_bug.cgi?id=719575
After patch bda406c4, the state of the singlequeue was set to OK, but
nothing would then wake up the thread, as the other wakeup functions only
look at singlequeues that are marked as having received not-linked.
https://bugzilla.gnome.org/show_bug.cgi?id=708200
If the multiqueue has automatically grown, chances are good that we
will cause the pipeline to starve if the maximum level is reduced below
that automatically grown size.
https://bugzilla.gnome.org/show_bug.cgi?id=707156
We must be certain that we don't cause a deadlock when blocking the
serialized queries. One such deadlock can happen when we are buffering and
downstream is blocked in preroll and a serialized query arrives: downstream
will not unblock (and allow our query to execute) until we complete
buffering, and buffering will not complete until we can answer the query.
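A hedged sketch of the escape hatch (mq->buffering stands in for
multiqueue's internal state and is shown illustratively):

    /* While buffering, refuse serialized queries instead of queueing
     * them: downstream cannot answer until buffering completes, and
     * buffering cannot complete until the query is answered. */
    if (GST_QUERY_IS_SERIALIZED (query) && mq->buffering) {
      GST_DEBUG_OBJECT (pad, "refusing serialized query while buffering");
      return FALSE;
    }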
https://bugzilla.gnome.org/show_bug.cgi?id=702840
The check for whether a SingleQueue is full was not correct.
Rewrote single_queue_overrun_cb() so it checks the correct variables
when testing whether the queue has reached the hard limits, and so it
increases the max buffer limit only once per call.
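A hedged sketch of the rewritten callback (field names follow
multiqueue's internals; the real code also inspects the other queues):

    /* Only treat the queue as genuinely full when a hard limit is
     * reached; otherwise grow the soft buffer limit exactly once. */
    static void
    single_queue_overrun_cb (GstDataQueue * dq, GstSingleQueue * sq)
    {
      GstDataQueueSize size;

      gst_data_queue_get_level (dq, &size);
      if (size.bytes >= sq->max_size.bytes)
        return;                 /* hard byte limit reached: don't grow */

      sq->max_size.visible++;   /* grow once per overrun callback */
    }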
https://bugzilla.gnome.org/show_bug.cgi?id=690557
We reset all the waiting streams and let them push another buffer to
see if they're now active again. This allows faster switching
between streams and prevents deadlocks if downstream does any
waiting too.
Also improve locking a bit: srcresult must be protected by the
multiqueue lock too because it is used/set from random threads.
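Roughly (sketch; the lock macros follow multiqueue's internal naming):

    /* srcresult is read and written from several streaming threads, so
     * every access has to happen under the multiqueue lock. */
    GST_MULTI_QUEUE_MUTEX_LOCK (mq);
    sq->srcresult = GST_FLOW_OK;   /* reset so the stream pushes again */
    GST_MULTI_QUEUE_MUTEX_UNLOCK (mq);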
Add the pad mode to the activate function so that we can reuse the same
function for all activation modes. This makes the core logic smaller and
allows some elements to simplify their activation code. It would also let
us add more scheduling modes later without having to add more activate
functions.
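For example (sketch; my_sink_activate_mode and the pull-mode helper are
hypothetical):

    /* One callback handles every scheduling mode; the mode parameter
     * replaces the separate activatepush/activatepull functions. */
    static gboolean
    my_sink_activate_mode (GstPad * pad, GstObject * parent,
        GstPadMode mode, gboolean active)
    {
      switch (mode) {
        case GST_PAD_MODE_PUSH:
          return TRUE;          /* nothing special to do for push */
        case GST_PAD_MODE_PULL:
          return my_sink_toggle_task (pad, active);  /* hypothetical */
        default:
          return FALSE;         /* unknown mode: refuse */
      }
    }

    gst_pad_set_activatemode_function (pad, my_sink_activate_mode);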
Remove the getcaps function on the pad and use the CAPS query for
the same effect.
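In practice that means answering GST_QUERY_CAPS from the query function
(sketch; the element name and caps string are made up):

    static gboolean
    my_element_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_CAPS:{
          /* what getcaps used to return now goes into the query result */
          GstCaps *caps = gst_caps_new_empty_simple ("audio/x-raw");
          gst_query_set_caps_result (query, caps);
          gst_caps_unref (caps);
          return TRUE;
        }
        default:
          return gst_pad_query_default (pad, parent, query);
      }
    }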
Add PROXY_CAPS to the pad flags. This instructs the default caps event and
query handlers to pass on the CAPS related queries and events. This
simplifies a lot of elements that pass through caps negotiation.
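Typical use (sketch; sink_template and self are assumed to exist):

    /* With the flag set, the default handlers forward CAPS queries and
     * events to the opposite pads; no custom caps code is needed. */
    sinkpad = gst_pad_new_from_static_template (&sink_template, "sink");
    GST_PAD_SET_PROXY_CAPS (sinkpad);
    gst_element_add_pad (GST_ELEMENT (self), sinkpad);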
Make two utility functions to proxy caps queries and aggregate the result.
These should later use the pad forward function instead.
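In today's API this corresponds to gst_pad_proxy_query_caps() (and its
accept-caps counterpart); inside a query handler:

    /* Forward the CAPS query to the other pads of the element and
     * return the intersection of what they can handle. */
    if (GST_QUERY_TYPE (query) == GST_QUERY_CAPS)
      return gst_pad_proxy_query_caps (pad, query);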
Make the _query_peer_ utility functions use the gst_pad_peer_query()
function to make sure the probes are emitted properly.
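For example (sketch; pad is assumed to have a linked peer):

    /* gst_pad_peer_query() hands the query to the peer pad and triggers
     * any query probes installed along the way. */
    GstQuery *query = gst_query_new_duration (GST_FORMAT_TIME);
    if (gst_pad_peer_query (pad, query)) {
      gint64 duration;

      gst_query_parse_duration (query, NULL, &duration);
      GST_DEBUG ("peer duration: %" GST_TIME_FORMAT, GST_TIME_ARGS (duration));
    }
    gst_query_unref (query);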
See #651514 for details. It's apparently impossible to write code
that avoids both type punning warnings with old g_atomic headers and
assertions in the new. Thus, macros and a version check.
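A hedged sketch of the pattern (the GLib version used as the cut-off is
an assumption):

    /* Old g_atomic headers warn about type punning when given a cast
     * pointer; newer ones assert instead, so pick the cast per version. */
    #if GLIB_CHECK_VERSION (2, 29, 5)   /* cut-off version assumed */
    #define ATOMIC_PTR_CAS(atomic, oldval, newval) \
      g_atomic_pointer_compare_and_exchange ((atomic), (oldval), (newval))
    #else
    #define ATOMIC_PTR_CAS(atomic, oldval, newval) \
      g_atomic_pointer_compare_and_exchange ((gpointer *) (atomic), \
          (oldval), (newval))
    #endif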
This reverts commit cf4fbc005c.
This change did not improve the situation for bindings because
queries are usually created, then directly passed to a function
and not stored elsewhere, and the writability problem with
miniobjects usually happens with buffers or caps instead.
Improve GstSegment, rename some fields. The idea is to have the GstSegment
structure represent the timing structure of the buffers as they are generated by
the source or demuxer element.
gst_segment_set_seek() -> gst_segment_do_seek()
Rename the NEWSEGMENT event to SEGMENT.
Make parsing of the SEGMENT event into a GstSegment structure.
Pass a GstSegment structure when making a new SEGMENT event. This allows us
to pass the timing info directly to the next element. No accumulation is
needed in the receiving element, all the info is inside the event.
Remove gst_segment_set_newsegment(): this function was used to accumulate
segments received from upstream, which is no longer needed because the
segment event contains the complete timing information.
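Sending and receiving side, roughly (sketch; srcpad and event are assumed
to exist):

    /* sender: a seek configures the segment, the event carries it whole */
    GstSegment segment;
    gst_segment_init (&segment, GST_FORMAT_TIME);
    gst_segment_do_seek (&segment, 1.0, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH, GST_SEEK_TYPE_SET, 5 * GST_SECOND,
        GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE, NULL);
    gst_pad_push_event (srcpad, gst_event_new_segment (&segment));

    /* receiver: parse the full timing info, no accumulation needed */
    const GstSegment *seg;
    gst_event_parse_segment (event, &seg);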
This reverts commit 9ef1346b1f.
Way too much for one commit, and I'm not sure we want to get rid of the
pad caps just like that. It's nice to have the buffer and its type in one
nice bundle without having to drag the complete context around with it.
Remove pad_alloc and all references. This can now be done more efficiently
and more flexibly with the ALLOCATION query and the bufferpool objects.
There is no reverse negotiation yet, but that will be done with an event
later.
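The replacement flow, roughly (sketch; srcpad and caps are assumed):

    /* Ask downstream for allocation parameters instead of pad_alloc. */
    GstQuery *query = gst_query_new_allocation (caps, TRUE);
    if (gst_pad_peer_query (srcpad, query) &&
        gst_query_get_n_allocation_pools (query) > 0) {
      GstBufferPool *pool;
      guint size, min, max;

      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min,
          &max);
      gst_buffer_pool_set_active (pool, TRUE);
      /* ...acquire buffers from the pool from now on... */
      gst_object_unref (pool);
    }
    gst_query_unref (query);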
Use the string provided by the caller for the sinkpad name
if possible. Note that all sanity checking for this name
is already done in GstElement.
Fixes Bug #645931
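For example (sketch; assuming this applies to an element with sink_%u
request pads, such as multiqueue):

    /* The requested name is honored instead of the next free index. */
    GstElement *mq = gst_element_factory_make ("multiqueue", NULL);
    GstPad *sink = gst_element_get_request_pad (mq, "sink_7");
    /* sink is named "sink_7" rather than an auto-picked "sink_0" */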