We reset all the waiting streams and let them push another buffer to
see if they're now active again. This allows faster switching
between streams and prevents deadlocks if downstream does any
waiting too.
Also improve locking a bit: srcresult must be protected by the
multiqueue lock too because it is used/set from random threads.
Add the pad mode to the activate function so that we can reuse the same function
for all activation modes. This makes the core logic smaller and allows some
elements to simplify their activation code. It would also let us add more
scheduling modes later without having to add more activate functions.
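
A minimal sketch of what such an activate function can look like, assuming the
GStreamer 1.0-style GstPadActivateModeFunction signature that this work converged
on; my_element_activate_mode and my_element_loop are hypothetical names:

    static void my_element_loop (gpointer user_data);   /* hypothetical task func */

    static gboolean
    my_element_activate_mode (GstPad * pad, GstObject * parent,
        GstPadMode mode, gboolean active)
    {
      switch (mode) {
        case GST_PAD_MODE_PUSH:
          /* push mode usually needs no extra setup here */
          return TRUE;
        case GST_PAD_MODE_PULL:
          /* pull mode: start/stop the task that pulls from the peer */
          if (active)
            return gst_pad_start_task (pad, my_element_loop, pad, NULL);
          else
            return gst_pad_stop_task (pad);
        default:
          return FALSE;
      }
    }

    /* in instance init: one function registered for all modes */
    gst_pad_set_activatemode_function (pad, my_element_activate_mode);
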
Remove the getcaps function on the pad and use the CAPS query for
the same effect.
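
A rough sketch of what replaces a getcaps implementation, using the 1.0-era
query API (gst_query_parse_caps()/gst_query_set_caps_result()); the element
name is illustrative:

    static gboolean
    my_element_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_CAPS:{
          GstCaps *filter, *caps;

          gst_query_parse_caps (query, &filter);
          /* whatever getcaps used to compute goes here */
          caps = gst_pad_get_pad_template_caps (pad);
          if (filter) {
            GstCaps *tmp =
                gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST);
            gst_caps_unref (caps);
            caps = tmp;
          }
          gst_query_set_caps_result (query, caps);
          gst_caps_unref (caps);
          return TRUE;
        }
        default:
          return gst_pad_query_default (pad, parent, query);
      }
    }
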
Add PROXY_CAPS to the pad flags. This instructs the default caps event and query
handlers to pass on CAPS-related queries and events. This simplifies a lot
of elements that pass through caps negotiation.
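
In the 1.0 API this flag is set with the GST_PAD_SET_PROXY_CAPS() convenience
macro; the filter element below is hypothetical:

    /* in instance init: let the default event/query handlers forward
     * CAPS negotiation to the pad(s) on the other side */
    GST_PAD_SET_PROXY_CAPS (filter->sinkpad);
    GST_PAD_SET_PROXY_CAPS (filter->srcpad);
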
Make two utility functions to proxy caps queries and aggregate the result. These
should be switched to use the pad forward function later.
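
Assuming these correspond to what GStreamer 1.0 exposes as
gst_pad_proxy_query_caps() and gst_pad_proxy_query_accept_caps() (an assumption;
the names at the time may differ), an element's query handler can simply do:

    /* fragment of an element's query function */
    case GST_QUERY_CAPS:
      ret = gst_pad_proxy_query_caps (pad, query);
      break;
    case GST_QUERY_ACCEPT_CAPS:
      ret = gst_pad_proxy_query_accept_caps (pad, query);
      break;
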
Make the _query_peer_ utility functions use the gst_pad_peer_query() function to
make sure the probes are emitted properly.
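
For illustration, the pattern looks like this (a plain duration query; going
through gst_pad_peer_query() is what makes the pad's query probes fire):

    GstQuery *q = gst_query_new_duration (GST_FORMAT_TIME);
    gint64 duration = -1;

    if (gst_pad_peer_query (pad, q))
      gst_query_parse_duration (q, NULL, &duration);
    gst_query_unref (q);
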
See #651514 for details. It's apparently impossible to write code
that avoids both the type-punning warnings with old g_atomic headers and
the assertions in the new ones. Thus, macros and a version check.
This reverts commit cf4fbc005c.
This change did not improve the situation for bindings because
queries are usually created, then directly passed to a function
and not stored elsewhere, and the writability problem with
miniobjects usually happens with buffers or caps instead.
Improve GstSegment, rename some fields. The idea is to have the GstSegment
structure represent the timing structure of the buffers as they are generated by
the source or demuxer element.
gst_segment_set_seek() -> gst_segment_do_seek()
Rename the NEWSEGMENT event to SEGMENT.
Parse the SEGMENT event into a GstSegment structure.
Pass a GstSegment structure when making a new SEGMENT event. This allows us to
pass the timing info directly to the next element. No accumulation is needed in
the receiving element; all the info is inside the event.
Remove gst_segment_set_newsegment(): this function was used to accumulate
segments received from upstream, which is now not needed anymore because the
segment event contains the complete timing information.
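
A minimal sketch of the resulting flow, using the 1.0-era API
(gst_event_new_segment()/gst_event_parse_segment()); srcpad, event and the
receiving element are placeholders:

    /* producing side: fill in a GstSegment and send it as a SEGMENT event */
    GstSegment segment;

    gst_segment_init (&segment, GST_FORMAT_TIME);
    segment.start = 0;
    segment.stop = GST_CLOCK_TIME_NONE;
    gst_pad_push_event (srcpad, gst_event_new_segment (&segment));

    /* receiving side: the event carries the complete timing info, so the
     * element just takes a copy, no accumulation against previous segments */
    const GstSegment *s;

    gst_event_parse_segment (event, &s);
    element->segment = *s;
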
This reverts commit 9ef1346b1f.
Way too much for one commit, and I'm not sure we want to get rid of the pad caps
just like that. It's nice to have the buffer and its type in one nice bundle
without having to drag the complete context with it.
Remove pad_alloc and all references. This can now be done more efficiently and
more flexibly with the ALLOCATION query and the bufferpool objects. There is no
reverse negotiation yet but that will be done with an event later.
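
Roughly, the replacement pattern looks like this (caps and srcpad are
placeholders; the real negotiation also deals with allocators and metas):

    /* instead of pad_alloc, ask downstream how buffers should be allocated */
    GstQuery *query = gst_query_new_allocation (caps, TRUE);
    GstBufferPool *pool = NULL;
    guint size = 0, min = 0, max = 0;

    if (gst_pad_peer_query (srcpad, query) &&
        gst_query_get_n_allocation_pools (query) > 0)
      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    gst_query_unref (query);

    if (pool == NULL)
      pool = gst_buffer_pool_new ();    /* fall back to a local pool */
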
Use the string provided by the caller for the sinkpad name
if possible. Note that all sanity checking for this name
is already done in GstElement.
Fixes Bug #645931
Instead, return after every iteration. This makes sure that the stream lock
is released for a short time after every iteration, that task state changes
are checked, etc., and it allows the task to be stopped properly.
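
A sketch of the intended shape of such a task function (MyElement and
my_queue_pop are hypothetical); GstTask calls it again as long as the task is
started, taking and releasing the stream lock around each call:

    static void
    my_loop (gpointer user_data)
    {
      MyElement *self = user_data;
      GstBuffer *buf;

      /* handle exactly one item and return; no while (TRUE) loop here */
      if (!my_queue_pop (self, &buf)) {
        gst_pad_pause_task (self->srcpad);
        return;
      }
      gst_pad_push (self->srcpad, buf);
    }
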
When a downstream element returns GST_FLOW_UNEXPECTED we want to:
* keep the dataqueue task running
* forward the flow return upstream.
This allows upstream elements to push EOS, and have that EOS event come
downstream.
Fixes #609274
When we receive an UNEXPECTED flowreturn from downstream, we must not shut down
the pushing thread because upstream will at some point push an EOS that we still
need to push further downstream.
To achieve this, convert the UNEXPECTED return value to OK. Add a fixme so that
we implement the right logic to propagate the flowreturn upstream at some point.
Also clean up the unit test a little.
Fixes #608136
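
The conversion both messages describe boils down to something like this in the
pushing thread (ret, srcpad and buffer are placeholders; GST_FLOW_UNEXPECTED is
the 0.10-era name of what later became GST_FLOW_EOS):

    ret = gst_pad_push (srcpad, buffer);
    if (ret == GST_FLOW_UNEXPECTED) {
      /* FIXME: propagate this upstream instead of swallowing it; for now
       * keep the task alive so the EOS that upstream will push can still
       * travel downstream */
      ret = GST_FLOW_OK;
    }
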
There's not much point in using GST_DEBUG_FUNCPTR with GObject
virtual functions such as get_property, set_property, finalize and
dispose, since they'll never be used by anyone anyway. Saves a
few bytes and possibly a tenth of a polar bear.
Add support for buffering mode where we post BUFFERING messages based on the
level of the queues. It currently operates on the first queue that goes over or
under the high/low thresholds.
In buffering mode we want to ignore the max visible items to decide when the
queue is filled. Instead, we only look at the number of bytes and/or time in the
queue.
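
As a rough sketch, the fill level in buffering mode could be derived only from
the byte/time levels and posted like this (the level and limit variables are
hypothetical):

    gint percent = 0;

    if (max_size_bytes > 0)
      percent = MAX (percent, (gint) (cur_bytes * 100 / max_size_bytes));
    if (max_size_time > 0)
      percent = MAX (percent, (gint) (cur_time * 100 / max_size_time));
    percent = MIN (percent, 100);

    gst_element_post_message (GST_ELEMENT_CAST (mq),
        gst_message_new_buffering (GST_OBJECT_CAST (mq), percent));
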
Don't shadow the sq argument in the underrun_cb function but use
a different variable name to iterate the other queues.
Use the same variable name in the overrun_cb function.
We know earlier on in the code whether we're handling an event or a buffer,
just pass that information through.
This commit and the previous commit reduce instruction fetch:
* when pushing a buffer (_chain) by 10%
* when popping a buffer (_loop) by 3%
The task will always exist as long as its owner (i.e. the pad) and that
owner's owner (i.e. the multiqueue) exist.
Reduces the number of instruction fetches by 36%.