When getting the caps of the decode chain in get_topology(), the caps are
checked for being fixed or not. But get_topology() is only called when the
decode chain is being exposed, so the caps will always be fixed; remove the
check for fixed caps. Also remove gst_pad_get_current_caps() for chain->pad,
as get_pad_caps() will call the same API again.
Additionally, get_topology() can return NULL when the pipeline is shutting
down; passing that to the message creation code results in an assertion
error. Check that the topology is valid before using it.
https://bugzilla.gnome.org/show_bug.cgi?id=755918
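A minimal sketch of the intended guard, using decodebin-internal names such as
gst_decode_chain_get_topology() purely for illustration:

    static void
    post_topology_message (GstDecodeBin * dbin)
    {
      GstStructure *topology;

      /* Can be NULL while the pipeline is shutting down */
      topology = gst_decode_chain_get_topology (dbin->decode_chain);
      if (topology == NULL)
        return;

      /* gst_message_new_element() would assert on a NULL structure; it takes
       * ownership of the structure otherwise */
      gst_element_post_message (GST_ELEMENT_CAST (dbin),
          gst_message_new_element (GST_OBJECT_CAST (dbin), topology));
    }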
Avoid overflow in rate calculation. This can cause the resampler to
start on the wrong phase after a rate change.
Avoid overflow in cubic fraction calculation. This can cause noise when
dealing with higher sample rates.
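As an illustration of the kind of fix (not the exact resampler code), the
64-bit scaling helper avoids the overflowing intermediate multiply:

    /* Hedged sketch: 'phase', 'in_rate' and 'out_rate' are illustrative
     * names. gst_util_uint64_scale_int() computes val * num / denom
     * without overflowing the intermediate product. */
    static guint64
    rescale_phase (guint64 phase, gint out_rate, gint in_rate)
    {
      return gst_util_uint64_scale_int (phase, out_rate, in_rate);
    }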
analyze_new_pad() can return a new decode chain, which might have a new
GstDecodePad at its end. We should use those two for expose_pad(), and not
the original ones that were passed to analyze_new_pad().
Otherwise this fails with a demuxer element that has raw pads immediately,
or when a decoder with raw caps comes after an adaptive demuxer.
https://bugzilla.gnome.org/show_bug.cgi?id=760949
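Schematically (the real signatures in gstdecodebin2.c differ; the helper
below is purely illustrative):

    /* Use the chain and GstDecodePad handed back by analyze_new_pad(),
     * since it may have created a new chain with a new pad at its end. */
    chain = analyze_new_pad (dbin, src, pad, caps, chain);
    dpad = decode_chain_get_endpad (chain);     /* illustrative helper */
    expose_pad (dbin, src, dpad, pad, caps, chain);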
[..] when resetting group start time. In GES, we are usually connected
to the streamsynchronizer on one audio and one video pad.
When seeking the timeline, both nlecompositions often output their flush_start
before either of them has output its flush_stop.
The current code, when receiving the first flush_stop, was using the
running time of the start of the second composition, which could be
pretty much anything and means nothing at that point.
This patch is thread-safe, as STREAM_SYNCHRONIZER_LOCK is taken
both when setting flushing and when checking it.
https://bugzilla.gnome.org/show_bug.cgi?id=750013
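A rough sketch of the guarded check, as a fragment from a hypothetical
flush-stop handler (stream-state and macro names are illustrative, not the
actual streamsynchronizer code):

    /* Both writing the per-stream 'flushing' flag (on FLUSH_START/STOP)
     * and reading it here happen under the same lock. */
    GST_STREAM_SYNCHRONIZER_LOCK (self);
    all_flush_stopped = TRUE;
    for (l = self->streams; l != NULL; l = l->next) {
      GstSyncStream *stream = l->data;

      if (stream->flushing) {
        all_flush_stopped = FALSE;
        break;
      }
    }
    if (all_flush_stopped)
      self->group_start_time = new_group_start_time;
    GST_STREAM_SYNCHRONIZER_UNLOCK (self);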
When blocking input pads, we also need to properly set the appropriate
pending flag.
Without this, when switching stream types after initial configuration
(like going from Audio+Video to Audio+Video+Sub) playsink would never
wait for *all* input streams to be blocked (it would just wait for the
new input pad (text in this case) to be blocked).
Since the reconfiguration might introduce unlinking/relinking of elements,
we need to ensure that *ALL* input streams are blocked.
Failure to do so would result in having some input streams pushing data
to inactive elements (returning GST_FLOW_FLUSHING) or unlinked pads
(returning GST_FLOW_NOT_LINKED).
A later optimization could involve only blocking the input pads that
might be involved in reconfiguration. But better be safe than sorry for
now :)
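A hedged sketch of the idea (callback and flag names are illustrative, not
playsink's actual fields):

    /* Before reconfiguring, block *every* configured input pad and mark it
     * as pending, so we later wait until all of them report being blocked. */
    static void
    block_input_pad (GstPad * pad, gboolean * pending,
        GstPadProbeCallback blocked_cb, gpointer user_data)
    {
      if (pad == NULL)
        return;

      *pending = TRUE;          /* this pad must report back before we relink */
      gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
          blocked_cb, user_data, NULL);
    }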
Elements usually require that all fields on their caps are present
on the fixed caps they receive. Using intersection won't verify that, so
resort to is_subset() checks instead.
https://bugzilla.gnome.org/show_bug.cgi?id=760477
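The difference, roughly (a hedged sketch, not the literal patch):

    /* A non-empty intersection only proves overlap; it does not prove that
     * every field required by 'template_caps' is present in 'caps'.
     * A subset check does. */
    static gboolean
    caps_are_acceptable (const GstCaps * caps, const GstCaps * template_caps)
    {
      return gst_caps_is_subset (caps, template_caps);
    }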
Those accept-caps queries are actually checking whether downstream supports
some particular caps, in order to decide whether a different format needs to
be negotiated. Checking only the next element with accept-caps is not enough
to guarantee that the caps are supported.
Using a caps query instead obtains the supported caps for downstream as a
whole, rather than only for the next element.
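A minimal sketch of the replacement pattern (pad and function names are
illustrative):

    /* Query the caps supported by downstream as a whole and check the
     * candidate against them, instead of an ACCEPT_CAPS query that only
     * asks the next element. */
    static gboolean
    downstream_supports_caps (GstPad * srcpad, GstCaps * candidate)
    {
      GstCaps *allowed;
      gboolean supported;

      /* was: return gst_pad_peer_query_accept_caps (srcpad, candidate); */
      allowed = gst_pad_peer_query_caps (srcpad, candidate);
      supported = !gst_caps_is_empty (allowed);
      gst_caps_unref (allowed);

      return supported;
    }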
Pass flags in _converter_new() so that we can configure ourselves
differently depending on some options.
SOURCE_WRITABLE -> IN_WRITABLE because the array is called 'in'
Simplify the API: we don't need the consumed and produced output
arguments. The caller should use the _get_in_frames()/_get_out_frames() API
to check how much input is needed and how much output will be produced.
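Assuming this is the GstAudioConverter API, usage after the change looks
roughly like this (a sketch under that assumption, not the actual commit):

    #include <gst/audio/audio.h>

    /* Flags are passed at construction time; buffer sizes come from
     * _get_in_frames()/_get_out_frames() instead of consumed/produced
     * output arguments on the conversion call. */
    static void
    convert_some_audio (GstAudioInfo * in_info, GstAudioInfo * out_info,
        gpointer in[1], gpointer out[1])
    {
      GstAudioConverter *convert;
      gsize in_frames, out_frames = 1024;

      convert = gst_audio_converter_new (GST_AUDIO_CONVERTER_FLAG_IN_WRITABLE,
          in_info, out_info, NULL);

      /* Ask how much input is needed for the desired amount of output... */
      in_frames = gst_audio_converter_get_in_frames (convert, out_frames);

      /* ...then convert; in[0]/out[0] must hold that many frames. */
      gst_audio_converter_samples (convert, 0, in, in_frames, out, out_frames);

      gst_audio_converter_free (convert);
    }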
In this specific case it wouldn't cause problems as we only ever access the
first array element, but let's make explicit what is happening here.
CID 1346530 and 1346529
The filters' floating references are already sunk during set_property(),
which means that GstBin takes a new reference when adding the filter to it.
Get rid of the additional reference after adding the filter to the bin.
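The resulting ownership handling is roughly (a sketch, assuming a filter
element whose floating reference was already sunk in set_property()):

    /* 'filter' is no longer floating (sunk in set_property()), so
     * gst_bin_add() takes its own reference; drop the extra one we are
     * left holding after the add. */
    gst_bin_add (GST_BIN (bin), filter);
    gst_object_unref (filter);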
Unconditionally adding the template caps when proxying the caps query will play
havoc with decoders that attempt to choose an output format based on some caps
features. With a sink that does not include those caps features and a
decoder/parser/etc. that preferentially chooses some specific caps feature
when available, the proxied query will always return the decoder/parser/etc.
template caps, and a feature will be chosen that downstream is unable to
support.
Fix by limiting the addition of the template caps to when the result is actually
empty.
https://bugzilla.gnome.org/show_bug.cgi?id=758212
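A hedged sketch of the corrected proxying logic (variable names are
illustrative):

    /* Only fall back to the pad template caps when the downstream-filtered
     * result is actually empty, instead of always appending them. */
    result = gst_pad_peer_query_caps (otherpad, filter);
    if (gst_caps_is_empty (result)) {
      gst_caps_unref (result);
      result = gst_pad_get_pad_template_caps (pad);
    }
    gst_query_set_caps_result (query, result);
    gst_caps_unref (result);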
This reverts commit 77dc09c3a9.
It can cause the FLUSH_START/STOP events to go to the sink elements, which
then causes state changes and various other problems. We shouldn't really
flush downstream here, the idea is to do *draining*.
Apart from that, the test case for the original bug works without this
commit now.
Since the loops increasing count from 0 are always run at least
once (if count < 1), count will always be at least one when
compared to the drop/dup conditions.
Coverity 1139674
Add a property and logic to send a GstNetworkMessage event containing
the message that was received from a client. This can be used to
implement simple bidirectional communication.
Add a property and logic to send a GstNetworkMessageDispatched
event upstream to notify that a buffer has been sent. This can be used
to keep track of which client received which buffers.
Add a property to handle GstNetworkMessage events. These events contain
a buffer that is sent on the socket to allow for simple bidirectional
communication.
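Conceptually such an event just wraps a buffer in a custom structure; a
hedged sketch of how an application might build one (the structure name
matches the description above, the "buffer" field name is an assumption):

    GstBuffer *buf = gst_buffer_new_wrapped (g_strdup ("hello"), 5);
    GstEvent *event = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
        gst_structure_new ("GstNetworkMessage",
            "buffer", GST_TYPE_BUFFER, buf, NULL));

    /* travels downstream to the sink, which writes the buffer to its socket */
    gst_pad_push_event (srcpad, event);
    gst_buffer_unref (buf);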
Otherwise we'll remove that element while keeping its buffering message in our
list, and because of that never ever report buffering 100% as that element
will always be at a lower percentage.
This fixes e.g. seeking over Period boundaries in DASH and various other
issues when buffering happens between group switches.
Also use a new mutex for protecting the buffering messages. The object lock is
already used by gst_object_has_as_ancestor() and we need to use it now for
checking if the buffering message sender has the to-be-removed element as
an ancestor.
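A hedged sketch of the cleanup being described (list and mutex names are
illustrative):

    /* When 'element' is removed, drop any cached buffering message whose
     * sender lives below it, under the dedicated buffering mutex (the
     * object lock is already taken inside gst_object_has_as_ancestor()). */
    g_mutex_lock (&group->buffering_lock);
    for (l = group->buffering_messages; l != NULL;) {
      GstMessage *msg = l->data;
      GList *next = l->next;

      if (gst_object_has_as_ancestor (GST_MESSAGE_SRC (msg),
              GST_OBJECT_CAST (element))) {
        gst_message_unref (msg);
        group->buffering_messages =
            g_list_delete_link (group->buffering_messages, l);
      }
      l = next;
    }
    g_mutex_unlock (&group->buffering_lock);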
When we stop sending because we need more data, still keep a GSource
around to receive data from the clients.
Also handle read and write in the same go.
Make sure that any access to the GstDecodeBin's decode_chain
field is protected using the EXPOSE_LOCK. Also add a simple
reference counter to the GstDecodeChain structure so that when
the type_found signal fires it can hold onto the decode chain
even while the EXPOSE_LOCK is not held. This should fix a
race condition if the type_found signal fires right in the
middle of a state change that messes with the same decode
chain.
https://bugzilla.gnome.org/show_bug.cgi?id=755260
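Schematically, the type_found path now looks like this (internal helper
names are illustrative):

    /* Take the EXPOSE_LOCK only to grab a reference on the current decode
     * chain, then keep working on it without the lock; a concurrent state
     * change can no longer free it underneath us. */
    EXPOSE_LOCK (dbin);
    chain = dbin->decode_chain ?
        gst_decode_chain_ref (dbin->decode_chain) : NULL;
    EXPOSE_UNLOCK (dbin);

    if (chain) {
      /* ... connect the typefound pad, build and expose the chain ... */
      gst_decode_chain_unref (chain);
    }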
Just setting the ghostpad as flushing wasn't enough. It needs to be
consistent on the internal proxypad also, otherwise you end up in
situations where:
* a pending buffer on the target pad triggers the sticky event
propagation
* the default implementation sees that the proxypad is not flushing,
so it tries to push it to the other pad (the actual ghostpad)
* the ghostpad is flushing, so returns FALSE
* the push_event function sees that pushing the event failed...
* ... and the pending buffer push returns GST_FLOW_ERROR instead of
GST_FLOW_FLUSHING
By using gst_pad_set_active(FALSE), we ensure that both the ghostpad
and the proxypad are flushing/deactivated. The situation above will
no longer occur, and a GST_FLOW_FLUSHING will be returned.
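The fix boils down to deactivating the pad instead of only flagging it
(a sketch):

    /* Deactivating the ghostpad also deactivates and flags as flushing its
     * internal proxypad, keeping both sides consistent. */
    gst_pad_set_active (ghostpad, FALSE);

    /* was (insufficient, the proxypad stayed non-flushing):
     *   GST_OBJECT_FLAG_SET (ghostpad, GST_PAD_FLAG_FLUSHING);
     */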