This is a fixup for b2db18cda2
audioconvert: avoid float calculations when mixing integer-formatted channels
The int matrix was using gint and gint32 synonymously, which can theoretically
cause problems if gint and gint32 are actually different types.
https://bugzilla.gnome.org/show_bug.cgi?id=747005
Try harder to look for gvfs backend changes in the right
place, to make sure the plugin gets reloaded when backends
are removed or installed. We watch the gvfs mounts directory
because the files there contain absolute paths to the
backend executables, and those may not be in the usual gio
path.
https://bugzilla.gnome.org/show_bug.cgi?id=747841
We don't expect clients to send us any data, but if they do, just
ignore it. Web browsers might send us an HTTP request for example,
but some will still be happy if we just send them data without
a proper HTTP response.
There was a bug in the reading code path. We only have a small
read buffer and would provoke an EWOULDBLOCK by trying to read again,
because we didn't bail out of the loop early enough.
https://bugzilla.gnome.org/show_bug.cgi?id=743834
When shutting down the chain, we can get a deadlock when removing
a pad if that chain was busy streaming but blocked (e.g. while
waiting for a queue to have free space).
https://bugzilla.gnome.org/show_bug.cgi?id=746480
In case upstream does not provide videorate with framerate information,
it will detect the current framerate from the buffers it receives,
but if downstream forces the use of a variable framerate (most probably
through the use of a caps filter with framerate = 0/1), videorate will
respect that.
Also add some unit tests.
https://bugzilla.gnome.org/show_bug.cgi?id=734424
In the case the framerate is variable (represented by framerate=0/1),
we currently end up pushing the first buffer in a loop, recomputing
diff1 and diff2 without ever updating videorate->next_ts, which leads
to pushing that first buffer forever.
In the case of variable framerate, we should just compute the next_ts
as previous_pts + previous_duration.
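A minimal sketch of that idea, assuming GStreamer clock-time arithmetic (the helper name is illustrative, not the actual videorate code):

    #include <gst/gst.h>

    /* Hypothetical helper: the expected next timestamp when the negotiated
     * framerate is variable (framerate=0/1). */
    static GstClockTime
    next_ts_for_variable_framerate (GstClockTime prev_pts, GstClockTime prev_duration)
    {
      if (!GST_CLOCK_TIME_IS_VALID (prev_pts) || !GST_CLOCK_TIME_IS_VALID (prev_duration))
        return GST_CLOCK_TIME_NONE;
      return prev_pts + prev_duration;
    }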
https://bugzilla.gnome.org/show_bug.cgi?id=734424
The patch calculates a second channel mixing matrix from the current one. The
matrix contains the original values * (2^10) as integers. This matrix is used
when integer-formatted channels are mixed.
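As a rough illustration of the scheme (not the actual audioconvert code; names and the S16 sample format are assumptions), the float matrix is converted once and the per-sample mixing then stays in integer arithmetic:

    #include <glib.h>

    #define INT_MATRIX_SHIFT 10     /* values scaled by 2^10, as described above */

    /* Hypothetical sketch: build the integer matrix from the float one. */
    static void
    build_int_matrix (const gfloat * fmatrix, gint32 * imatrix, gint in_ch, gint out_ch)
    {
      gint i;

      for (i = 0; i < in_ch * out_ch; i++)
        imatrix[i] = (gint32) (fmatrix[i] * (1 << INT_MATRIX_SHIFT));
    }

    /* Hypothetical sketch: mix one frame of S16 samples with the integer matrix. */
    static void
    mix_frame_s16 (const gint32 * imatrix, const gint16 * in, gint16 * out,
        gint in_ch, gint out_ch)
    {
      gint o, i;

      for (o = 0; o < out_ch; o++) {
        gint64 acc = 0;

        for (i = 0; i < in_ch; i++)
          acc += (gint64) imatrix[o * in_ch + i] * in[i];
        out[o] = (gint16) CLAMP (acc >> INT_MATRIX_SHIFT, G_MININT16, G_MAXINT16);
      }
    }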
On an ARM Cortex-A8 (single core, 800 MHz) this improves performance in a
testcase from 29s to 9s for downmixing 6 channels to stereo.
https://bugzilla.gnome.org/show_bug.cgi?id=747005
If a new pad is added after playbin has been put to READY/NULL, it
should be ignored, as playbin is shutting down.
This can happen when the pipeline fails to preroll (it is still in READY)
and the user gives up waiting, or when an error that doesn't reach
the demuxer occurs (in some event handling) and the demuxer continues
working and exposing pads while playbin has already been put to NULL.
Without this check an input-selector is created and set to PAUSED
state, preventing playbin from properly shutting down in case it
has data blocked inside it.
audio_convert_convert unpacks to default format (signed) before calling
quantize, and the unsigned variants were equivalent to signed anyway,
so we just get rid of them.
Since range size is always 2^n, we can simply use modulo (implemented
with a bitmask).
The previous implementation used 64-bit integer division, which is
done in software on ARMv7. Although the divisor was constant, the
division could not be transformed into "multiplication by magic number"
since the dividend was 64-bit.
The now-unused and not-so-fast gst_fast_random_(u)int32_range functions
were removed.
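A minimal sketch of the bitmask trick (a simple LCG stands in for the actual noise generator; names are illustrative):

    #include <glib.h>

    /* Hypothetical sketch: pick a dither value in a power-of-two range with a
     * bitmask instead of (64-bit) division. 'range' must be 2^n. */
    static inline gint32
    fast_random_in_pow2_range (guint32 * seed, guint32 range)
    {
      *seed = *seed * 1103515245u + 12345u;   /* classic LCG step */
      return (gint32) (*seed & (range - 1));  /* modulo via bitmask */
    }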
Also, two bug fixes are implemented:
1) ADD_DITHER_TPDF_HF_I no longer discards bias.
2) We change TPDF's noise range to be the same as RPDF's. Previously,
RPDF's noise ranged:
{ bias - dither, bias + dither }
while TPDF's noise ranged:
{ bias/2 - dither/2, bias/2 + dither/2 - 1 } +
{ bias/2 - dither/2, bias/2 + dither/2 - 1 } =
{ bias - dither, bias + dither - 2 }
Now, both range:
{ bias - dither, bias + dither - 1 }
https://bugzilla.gnome.org/show_bug.cgi?id=746661
This fixes a race where the use-buffering property on a multiqueue was
set before the queue depth was changed from its high preroll limits to
lower playback limits. This resulted in buffering messages being emitted
by the multiqueue in the short window between use-buffering being
set and the queue depth being reset.
https://bugzilla.gnome.org/show_bug.cgi?id=744308
The variables could have changed when the lock was released
to push a gap event. Streamsynchronizer needs to check them
again before going to sleep.
Bonus: fix a comment typo
multisocketsink now understands the new GstNetControlMessageMeta to allow
sending control messages (ancillary data) with data when writing to Unix
domain sockets.
Thanks to glib's `GSocketControlMessage` abstraction the code introduced
in this commit is entirely portable and doesn't introduce any additional
dependencies or conditionally compiled code, even if it is unlikely to be
of much use on non-UNIX systems.
multisocketsink now understands the new GstNetControlMessageMeta to allow
sending control messages (ancillary data) with data when writing to Unix
domain sockets.
A later commit will introduce a new socketsrc element which will similarly
understand `GstNetControlMessageMeta`. This, when used with a
`GSocketControlMessage` of type `GUnixFDMessage` will allow GStreamer to
send and receive file descriptors in ancillary data, the first step to
using memfds to implement zero-copy video IPC.
Thanks to glib's `GSocketControlMessage` abstraction the code introduced
in this commit is entirely portable and doesn't introduce any additional
dependencies or conditionally compiled code, even if it is unlikely to be
of much use on non-UNIX systems.
This provides notification that the socket in use was closed by the peer
and gives an opportunity to replace it with a new one which is not
closed, allowing reading from many sockets in order.
I use this in pulsevideo to implement reconnection logic to handle the
pulsevideo service dying, such that it can be restarted without
disrupting downstream.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=739546
* Don't bother polling, just do a blocking read, the `GCancellable` will
take care of unlocking. This should also be faster on MS Windows where
the GIO documentation for `g_socket_get_available_bytes` states: "Note
that on Windows, this function is rather inefficient in the UDP case".
* Implement `GstPushSrc.fill` rather than `GstPushSrc.create`. This means
that we will be using the downstream allocator, which may be more
efficient. It also means that socketsrc is likely to respect its
"blocksize" property (assuming that there is enough data available). A
sketch of such a fill implementation follows below.
See https://bugzilla.gnome.org/show_bug.cgi?id=739546
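Hedged sketch of the fill approach (element struct, names and error mapping are assumptions, not the real socketsrc code):

    #include <gio/gio.h>
    #include <gst/gst.h>
    #include <gst/base/gstpushsrc.h>

    typedef struct {
      GstPushSrc parent;
      GSocket *socket;            /* assumed to be set up elsewhere */
      GCancellable *cancellable;  /* cancelled on shutdown to unblock the read */
    } MySocketSrc;

    static GstFlowReturn
    my_socket_src_fill (GstPushSrc * psrc, GstBuffer * outbuf)
    {
      MySocketSrc *self = (MySocketSrc *) psrc;
      GstMapInfo map;
      gssize received;
      GError *err = NULL;

      if (!gst_buffer_map (outbuf, &map, GST_MAP_WRITE))
        return GST_FLOW_ERROR;

      /* Blocking read into the downstream-allocated buffer. */
      received = g_socket_receive (self->socket, (gchar *) map.data, map.size,
          self->cancellable, &err);
      gst_buffer_unmap (outbuf, &map);

      if (received < 0) {
        g_clear_error (&err);
        return GST_FLOW_FLUSHING;   /* e.g. cancelled during shutdown */
      }
      if (received == 0)
        return GST_FLOW_EOS;        /* connection closed by the peer */

      gst_buffer_resize (outbuf, 0, received);
      return GST_FLOW_OK;
    }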
`socketsrc` can be considered a source counterpart to `multisocketsink`.
It can be considered a generalization of `tcpclientsrc` and
`tcpserversrc`: it contains all the logic required to communicate over
the socket but none of the logic for creating the sockets/establishing
the connection in the first place, allowing the user to accomplish this
externally in whatever manner they wish, making it applicable to other
types of sockets besides TCP.
This commit essentially copies the implementation directly from
tcpserversrc. Later patches will tidy the implementation up and
re-implement `tcpclientsrc` and `tcpserversrc` in terms of `socketsrc`.
See https://bugzilla.gnome.org/show_bug.cgi?id=739546
If a buffer is made up of non-contiguous `GstMemory`s, `gst_buffer_map`
has to copy all the data into a new `GstMemory` which is contiguous. By
mapping all the `GstMemory`s individually and then using scatter-gather
IO we avoid this situation.
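A rough sketch of the idea, assuming a plain GSocket and trimming error handling (illustrative only, not the multisocketsink code):

    #include <gio/gio.h>
    #include <gst/gst.h>

    /* Map every GstMemory of the buffer individually and hand the pieces to
     * scatter-gather I/O instead of gst_buffer_map()ing (and possibly
     * copying) the whole buffer. */
    static gssize
    send_buffer_vectored (GSocket * socket, GstBuffer * buffer,
        GCancellable * cancellable, GError ** error)
    {
      guint i, n_mem = gst_buffer_n_memory (buffer);
      GOutputVector *vectors = g_newa (GOutputVector, n_mem);
      GstMapInfo *maps = g_newa (GstMapInfo, n_mem);
      gssize written;

      for (i = 0; i < n_mem; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);

        gst_memory_map (mem, &maps[i], GST_MAP_READ);
        vectors[i].buffer = maps[i].data;
        vectors[i].size = maps[i].size;
      }

      written = g_socket_send_message (socket, NULL, vectors, n_mem,
          NULL, 0, G_SOCKET_MSG_NONE, cancellable, error);

      for (i = 0; i < n_mem; i++)
        gst_memory_unmap (gst_buffer_peek_memory (buffer, i), &maps[i]);

      return written;
    }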
This is a preparatory step for adding support to multisocketsink for
sending file descriptors, where a GstBuffer may be made up of several
`GstMemory`s, some of which are backed by a memfd or file, but I think this
patch is valid and useful on its own.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=746150
When we modify a GList (via g_list_delete_link), always reassign the
new head to the original GList. Otherwise we end up with
filtered_errors being corrupt (the head might have been the element
removed)
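The pattern in isolation, for clarity (variable names follow the description above):

    #include <glib.h>

    static GList *
    remove_error (GList * filtered_errors, GList * link)
    {
      /* g_list_delete_link() may remove the head, so the returned pointer
       * must always become the new list head:
       *   wrong:                   g_list_delete_link (filtered_errors, link);
       *   right: filtered_errors = g_list_delete_link (filtered_errors, link);
       */
      filtered_errors = g_list_delete_link (filtered_errors, link);
      return filtered_errors;
    }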
This function is static, and only ever called with the expose lock
taken. It thus has no reason to take this lock itself.
This was introduced by one of my locking fixes from 741355.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Check if dbin->decode_chain is NULL before running drain_and_switch_chains()
because if it is, we shouldn't run that function or it will segfault.
CID #1271074
Otherwise if there are multiple parsers we would most likely break negotiation
of the stream-format/alignment wanted by the decoders as parsers generally
support all possible stream-formats and alignments.
If caps on a newly added pad are NULL, analyze_new_pad will try to
acquire the chain lock to add a probe to the pad so the chain can
be built later. This comes from the streaming thread, in response
to headers or other buffers causing this pad to be added, so the
stream lock is taken.
Meanwhile, another thread might be destroying the chain from a
downward state change. This will cause the chain to be freed with
the chain lock taken, and some elements are set to NULL here, which
can include the parser. This causes pad deactivation, which tries
to take the element's pad's stream lock, deadlocking.
Fix this by keeping track of which elements need setting to NULL,
and only do this after the chain lock is released. Only the chain
manipulation needs to be locked, not the elements' state changes.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
There was a deadlock between a thread changing decodebin/demuxer
state from PAUSED to READY, and another thread pushing data
when starting.
From the stack trace at
https://bug741355.bugzilla-attachments.gnome.org/attachment.cgi?id=292471,
I deduce the following is happening, though I did not reproduce the
problem so I'm not sure this patch fixes it.
The streaming thread (thread 2 in that stack trace) takes the demuxer's
sink pad's stream lock in gst_ogg_demux_perform_seek_pull and will
activate a new chain. This ends up causing the expose lock being taken
in _pad_added_cb in decodebin.
Meanwhile, a state change is triggered on thread 1, which takes the
expose lock in decodebin in gst_decode_bin_change_state, then frees
the previous chain, which ends up calling gst_pad_stop_task on the
demuxer's task, which in turn takes the demuxer's sink pad's stream
lock, deadlocking as both threads are now waiting for each other.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Also improve the waiting condition for stream switches, which assumed
that the condition variable would only stop waiting when it is
signaled. But the documentation says that there might be spurious wakeups.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
Change the GAP events that are currently sent from the chain function of
the current pad to all other EOS pads. They should instead be sent from
their own streaming threads.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
Wait in the event function when EOS is received until all pads are EOS
and then forward the EOS event from each pad's own event function.
Also send a new GAP event for EOS pads from the event function whenever
going from PLAYING->PAUSED by briefly waking up the GCond. This is needed
to allow sinks to pre-roll again, as they did not receive EOS yet because
we blocked that, but also will never get data again.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
In gst_video_scale_fixate_caps () the code can jump to done without freeing
the memory of the tmp GstStructure, which then goes out of scope and leaks.
CID #1265766
Ignore chroma subsampling and color matrix transformations like the
old videoscale used to do. This is to make the performance like it was
before.
See https://bugzilla.gnome.org/show_bug.cgi?id=741987
Speex may decide not to consume any samples because it can't write any. I've
seen a hang during draining caused by the resample loop never terminating.
In that case, resampling happened as normal until olen was 0 but ilen was
still 1. _process_native then reduced ichunk to 0, so ilen never decreased
below 1 and the loop never terminated.
Instead of reverting 684cf44 (audioresample: don't skip input samples),
break only if all output samples have been produced and speex refuses
to consume any more input samples.
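A minimal sketch of the new exit condition, with olen/ichunk as in the description above (illustrative, not the actual resampler loop):

    #include <glib.h>

    /* Keep looping unless all output samples have been produced AND speex
     * refused to consume any more input. Stopping merely on olen == 0 could
     * spin forever while ilen stays at 1 and ichunk is reduced to 0, which
     * is the hang described above. */
    static gboolean
    resample_should_stop (guint olen, guint ichunk)
    {
      return olen == 0 && ichunk == 0;
    }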
https://bugzilla.gnome.org/show_bug.cgi?id=732908
Videorate keeps 1 buffer in order to duplicate based on the closest buffer
relative to the targeted time. This extra buffer needs to be requested,
otherwise the pipeline may stall when a fixed-size buffer pool is used.
https://bugzilla.gnome.org/show_bug.cgi?id=738302
Consider pipeline: gst-launch-1.0 playbin uri=http://example.com/a.ogg
Consider 128kbit audio stream.
As soon as uridecodebin detects the bitrate, it configures its input
queue2 max-size to 32000 bytes.
The 2MB buffer in multiqueue is nearly 2 orders of magnitude bigger.
This non-deterministically drives the queue2 buffer level anywhere from
100% to 0% until the multiqueue is filled.
This patch sets multiqueue size to 5 buffers early in no_more_pads_cb.
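A sketch of what that early limit amounts to (the multiqueue pointer is assumed to be at hand; this is illustrative, not the exact decodebin code):

    #include <gst/gst.h>

    static void
    limit_multiqueue_preroll (GstElement * multiqueue)
    {
      /* Cap the queue at 5 buffers while prerolling, as described above. */
      g_object_set (multiqueue, "max-size-buffers", 5, NULL);
    }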
Partly reverts commit db771185ed.
https://bugzilla.gnome.org/show_bug.cgi?id=740689
Decodebin has already added the element to the bin and should only
select caps-compatible pads. It should disable the pad link checks
to avoid doing those again.
https://bugzilla.gnome.org/show_bug.cgi?id=742885
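One way to express that with core API (hedged: decodebin's actual code path may differ):

    #include <gst/gst.h>

    /* Link already-selected pads without re-running the compatibility checks. */
    static gboolean
    link_pads_without_checks (GstPad * srcpad, GstPad * sinkpad)
    {
      return gst_pad_link_full (srcpad, sinkpad,
          GST_PAD_LINK_CHECK_NOTHING) == GST_PAD_LINK_OK;
    }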
Create a function to do the pad cleanup of the GstSourceCombine struct
and use it so we do not forget to also clean up the sink pad, fixing a
memory leak.
https://bugzilla.gnome.org/show_bug.cgi?id=741198
In some cases, the user might want the stream output by encodebin to
stay in the exact same format for the whole stream. We should let the
user specify when this is the case. This commit adds some API to
GstEncodingProfile to determine whether the format can be renegotiated
after the encoding started or not.
API:
gst_encoding_profile_set_allow_dynamic_output
gst_encoding_profile_get_allow_dynamic_output
https://bugzilla.gnome.org/show_bug.cgi?id=740214
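Usage sketch of the new API, pinning the output format for the whole stream:

    #include <gst/pbutils/encoding-profile.h>

    static void
    pin_output_format (GstEncodingProfile * profile)
    {
      /* Disallow renegotiation of the format once encoding has started. */
      gst_encoding_profile_set_allow_dynamic_output (profile, FALSE);

      g_assert (!gst_encoding_profile_get_allow_dynamic_output (profile));
    }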
Before, we were setting them to PAUSED and (much) later connecting to
their source pad's caps notify signal.
There was a race where the demuxer pushed caps and later a buffer
on its source pad while we were not yet connected to that caps notify
signal, so decodebin missed the information and did not keep building
the pipeline on the CAPS event; the demuxer then posted an ERROR
(not linked) message on the bus. This needs to be done for 'simple
demuxers' because those have one ALWAYS source pad, unlike usual demuxers
that have several dynamic source pads.
A "simple demuxer" is a demuxer that has one and only one ALWAYS source
pad.
https://bugzilla.gnome.org/show_bug.cgi?id=740693
There was a race where:
1) we would put the element to PAUSED
2) It would get data sent to it from upstream
3) It would thus send caps
4) caps_notify_cb would continue autoplugging
5) caps would flow downstream, the last pad would get exposed
6) we were still not done sending the sticky events
Taking the stream lock on the new element's sinkpad and only
releasing it when sticky events have all been sent prevents
the caps from reaching the source pad of the element before
we're all set.
https://bugzilla.gnome.org/show_bug.cgi?id=740694
Otherwise the following can happen:
1. set mute=true
2. play media1 (Ok)
3. play media without audio (audiochain removed)
4. play media2 (audiochain created, mute=*false*)
https://bugzilla.gnome.org/show_bug.cgi?id=740675
There's no reason why we would have to wait for the next buffer to decide
whether to output the current one or not. We just have to check if the
current one is earlier than our expected next time, which is the previous
frame timestamp plus the expected frame duration.
https://bugzilla.gnome.org/show_bug.cgi?id=740018
If there are two parser elements available for the same media format,
then decodebin autoplugs an extra capsfilter and parser irrespective
of caps and rank. So restrict decodebin from autoplugging multiple parser
elements back to back in adjacent positions within a single DecodeChain
for the same media format.
https://bugzilla.gnome.org/show_bug.cgi?id=738416
timestamp_offset is declared as an int64 variable, for which the minimum
value is G_MININT64 (-9223372036854775808).
Change the minimum and maximum limits of the offset property accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738568
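A sketch of the widened property registration (the property name, blurb and default are illustrative):

    #include <glib-object.h>

    static GParamSpec *
    make_timestamp_offset_pspec (void)
    {
      /* Cover the full gint64 range instead of the old, narrower limits. */
      return g_param_spec_int64 ("timestamp-offset", "Timestamp offset",
          "Offset added to timestamps (nanoseconds)",
          G_MININT64, G_MAXINT64, 0,
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);
    }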
The "iradio-mode" property used to have a default FALSE value in HTTP
source elements but now it should default to TRUE or just do not exist
as a property so it is not really needed to set it any more in
uridecodebin.
Apart from that this code could've never worked as uridecodebin looks for a
string-typed iradio-mode property, but it's a boolean in all sources.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=725383
It is better to use storel to splat the variable into the destination.
ORC doesn't know when a variable is last written to so it can't yet optimize
away the copy operation.
Support the Lanczos scaling method for the NV12 and NV21 formats.
Scale the 'Y' plane and the 'NV' plane.
Implement the submethods: int16, int32, float and double.
https://bugzilla.gnome.org/show_bug.cgi?id=737400
Move the conversion code used in videoconvert to the video library
and expose a simple but generic API to do arbitrary conversion. It can
currently do colorspace conversion but the plan is to add videoscale to
it as well.
See https://bugzilla.gnome.org/show_bug.cgi?id=732415
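Usage sketch of the resulting converter API (frames are assumed to be mapped already; error handling trimmed):

    #include <gst/video/video.h>

    static void
    convert_frame_example (GstVideoInfo * in_info, GstVideoInfo * out_info,
        GstVideoFrame * in_frame, GstVideoFrame * out_frame)
    {
      GstVideoConverter *convert;

      convert = gst_video_converter_new (in_info, out_info, NULL);
      gst_video_converter_frame (convert, in_frame, out_frame);
      gst_video_converter_free (convert);
    }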
audioresample and videoscale are something the application will have to add
itself if required, but we can at least help here by adding the
audioconvert/videoconvert elements.
https://bugzilla.gnome.org/show_bug.cgi?id=735748
When switching URI from about-to-finish, playbin starts decoding the new
URI and the queue2 inside uridecodebin starts emitting buffering messages
immediately. However, the queue(s) inside playsink still have buffers to
play and the pipeline doesn't need to pause for buffering, so we should
not send those buffering messages up to the application, otherwise there
is an audible glitch caused by pausing the pipeline for a very short time.
https://bugzilla.gnome.org/show_bug.cgi?id=727255
When downsampling, the output buffer can be filled before all the input
samples are consumed. This is correct: when downsampling, several input
samples are needed for each output sample, so when only a small number of
input samples are available the number of output samples produced can be 0.
The resampler, however, was discarding those extra input samples instead of
clocking them into its filter history for the next iteration. This patch
fixes this by removing the check that the output buffer is full. The code
now always loops until all input samples are consumed, and relies on the
calling code to have provided a suitably sized location for the output.
Note that there are already other checks in place in the calling code to
ensure that this is the case.
https://bugzilla.gnome.org/show_bug.cgi?id=732908
If we had plugins and an error occurred, we only include the error message
caused by it; otherwise we include the codec description as generated
from the caps.
This allows detecting which exact codec was missing instead of getting a
generic "no suitable decoders found" error message.
Otherwise we might change some capsfeatures from ANY to the specific
value from the filter and not filter those out in case the
sink doesn't support them.
https://bugzilla.gnome.org/show_bug.cgi?id=734822
Gracefully handle switching groups in which all pads are dead ends.
This can happen when quickly switching programs on mpegts: as the
output is unaligned, it can happen that not enough data was accumulated at
the parsers to generate any buffers, causing the stream to receive EOS
before any data can be decoded.
To handle this scenario, the _expose function now also gets told whether
there is any next group to be exposed, along with the list of endpads. If
there are no endpads and there is another group to expose it will switch to
this next group and then retry exposing the streams.
Also, the requirement to only switch from the chain that has the endpad had
to be modified to account for the case where the drainpad is NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=733169
Set up a fakesink with a pad probe to replace the missing encoder, to detect
whether encoding was really required, and only error out in that case.
Otherwise just let the passthrough branch work.
This delays the error posting from the set_state function to when buffers
are really flowing. The unit test is updated accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=650652
Unsetting the DISCONT flag means we need to copy the buffer. This copy
operation mandates that all GstMemory should be copyable, which is not
always the case.
https://bugzilla.gnome.org/show_bug.cgi?id=727409
We now add all our elements to uridecodebin *after*
GstBin::change_state(READY->PAUSED), so we need to post async-start
and async-done messages ourselves if we want to work async.
https://bugzilla.gnome.org/show_bug.cgi?id=733495
We now add all our elements to uridecodebin *after*
GstBin::change_state(READY->PAUSED), so we need to post async-start
and async-done messages ourselves if we want to work async.
https://bugzilla.gnome.org/show_bug.cgi?id=733495
otherwise we're going to
a) start Parser/Converter before they are linked to their capsfilter,
breaking their negotiation of a proper stream format
b) start demuxers without having connected to their pad-added signals. We
miss pads and in the worst case don't link any pads at all
... and if this fails for whatever reason we skip the element and instead
try with the next element. This allows us to handle elements that fail
when setting caps on them by just skipping to the next alternative element.
They might fail to go to PAUSED, and when connecting them further
we might already expose their srcpads on decodebin if we're unlucky.
This prevents us from handling failures going to PAUSED gracefully.
If the caps query returned fixed caps, this doesn't mean yet
that these caps are actually complete (fields might be missing).
It allows us to make some decisions, but the selection of the next
element should be delayed, as only complete caps allow proper selection
of the next element.
Otherwise we might try to continue autoplugging e.g. for a specific
stream-format although the parser could convert to something else, thus giving
us potentially less options for decoders.
We can't convert to ANY capsfeatures. They are only there so that we
can pass through whatever downstream can support... but we definitely
don't want to return them to upstream.
Canceling the accept/select happens when the source is shut down. This is
not an error and the GST_FLOW_ERROR causes problems when only part of the
pipeline is shut down.
https://bugzilla.gnome.org/show_bug.cgi?id=731567
When playing RTSP streams there will be one decodebin per stream. If some of
them fail because of a missing plugin we should not fail completely but play
the supported streams at least.
https://bugzilla.gnome.org/show_bug.cgi?id=730868
Aggregate buffering messages to only post the lowest value,
to avoid setting the pipeline to PLAYING while any multiqueue
is still buffering.
There are 3 scenarios where the entries should be removed from
the list:
1) When decodebin is set to READY
2) When an element posts a 100% buffering (already implemented)
3) When a multiqueue is removed from decodebin.
We don't need to handle item 3 because it should only
happen when either 1 is happening or when a chained file is being
played, in which case number 2 should have happened for the
previous stream to finish.
https://bugzilla.gnome.org/show_bug.cgi?id=726423
Otherwise we might end up inside the callback without having stored
the probe id... then try to remove that probe (not!) from the callback
and wait forever for the pad to unblock.
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
It was already checked in an early out, and as it's only
incremented for at most the size of the passed buffer, it
can only become NULL in an address wraparound.
While there, don't cast away const on a pointer.
Coverity 1139845
This provides an audio-filter and video-filter property to allow
applications to set filter elements/bins. The idea is that these will be
applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink -
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
This provides an audio-filter and video-filter property to allow
applications to set filter elements/bins. The idea is that these will be
applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink -
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio with 700ms
long buffers. The video branch of multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds large, and having a maximum
of 5 buffers in the multiqueue there can use a lot of memory. Lower
this to 2 for adaptive streaming demuxers.
The typefinder returns LIKELY for as little as one possible
sync and no bad sync (not even taking into account how much
data was looked at for that). It's generally just not fit
for purpose, so should just not return anything like LIKELY
at all ever, even more so since it only recognises one out
of ten H263 files, and likes to mis-detect mp3s as H263.
https://bugzilla.gnome.org/show_bug.cgi?id=700770
https://bugzilla.gnome.org/show_bug.cgi?id=725644
If we have the peer caps and a caps filter, return peer_caps +
intersect_first (filter, converter_caps) instead of
intersect_first (filter, peer_caps + converter_caps), preserving
downstream's caps preference order.
https://bugzilla.gnome.org/show_bug.cgi?id=724893
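A sketch of that ordering with the caps API (illustrative; ownership follows the usual transfer rules):

    #include <gst/gst.h>

    static GstCaps *
    build_result_caps (GstCaps * peer_caps, GstCaps * filter,
        GstCaps * converter_caps)
    {
      GstCaps *filtered;

      /* peer_caps + intersect_first (filter, converter_caps) */
      filtered = gst_caps_intersect_full (filter, converter_caps,
          GST_CAPS_INTERSECT_FIRST);
      return gst_caps_merge (gst_caps_ref (peer_caps), filtered);
    }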
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we are putting another multiqueue after the *parser* following
the adaptive stream demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues... confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxer we insert a multiqueue after
this demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers,
we'll insert the multiqueue right after the next element following
the adaptive streaming demuxer. This is going to be a parser or
decoder, and will output smaller buffers.
Adaptive streams should download their data inside the demuxer, so
we want to use multiqueue's buffering messages to control the
pipeline flow and avoid losing sync if download rates are low.
https://bugzilla.gnome.org/show_bug.cgi?id=707636
Otherwise there's an interesting race condition when we destroy
the inputselector (actually it will be destroyed later when its state
change message gets destroyed) and afterwards release its sinkpad.
This is the code path when the last channel is removed from the
input selector.
Gave this warning sometimes, for chained oggs or whenever else
we change decode groups:
GStreamer-CRITICAL **: Padname '':sink_0 does not belong to element inputselector0 when removing
The MONO and NONE positions are the same, for example, but in
general there isn't much to do here for such a conversion.
This fixes a problem in audioconvert, which would end up using
a mixmatrix when converting between different mono formats
because it thinks MONO positioning is different from
unpositioned channels, which is not the case in this
special case. The mixmatrix would end up being 0.0, so
audioconvert would convert to silence samples.
https://bugzilla.gnome.org/show_bug.cgi?id=724509
If the text pad does not go away we just set the overlay to silent, which
allows us to immediately re-enable subs later again. However, before this
change we also released the streamsynchronizer text pads, which deadlocked
because there was still dataflow going on. Just do this only if we remove
the complete chain.
https://bugzilla.gnome.org/show_bug.cgi?id=683504
Change the way autoplug-select is accumulated so that it's possible to have
multiple handlers. The handlers keep getting called as long as they keep
returning GST_AUTOPLUG_SELECT_TRY.
One practical example of when this is needed is when hooking into playbin's
uridecodebin, which is perhaps not very elegant but the only way to influence
which streams playbin autoplugs/exposes.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723096
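A sketch of a second handler that keeps the chain going (the enum is mirrored locally since decodebin's header is not installed; the factory-name check is purely illustrative):

    #include <gst/gst.h>

    typedef enum {
      GST_AUTOPLUG_SELECT_TRY,
      GST_AUTOPLUG_SELECT_EXPOSE,
      GST_AUTOPLUG_SELECT_SKIP
    } GstAutoplugSelectResult;

    static GstAutoplugSelectResult
    on_autoplug_select (GstElement * bin, GstPad * pad, GstCaps * caps,
        GstElementFactory * factory, gpointer user_data)
    {
      /* Skip factories we do not want; returning TRY lets the next connected
       * handler run, which is what the changed accumulation enables. */
      if (g_str_has_prefix (GST_OBJECT_NAME (factory), "avdec_"))
        return GST_AUTOPLUG_SELECT_SKIP;
      return GST_AUTOPLUG_SELECT_TRY;
    }

    /* g_signal_connect (decodebin, "autoplug-select",
     *     G_CALLBACK (on_autoplug_select), NULL); */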
Discussion on IRC indicated that the main reason for this list was to
prevent demuxers that can trigger a lot of seeking from using
progressive buffering using queue2 (which due to being seekable triggers
that behaviour).
However given that upstream can indicate seeks are possible but should
be avoided via a scheduling query, this extra whitelisting shouldn't be
necessary for well-behaved demuxers.
https://bugzilla.gnome.org/show_bug.cgi?id=704933
Make a little table of conversions and manually score them. Use this
info to define better weights for the scoring algorithm.
Give separate scores for making a change and for the impact of the change.
This allows us to avoid conversions when we can but still allow fairly
lossless changes.
The old code did not penalize GRAY conversions, penalized PAL conversions
too little, and penalized depth conversions too much.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=722656
Don't try to interpolate the chroma samples, the used algorithm only
works for horizontal cositing. Let's switch to a faster and safer
version until we handle chroma siting correctly in the fastpaths.
Rework the orc code to be around 10% faster and support arbitrary matrices.
Pass the matrix parameters to the YUV->RGB functions to make them work
for all matrices. This enables more and faster fastpath conversions.
See https://bugzilla.gnome.org/show_bug.cgi?id=721701
This fast-path was adding 128 to every component including
alpha while it should only be done for all components except
alpha. This caused wrong alpha values to be generated.
Also remove the high-quality I420 to BGRA fast-path as it needs
the same fix, which causes an additional instruction, which causes
orc to emit more than 96 variables, which then just crashes.
This can only be fixed in orc by breaking ABI and allowing more
variables.
If a pipeline fails to preroll, it might happen that the sinks are
put into READY state from playbin's sink activation, but they are never
set to playsink, so they aren't being managed by a GstBin and will keep
their READY state until they are unreffed, leading to a warning.
Prevent this by always forcing them to NULL when deactivating a group
https://bugzilla.gnome.org/show_bug.cgi?id=708789
Fix component ordering; it's wrong in both the scanline and merge
function so the errors cancel each other out, and it isn't really a problem
except for a loss of precision in the green component.
Fix calculation of the filter weight
Some of the fastpath functions can only work with aligned width/height,
so make sure we check this as well when choosing a fastpath.
Add fastpath for I420/YV12 -> BGRx
This commit adds detection of the "dash" and "avc3" compatible brands
in qt_type_find.
Amendment 2 of ISO/IEC 14496-15 (AVC file format) is defining a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV
box (moov.trak.mdia.minf.stbl.stsd.avc1) but in AVC3 this data goes in
the first sample of every fragment (i.e. the first sample in each mdat
box). The principal reason for avc3 is to make it easier for client
implementations, because it removes the requirement to insert the
SPS+PPS into the decoder pipeline every time there is a representation
change.
https://bugzilla.gnome.org/show_bug.cgi?id=702004