We reset the group start time to the running time of the start of the other
streams that are not flushed. This fixes seeking in gapless mode after the
first track has played.
https://bugzilla.gnome.org/show_bug.cgi?id=750013
If suburidecodebin fails to negotiate (e.g. the file does not exist),
then free the internal suburi variable so that the 'current-suburi'
property returns the correct status.
https://bugzilla.gnome.org/show_bug.cgi?id=751118
6a2f017bfa changed it to check the subtitle
factory caps if there is a text-sink but we fail to get its sinkpad. What
should actually be done here is to use the factory caps if there is no
text-sink at all.
https://bugzilla.gnome.org/show_bug.cgi?id=750785
There is the GstVideoMultiviewMode enum and the
GstVideoMultiviewFramePacking, which is a subset of the
multiview modes, with the same values as the corresponding
types from the full enum. Do some casts and use the right
types to avoid implicitly using/passing GstVideoMultiviewFramePacking
when a GstVideoMultiviewMode is needed.
Add GstVideoMultiviewFramePacking enum, and the
video-multiview-mode and video-multiview-flags
properties on playbin.
Use a pad probe to replace the multiview information in
video caps sent out from uridecodebin.
This is a partial implementation only - for full
correctness, it should also modify caps in caps events,
accept-caps and allocation queries.
https://bugzilla.gnome.org/show_bug.cgi?id=611157
When traversing the color balance element channel list to find the one that
matches with the playsink proxy, the assignment was made to the iterator of the
playsink proxy, not the one of the balance element. Thus, the mapping to the values of
the balance element channel was wrong.
This patch fixes the assignment of the color balance element channel, so the
mapping to the channel of the color balance element is fixed.
https://bugzilla.gnome.org/show_bug.cgi?id=750691
When the text chain is not enabled at the beginning,
video_srcpad_stream_synchronizer gets linked to videochain->sinkpad. When we
then try to enable the text bin during playback, the text chain does not get
linked properly because the pad is already linked to the video chain. Hence
unlink it first before linking it to the text chain.
https://bugzilla.gnome.org/show_bug.cgi?id=748908
So that the user can easily use the same encoding profile to render
with/without audio/video stream.
API:
gst_encoding_profile_is_disabled
gst_encoding_profile_set_enabled
https://bugzilla.gnome.org/show_bug.cgi?id=749056
When a stream has a variable framerate, videorate calculates it and
forces it on the output caps. However, the code in _transform_caps()
currently also does that if the transform is going in the opposite
direction (GST_PAD_SRC), so during a renegotiation it tries to force
upstream to use the calculated framerate and it fails.
https://bugzilla.gnome.org/show_bug.cgi?id=750032
This part of the pipeline is:
tee name=t ! visualizationbin ! streamsynchronizer name=s
t. ! s.
streamsynchronizer might block and it could starve the visualization
branch of the pipeline when it is enabled.
The visualization bin has queues internally, but the other branch
that links the audiotee directly to the synchronizer is vulnerable
to blocking. Adding a queue between "t. ! s." fixes the deadlocks.
https://bugzilla.gnome.org/show_bug.cgi?id=749676
From the API documentation: "Note that it is generally not
a good idea to reuse an existing cancellable for more
operations after it has been cancelled once, as this
function might tempt you to do. The recommended practice
is to drop the reference to a cancellable after cancelling
it, and let it die with the outstanding async operations.
You should create a fresh cancellable for further async
operations."
https://bugzilla.gnome.org/show_bug.cgi?id=739132
Rather than one of the input pads' video info.
The test checking this was not constraining the output frame size
to ensure that the out of frame stream was not being displayed.
GST_VIDEO_CONVERTER_OPT_ALPHA_MODE, GST_VIDEO_CONVERTER_OPT_CHROMA_MODE,
GST_VIDEO_CONVERTER_OPT_MATRIX_MODE, GST_VIDEO_CONVERTER_OPT_GAMMA_MODE and
GST_VIDEO_CONVERTER_OPT_PRIMARIES_MODE were G_TYPE_STRING with only a few valid
options. Changed those to real enums.
https://bugzilla.gnome.org/show_bug.cgi?id=749104
Upstream might want to use it to properly map timestamps to running/stream
times; if we just override it with 0, synchronization will be just wrong.
For this we remove some old 0.10 code related to segment accumulation,
remove some more code that is useless now, and accumulate the group start time
(aka segment.base offset) manually now.
https://bugzilla.gnome.org/show_bug.cgi?id=635701
It's a waste of resources to map it if it won't be converted
or used at all. Since we moved the frame mapping down, we need
to use the GST_VIDEO_INFO accessor macros now in the code above
that instead of the GST_VIDEO_FRAME accessor macros.
https://bugzilla.gnome.org/show_bug.cgi?id=746147
For each frame, compare the frame boundaries, check if the format contains an
alpha channel, check opacity, and skip the frame if it's going to be completely
overwritten by a higher zorder frame. The check is O(n^2), but that doesn't
matter here because the number of sinkpads is small.
More can be done to avoid needless drawing, but this covers the majority of
cases. See TODOs. Ideally, a reverse painter's algorithm should be used for
optimal drawing, but memcpy during compositing is small compared to the CPU used
for frame conversion on each pad.
https://bugzilla.gnome.org/show_bug.cgi?id=746147
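Roughly, the per-frame check amounts to something like the sketch below
(made-up helper name and simplified parameters, not the actual compositor
code; the real code also walks all higher-zorder pads and handles pad
positioning/opacity):

  #include <gst/video/video.h>

  /* TRUE if the frame at (x, y, w, h) would be completely overwritten
   * by an opaque higher-zorder frame at (ox, oy, ow, oh). */
  static gboolean
  frame_is_obscured (gint x, gint y, gint w, gint h,
      gint ox, gint oy, gint ow, gint oh,
      const GstVideoInfo * cover_info, gdouble cover_alpha)
  {
    /* a covering frame with an alpha channel or reduced opacity can
     * still let the lower frame show through */
    if (GST_VIDEO_INFO_HAS_ALPHA (cover_info) || cover_alpha < 1.0)
      return FALSE;

    /* covered only if the higher-zorder rectangle encloses ours */
    return ox <= x && oy <= y && ox + ow >= x + w && oy + oh >= y + h;
  }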
Embedded systems often have limited charset conversion
functionality, so don't rely on g_convert() (i.e. iconv)
for UTF-16 to UTF-8 conversions, we can easily enough do
that ourselves by converting to native endianness and
then using GLib's helper functions.
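A rough sketch of that approach (not the actual typefind/tag helper code;
'swap' is assumed to have been determined from the BOM, error handling
omitted):

  #include <glib.h>
  #include <string.h>

  /* Convert UTF-16 data to UTF-8 without iconv: swap to native
   * endianness first, then let GLib do the UTF-16 -> UTF-8 step. */
  static gchar *
  utf16_buf_to_utf8 (const guint8 * data, gsize len, gboolean swap)
  {
    gunichar2 *utf16 = g_malloc (len);
    gsize i, n = len / 2;
    gchar *utf8;

    memcpy (utf16, data, n * 2);
    if (swap) {
      for (i = 0; i < n; i++)
        utf16[i] = GUINT16_SWAP_LE_BE (utf16[i]);
    }
    utf8 = g_utf16_to_utf8 (utf16, (glong) n, NULL, NULL, NULL);
    g_free (utf16);
    return utf8;
  }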
This is a fixup for b2db18cda2
audioconvert: avoid float calculations when mixing integer-formatted channels
The int matrix was using gint and gint32 synonymously, which can theoretically
cause problems if gint and gint32 are actually different types.
https://bugzilla.gnome.org/show_bug.cgi?id=747005
Try harder to look for gvfs backend changes in the right
place, to make sure the plugin gets reloaded when backends
are removed or installed. We watch the gvfs mounts directory
because the files there contain absolute paths to the
backend executables, and those may not be in the usual gio
path.
https://bugzilla.gnome.org/show_bug.cgi?id=747841
We don't expect clients to send us any data, but if they do, just
ignore it. Web browsers might send us an HTTP request for example,
but some will still be happy if we just send them data without
a proper HTTP response.
There was a bug in the reading code path. We only have a small
read buffer and would provoke an EWOULDBLOCK trying to read
because we don't bail out of the loop early enough.
https://bugzilla.gnome.org/show_bug.cgi?id=743834
When shutting down the chain, we can get a deadlock when removing
a pad, if that chain was busy streaming but blocked (e.g. while
waiting for a queue to have free space).
https://bugzilla.gnome.org/show_bug.cgi?id=746480
In case upstream does not provide videorate with framerate information,
it will detect the current framerate from the buffers it receives,
but if downstream forces the use of a variable framerate (most probably
through the use of a caps filter with framerate = 0 / 1), videorate will
respect that.
Also add some unit tests.
https://bugzilla.gnome.org/show_bug.cgi?id=734424
In case the framerate is variable (represented by framerate=0/1),
we currently end up loop-pushing the first buffer and then recomputing
diff1 and diff2 without updating videorate->next_ts at all,
leading to an infinite loop pushing that first buffer.
In the case of a variable framerate, we should just compute the next_ts
as previous_pts + previous_duration.
https://bugzilla.gnome.org/show_bug.cgi?id=734424
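In code terms the fix boils down to something like this (field names are
illustrative, not the exact videorate ones):

  /* variable framerate: there is no constant frame period, so derive
   * the next expected timestamp from the previous buffer instead of
   * looping on a fixed rate */
  if (rate_numerator == 0 && GST_CLOCK_TIME_IS_VALID (prev_duration))
    next_ts = prev_pts + prev_duration;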
The patch calculates a second channel mixing matrix from the current one. The
matrix contains the original values * (2^10) as integers. This matrix is used
when integer-formatted channels are mixed.
On an ARM Cortex-A8, single core, 800MHz this improves performance in a
testcase from 29s to 9s for downmixing 6 channels to stereo.
https://bugzilla.gnome.org/show_bug.cgi?id=747005
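The scheme roughly looks like the following sketch (made-up helper names,
not the actual audioconvert code): coefficients are pre-scaled by 2^10,
samples are accumulated in 32 bits and the scale is shifted back out.

  #include <glib.h>

  #define INT_MATRIX_PRECISION_BITS 10

  /* Build the integer matrix from the float one */
  static void
  make_int_matrix (gfloat ** fmatrix, gint ** imatrix, gint in_ch, gint out_ch)
  {
    gint i, j;

    for (i = 0; i < in_ch; i++)
      for (j = 0; j < out_ch; j++)
        imatrix[i][j] = (gint) (fmatrix[i][j] * (1 << INT_MATRIX_PRECISION_BITS));
  }

  /* Mix one S16 output sample from the input channels, integers only */
  static gint16
  mix_one_sample_s16 (const gint16 * in, gint ** imatrix, gint in_ch, gint out_j)
  {
    gint32 acc = 0;
    gint i;

    for (i = 0; i < in_ch; i++)
      acc += in[i] * imatrix[i][out_j];
    acc >>= INT_MATRIX_PRECISION_BITS;
    return (gint16) CLAMP (acc, G_MININT16, G_MAXINT16);
  }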
If a new pad is added after playbin has been put to READY/NULL, it
should be ignored as playbin is shutting down.
This can happen when the pipeline fails to preroll (is still in READY)
and the user gives up on waiting, or when an error that doesn't reach
the demuxer occurs (on some event handling) and it continues
working and exposing pads while playbin has been set to NULL.
Without this check an input-selector is created and set to PAUSED
state, preventing playbin from properly shutting down in case it
has data blocked inside it.
audio_convert_convert unpacks to the default format (signed) before calling
quantize, and the unsigned variants were equivalent to signed anyway,
so we just get rid of them.
Since range size is always 2^n, we can simply use modulo (implemented
with a bitmask).
The previous implementation used 64-bit integer division, which is
done in software on ARMv7. Although the divisor was constant, the
division could not be transformed into "multiplication by magic number"
since the dividend was 64-bit.
The now-unused and not-so-fast gst_fast_random_(u)int32_range functions
were removed.
Also, two bug fixes are implemented:
1) ADD_DITHER_TPDF_HF_I no longer discards bias.
2) We change TPDF's noise range to be the same as RPDF's. Previously,
RPDF's noise ranged:
{ bias - dither, bias + dither }
while TPDF's noise ranged:
{ bias/2 - dither/2, bias/2 + dither/2 - 1 } +
{ bias/2 - dither/2, bias/2 + dither/2 - 1 } =
{ bias - dither, bias + dither - 2 }
Now, both range:
{ bias - dither, bias + dither - 1 }
https://bugzilla.gnome.org/show_bug.cgi?id=746661
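A minimal sketch of the range trick (illustrative names; the real dither
code feeds this from its own PRNG state):

  #include <glib.h>

  /* Map raw random bits onto [0, range), where range is 2^n: since
   * range - 1 is an all-ones mask, x % range is just x & (range - 1),
   * avoiding a 64-bit division. */
  static inline guint64
  rand_in_pow2_range (guint64 bits, guint64 range)
  {
    return bits & (range - 1);
  }

  /* e.g. RPDF noise in { bias - dither, bias + dither - 1 }, assuming
   * dither is itself a power of two:
   *   noise = bias - dither + rand_in_pow2_range (bits, 2 * dither);
   */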
This fixes a race where the use-buffering property on a multiqueue was
set before the queue depth was changed from its high preroll limits to
lower playback limits. This resulted in buffering messages being emitted
by the multiqueue in the short window between use-buffering being
set and the queue depth being reset.
https://bugzilla.gnome.org/show_bug.cgi?id=744308
Also:
- Don't modify size on early buffer
  The size is the size of the buffer, not of the remaining part.
- Use the input caps when manipulating the input buffer
  Also store it in the sink pad
- Reply to the position query in bytes too
- Put GAP flag on output if all inputs are GAP data
- Only try to clip buffer if the incoming segment is in time or samples
- Use incoming segment with incoming timestamp
  Handle non-time segments and NONE timestamps
- Don't reset the position when pushing out new caps
- Make a number of member variables private
- Correctly handle the case where no pad has a buffer
  If none of the pads have buffers that can be handled, don't claim to be EOS.
- Ensure proper locking
- Only support time segments
https://bugzilla.gnome.org/show_bug.cgi?id=740236
When the timeout is reached, only ignore pads with no buffers, iterate
over the other pads until all buffers have been read. This is important
in the cases where the input buffers are smaller than the output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Correctly calculate alpha in a few places by dividing by 255,
not 256.
Fix the argb and bgra blending functions to avoid an off-by-one
error in the calculations, so painting with alpha = 0xff doesn't
ever bleed through from behind.
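For illustration, a common exact way to normalize by 255 when blending
8-bit components (a sketch, not the actual video blending code):

  #include <glib.h>

  /* Exact rounding division by 255 for x in [0, 255 * 255] */
  static inline guint8
  div_by_255 (guint32 x)
  {
    x += 0x80;
    return (guint8) ((x + (x >> 8)) >> 8);
  }

  /* dst' = (src * alpha + dst * (255 - alpha)) / 255; with alpha = 0xff
   * this yields exactly src, so nothing bleeds through from behind. */
  static inline guint8
  blend_component (guint8 src, guint8 dst, guint8 alpha)
  {
    return div_by_255 ((guint32) src * alpha + (guint32) dst * (255 - alpha));
  }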
The variables could have changed when the lock was released
to push a gap event. Streamsynchronizer needs to check them
again before going to sleep.
Bonus: fix a comment typo
multisocketsink now understands the new GstNetControlMessageMeta to allow
sending control messages (ancillary data) with data when writing to Unix
domain sockets.
Thanks to glib's `GSocketControlMessage` abstraction the code introduced
in this commit is entirely portable and doesn't introduce any additional
dependencies or conditionally compiled code, even if it is unlikely to be
of much use on non-UNIX systems.
multisocketsink now understands the new GstNetControlMessageMeta to allow
sending control messages (ancillary data) with data when writing to Unix
domain sockets.
A later commit will introduce a new socketsrc element which will similarly
understand `GstNetControlMessageMeta`. This, when used with a
`GSocketControlMessage` of type `GUnixFDMessage` will allow GStreamer to
send and receive file descriptors in ancillary data, the first step to
using memfds to implement zero-copy video IPC.
Thanks to glib's `GSocketControlMessage` abstraction the code introduced
in this commit is entirely portable and doesn't introduce any additional
dependencies or conditionally compiled code, even if it is unlikely to be
of much use on non-UNIX systems.
This provides notification that the socket in use was closed by the peer
and gives an opportunity to replace it with a new one which is not
closed, allowing reading from many sockets in order.
I use this in pulsevideo to implement reconnection logic to handle the
pulsevideo service dying, such that it can be restarted without
disrupting downstream.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=739546
* Don't bother polling, just do a blocking read, the `GCancellable` will
take care of unlocking. This should also be faster on MS Windows where
the GIO documentation for `g_socket_get_available_bytes` states: "Note
that on Windows, this function is rather inefficient in the UDP case".
* Implement `GstPushSrc.fill` rather than `GstPushSrc.create`. This means
that we will be using the downstream allocator which may be more
efficient. It also means that socketsrc is likely to respect its
"blocksize" property (assuming that there is enough data available).
See https://bugzilla.gnome.org/show_bug.cgi?id=739546
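Roughly, the fill approach looks like this (a hedged sketch with
simplified error/EOS handling; the type/field names such as src->socket
and src->cancellable are illustrative, not the verbatim socketsrc code):

  static GstFlowReturn
  gst_socket_src_fill (GstPushSrc * psrc, GstBuffer * buf)
  {
    GstSocketSrc *src = GST_SOCKET_SRC (psrc);
    GstMapInfo map;
    gssize rret;

    gst_buffer_map (buf, &map, GST_MAP_WRITE);
    /* blocking read into the downstream-allocated buffer; cancelling
     * src->cancellable on shutdown unblocks us, no polling needed */
    rret = g_socket_receive_with_blocking (src->socket, (gchar *) map.data,
        map.size, TRUE, src->cancellable, NULL);
    gst_buffer_unmap (buf, &map);

    if (rret <= 0)
      return GST_FLOW_EOS;      /* real code distinguishes error/cancel/EOS */

    gst_buffer_resize (buf, 0, rret);
    return GST_FLOW_OK;
  }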
`socketsrc` can be considered a source counterpart to `multisocketsink`.
It can be considered a generalization of `tcpclientsrc` and
`tcpserversrc`: it contains all the logic required to communicate over
the socket but none of the logic for creating the sockets/establishing
the connection in the first place, allowing the user to accomplish this
externally in whatever manner they wish, making it applicable to other
types of sockets besides TCP.
This commit essentially copies the implementation directly from
tcpserversrc. Later patches will tidy the implementation up and
re-implement `tcpclientsrc` and `tcpserversrc` in terms of `socketsrc`.
See https://bugzilla.gnome.org/show_bug.cgi?id=739546
If a buffer is made up of non-contiguous `GstMemory`s `gst_buffer_map`
has to copy all the data into a new `GstMemory` which is contiguous. By
mapping all the `GstMemory`s individually and then using scatter-gather
IO we avoid this situation.
This is a preparatory step for adding support to multisocketsink for
sending file descriptors, where a GstBuffer may be made up of several
`GstMemory`s, some of which are backed by a memfd or file, but I think this
patch is valid and useful on its own.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=746150
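A sketch of the scatter-gather write (helper name and error handling are
illustrative; only the vector setup and the g_socket_send_message() call
are shown):

  static gssize
  send_buffer_without_merging (GSocket * socket, GstBuffer * buffer,
      GCancellable * cancellable, GError ** error)
  {
    guint i, n_mem = gst_buffer_n_memory (buffer);
    GOutputVector *vecs = g_newa (GOutputVector, n_mem);
    GstMapInfo *maps = g_newa (GstMapInfo, n_mem);
    gssize ret;

    /* map each GstMemory individually instead of forcing gst_buffer_map()
     * to merge them into one contiguous copy */
    for (i = 0; i < n_mem; i++) {
      GstMemory *mem = gst_buffer_peek_memory (buffer, i);

      gst_memory_map (mem, &maps[i], GST_MAP_READ);
      vecs[i].buffer = maps[i].data;
      vecs[i].size = maps[i].size;
    }

    ret = g_socket_send_message (socket, NULL, vecs, n_mem, NULL, 0,
        G_SOCKET_MSG_NONE, cancellable, error);

    for (i = 0; i < n_mem; i++)
      gst_memory_unmap (maps[i].memory, &maps[i]);

    return ret;
  }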
Actually accumulate the sample counter to check the accumulated error
between actual timestamps and expected ones instead of just resetting
the error back to 0 with every new buffer.
Also don't reset discont_time whenever we don't resync. The whole point of
discont_time is to remember when we first detected a discont until we actually
act on it a bit later if the discont stayed around for discont_wait time.
https://bugzilla.gnome.org/show_bug.cgi?id=746032
This allows us to handle new segment events correctly; either by dropping
buffers or inserting silence; for example if the offset is changed on an srcpad
connected to audiomixer.
This reverts commit d387cf67df.
The analysis was wrong: The first 20ms of latency are introduced by the source
already and put into the latency query, making it only necessary to cover the
additional 20ms of audiomixer inside audiomixer.
Let's assume a source that outputs 20ms buffers, and audiomixer having
a 20ms output buffer duration. However, the timestamps don't align perfectly:
the source buffers are offset by 5ms.
For our ASCII art picture, each letter is 5ms, each pipe is the start of a
20ms buffer. So what happens is the following:
0   20  40  60
OOOOOOOOOOOOOOOO
|   |   |   |
 5   25  45  65
 IIIIIIIIIIIIIIII
 |   |   |   |
This means that the second output buffer (20 to 40ms) only gets its last 5ms
at time 45ms (the timestamp of the next buffer is the time when the buffer
arrives). But if we only have a latency of 20ms, we would wait until 40ms
to generate the output buffer and miss the last 5ms of the input buffer.
When we modify a GList (via g_list_delete_link), always reassign the
new head to the original GList. Otherwise we end up with
filtered_errors being corrupt (the head might have been the element
removed)
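In other words (using the filtered_errors list mentioned above, with 'l'
standing for the link being removed):

  /* g_list_delete_link() may free the current head, so always store
   * its return value (the new head) back into the list variable: */
  filtered_errors = g_list_delete_link (filtered_errors, l);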
This function is static, and only ever called with the expose lock
taken. It thus has no reason to take this lock itself.
This was introduced by one of my locking fixes from 741355.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Check if dbin->decode_chain is NULL before running drain_and_switch_chains()
because if it is, we shouldn't run that function or it will segfault.
CID #1271074
Otherwise if there are multiple parsers we would most likely break negotiation
of the stream-format/alignment wanted by the decoders as parsers generally
support all possible stream-formats and alignments.
If caps on a newly added pad are NULL, analyze_new_pad will try to
acquire the chain lock to add a probe to the pad so the chain can
be built later. This comes from the streaming thread, in response
to headers or other buffers causing this pad to be added, so the
stream lock is taken.
Meanwhile, another thread might be destroying the chain from a
downward state change. This will cause the chain to be freed with
the chain lock taken, and some elements are set to NULL here, which
can include the parser. This causes pad deactivation, which tries
to take the element's pad's stream lock, deadlocking.
Fix this by keeping track of which elements need setting to NULL,
and only do this after the chain lock is released. Only the chain
manipulation needs to be locked, not the elements' state changes.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
There was a deadlock between a thread changing decodebin/demuxer
state from PAUSED to READY, and another thread pushing data
when starting.
From the stack trace at
https://bug741355.bugzilla-attachments.gnome.org/attachment.cgi?id=292471,
I deduce the following is happening, though I did not reproduce the
problem so I'm not sure this patch fixes it.
The streaming thread (thread 2 in that stack trace) takes the demuxer's
sink pad's stream lock in gst_ogg_demux_perform_seek_pull and will
activate a new chain. This ends up causing the expose lock being taken
in _pad_added_cb in decodebin.
Meanwhile, a state change is triggered on thread 1, which takes the
expose lock in decodebin in gst_decode_bin_change_state, then frees
the previous chain, which ends up calling gst_pad_stop_task on the
demuxer's task, which in turn takes the demuxer's sink pad's stream
lock, deadlocking as both threads are now waiting for each other.
https://bugzilla.gnome.org/show_bug.cgi?id=741355
Also improve the waiting condition for stream switches, which previously
assumed that the condition variable would only stop waiting once it is
signaled. But the documentation says that there might be spurious wakeups.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
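The standard way to be robust against such spurious wakeups is to re-check
the condition in a loop, roughly (variable names are illustrative):

  g_mutex_lock (&self->lock);
  while (self->stream_switching)      /* re-check: wakeups may be spurious */
    g_cond_wait (&self->cond, &self->lock);
  g_mutex_unlock (&self->lock);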
Change the GAP events that are currently sent from the chain function of
the current pad to all other EOS pads. They should instead be sent from
their own streaming threads.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
Wait in the event function when EOS is received until all pads are EOS
and then forward the EOS event from each pad's own event function.
Also send a new GAP event for EOS pads from the event function whenever
going from PLAYING->PAUSED by shortly waking up the GCond. This is needed
to allow sinks to pre-roll again, as they did not receive EOS yet because
we blocked that, but also will never get data again.
https://bugzilla.gnome.org/show_bug.cgi?id=736655
There's no reason why audiomixer should override the segment
base of upstream with whatever value it got from a SEEK event,
or even worse... with 0 if there was no SEEK event yet. This
broke synchronization if upstream provided a segment base other
than 0, e.g. when using pad offsets.
Also, the fact that this code did things conditional on the element's state
should've been a big warning already that something is just wrong.
If this breaks anything else now, let's fix it properly :)
Also don't do fancy segment position trickery when receiving a
segment event. It's just not correct.
In gst_video_scale_fixate_caps () it can goto done without freeing the memory
of the tmp GstStructure. This makes it go out of scope and leak.
CID #1265766
Instead of using the GST_OBJECT_LOCK we should have
a dedicated mutex for the pad, as it is also associated
with the EVENT_MUTEX on which we wait
in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks to simplify the code; what it protects
is exposed, but the lock was not.
This also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Ignore chroma subsampling and color matrix transformations like the
old videoscale used to do. This is to make the performance like it was
before.
See https://bugzilla.gnome.org/show_bug.cgi?id=741987
Speex may decide not to consume any samples because it can't write any. I've
seen a hang during draining caused by the resample loop never terminating.
In that case, resampling happened as normal until olen was 0 but ilen was
still 1. _process_native then reduced ichunk to 0, so ilen never decreased
below 1 and the loop never terminated.
Instead of reverting 684cf44 (audioresample: don't skip input samples),
break only if all output samples have been produced and speex refuses
to consume any more input samples.
https://bugzilla.gnome.org/show_bug.cgi?id=732908
VideoRate keeps 1 buffer in order to duplicate based on the closest buffer
relative to the targeted time. This extra buffer needs to be requested,
otherwise the pipeline may stall when a fixed-size buffer pool is used.
https://bugzilla.gnome.org/show_bug.cgi?id=738302
Consider pipeline: gst-launch-1.0 playbin uri=http://example.com/a.ogg
Consider a 128kbit audio stream.
As soon as uridecodebin detects the bitrate, it configures its input
queue2 max-size to 32000 bytes.
The 2MB buffer in multiqueue is nearly 2 orders of magnitude bigger.
This non-deterministically drives the queue2 buffer level anywhere from
100% to 0% until the multiqueue is filled.
This patch sets multiqueue size to 5 buffers early in no_more_pads_cb.
Partly reverts commit db771185ed.
https://bugzilla.gnome.org/show_bug.cgi?id=740689
Decodebin has already added the element to the bin and should only
select caps compatible pads. It should disable the pad link checks
to avoid doing those again.
https://bugzilla.gnome.org/show_bug.cgi?id=742885
This can happen if this is a live pipeline and no source produced any buffer
and sent no caps until an output buffer should've been produced according
to the latency.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
Create a function to do the pad cleanup of the GstSourceCombine struct
and use it to not forget to also cleanup the sink pad and fix a memory
leak.
https://bugzilla.gnome.org/show_bug.cgi?id=741198
In some cases, the user might want the stream output by encodebin to
stay in the exact same format for the whole stream. We should let the
user specify when this is the case. This commit adds some API in
GstEncodingProfile to determine whether the format can be renegotiated
after the encoding started or not.
API:
gst_encoding_profile_set_allow_dynamic_output
gst_encoding_profile_get_allow_dynamic_output
https://bugzilla.gnome.org/show_bug.cgi?id=740214
Before we were setting them to PAUSED and (much) later connecting to
their source pad caps notify signal.
There was a race where the demuxer was pushing caps and later a buffer
on its source pad while we were not even connected to its source pad caps notify
signal yet, leading to decodebin missing the information and not continuing
to build the pipeline on the CAPS event, and thus the demuxer was posting an ERROR
(not linked) message on the bus. This needs to be done for 'simple
demuxers' because those have one ALWAYS source pad, not like usual demuxers
that have several dynamic source pads.
A "simple demuxer" is a demuxer that has one and only one ALWAYS source
pad.
https://bugzilla.gnome.org/show_bug.cgi?id=740693
There was a race where:
1) we would put the element to PAUSED
2) It would get data sent to it from upstream
3) It would thus send caps
4) caps_notify_cb would continue autoplugging
5) caps would flow downstream, the last pad would get exposed
6) we were still not done sending the sticky events
Taking the stream lock on the new element's sinkpad and only
releasing it when sticky events have all been sent prevents
the caps from reaching the source pad of the element before
we're all set.
https://bugzilla.gnome.org/show_bug.cgi?id=740694
Otherwise the following can happen:
1. set mute=true
2. play media1 (Ok)
3. play media without audio (audiochain removed)
4. play media2 (audiochain created, mute=*false*)
https://bugzilla.gnome.org/show_bug.cgi?id=740675
There's no reason why we would have to wait for the next buffer to decide
whether to output the current one or not. We just have to check if the
current one is earlier than our expected next time, which is the previous
frame timestamp plus the expected frame duration.
https://bugzilla.gnome.org/show_bug.cgi?id=740018
If there are two parser elements available for the same media format,
then decodebin is autoplugging an extra capsfilter and parser irrespective
of caps and rank. So restrict the decodebin from autoplugging multiple parser
elements back to back in adjacent positions within a single DecodeChain
for the same media format.
https://bugzilla.gnome.org/show_bug.cgi?id=738416
timestamp_offset is declared as an int64 variable,
for which the min value of G_MININT64 is -9223372036854775808.
Change the minimum and maximum limits for the offset variable accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738568
The audiomixer blocksize can't be 0, hence adjust the minimum value to 1.
The timeout value of aggregator is defined with a max of MAXINT64,
but it cannot exceed G_MAXLONG * GST_SECOND - 1.
Hence change the max value accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738845
The "iradio-mode" property used to have a default FALSE value in HTTP
source elements but now it should default to TRUE or just do not exist
as a property so it is not really needed to set it any more in
uridecodebin.
Apart from that, this code could never have worked, as uridecodebin looks for a
string-typed iradio-mode property, but it's a boolean in all sources.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=725383
It is better to use storel to splat the variable into the destination.
ORC doesn't know when a variable is last written to so it can't yet optimize
away the copy operation.
Support the lanczos scaling method for NV12 and NV21 formats.
Scale the 'Y' plane and the 'NV' plane.
Implementation for submethods - int16, int32, float and double
https://bugzilla.gnome.org/show_bug.cgi?id=737400
Move the conversion code used in videoconvert to the video library
and expose a simple but generic API to do arbitrary conversion. It can
currently do colorspace conversion but the plan is to add videoscale to
it as well.
See https://bugzilla.gnome.org/show_bug.cgi?id=732415
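Basic usage of the resulting public API looks roughly like this (the
video infos and mapped frames are assumed to be set up by the caller; an
empty config structure means default options):

  GstVideoConverter *convert;

  convert = gst_video_converter_new (&in_info, &out_info,
      gst_structure_new_empty ("options"));
  gst_video_converter_frame (convert, &in_frame, &out_frame);
  gst_video_converter_free (convert);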
audioresample and videoscale is something the application will have to do if
required, but we can at least help here by adding the
audioconvert/videoconvert elements.
https://bugzilla.gnome.org/show_bug.cgi?id=735748
When switching URI from about-to-finish, playbin starts decoding the new
URI and the queue2 inside uridecodebin starts emitting buffering messages
immediately. However, the queue(s) inside playsink still have buffers to
play and the pipeline doesn't need to pause for buffering, so we should
not send those buffering messages up to the application, otherwise there
is an audible glitch caused by pausing the pipeline for a very short time.
https://bugzilla.gnome.org/show_bug.cgi?id=727255
When downsampling, the output buffer can be filled before all the input
samples are consumed. This is correct: when downsampling, several input
samples are needed for each output sample, so when only a small number of
input samples are available the number of output samples produced can be 0.
The resampler, however, was discarding those extra input samples instead of
clocking them into its filter history for the next iteration. This patch
fixes this by removing the check that the output buffer is full. The code
now always loops until all input samples are consumed, and relies on the
calling code to have provided a suitably sized location for the output.
Note that there are already other checks in place in the calling code to
ensure that this is the case.
https://bugzilla.gnome.org/show_bug.cgi?id=732908
If we had plugins and an error occurred, we only include the error message
caused by this; otherwise we will include the codec description as generated
from the caps.
This allows detecting which exact codec was missing instead of getting a
generic "no suitable decoders found" error message.
Otherwise we might change some capsfeatures from ANY to the specific
value from the filter and then not filter those out in case the
sink doesn't support them.
https://bugzilla.gnome.org/show_bug.cgi?id=734822
Gracefully handle switching groups in which all pads are deadends.
This can happen when quickly switching programs on mpegts: as the
output is unaligned, it can happen that not enough data was accumulated at
the parsers to generate any buffers, causing the stream to receive EOS before
any data can be decoded.
To handle this scenario, the _expose function now also gets told whether there is
any next group to be exposed along with the list of endpads. If there are
no endpads and there is another group to expose, it will switch to this next
group and then retry exposing the streams.
Also, the requirement to only switch from the chain that has the endpad had
to be modified to care for the case when the drainpad is NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=733169
Set up a fakesink with a pad probe to replace the missing encoder to detect
if encoding was really required and only error out in this case. Otherwise
just let the passthrough branch work.
This delays the error posting from the set_state function to when buffers
are really flowing. The unit test is updated accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=650652
Unsetting the DISCONT flag means we need to copy the buffer. This copy operation
requires that all GstMemory be copyable, which is not always the case.
https://bugzilla.gnome.org/show_bug.cgi?id=727409
We now add all our elements to uridecodebin *after*
GstBin::change_state(READY->PAUSED), so we need to post async-start
and async-done messages ourselves if we want to work async.
https://bugzilla.gnome.org/show_bug.cgi?id=733495
Otherwise we're going to
a) start Parser/Converter before they are linked to their capsfilter,
breaking their negotiation of a proper stream format
b) start demuxers without having connected to their pad-added signals. We
miss pads and in the worst case don't link any pads at all
... and if this fails for whatever reason we skip the element and instead
try with the next element. This allows us to handle elements that fail
when setting caps on them by just skipping to the next alternative element.
They might fail to go to PAUSED, and when connecting them further
we might already expose their srcpads on decodebin if we're unlucky.
This prevents us from handling failures going to PAUSED gracefully.
If the caps query returned us fixed caps this doesn't mean yet
that these caps are actually complete (fields might be missing).
It allows us to make some decisions, but the selection of the next
element should be delayed as only complete caps allow proper selection
of the next element.
Otherwise we might try to continue autoplugging e.g. for a specific
stream-format although the parser could convert to something else, thus giving
us potentially less options for decoders.
We can't convert to ANY capsfeatures, they are only there so that we
can passthrough whatever downstream can support... but we definitely
don't want to return them to upstream.
Canceling the accept/select happens when the source is shut down. This is
not an error and the GST_FLOW_ERROR causes problems when only part of the
pipeline is shut down.
https://bugzilla.gnome.org/show_bug.cgi?id=731567
When playing RTSP streams there will be one decodebin per stream. If some of
them fail because of a missing plugin we should not fail completely but play
the supported streams at least.
https://bugzilla.gnome.org/show_bug.cgi?id=730868
Aggregate buffering messages to only post the lower value
to avoid setting pipeline to playing while any multiqueue
is still buffering.
There are 3 scenarios where the entries should be removed from
the list:
1) When decodebin is set to READY
2) When an element posts a 100% buffering (already implemented)
3) When a multiqueue is removed from decodebin.
For item 3 we don't need to handle it because this should only
happen when either 1 is happening or when it is playing a
chained file, for which number 2 should have happened for the
previous stream to finish
https://bugzilla.gnome.org/show_bug.cgi?id=726423
Otherwise we might end up inside the callback without having stored
the probe id... then try to remove that probe (not!) from the callback
and wait forever for the pad to unblock.
They are very confusing for people, and more often than not
also just not very accurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
It was already checked in an early out, and as it's only
incremented for at most the size of the passed buffer, it
can only become NULL in an address wraparound.
While there, don't cast away const on a pointer.
Coverity 1139845
This provides an audio-filter and video-filter property to allow
applications to set filter elements/bins. The idea is that these will be
applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink -
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio with 700ms
long buffers. The video branch of multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds large, and having a maximum
of 5 buffers in the multiqueue there can use a lot of memory. Lower
this to 2 for adaptive streaming demuxers.
The typefinder returns LIKELY for as little as one possible
sync and no bad sync (not even taking into account how much
data was looked at for that). It's generally just not fit
for purpose, so should just not return anything like LIKELY
at all ever, even more so since it only recognises one out
of ten H263 files, and likes to mis-detect mp3s as H263.
https://bugzilla.gnome.org/show_bug.cgi?id=700770
https://bugzilla.gnome.org/show_bug.cgi?id=725644
If we have the peer caps and a caps filter, return peer_caps +
intersect_first (filter, converter_caps) instead of
intersect_first (filter, peer_caps + converter_caps), which preserves
the downstream caps preference order.
https://bugzilla.gnome.org/show_bug.cgi?id=724893
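Expressed with the caps API, the new ordering is roughly (a sketch;
variable names and ref handling simplified):

  GstCaps *filtered, *result;

  /* old: intersect the filter with (peer_caps + converter_caps), which
   * lets converter caps jump ahead of downstream's preferences;
   * new: keep peer_caps first and only append the filtered converter caps */
  filtered = gst_caps_intersect_full (filter, converter_caps,
      GST_CAPS_INTERSECT_FIRST);
  result = gst_caps_merge (gst_caps_ref (peer_caps), filtered);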
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we are putting another multiqueue after the *parser* following
the adaptive stream demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues... confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxer we insert a multiqueue after
this demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers,
we'll insert a multiqueue right after the next element after
the adaptive streaming demuxer. This is going to be a parser or
decoder, and will output smaller buffers.
Adaptive streams should download their data inside the demuxer, so
we want to use multiqueue's buffering messages to control the
pipeline flow and avoid losing sync if download rates are low.
https://bugzilla.gnome.org/show_bug.cgi?id=707636
Otherwise there's an interesting race condition when we destroy
the inputselector (actually it will be destroyed later when its state
change message gets destroyed) and afterwards release its sinkpad.
This is the code path when the last channel is removed from the
input selector.
Gave this warning sometimes, for chained oggs or whenever else
we change decode groups:
GStreamer-CRITICAL **: Padname '':sink_0 does not belong to element inputselector0 when removing
MONO and NONE positions are the same, for example, but in
general there isn't much to do here for such a conversion.
Fixes a problem in audioconvert, which would end up using
a mixmatrix when converting between different mono formats
because it thinks MONO positioning is different from
unpositioned channels, which is not the case in this
special case. The mixmatrix would end up being 0.0, so
audioconvert would convert to silence samples.
https://bugzilla.gnome.org/show_bug.cgi?id=724509
If the text pads do not go away we just set the overlay to silent, which
allows us to immediately re-enable subs later again. However before this
change we also released the streamsynchronizer text pads, which deadlocked
because there was still dataflow going on. Just do this only if we remove
the complete chain.
https://bugzilla.gnome.org/show_bug.cgi?id=683504
Change the way autoplug-select is accumulated so that it's possible to have
multiple handlers. The handlers keep getting called as long as they keep
returning GST_AUTOPLUG_SELECT_TRY.
One practical example of when this is needed is when hooking into playbin's
uridecodebin, which is perhaps not very elegant but the only way to influence
which streams playbin autoplugs/exposes.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723096
Discussion on IRC indicated that the main reason for this list was to
prevent demuxers that can trigger a lot of seeking from using
progressive buffering via queue2 (which, due to being seekable, triggers
that behaviour).
However given that upstream can indicate seeks are possible but should
be avoided via a scheduling query, this extra whitelisting shouldn't be
necessary for well-behaved demuxers.
https://bugzilla.gnome.org/show_bug.cgi?id=704933
Make a little table of conversions and manually score them. Use this
info to define better weights for the scoring algorithm.
Give separate scores for doing changes and for the impact of the change.
This allows us to avoid conversion when we can but still allows fairly
lossless changes.
The old code did not penalize GRAY conversions, PAL conversions were
punished too low and depth conversions too high.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=722656
Don't try to interpolate the chroma samples, the used algorithm only
works for horizontal cositing. Let's switch to a faster and safer
version until we handle chroma siting correctly in the fastpaths.
Rework the orc code to be around 10% faster and support arbitrary matrices.
Pass the matrix parameters to the YUV->RGB functions to make them work
for all matrices. This enables more and faster fastpath conversions.
See https://bugzilla.gnome.org/show_bug.cgi?id=721701
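For reference, the parametrized form being passed around is essentially
the following (a sketch with simplified ranges/offsets; the BT.601
coefficients in the comment are purely an example):

  static void
  yuv_to_rgb (gdouble y, gdouble u, gdouble v,
      gdouble cr, gdouble cg1, gdouble cg2, gdouble cb,
      gdouble * r, gdouble * g, gdouble * b)
  {
    /* e.g. BT.601: cr = 1.402, cg1 = 0.344, cg2 = 0.714, cb = 1.772 */
    *r = y + cr * (v - 128.0);
    *g = y - cg1 * (u - 128.0) - cg2 * (v - 128.0);
    *b = y + cb * (u - 128.0);
  }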
This fast-path was adding 128 to every component including
alpha while it should only be done for all components except
alpha. This caused wrong alpha values to be generated.
Also remove the high-quality I420 to BGRA fast-path as it needs
the same fix, which causes an additional instruction, which causes
orc to emit more than 96 variables, which then just crashes.
This can only be fixed in orc by breaking ABI and allowing more
variables.
If a pipeline fails to preroll, it might happen that the sinks are
put into READY state from playbin's sink activation, but they are never
set to playsink, so they aren't being managed by a GstBin and will keep
their READY state until they are unreffed, leading to a warning.
Prevent this by always forcing them to NULL when deactivating a group
https://bugzilla.gnome.org/show_bug.cgi?id=708789
Fix component ordering; it's wrong in both the scanline and merge
function so it cancels each other out and isn't really a problem, except for
a loss of precision of the green component.
Fix calculation of the filter weight
Some of the fastpath functions can only work with aligned width/height
so make sure we check this as well when choosing a fastpath.
Add fastpath for I420/YV12 -> BGRx
This commit adds detection of the "dash" and "avc3" compatible brands
in qt_type_find.
Amendment 2 of ISO/IEC 14496-15 (AVC file format) is defining a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV
box (moov.trak.mdia.minf.stbl.stsd.avc1) but in AVC3 this data goes in
the first sample of every fragment (i.e. the first sample in each mdat
box). The principal reason for avc3 is to make it easier for client
implementations, because it removes the requirement to insert the
SPS+PPS in to the decoder pipeline every time there is a representation
change.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
Increase the number of temporary lines that we need, it is possible that the
up and downsampling offsets are out of phase and that we need to keep some
extra lines around. Also copy the unhandled output lines for the next round
instead of overwriting them.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=706823
When playing mp3 files from a smb server, we get 64k read requests
that mostly overlap. Without using the cache to partially satisfy
these, we send these requests straight to the server, resulting in
a lot more network traffic than necessary.
https://bugzilla.gnome.org/show_bug.cgi?id=705415
Each write will update the last_activity_time and otherwise we would
compare against a too old current time and immediately timeout because
current time is smaller than last activity time (overflow).
Remove dodgy code that detects mp3 with as little as
a valid frame sync at the beginning. This was only used
in some unit tests in -good where there were only a few
bytes after the id3 tag. We now require at least two
frame headers.
Fixes mis-detection of text files with UTF-16 LE BOM as mp3.
https://bugzilla.gnome.org/show_bug.cgi?id=681368
We have to hold the streams-lock when iterating over all pads,
also the stream-lock of the pad is already locked when we receive
EOS.
Call gst_pad_event_default() for the correct default handling of
events.
This commit adds a streamcombinerpad with an is_eos field.
When streamcombiner receives an EOS on one of its pads, it
forwards it once all its other pads are EOS.
This commit also removes the notion of "stream-switching-eos".
In gst_sub_parse_dispose() parser_type will be UNKNOWN,
so these deinit calls were never executed. And we should
clean up the parser state in the downwards state change
anyway.
To celebrate 2013.gnome.asia, updated sami parser for gstreamer 1.x. :D
Remove the conditional block for checking libxml usage and
implement a simple html markup parser for the sami
parser.
https://bugzilla.gnome.org/show_bug.cgi?id=693056
Otherwise we will remove the bus that would proxy messages to playsink
and never set it again. If the sink is already in playsink, all failures
are fatal anyway as it's either a sink that worked before or one that
was set by the user.
https://bugzilla.gnome.org/show_bug.cgi?id=701997
playbin will now only activate the sinks in a single place and
will never change the states of any sinks that are owned by
playsink.
Also handle text-sinks the same way as audio/video sinks inside
playbin.
With the current test, we get into problems when we try to typefind
an MPEG stream from a small amount of data, which can happen when
we get data pushed from a HTTP source. We thus make a second test
to give higher probability if all the potential headers were either
pack or pes headers (ie, no potential header was unrecognized).
This fixes an issue with an MPEG1/MP2 stream being properly discovered
as video/mpeg from a file, but as audio/mpeg from souphttpsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=703256
This makes sure the application gets any context related messages and
can do whatever is required to a) get the sink a context or b) share
the context with other elements in the pipeline.
The proxying is necessary because the sink is not a child element of
playbin, but instead will at a later point be a child of some bin
inside playsink.
https://bugzilla.gnome.org/show_bug.cgi?id=700967
Otherwise we're going to deadlock forever because no autoplugging
happens without having caps, but caps can never be sent because
we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
We found a case where untranslated values were being passed from the
proxy to the underlying channel, causing bad color balance values
in some setups.
Thanks to Sebastian Dröge for clarifying how the code works, and
suggesting the fix.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=701202
This allows choosing something other than input-selector
for multiple audio/video/text streams, e.g. an adder could
be used for audio.
It is needed for example to implement some of the more
advanced HTML5 video features.
https://bugzilla.gnome.org/show_bug.cgi?id=698851
Add the actual decoder/parser/etc caps at the very end to
make sure we don't cause empty caps to be returned, e.g.
if a parser asks us but a decoder is required after it
because no sink can handle the format directly.
Otherwise we will only block after the serialized, non-sticky event
after the CAPS event or the first buffer. If we're waiting for another
pad to finish autoplugging after we got final caps on this pad, it
will mean that we will let the ALLOCATION query pass although the
pad is not exposed yet.
Otherwise we accumulate more and more queue2 elements, and let each
of them start a thread doing nothing but waiting each time uridecodebin
goes to PAUSED.
https://bugzilla.gnome.org/show_bug.cgi?id=699794
This makes it possible to take advantage of the O(log n) lookups
of GSequence on the ~1000 element lists and only do iterations
on <10 element lists. Previously the code iterated over ~1000 element
lists multiple times.
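For context, GSequence gives sorted insertion and lookup in O(log n),
e.g. (a small sketch; 'item', 'key' and 'compare_items' are illustrative):

  GSequence *seq = g_sequence_new (NULL);
  GSequenceIter *found;

  /* O(log n) sorted insert and O(log n) lookup instead of walking a
   * ~1000 element list for every query */
  g_sequence_insert_sorted (seq, item, compare_items, NULL);
  found = g_sequence_lookup (seq, key, compare_items, NULL);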
Autoplug the decoder elements and sink elements based on
the number of common capsfeatures if the ranks are the same.
This will also help to autoplug the h/w_decoder and h/w_renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=698712
Remove the byte limit for adaptive http streaming. Because some fragments might
be very big, we might need a lot of buffering. I also suspect another problem
where data is actually missing and things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueue at different levels of the pipeline. The problem
is that only the lowest multiqueues should do the buffering messaging,
as they will handle the raw stream data.
When all multiqueues are doing buffering, the upper ones can handle
large buffers that easily fill them, moving from 0% to 100% from
buffer to buffer, causing too many buffering messages to be posted.
This hangs the pipeline unnecessarily and might lead to deadlocks.
Decodebin2's chains store a next_groups list that was being handled as
if it could only have a single element. This is true for most of the
chained-streams scenarios, where streams do not change very often.
In more stressful changing scenarios, like adaptive streams, those
changes can happen very often, and in short time intervals. This could
confuse decodebin2, as this list was always being used as a single
element list.
This patch makes it be handled as a real list, using iteration instead
of always picking the first element as the correct one.
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on upper levels of the groups/chain
trees.
Checks if the received XML is a smoothstreaming manifest
in both UTF8 and UTF16 formats. The check is made for a
SmoothStreamingMedia top level element.
Conflicts:
gst/typefind/gsttypefindfunctions.c
If a source element could be created for a URI, but all elements rejected
the URI for some reason, propagate the error from the URI handler instead
of reporting a 'no uri handler found for protocol xyz' error, which is
confusing. Fixes error reporting with dvb:// URIs when the channel config
file could not be found or not be parsed or the channel isn't listed.
https://bugzilla.gnome.org/show_bug.cgi?id=678892
Use a scheduling query to check if the source element has some
bandwidth limitations. If this is the case on-disk buffering might be
used. If the source element doesn't handle the scheduling query then
fallback to checking the URI protocol against the hardcoded list of
protocols known to handle buffering already.
Fixes bug 693484.
The compare_factories_func() should return negative value
if the rank of both PluginFeatures are equal and the name of
first PluginFeature comes before the second one (== ascending order).
The _decode_bin_compare_factories_func() should return negative
value if the rank of both PluginFeatures are equal and the name of
first PluginFeature comes before the second one (== ascending order).
This allows getting a pad for a specific encoding profile, which can
be useful when there are several stream profiles of the same type.
Also update the encodebin unit tests so that we check that the returned
pad has the right caps.
https://bugzilla.gnome.org/show_bug.cgi?id=689845
Before, it was done the other way around and that can trigger the assert that
is already in place. This also makes more sense; when seeking to time x, we want
the sample that is <= that position.
Try to select the conversion that would result in the minimal amount of quality
loss. Quality loss is calculated rather arbitrarily but it avoids doing
something really stupid in most cases.
This reverts commit adc9694ed7.
No need to restrict the conversion, we can handle interlace correctly. We
basically unpack each field, then convert each field to the target colorspace
and pack and interleave each field to the target format. We also disable any
fast path that can't deal with interlaced formats.
Do not use the buffer start offset when it is invalid, otherwise a
discontinuity is detected on the next buffer, and the subtitle parser
is reset and some subtitle lines are not shown.
Also remove unused next_offset field.
https://bugzilla.gnome.org/show_bug.cgi?id=693981
subtitleoverlay handles any caps, not just the ones
for which a subtitle parser/renderer exist. It will
just ignore any unsupported streams instead of causing
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=688476
Add all the caps that we can convert to to the filter caps,
otherwise downstream might just return EMPTY caps because
it doesn't handle the filter caps but we could still convert
to these caps, causing us to return EMPTY caps although
conversion would be possible.
https://bugzilla.gnome.org/show_bug.cgi?id=688803
changed: gst_video_scale_set_info in gst/videoscale/gstvideoscale.c
DAR on sink side now calculated with PAR on sink side
ratio of output width/height now calculated with inverse PAR
additional condition that borders are 0:0 for passthrough mode
https://bugzilla.gnome.org/show_bug.cgi?id=696019
Ensure the detection of svc and mvc as part of an h264 stream.
Once the typefinder detects a subset_sequence_parameter_set (ssps),
each nal unit with type 14 or 20 should be detected as
part of an h264 stream thereafter.
https://bugzilla.gnome.org/show_bug.cgi?id=694346
Previously adder was only sending the flush-stop when it saw the flushing seek.
If one sends a flushing seek directly to an element upstream of adder, it would
fail to unflush the downstream pads.
This ensures the ghost pad will not stay in flushing mode
when it receives a flush stop event, and generally behave
badly.
This fixes at least one case of a dynamic decodebin2 + encodebin
pipeline finding a source that has not prerolled when it should
have been (due to the ghostpad staying in flushing mode).
We were setting the query-func on the sink-pad, which got overwritten when
adding the new pad to collect pads. Instead register our query-func with the
collect pads object. This fixes filter caps. Add a test for it.
A return value of FALSE here indicates that we don't have control-values. In
0.10 we were returning the default value of the property. Now we don't fill an
array with defaults in the ControlBinding, but leave it up to the element to
handle this case.
The codec data blob we get from matroskademux with the SSA/ASS
init section is supposed to be valid UTF-8. If it's not, just
continue with the bits that are valid UTF-8 instead of erroring
out. We don't actually parse the init section yet anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=607630
The behaviour is sensibly changed here. Instead of purely failing when a
preset is set on the #GstEncodingProfile, we now make sure that the
element that is plugged corresponds to the one specified as preset. Then,
if we have a preset_name, we use it; if it fails, we fail (we might rather
just keep working even without setting the element properties?)
+ Add tests that it behaves correctly
When the input buffers for a stream don't have a duration set,
timestamp_end might still be GST_CLOCK_TIME_NONE. When advancing
EOSed streams via GAP events (with other streams not yet EOS), we
would then use the invalid timestamp_end to calculate the duration
of the gap. This in turn would make baseaudiosink abort, because it
would try to allocate memory for a trizillion samples.
So if buffers don't have a duration set, assume a duration of
one second for stream catch-up purposes, just so we can still
continue to catch up in those cases. And make sure that
timestamp_end is valid before doing calculations with it.
http://bugzilla.gnome.org/show_bug.cgi?id=678530
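The catch-up fallback boils down to something like this (a sketch with
illustrative field names, not the verbatim streamsynchronizer code):

  if (GST_BUFFER_DURATION_IS_VALID (buffer))
    timestamp_end = timestamp + GST_BUFFER_DURATION (buffer);
  else
    timestamp_end = timestamp + GST_SECOND;   /* assume 1s for catch-up */

  if (GST_CLOCK_TIME_IS_VALID (timestamp_end)) {
    /* only now is it safe to compute the gap duration from it */
  }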
Make AAC LOAS typefinding a bit more reliable; don't report
a LIKELY probability already after just two sync points, but
scan for a few more consecutive frames and determine probability
based on how many we found. Fixes mis-detection of wavpack file.
https://bugzilla.gnome.org/show_bug.cgi?id=687674
Check for second block sync and return different
probabilities depending on what we found (trumping
the AAC loas typefinder's LIKELY probability after
finding a second frame sync in this particular case).
https://bugzilla.gnome.org/show_bug.cgi?id=687674
Previously we could've chosen another format with the same
depth even if the input format was possible.
Also make sure to choose according to the order in the
caps.
Enhance current code to prefer an exact match on sample depth if
possible. Also ignore GST_AUDIO_FORMAT_FLAG_UNPACK when checking
equality on the flags.
This is an adaptation of patch #3 from Jyri Sarha
( http://lists.xiph.org/pipermail/speex-dev/2011-September/008240.html ),
but without the NEON optimizations (these come in a separate commit).
The idea is to replace SATURATE32(PSHR32(x, shift), a) operations with a
combined SATURATE32PSHR(x, shift, a) macro that can be optimized for
specific platforms (and also avoids rare rounding errors).
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
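For illustration, such a combined macro could look roughly like this in
the plain C path (a sketch only; the actual Speex definition may differ;
SHL32/PSHR32 are the existing Speex fixed-point helpers):

  /* clamp to +/- a while shifting right by 'shift' with rounding,
   * checking against the unshifted limit to avoid the rare rounding
   * errors mentioned above */
  #define SATURATE32PSHR(x, shift, a) \
      (((x) >= SHL32 (a, shift)) ? (a) : \
       ((x) <= -SHL32 (a, shift)) ? -(a) : \
       PSHR32 (x, shift))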