This provides an audio-filter and video-filter property to allow
applications to set filter elements/bins. The idea is that these will
be applied if possible -- for non-raw sinks, the filters will be skipped.
If the application wishes to force the application of the filters, this
can be done by setting the new flag introduced on playsink -
GST_PLAY_FLAG_FORCE_FILTERS.
https://bugzilla.gnome.org/show_bug.cgi?id=679031
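As a rough application-side sketch (not taken from this change's code),
setting an audio filter and forcing it could look like the following;
the GST_PLAY_FLAG_FORCE_FILTERS value is an assumption copied the way
applications usually copy GstPlayFlags, and "audiokaraoke" is only an
example filter element:

    #include <gst/gst.h>

    /* assumption: mirrors playbin's GstPlayFlags enum, which is not
     * installed as a public header */
    #define GST_PLAY_FLAG_FORCE_FILTERS (1 << 11)

    static void
    set_forced_audio_filter (GstElement * playbin)
    {
      GstElement *filter;
      guint flags;

      /* any element or bin handling raw audio; "audiokaraoke" is only
       * an example */
      filter = gst_element_factory_make ("audiokaraoke", NULL);
      g_object_set (playbin, "audio-filter", filter, NULL);

      /* force the filter to be applied even for non-raw sinks */
      g_object_get (playbin, "flags", &flags, NULL);
      g_object_set (playbin, "flags",
          flags | GST_PLAY_FLAG_FORCE_FILTERS, NULL);
    }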
2 seconds might be too small for some container formats, e.g.
MPEGTS with some video codec and AAC/ADTS audio with 700ms
long buffers. The video branch of multiqueue can run full while
the audio branch is completely empty, especially because there
are usually more queues downstream on the audio branch.
Usually these buffers are multiple seconds long, and keeping up to 5 of
them in the multiqueue there can use a lot of memory. Lower the limit
to 2 buffers for adaptive streaming demuxers.
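For reference, a minimal sketch of the limit involved on a stock
multiqueue (decodebin sets this internally; the snippet only
illustrates the property):

    GstElement *mq = gst_element_factory_make ("multiqueue", NULL);

    /* at most 2 buffers per single queue, as used after adaptive
     * streaming demuxers */
    g_object_set (mq, "max-size-buffers", 2, NULL);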
The typefinder returns LIKELY for as little as one possible
sync and no bad sync (not even taking into account how much
data was looked at for that). It's generally just not fit
for purpose, so it should never return anything as strong as
LIKELY, even more so since it only recognises one out of ten
H263 files and likes to mis-detect mp3s as H263.
https://bugzilla.gnome.org/show_bug.cgi?id=700770
https://bugzilla.gnome.org/show_bug.cgi?id=725644
If we have the peer caps and a caps filter, return peer_caps +
intersect_first (filter, converter_caps) instead of
intersect_first (filter, peer_caps + converter_caps). This preserves
the downstream caps preference order.
https://bugzilla.gnome.org/show_bug.cgi?id=724893
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we are putting another multiqueue after the *parser* following
the adaptive stream demuxer. We do not want to add another instance of
the same parser right after this multiqueue.
Otherwise we will emit buffering messages not just from the last
multiqueue but also from previous multiqueues... confusing the
application with different percentages during pre-rolling.
For adaptive streaming demuxer we insert a multiqueue after
this demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers,
we'll insert a multiqueue further downstream, right after the next
element following the adaptive streaming demuxer. This is going to be
a parser or decoder, and will output smaller buffers.
Adaptive streams download their data inside the demuxer, so we want
to use multiqueue's buffering messages to control the pipeline flow
and avoid losing sync if download rates are low.
https://bugzilla.gnome.org/show_bug.cgi?id=707636
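As a rough application-side sketch (not part of this change), those
buffering messages are handled like any other: pause the pipeline while
the fragment queue refills and resume at 100%:

    static gboolean
    buffering_bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      GstElement *pipeline = user_data;

      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_BUFFERING) {
        gint percent = 0;

        gst_message_parse_buffering (msg, &percent);
        gst_element_set_state (pipeline,
            percent < 100 ? GST_STATE_PAUSED : GST_STATE_PLAYING);
      }
      return TRUE;
    }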
Otherwise there's an interesting race condition when we destroy
the inputselector (actually it will be destroyed later when its state
change message gets destroyed) and afterwards release its sinkpad.
This is the code path when the last channel is removed from the
input selector.
This sometimes gave the following warning, for chained oggs or
whenever else we change decode groups:
GStreamer-CRITICAL **: Padname '':sink_0 does not belong to element inputselector0 when removing
MONO and NONE positions are the same, for example, but in
general there isn't much to do here for such a conversion.
Fixes a problem in audioconvert, which would end up using
a mix matrix when converting between different mono formats
because it thinks MONO positioning is different from
unpositioned channels, which is not the case in this
special case. The mix matrix would end up being all 0.0, so
audioconvert would convert everything to silence samples.
https://bugzilla.gnome.org/show_bug.cgi?id=724509
If the text pad does not go away, we just set the overlay to silent,
which allows us to immediately re-enable subtitles again later.
However, before this change we also released the streamsynchronizer
text pads, which deadlocked because there was still dataflow going on.
Only do this if we remove the complete chain.
https://bugzilla.gnome.org/show_bug.cgi?id=683504
Change the way autoplug-select is accumulated so that it's possible to have
multiple handlers. The handlers keep getting called as long as they keep
returning GST_AUTOPLUG_SELECT_TRY.
One practical example of when this is needed is when hooking into playbin's
uridecodebin, which is perhaps not very elegant but the only way to influence
which streams playbin autoplugs/exposes.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=723096
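A minimal sketch of such a handler; the GstAutoplugSelectResult values
are assumptions copied the way applications copy decodebin's enum (it
is not installed in a public header), the "vaapi" prefix check is only
an illustrative condition, and uridecodebin stands for the
application's decodebin/uridecodebin handle:

    typedef enum
    {
      GST_AUTOPLUG_SELECT_TRY,
      GST_AUTOPLUG_SELECT_EXPOSE,
      GST_AUTOPLUG_SELECT_SKIP
    } GstAutoplugSelectResult;

    static GstAutoplugSelectResult
    autoplug_select_cb (GstElement * bin, GstPad * pad, GstCaps * caps,
        GstElementFactory * factory, gpointer user_data)
    {
      /* skip factories we don't want; returning TRY lets the next
       * handler (and finally decodebin itself) decide */
      if (g_str_has_prefix (GST_OBJECT_NAME (factory), "vaapi"))
        return GST_AUTOPLUG_SELECT_SKIP;

      return GST_AUTOPLUG_SELECT_TRY;
    }

    /* several handlers can now be connected; they keep being called
     * as long as each returns GST_AUTOPLUG_SELECT_TRY */
    g_signal_connect (uridecodebin, "autoplug-select",
        G_CALLBACK (autoplug_select_cb), NULL);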
Discussion on IRC indicated that the main reason for this list was to
prevent demuxers that can trigger a lot of seeking from using
progressive buffering via queue2 (which, due to being seekable,
triggers that behaviour).
However given that upstream can indicate seeks are possible but should
be avoided via a scheduling query, this extra whitelisting shouldn't be
necessary for well-behaved demuxers.
https://bugzilla.gnome.org/show_bug.cgi?id=704933
Make a little table of conversions and manually score them. Use this
info to define better weights for the scoring algorithm.
Give separate scores for making a change and for the impact of the
change. This allows us to avoid conversion when we can, but still
allows fairly lossless changes.
The old code did not penalize GRAY conversions, penalized PAL
conversions too little and depth conversions too much.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=722656
Don't try to interpolate the chroma samples, the used algorithm only
works for horizontal cositing. Let's switch to a faster and safer
version until we handle chroma siting correctly in the fastpaths.
Rework the orc code to be around 10% faster and support arbitrary matrices.
Pass the matrix parameters to the YUV->RGB functions to make them work
for all matrices. This enables more and faster fastpath conversions.
See https://bugzilla.gnome.org/show_bug.cgi?id=721701
This fast-path was adding 128 to every component, including alpha,
while it should be added to all components except alpha. This caused
wrong alpha values to be generated.
Also remove the high-quality I420 to BGRA fast-path as it needs
the same fix, which causes an additional instruction, which causes
orc to emit more than 96 variables, which then just crashes.
This can only be fixed in orc by breaking ABI and allowing more
variables.
If a pipeline fails to preroll, it might happen that the sinks are
put into READY state from playbin's sink activation, but they are never
set on playsink, so they aren't being managed by a GstBin and will keep
their READY state until they are unreffed, leading to a warning.
Prevent this by always forcing them to NULL when deactivating a group.
https://bugzilla.gnome.org/show_bug.cgi?id=708789
Fix component ordering; it's wrong in both the scanline and merge
functions, so the errors cancel each other out and it isn't really a
problem except for a loss of precision in the green component.
Fix calculation of the filter weight.
Some of the fastpath functions can only work with aligned width/height,
so make sure we check this as well when choosing a fastpath.
Add fastpath for I420/YV12 -> BGRx.
This commit adds detection of the "dash" and "avc3" compatible brands
in qt_type_find.
Amendment 2 of ISO/IEC 14496-15 (AVC file format) is defining a new
structure for fragmented MP4 called "avc3". The principal difference
between AVC1 and AVC3 is the location of the codec initialisation
data (e.g. SPS, PPS). In AVC1 this data is placed in the initial MOOV
box (moov.trak.mdia.minf.stbl.stsd.avc1) but in AVC3 this data goes in
the first sample of every fragment (i.e. the first sample in each mdat
box). The principal reason for avc3 is to make it easier for client
implementations, because it removes the requirement to insert the
SPS+PPS into the decoder pipeline every time there is a representation
change.
https://bugzilla.gnome.org/show_bug.cgi?id=702004
Increase the number of temporary lines that we need, it is possible that the
up and downsampling offsets are out of phase and that we need to keep some
extra lines around. Also copy the unhandled output lines for the next round
instead of overwriting them.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=706823
When playing mp3 files from a smb server, we get 64k read requests
that mostly overlap. Without using the cache to partially satisfy
these, we send these requests straight to the server, resulting in
a lot more network traffic than necessary.
https://bugzilla.gnome.org/show_bug.cgi?id=705415
Each write will update last_activity_time; otherwise we would
compare against a too-old current time and immediately time out
because the current time is smaller than the last activity time
(overflow).
Remove dodgy code that detects mp3 with as little as
a valid frame sync at the beginning. This was only used
in some unit tests in -good where there were only a few
bytes after the id3 tag. We now require at least two
frame headers.
Fixes mis-detection of text files with UTF-16 LE BOM as mp3.
https://bugzilla.gnome.org/show_bug.cgi?id=681368
We have to hold the streams-lock when iterating over all pads;
also, the stream-lock of the pad is already held when we receive
EOS.
Call gst_pad_event_default() for the correct default handling of
events.
This commit adds a streamcombinerpad with an is_eos field.
When streamcombiner receives an EOS on one of its pads, it only
forwards it once all its other pads are EOS.
This commit also removes the notion of "stream-switching-eos".
In gst_sub_parse_dispose() parser_type will be UNKNOWN,
so these deinit calls were never executed. And we should
clean up the parser state in the downwards state change
anyway.
To celebrate 2013.gnome.asia, updated sami parser for gstreamer 1.x. :D
Remove the conditional block that checked for libxml usage and
implement a simple HTML markup parser for the sami
parser.
https://bugzilla.gnome.org/show_bug.cgi?id=693056
Otherwise we will remove the bus that would proxy messages to playsink
and never set it again. If the sink is already in playsink, all failures
are fatal anyway as it's either a sink that worked before or one that
was set by the user.
https://bugzilla.gnome.org/show_bug.cgi?id=701997
playbin will now only activate the sinks in a single place and
will never change the states of any sinks that are owned by
playsink.
Also handle text-sinks the same way as audio/video sinks inside
playbin.
With the current test, we get into problems when we try to typefind
an MPEG stream from a small amount of data, which can happen when
we get data pushed from an HTTP source. We thus make a second test
to give a higher probability if all the potential headers were either
pack or pes headers (i.e. no potential header was unrecognized).
This fixes an issue with an MPEG1/MP2 stream being properly discovered
as video/mpeg from a file, but as audio/mpeg from souphttpsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=703256
This makes sure the application gets any context related messages and
can do whatever is required to a) get the sink a context or b) share
the context with other elements in the pipeline.
The proxying is necessary because the sink is not a child element of
playbin, but instead will at a later point be a child of some bin
inside playsink.
https://bugzilla.gnome.org/show_bug.cgi?id=700967
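A rough sketch of what an application can do with the proxied
messages; "gst.gl.app_context" is only an example context type, and
the handler is installed as a sync handler so the context is available
as early as possible:

    static GstBusSyncReply
    bus_sync_handler (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_NEED_CONTEXT) {
        const gchar *type = NULL;

        gst_message_parse_context_type (msg, &type);
        if (g_strcmp0 (type, "gst.gl.app_context") == 0) {
          GstContext *ctx = gst_context_new ("gst.gl.app_context", TRUE);

          /* ... fill the context's structure with the shared handle ... */
          gst_element_set_context (GST_ELEMENT (GST_MESSAGE_SRC (msg)), ctx);
          gst_context_unref (ctx);
        }
      }
      return GST_BUS_PASS;
    }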
Otherwise we're going to deadlock forever because no autoplugging
happens without having caps, but caps can never be sent because
we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
We found a case where untranslated values were being passed from the
proxy to the underlying channel, causing bad color balance values
in some setups.
Thanks to Sebastian Dröge for clarifying how the code works, and
suggesting the fix.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=701202
This allows choosing something other than input-selector
for multiple audio/video/text streams; e.g. an adder could
be used for audio.
It is needed for example to implement some of the more
advanced HTML5 video features.
https://bugzilla.gnome.org/show_bug.cgi?id=698851
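A minimal sketch, assuming the playbin stream-combiner properties
introduced here and using adder as the combiner:

    GstElement *playbin = gst_element_factory_make ("playbin", NULL);
    GstElement *mixer = gst_element_factory_make ("adder", NULL);

    /* mix all audio streams together instead of selecting one */
    g_object_set (playbin, "audio-stream-combiner", mixer, NULL);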
Add the actual decoder/parser/etc caps at the very end to
make sure we don't cause empty caps to be returned, e.g.
if a parser asks us but a decoder is required after it
because no sink can handle the format directly.
Otherwise we will only block after the serialized, non-sticky event
after the CAPS event or the first buffer. If we're waiting for another
pad to finish autoplugging after we got final caps on this pad, it
will mean that we will let the ALLOCATION query pass although the
pad is not exposed yet.
Otherwise we accumulate more and more queue2 elements, and let each
of them start a thread doing nothing but waiting each time uridecodebin
goes to PAUSED.
https://bugzilla.gnome.org/show_bug.cgi?id=699794
This makes it possible to take advantage of the O(log n) lookups
of GSequence on the ~1000 element lists and only do iterations
on <10 element lists. Previously the code iterated over ~1000 element
lists multiple times.
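Roughly the GSequence pattern involved (a generic sketch, not the
actual code; item and compare_func are placeholders):

    /* keep the big list sorted so lookups are O(log n) instead of O(n) */
    GSequence *seq = g_sequence_new (NULL);

    g_sequence_insert_sorted (seq, item, compare_func, NULL);

    /* binary search instead of walking a ~1000 element list */
    GSequenceIter *match = g_sequence_lookup (seq, item, compare_func, NULL);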
Autoplug the decoder elements and sink elements based on
the number of common capsfeatures if the ranks are the same.
This will also help to autoplug the hardware decoder and hardware renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=698712
Remove the byte limit for adaptive http streaming. Because some fragments might
be very big, we might need a lot of buffering. I also suspect another problem
where data is actually missing and things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
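In terms of multiqueue properties, the idea is roughly this (a sketch,
not the exact code):

    /* when a more upstream multiqueue stops doing buffering, its time
     * limit has to be lifted as well, otherwise it can deadlock */
    g_object_set (multiqueue,
        "use-buffering", FALSE,
        "max-size-time", G_GUINT64_CONSTANT (0), NULL);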
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueue at different levels of the pipeline. The problem
is that only the lowest multiqueues should do the buffering messaging,
as they are the ones handling the raw stream data.
When all multiqueues are doing buffering, the upper ones can handle
large buffers that easily fill them, moving from 0% to 100% from
buffer to buffer and causing too many buffering messages to be posted.
This hangs the pipeline unnecessarily and might lead to deadlocks.
Decodebin2's chains store a next_groups list that was being handled as
if it could only have a single element. This is true for most chained
stream scenarios, where streams do not change very often.
In more stressful scenarios, like adaptive streams, those
changes can happen very often and in short time intervals. This could
confuse decodebin2, as this list was always being used as a single
element list.
This patch makes it handle the list as a real list, iterating over it
instead of always picking the first element as the correct one.
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on upper levels of the group/chain
trees.
Checks if the received XML is a smoothstreaming manifest
in both UTF8 and UTF16 formats. The check is made for a
SmoothStreamingMedia top level element.
Conflicts:
gst/typefind/gsttypefindfunctions.c
If a source element could be created for a URI, but all elements rejected
the URI for some reason, propagate the error from the URI handler instead
of reporting a 'no uri handler found for protocol xyz' error, which is
confusing. Fixes error reporting with dvb:// URIs when the channel config
file could not be found or parsed, or the channel isn't listed.
https://bugzilla.gnome.org/show_bug.cgi?id=678892
Use a scheduling query to check if the source element has some
bandwidth limitations. If this is the case, on-disk buffering might be
used. If the source element doesn't handle the scheduling query, then
fall back to checking the URI protocol against the hardcoded list of
protocols known to handle buffering already.
Fixes bug 693484.
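A sketch of the check described above (the helper name is illustrative):

    static gboolean
    source_is_bandwidth_limited (GstPad * srcpad)
    {
      GstQuery *query = gst_query_new_scheduling ();
      GstSchedulingFlags flags = 0;
      gboolean limited = FALSE;

      if (gst_pad_query (srcpad, query)) {
        gst_query_parse_scheduling (query, &flags, NULL, NULL, NULL);
        limited = (flags & GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED) != 0;
      }
      gst_query_unref (query);

      /* if the query isn't handled, the caller falls back to the
       * hardcoded protocol list */
      return limited;
    }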
The compare_factories_func() should return a negative value
if the ranks of both PluginFeatures are equal and the name of the
first PluginFeature comes before the second one (== ascending order).
The _decode_bin_compare_factories_func() should return a negative
value if the ranks of both PluginFeatures are equal and the name of the
first PluginFeature comes before the second one (== ascending order).
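Roughly the ordering being described (a sketch of such a compare
function, not the exact code):

    static gint
    compare_factories (gconstpointer p1, gconstpointer p2)
    {
      GstPluginFeature *f1 = (GstPluginFeature *) p1;
      GstPluginFeature *f2 = (GstPluginFeature *) p2;

      /* higher ranks sort first */
      if (gst_plugin_feature_get_rank (f1) > gst_plugin_feature_get_rank (f2))
        return -1;
      if (gst_plugin_feature_get_rank (f1) < gst_plugin_feature_get_rank (f2))
        return 1;

      /* equal ranks: ascending order by name, i.e. negative when the
       * first name comes before the second one */
      return g_strcmp0 (gst_plugin_feature_get_name (f1),
          gst_plugin_feature_get_name (f2));
    }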
This allows getting a pad for a specific encoding profile, which can
be useful when there are several stream profiles of the same type.
Also update the encodebin unit tests so that we check that the returned
pad has the right caps.
https://bugzilla.gnome.org/show_bug.cgi?id=689845
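A short sketch, assuming an encodebin whose encoding profile contains
a stream profile named "audio-high" (the name is a placeholder):

    GstPad *pad = NULL;

    /* request the pad matching a specific stream profile by name */
    g_signal_emit_by_name (encodebin, "request-profile-pad", "audio-high",
        &pad);

    if (pad != NULL) {
      /* link the corresponding upstream branch here and release it with
       * gst_element_release_request_pad() when done */
    }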
Before it was done the other way around and that can trigger the assert that
already is in place. This also makes more sense; when seeking to time x,
we want the sample that is <= that position.
Try to select the conversion that would result in the minimal amount of quality
loss. Quality loss is calculated rather arbitrarily but it avoids doing
something really stupid in most cases.
This reverts commit adc9694ed7.
No need to restrict the conversion, we can handle interlace correctly. We
basically unpack each field, then convert each field to the target colorspace
and pack and interleave each field to the target format. We also disable any
fast path that can't deal with interlaced formats.
Do not use the buffer start offset when it is invalid, otherwise a
discontinuity is detected on the next buffer, the subtitle parser is
reset, and some subtitle lines are not shown.
Also remove unused next_offset field.
https://bugzilla.gnome.org/show_bug.cgi?id=693981
subtitleoverlay handles any caps, not just the ones
for which a subtitle parser/renderer exist. It will
just ignore any unsupported streams instead of causing
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=688476
Add all the caps that we can convert to to the filter caps.
Otherwise downstream might just return EMPTY caps because it
doesn't handle the filter caps, even though we could still convert
to caps it does handle, causing us to return EMPTY caps although
conversion would be possible.
https://bugzilla.gnome.org/show_bug.cgi?id=688803
changed: gst_video_scale_set_info in gst/videoscale/gstvideoscale.c
- DAR on the sink side is now calculated with the PAR on the sink side
- the ratio of output width/height is now calculated with the inverse PAR
- additional condition that borders are 0:0 for passthrough mode
https://bugzilla.gnome.org/show_bug.cgi?id=696019