In gst_sub_parse_dispose() parser_type will be UNKNOWN,
so these deinit calls were never executed. And we should
clean up the parser state in the downwards state change
anyway.
To celebrate 2013.gnome.asia, updated the sami parser for GStreamer 1.x. :D
Remove the conditional block that checks for libxml usage and
implement a simple HTML markup parser for the sami
parser.
https://bugzilla.gnome.org/show_bug.cgi?id=693056
Otherwise we will remove the bus that would proxy messages to playsink
and never set it again. If the sink is already in playsink, all failures
are fatal anyway as it's either a sink that worked before or one that
was set by the user.
https://bugzilla.gnome.org/show_bug.cgi?id=701997
playbin will now only activate the sinks in a single place and
will never change the states of any sinks that are owned by
playsink.
Also handle text-sinks the same way as audio/video sinks inside
playbin.
With the current test, we get into problems when we try to typefind
an MPEG stream from a small amount of data, which can happen when
we get data pushed from an HTTP source. We thus add a second test
that gives a higher probability if all the potential headers were either
pack or PES headers (i.e., no potential header was unrecognized).
This fixes an issue with an MPEG1/MP2 stream being discovered correctly
as video/mpeg from a file, but as audio/mpeg from souphttpsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=703256
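A rough sketch of what that second test amounts to (the counter names are made up, and the suggested caps and probability are only illustrative):

  #include <gst/gst.h>

  /* If every candidate header found in the (possibly small) buffer was
   * either a pack header or a PES header, suggest MPEG system stream caps
   * with a higher probability than the plain header count would warrant. */
  static void
  suggest_if_all_headers_known (GstTypeFind * tf, guint pack_headers,
      guint pes_headers, guint unknown_headers)
  {
    if (pack_headers + pes_headers > 0 && unknown_headers == 0)
      gst_type_find_suggest_simple (tf, GST_TYPE_FIND_LIKELY,
          "video/mpeg", "systemstream", G_TYPE_BOOLEAN, TRUE, NULL);
  }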
This makes sure the application gets any context related messages and
can do whatever is required to a) get the sink a context or b) share
the context with other elements in the pipeline.
The proxying is necessary because the sink is not a child element of
playbin, but instead will at a later point be a child of some bin
inside playsink.
https://bugzilla.gnome.org/show_bug.cgi?id=700967
Otherwise we're going to deadlock forever because no autoplugging
happens without having caps, but caps can never be sent because
we're blocking.
Serialized queries before caps should never be sent unless really
necessary.
We found a case where untranslated values were being passed from the
proxy to the underlying channel, causing bad color balance values
in some setups.
Thanks to Sebastian Dröge for clarifying how the code works, and
suggesting the fix.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=701202
This allows choosing something other than input-selector
for multiple audio/video/text streams; e.g. an adder could
be used for audio.
It is needed for example to implement some of the more
advanced HTML5 video features.
https://bugzilla.gnome.org/show_bug.cgi?id=698851
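For illustration, an application could then do something along these lines (assuming the audio-stream-combiner property exposed by this work):

  GstElement *playbin = gst_element_factory_make ("playbin", NULL);
  GstElement *combiner = gst_element_factory_make ("adder", NULL);

  /* mix all audio streams instead of selecting one of them */
  g_object_set (playbin, "audio-stream-combiner", combiner, NULL);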
Add the actual decoder/parser/etc caps at the very end to
make sure we don't cause empty caps to be returned, e.g.
if a parser asks us but a decoder is required after it
because no sink can handle the format directly.
Otherwise we will only block after the serialized, non-sticky event
after the CAPS event or the first buffer. If we're waiting for another
pad to finish autoplugging after we got final caps on this pad, it
will mean that we will let the ALLOCATION query pass although the
pad is not exposed yet.
Otherwise we accumulate more and more queue2 elements, and let each
of them start a thread doing nothing but waiting each time uridecodebin
goes to PAUSED.
https://bugzilla.gnome.org/show_bug.cgi?id=699794
This makes it possible to take advantage of the O(log n) lookups
of GSequence on the ~1000 element lists and only do iterations
on <10 element lists. Previously the code iterated over ~1000 element
lists multiple times.
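For reference, a minimal self-contained sketch (not the decodebin code itself) of what GSequence buys here:

  #include <glib.h>

  static gint
  cmp_uint (gconstpointer a, gconstpointer b, gpointer user_data)
  {
    guint ua = GPOINTER_TO_UINT (a), ub = GPOINTER_TO_UINT (b);

    return (ua > ub) - (ua < ub);
  }

  int
  main (void)
  {
    GSequence *seq = g_sequence_new (NULL);
    GSequenceIter *iter;
    guint i;

    /* ~1000 sorted insertions, each O(log n) */
    for (i = 0; i < 1000; i++)
      g_sequence_insert_sorted (seq, GUINT_TO_POINTER (i), cmp_uint, NULL);

    /* O(log n) lookup instead of walking a ~1000 element GList */
    iter = g_sequence_lookup (seq, GUINT_TO_POINTER (123), cmp_uint, NULL);
    if (iter != NULL)
      g_print ("found %u\n", GPOINTER_TO_UINT (g_sequence_get (iter)));

    g_sequence_free (seq);
    return 0;
  }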
Autoplug the decoder elements and sink elements based on
the number of common capsfeatures if the ranks are the same.
This also helps to autoplug the h/w decoder and h/w renderer.
https://bugzilla.gnome.org/show_bug.cgi?id=698712
Remove the byte limit for adaptive http streaming. Because some fragments might
be very big, we might need a lot of buffering. I also suspect another problem
where data is actually missing and things go out of sync somehow.
When we disable buffering in the more upstream multiqueue elements,
we need to also update the queue limits. In particular, the max_size_time should
be set to 0 or else we might simply deadlock.
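In code terms this amounts to something like the following (a sketch; which multiqueue instance gets this and the exact values are decided by decodebin's buffering logic):

  /* buffering is not wanted on this upstream multiqueue, so also lift the
   * time limit, otherwise a single large fragment can deadlock the queue */
  g_object_set (multiqueue,
      "use-buffering", FALSE,
      "max-size-bytes", (guint) 0,
      "max-size-time", (guint64) 0,
      NULL);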
When we have a scenario of demuxers linked to demuxers, decodebin2
will create multiqueues at different levels of the pipeline. The problem
is that only the lowest multiqueues should do the buffering messaging,
as they are the ones handling the raw stream data.
When all multiqueues are doing buffering, the upper ones can receive
large buffers that easily fill them, moving from 0% to 100% from
buffer to buffer and causing too many buffering messages to be posted.
This hangs the pipeline unnecessarily and might lead to deadlocks.
Decodebin2's chains store a next_groups list that was being handled as
if it could only have a single element. This is true for most chained
stream scenarios, where streams do not change very often.
In more stressful scenarios, like adaptive streams, those
changes can happen very often and in short time intervals. This could
confuse decodebin2, as this list was always being used as a single
element list.
This patch makes it be handled as a real list, using iteration instead
of always picking the first element as the correct one.
Even if the chain hasn't been 'handled' in this switching round,
report it as drained so upper chains/groups know about it.
This makes switching happen on upper levels of the group/chain
trees.
Checks if the received XML is a Smooth Streaming manifest,
in both UTF-8 and UTF-16 formats. The check looks for a
SmoothStreamingMedia top-level element.
Conflicts:
gst/typefind/gsttypefindfunctions.c
If a source element could be created for a URI, but all elements rejected
the URI for some reason, propagate the error from the URI handler instead
of reporting a 'no uri handler found for protocol xyz' error, which is
confusing. Fixes error reporting with dvb:// URIs when the channel config
file could not be found or parsed, or the channel isn't listed.
https://bugzilla.gnome.org/show_bug.cgi?id=678892
Use a scheduling query to check if the source element has some
bandwidth limitations. If this is the case, on-disk buffering might be
used. If the source element doesn't handle the scheduling query, then
fall back to checking the URI protocol against the hardcoded list of
protocols known to handle buffering already.
Fixes bug 693484.
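A rough sketch of that check (variable names and the fallback helper are illustrative, not the actual code):

  GstQuery *query = gst_query_new_scheduling ();
  gboolean do_download = FALSE;

  if (gst_pad_query (src_pad, query)) {
    GstSchedulingFlags flags;

    gst_query_parse_scheduling (query, &flags, NULL, NULL, NULL);
    if ((flags & GST_SCHEDULING_FLAG_BANDWIDTH_LIMITED) != 0)
      do_download = TRUE;       /* bandwidth-limited source: buffer on disk */
  } else {
    /* query not handled: fall back to the hardcoded protocol list */
    do_download = uri_protocol_needs_buffering (uri);  /* hypothetical helper */
  }
  gst_query_unref (query);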
The compare_factories_func() should return a negative value
if the rank of both PluginFeatures is equal and the name of the
first PluginFeature comes before the second one (== ascending order).
The _decode_bin_compare_factories_func() should return a negative
value if the rank of both PluginFeatures is equal and the name of the
first PluginFeature comes before the second one (== ascending order).
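For illustration, a comparison function with that contract could look roughly like this (a sketch, not necessarily the exact code in either place):

  #include <string.h>
  #include <gst/gst.h>

  static gint
  compare_factories_func (gconstpointer p1, gconstpointer p2)
  {
    GstPluginFeature *f1 = GST_PLUGIN_FEATURE ((gpointer) p1);
    GstPluginFeature *f2 = GST_PLUGIN_FEATURE ((gpointer) p2);
    gint rank_diff;

    /* higher rank sorts first */
    rank_diff = gst_plugin_feature_get_rank (f2) -
        gst_plugin_feature_get_rank (f1);
    if (rank_diff != 0)
      return rank_diff;

    /* equal rank: ascending order by name, i.e. a negative value when the
     * first feature's name comes before the second one's */
    return strcmp (gst_plugin_feature_get_name (f1),
        gst_plugin_feature_get_name (f2));
  }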
This allows getting a pad for a specific encoding profile, which can
be useful when there are several stream profiles of the same type.
Also update the encodebin unit tests so that we check that the returned
pad has the right caps.
https://bugzilla.gnome.org/show_bug.cgi?id=689845
Before, it was done the other way around and that could trigger the assert that
is already in place. This also makes more sense; when seeking to time x, we want
the sample that is <= that position.
Try to select the conversion that would result in the minimal amount of quality
loss. Quality loss is calculated rather arbitrarily but it avoids doing
something really stupid in most cases.
This reverts commit adc9694ed7.
No need to restrict the conversion, we can handle interlace correctly. We
basically unpack each field, then convert each field to the target colorspace
and pack and interleave each field to the target format. We also disable any
fast path that can't deal with interlaced formats.
Do not use the buffer start offset when it is invalid, otherwise a
discontinuity is detected on the next buffer, the subtitle parser is
reset and some subtitle lines are not shown.
Also remove the unused next_offset field.
https://bugzilla.gnome.org/show_bug.cgi?id=693981
subtitleoverlay handles any caps, not just the ones
for which a subtitle parser/renderer exists. It will
just ignore any unsupported streams instead of causing
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=688476
Add all the caps that we can convert to into the filter caps,
otherwise downstream might just return EMPTY caps because
it doesn't handle the filter caps, while we could still convert
to those caps, causing us to return EMPTY caps although
conversion would be possible.
https://bugzilla.gnome.org/show_bug.cgi?id=688803
Changes to gst_video_scale_set_info() in gst/videoscale/gstvideoscale.c:
- DAR on the sink side is now calculated with the PAR on the sink side
- the ratio of output width/height is now calculated with the inverse PAR
- additional condition that borders are 0:0 for passthrough mode
https://bugzilla.gnome.org/show_bug.cgi?id=696019
Ensure the detection of SVC and MVC as part of an H.264 stream.
Once the typefinder detects a subset_sequence_parameter_set (SSPS),
each NAL unit with type 14 or 20 should thereafter be detected as
part of the H.264 stream.
https://bugzilla.gnome.org/show_bug.cgi?id=694346
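Sketched out, the rule is roughly this (names are illustrative):

  #include <glib.h>

  /* Once a subset SPS has been seen, NAL units of type 14 (prefix NAL unit)
   * and 20 (slice extension) also count as part of the H.264 stream. */
  static gboolean
  nal_belongs_to_h264 (guint nal_type, gboolean seen_ssps)
  {
    if (seen_ssps && (nal_type == 14 || nal_type == 20))
      return TRUE;
    /* ... all other NAL types are handled as before ... */
    return FALSE;
  }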
Previously adder was only sending the flush-stop when it saw the flushing seek.
If one sends a flushing seek directly to an element upstream of adder, it would
fail to unflush the downstream pads.
This ensures the ghost pad will not stay in flushing mode
when it receives a flush-stop event, and generally behave
badly.
This fixes at least one case of a dynamic decodebin2 + encodebin
pipeline finding a source that has not prerolled when it should
have (due to the ghostpad staying in flushing mode).
We were setting the query-func on the sink-pad, which got overwritten when
adding the new pad to collect pads. Instead register our query-func with the
collect pads object. This fixes filter caps. Add a test for it.
A return value of FALSE here indicates that we don't have control-values. In
0.10 we were returning the default value of the property. Now we don't fill an
array with defaults in the ControlBinding, but leave it up to the element to
handle this case.
The codec data blob we get from matroskademux with the SSA/ASS
init section is supposed to be valid UTF-8. If it's not, just
continue with the bits that are valid UTF-8 instead of erroring
out. We don't actually parse the init section yet anyway.
https://bugzilla.gnome.org/show_bug.cgi?id=607630
The behaviour is sensibly changed here. Instead of simply failing when a
preset is set on the #GstEncodingProfile, we now make sure that the
element that is plugged corresponds to the one specified as preset. Then,
if we have a preset_name, we use it; if that fails, we fail (we might rather
just keep working even without setting the element properties?)
+ Add tests that it behaves correctly
When the input buffers for a stream don't have a duration set,
timestamp_end might still be GST_CLOCK_TIME_NONE. When advancing
EOSed streams via GAP events (with other streams not yet EOS), we
would then use the invalid timestamp_end to calculate the duration
of the gap. This in turn would make baseaudiosink abort, because it
would try to allocate memory for a trizillion samples.
So if buffers don't have a duration set, assume a duration of
one second for stream catch-up purposes, just so we can still
continue to catch up in those cases. And make sure that
timestamp_end is valid before doing calculations with it.
http://bugzilla.gnome.org/show_bug.cgi?id=678530
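A minimal sketch of the fallback (the helper name is made up):

  #include <gst/gst.h>

  /* If the input buffer carries no duration, assume one second so GAP-based
   * catch-up of EOSed streams can still make progress. */
  static GstClockTime
  catchup_duration (GstBuffer * buf)
  {
    GstClockTime duration = GST_BUFFER_DURATION (buf);

    if (!GST_CLOCK_TIME_IS_VALID (duration))
      duration = GST_SECOND;

    return duration;
  }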
Make AAC LOAS typefinding a bit more reliable; don't report
a LIKELY probability already after just two sync points, but
scan for a few more consecutive frames and determine probability
based on how many we found. Fixes mis-detection of wavpack file.
https://bugzilla.gnome.org/show_bug.cgi?id=687674
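The idea, sketched with illustrative thresholds (not necessarily the exact ones used):

  #include <gst/gst.h>

  /* Map the number of consecutive LOAS frame syncs found to a typefind
   * probability, instead of returning LIKELY after only two. */
  static GstTypeFindProbability
  loas_probability (guint consecutive_frames)
  {
    if (consecutive_frames >= 6)
      return GST_TYPE_FIND_LIKELY;
    if (consecutive_frames >= 3)
      return GST_TYPE_FIND_POSSIBLE;
    return GST_TYPE_FIND_NONE;
  }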
Check for a second block sync and return different
probabilities depending on what we found (trumping
the AAC loas typefinder's LIKELY probability after
finding a second frame sync in this particular case).
https://bugzilla.gnome.org/show_bug.cgi?id=687674
Previously we could've chosen another format with the same
depth even if the input format was possible.
Also make sure to choose according to the order in the
caps.
Enhance current code to prefer an exact match on sample depth if
possible. Also ignore GST_AUDIO_FORMAT_FLAG_UNPACK when checking
equality on the flags.
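For illustration, the flag comparison could be done along these lines (a sketch, not the exact code):

  #include <gst/audio/audio.h>

  /* Compare two format infos' flags while ignoring the UNPACK flag, which
   * should not matter for deciding whether the formats are equivalent. */
  static gboolean
  format_flags_match (const GstAudioFormatInfo * a,
      const GstAudioFormatInfo * b)
  {
    guint mask = ~((guint) GST_AUDIO_FORMAT_FLAG_UNPACK);

    return ((guint) a->flags & mask) == ((guint) b->flags & mask);
  }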
This is an adaptation of patch #3 from Jyri Sarha
( http://lists.xiph.org/pipermail/speex-dev/2011-September/008240.html ),
but without the NEON optimizations (these come in a separate commit).
The idea is to replace SATURATE32(PSHR32(x, shift), a) operations with a
combined SATURATE32PSHR(x, shift, a) macro that can be optimized for
specific platforms (and also avoids rare rounding errors).
Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
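One plausible shape of such a macro, built on the existing Speex fixed-point helpers SHL32 and PSHR32 (a sketch; the actual definition in the patch may differ):

  /* Saturate against the pre-shift bound first, then do the rounding shift,
   * so rounding cannot push a value past the saturation limit. */
  #define SATURATE32PSHR(x, shift, a) \
    (((x) >=  SHL32 ((a), (shift))) ?  (a) : \
     ((x) <= -SHL32 ((a), (shift))) ? -(a) : \
      PSHR32 ((x), (shift)))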
There were two issues with the previous decodebin2 group switching algorithm:
Issue 1: It operated with no memory of what has been drained or not, leading to
multiple checks for chains/groups that were already drained.
Issue 2: When receiving an EOS, it only detected that a higher-level chain
was drained if it contained the pad receiving the EOS.
The following modifications have been applied:
- a new drained property has been added to GstDecodeChain
- both drained properties of chain/group are set as soon as they are detected
- the algorithm now tests against these values
See https://bugzilla.gnome.org/show_bug.cgi?id=685938
Should fix "cannot register existing type `GstPlaybinSelectorPad'" warnings
and subsequent errors when creating multiple players at the same time.
Conflicts:
gst/playback/gststreamselector.c
GstId3Mux's sink pad is an always (static) pad. Thus releasing it
as if it were a request pad triggers:
(sound-juicer:11826): GStreamer-CRITICAL **:
gst_element_release_request_pad: assertion `GST_PAD_PAD_TEMPLATE (pad)
== NULL || GST_PAD_TEMPLATE_PRESENCE (GST_PAD_PAD_TEMPLATE (pad)) ==
GST_PAD_REQUEST' failed
https://bugzilla.gnome.org/show_bug.cgi?id=685110
Need to store the old running time and frame numbers when renegotiating and
start from 0 again when new caps are set, preventing framerate changes
from causing timestamping issues.
For example, if a stream pushed 10 buffers on framerate=2/1, its
running time will be 5s. If a new framerate of 1/1 is set, it would
make the running time go to 10s as it would count those 10 buffers
as being sent on this new framerate.
Fixes camerabin unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=682973
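Sketched out (names are assumptions, not the actual code), the timestamp for the next frame becomes the running time accumulated under the previous caps plus the frames counted since the last renegotiation, scaled by the current framerate:

  #include <gst/gst.h>

  static GstClockTime
  next_timestamp (GstClockTime base_running_time, guint64 frames_since_caps,
      gint fps_n, gint fps_d)
  {
    return base_running_time +
        gst_util_uint64_scale (frames_since_caps, GST_SECOND * fps_d, fps_n);
  }

With the example above, the 10 buffers at 2/1 accumulate a base of 5s, and the first buffer under 1/1 continues from 5s instead of jumping to 10s.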
../../../gst-plugins-base/gst/audioresample/gstaudioresample.c: In function 'gst_audio_resample_dump_drain':
../../../gst-plugins-base/gst/audioresample/gstaudioresample.c:729:9: warning: variable 'in_len' set but not used [-Wunused-but-set-variable]
streams with non-TIME segments will not have timestamps ...
... and therefore will never unblock the other streams.
Fixes a blocking issue when using playbin's suburi feature.
People expect audiorate to fix things up and not make things worse
by default, so let's default to a similar tolerance as audiosinks
do. Should help with transcoding and the like, though one might
possibly still want higher values then.
We can't just make a vfunc that takes a union of int
and pointer as argument, and then set up subclass-specific
action signals and signals that take int (in multifdsink's
case) or a GSocket * (in multisocketsink's case), and then
expect everything to Just Work. This blows up spectacularly
on PPC G4 for some reason.
Fixes the multifdsink unit test on PPC, and fixes aborts in the
multisocketsink unit test (which now hangs in gst_pad_push - progress).
* Update outgoing segment.base with accumulated time, ensuring all
streams are synchronized.
* Only consider streams as "new" if they have a STREAM_START event
with a different seqnum.
* Use GstStream segment.base instead of separate variable to store
the past running time.
* Disable passthrough
* Switch to glib 2.32 GMutex/GCond
* Avoid getting pad parent the expensive way
* Minor other fixes
Make sure to send a CAPS event downstream when we get our
first input caps. This fixes not-negotiated errors and
adder use with downstream elements other than fakesink.
Even gst-launch-1.0 audiotestsrc ! adder ! pulsesink works now.
Also, flag the other sink pads as FIXED_CAPS when we receive
the first CAPS event on one of the sink pads (in addition to
setting those caps on the sink pads), so that a caps query
will just return the fixed caps from now on.
There's still a race between other upstreams checking if
caps are accepted and sending a first buffer with possibly
different caps than the first caps we receive on some other
pad, but such is life.
Also need to take into account optional fields better/properly.
https://bugzilla.gnome.org/show_bug.cgi?id=679545
Fix invalid memory access caused by broken pointer arithmetic.
If we have a uint16_t *tmpbuf and add n * dest->stride to it, we
skip twice as much as we intended to because dest->stride is in
bytes and not in pixels. This made us write beyond the end of
our allocated temp buffer, and made the unit test crash.
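In miniature, the bug class looks like this (hypothetical names):

  #include <stdint.h>

  /* stride is in bytes, so adding it directly to a uint16_t pointer advances
   * twice as far as intended; do the arithmetic on a byte pointer instead. */
  static uint16_t *
  get_line (uint16_t * tmpbuf, int n, int stride_bytes)
  {
    /* wrong: return tmpbuf + n * stride_bytes;   (skips 2 * stride_bytes) */
    return (uint16_t *) ((uint8_t *) tmpbuf + n * stride_bytes);
  }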
Make function pointers NULL when nothing needs to be done.
Pass target pixels to dither and matrix functions so that we can later make
them operate on the target buffer memory directly.
This allows the following use-cases to expose the group and pads
before an ALLOCATION query comes through:
* Single stream use-cases
* Multi stream use-cases where all streams sent the CAPS event before
the first ALLOCATION query
Some cases will still make the initial ALLOCATION query fail though,
which isn't optimal, but not fatal (it will recover when pads are
exposed, a RECONFIGURE event is sent upstream and elements can
re-send an ALLOCATION query which will reach downstream elements).
https://bugzilla.gnome.org/show_bug.cgi?id=680262
A caps event is also used to establish that a stream has prerolled.
Without this, we end up allowing negotiation queries to fail, resulting
in decoders (and other elements) not being configured right from the
start with the most optimal settings.
videoconvert.c: In function 'videoconvert_convert_new':
videoconvert.c:287:11: error: 'Kr' may be used uninitialized in this function
videoconvert.c:287:15: error: 'Kb' may be used uninitialized in this function