When downsampling, the output buffer can be filled before all the input
samples are consumed. This is correct: when downsampling, several input
samples are needed for each output sample, so when only a small number of
input samples is available, the number of output samples produced can be 0.
The resampler, however, was discarding those extra input samples instead of
clocking them into its filter history for the next iteration. This patch
fixes that by removing the check that the output buffer is full. The code
now always loops until all input samples are consumed, and relies on the
calling code to have provided a suitably sized location for the output.
Note that there are already other checks in place in the calling code to
ensure that this is the case.
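A minimal sketch of the intended control flow, with placeholder type and
helper names (Resampler and resampler_process_chunk are illustrative, not
the actual resampler API):

  #include <glib.h>

  typedef struct _Resampler Resampler;   /* placeholder type */

  /* hypothetical helper: consumes some input into the filter history
   * and writes whatever output it can (possibly 0 samples when
   * downsampling); updates *in_len / *out_len to the amounts actually
   * consumed / produced */
  void resampler_process_chunk (Resampler * r,
      const gfloat * in, guint * in_len, gfloat * out, guint * out_len);

  static void
  process_input (Resampler * r, const gfloat * in, guint in_len,
      gfloat * out, guint out_len)
  {
    guint in_pos = 0, out_pos = 0;

    /* loop until *all* input is consumed; the old code also stopped as
     * soon as the output buffer was full, discarding leftover input */
    while (in_pos < in_len) {
      guint in_chunk = in_len - in_pos;
      guint out_chunk = out_len - out_pos;

      resampler_process_chunk (r, in + in_pos, &in_chunk,
          out + out_pos, &out_chunk);

      in_pos += in_chunk;
      out_pos += out_chunk;
    }
  }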
https://bugzilla.gnome.org/show_bug.cgi?id=732908
The source pad might be flushing while negotiating, resulting in
set_caps or the ALLOCATION query failing. In this case set the
reconfigure flag on the source pad so that negotiation is retried on the
next buffer.
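A minimal sketch of the idea, assuming a negotiate() helper that sets caps
on the source pad (the helper name is illustrative; the pad calls are
regular GstPad API):

  #include <gst/gst.h>

  static gboolean
  negotiate (GstPad * srcpad, GstCaps * caps)
  {
    if (!gst_pad_set_caps (srcpad, caps)) {
      if (GST_PAD_IS_FLUSHING (srcpad)) {
        /* not a real negotiation failure: remember to retry on the
         * next buffer */
        gst_pad_mark_reconfigure (srcpad);
      }
      return FALSE;
    }
    return TRUE;
  }

The chain function can then call gst_pad_check_reconfigure() on the next
buffer and negotiate again.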
If we had a plugin but an error occurred, we only include the error message
it produced; otherwise we include the codec description as generated from
the caps.
This makes it possible to detect which exact codec was missing instead of
getting a generic "no suitable decoders found" error message.
Adds a new test to textoverlay to make sure it can properly handle
elements that have ANY caps but fail to add the overlay meta in
the allocation query.
This test verifies that textoverlay won't use the caps feature even though
the overlay meta is accepted when querying the downstream caps, because it
also needs downstream to confirm by putting the meta in the allocation
query.
https://bugzilla.gnome.org/show_bug.cgi?id=735800
When downstream claims to accept the overlay meta but fails to provide it
in the allocation query, properly fall back to setting new caps without the
overlay meta, as it is not going to be used.
Only do this if the original caps don't already have the overlay feature;
otherwise there isn't much that can be done.
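A sketch of the fallback decision, assuming the negotiated and original
caps plus the answered ALLOCATION query are at hand (simplified, not the
element's actual code):

  #include <gst/gst.h>
  #include <gst/video/video-overlay-composition.h>

  static GstCaps *
  select_output_caps (GstCaps * negotiated_caps, GstCaps * original_caps,
      GstQuery * allocation_query)
  {
    gboolean meta_in_query = gst_query_find_allocation_meta (
        allocation_query, GST_VIDEO_OVERLAY_COMPOSITION_META_API_TYPE,
        NULL);
    gboolean overlay_in_original = gst_caps_features_contains (
        gst_caps_get_features (original_caps, 0),
        GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION);

    if (!meta_in_query && !overlay_in_original) {
      /* downstream won't use the meta: fall back to caps without the
       * overlay composition feature */
      GstCaps *fallback = gst_caps_copy (negotiated_caps);

      gst_caps_set_features (fallback, 0,
          gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY,
              NULL));
      return fallback;
    }

    return gst_caps_ref (negotiated_caps);
  }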
https://bugzilla.gnome.org/show_bug.cgi?id=735800
Setting segment.base in the segment sent from gst_ogg_demux_handle_page()
is enough to ensure that chained oggs are played correctly (see bgo#706569).
Tweaking the base in gst_ogg_pad_submit_packet() as well results in delays
when playing a file with start != -1.
https://bugzilla.gnome.org/show_bug.cgi?id=735808
It's not done in any other code calling negotiate and will cause deadlocks,
as negotiation sends events and queries in the pipeline.
Specifically this pipeline was deadlocking:
gst-launch-1.0 videotestsrc ! textoverlay ! textoverlay ! fakesink
Base time should be accumulated so that non-flushing seeks have the
expected base. Not accumulating it results in segments appearing as "too
late", and so they are not played by the sink.
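A sketch of the accumulation, using the standard GstSegment helper (where
exactly this happens in the demuxer is simplified away):

  #include <gst/gst.h>

  /* before reusing the segment for a non-flushing seek, fold the
   * running time already played into the base so the next segment is
   * not considered "too late" by the sink */
  static void
  accumulate_base (GstSegment * segment)
  {
    segment->base = gst_segment_to_running_time (segment,
        GST_FORMAT_TIME, segment->position);
  }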
https://bugzilla.gnome.org/show_bug.cgi?id=735509
Fixes a crash when controlsrc, readsrc or writesrc are modified from
gst_rtsp_source_dispatch_read/write and gst_rtsp_watch_reset at the
same time.
https://bugzilla.gnome.org/show_bug.cgi?id=735569
Otherwise we might change some caps features from ANY to the specific
value from the filter, and then fail to filter those out in case the sink
doesn't support them.
https://bugzilla.gnome.org/show_bug.cgi?id=734822
This avoids a race where we would get a new tag while we are already
prerolled and analyzing results.
This is how it is supposed to be handled, as stated in the comment:
"If preroll is complete, drop these tags - the collected information is
possibly already being processed and adding more tags would be racy"
Reset last_timestamp_out when applying the output segment
change, to avoid decoder confusion over new timestamp timelines when
a seamless segment change happens.
Move some locks/unlocks to later when they're actually needed.
https://bugzilla.gnome.org/show_bug.cgi?id=734617
Gracefully handle switching groups in which all pads are dead ends.
This can happen when quickly switching programs on mpegts: as the output
is unaligned, the parsers may not have accumulated enough data to generate
any buffers, so the stream receives EOS before any data can be decoded.
To handle this scenario, the _expose function now also receives whether
there is a next group to be exposed, along with the list of endpads. If
there are no endpads and there is another group to expose, it switches to
this next group and then retries exposing the streams.
Also, the requirement to only switch from the chain that has the endpad
had to be modified to account for the case where the drainpad is NULL.
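A rough sketch of the retry logic with placeholder types and helpers (not
the actual decodebin code):

  #include <gst/gst.h>

  typedef struct _DecodeGroup DecodeGroup;       /* placeholder type */

  static GList *      collect_endpads (DecodeGroup * group);
  static DecodeGroup *next_group_to_expose (DecodeGroup * group);
  static gboolean     do_expose (DecodeGroup * group, GList * endpads);

  static gboolean
  expose_group (DecodeGroup * group)
  {
    GList *endpads = collect_endpads (group);

    if (endpads == NULL) {
      DecodeGroup *next = next_group_to_expose (group);

      /* every pad is a dead end: if another group is pending, switch
       * to it and retry exposing from there */
      if (next != NULL)
        return expose_group (next);
      return FALSE;
    }

    return do_expose (group, endpads);
  }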
https://bugzilla.gnome.org/show_bug.cgi?id=733169
As was done for the base video decoder in commit 695675, don't
flush out the decoder on a new SEGMENT event. Segment events
may be a new segment, but are also often segment updates for
the current segment where the old data should be kept. For new
segments, a STREAM_START event will already trigger a drain, but
make sure to flush any remaining partial data then as well.
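Sketch of the resulting event handling (simplified; drain_decoder stands in
for the base class' drain path):

  #include <gst/gst.h>

  static void drain_decoder (GstElement * dec);   /* hypothetical helper */

  static gboolean
  handle_sink_event (GstElement * dec, GstEvent * event)
  {
    switch (GST_EVENT_TYPE (event)) {
      case GST_EVENT_SEGMENT:
        /* often just an update of the current segment: keep buffered
         * data, don't flush the decoder here */
        break;
      case GST_EVENT_STREAM_START:
        /* a genuinely new stream: push out any remaining partial data */
        drain_decoder (dec);
        break;
      default:
        break;
    }
    /* ... forward the event as usual ... */
    return TRUE;
  }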
https://bugzilla.gnome.org/show_bug.cgi?id=734666
Make textoverlay negotiate caps more correctly:
1) Check what caps we received on the video sink pad
2) If they already have the overlay meta feature -> use them directly
3) If they don't, textoverlay tries adding the overlay meta feature and
   using it; if downstream doesn't support it, just use what was received
   on the video sink pad
4) Check that the allocation query also supports the meta before really
   using it (a rough sketch of this flow follows below)
Before, it wasn't really doing renegotiation of any kind, just re-checking
whether it should use the overlay meta or not.
Also had to update the caps in the test, as memory:SystemMemory seems to be
required when using a caps feature; otherwise intersection/subset checks
will fail.
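A rough sketch of steps 1)-4), using the overlay composition caps feature
and pad queries (simplified; not the element's actual negotiate function):

  #include <gst/gst.h>
  #include <gst/video/video-overlay-composition.h>

  static GstCaps *
  pick_output_caps (GstPad * srcpad, GstCaps * sinkcaps,
      gboolean * try_overlay_meta)
  {
    GstCaps *caps;

    *try_overlay_meta = FALSE;

    /* 1) + 2): caps from the video sink pad already carry the overlay
     * composition feature -> use them directly */
    if (gst_caps_features_contains (gst_caps_get_features (sinkcaps, 0),
            GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION)) {
      *try_overlay_meta = TRUE;
      return gst_caps_ref (sinkcaps);
    }

    /* 3): try adding the feature and see whether downstream takes it */
    caps = gst_caps_copy (sinkcaps);
    gst_caps_set_features (caps, 0,
        gst_caps_features_new (GST_CAPS_FEATURE_MEMORY_SYSTEM_MEMORY,
            GST_CAPS_FEATURE_META_GST_VIDEO_OVERLAY_COMPOSITION, NULL));

    if (gst_pad_peer_query_accept_caps (srcpad, caps)) {
      /* 4): only actually attach the meta once the ALLOCATION query
       * confirms it, e.g. via gst_query_find_allocation_meta () */
      *try_overlay_meta = TRUE;
      return caps;
    }

    /* downstream doesn't support it: use what we received */
    gst_caps_unref (caps);
    return gst_caps_ref (sinkcaps);
  }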
https://bugzilla.gnome.org/show_bug.cgi?id=733916
Set up a fakesink with a pad probe that replaces the missing encoder, to
detect whether encoding was really required and only error out in that
case. Otherwise, just let the passthrough branch work.
This delays posting the error from the set_state function to when buffers
are really flowing. The unit test is updated accordingly.
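A sketch of the idea: a fakesink stands in for the missing encoder, and a
buffer probe posts the error only if data actually reaches it (error
domain/code and helper names are illustrative):

  #include <gst/gst.h>

  static GstPadProbeReturn
  missing_encoder_probe (GstPad * pad, GstPadProbeInfo * info,
      gpointer user_data)
  {
    GstElement *fakesink = GST_ELEMENT (user_data);

    /* a buffer arrived, so encoding was really needed: error out now */
    GST_ELEMENT_ERROR (fakesink, CORE, MISSING_PLUGIN,
        ("No suitable encoder found"), (NULL));

    return GST_PAD_PROBE_DROP;
  }

  static GstElement *
  make_missing_encoder_placeholder (void)
  {
    GstElement *fakesink = gst_element_factory_make ("fakesink", NULL);
    GstPad *sinkpad = gst_element_get_static_pad (fakesink, "sink");

    gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER,
        missing_encoder_probe, fakesink, NULL);
    gst_object_unref (sinkpad);

    return fakesink;
  }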
https://bugzilla.gnome.org/show_bug.cgi?id=650652
This fixes the reverse playback scenario when upstream is not fully
parsing the stream and does not send every keyframe chain separately
with the DISCONT flag on the keyframe.
To explain this, let's suppose we have this stream:
0 1 2 3 4 5 6 7 8
K     K     K
In most circumstances, the upstream parser will chain in the
decoder the buffers in the following order:
6 7 8 3 4 5 0 1 2
D     D     D
In this case, GstVideoDecoder will flush the parse queue every time
it receives discont (D) and we will eventually get in the output queue:
(flush here) 8 7 6 (flush here) 5 4 3 (flush here) 2 1 0
In case the upstream parser doesn't do this work, though,
GstVideoDecoder will receive the whole stream at once and will flush
the parse queue afterwards:
0 1 2 3 4 5 6 7 8
D
During the flush, it will look backwards for keyframes and will
decode in this order:
6 7 8 3 4 5 0 1 2
This is the same order that it would receive from upstream if upstream
were parsing and looking for the keyframes, except that now there is no
flushing of the output queue in between keyframes, which will result in
the output queue looking like this:
2 1 0 5 4 3 8 7 6
This will obviously confuse downstream and the video will play incorrectly.
This patch forces the decoder to flush the output queue every time it
picks a new keyframe to decode, so it will end up decoding 6 7 8 and then
flushing before picking 3 for decoding. The output will thus get 8 7 6
before 5 4 3, and the video will play back correctly.
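A simplified sketch of that flush-per-keyframe pass over the gathered
buffers (placeholder helpers; the real GstVideoDecoder code is more
involved):

  #include <gst/gst.h>

  static void decode_gop_from (GstBuffer * keyframe);   /* placeholder */
  static void flush_output_queue (void);                /* placeholder */

  static void
  drain_reverse (GList * buffers_in_decode_order)
  {
    GList *walk;

    for (walk = buffers_in_decode_order; walk; walk = walk->next) {
      GstBuffer *buf = walk->data;

      /* only keyframes start a new decoding run */
      if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT))
        continue;

      /* decode this GOP ... */
      decode_gop_from (buf);
      /* ... and flush the output queue before picking the next
       * keyframe, so each GOP is pushed out reversed on its own */
      flush_output_queue ();
    }
  }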
https://bugzilla.gnome.org/show_bug.cgi?id=734441
Can't print a GstMemory via GST_PTR_FORMAT: it will crash inside GObject
while checking if it's a GObject, and we can't check generically whether
it's a derived GstMemory type, as boxed types don't allow derivation.
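A small sketch of the workaround: log the pointer and sizes directly
instead of handing the GstMemory to GST_PTR_FORMAT:

  #include <gst/gst.h>

  static void
  log_memory (GstMemory * mem)
  {
    gsize offset, maxsize, size;

    size = gst_memory_get_sizes (mem, &offset, &maxsize);
    /* don't use GST_PTR_FORMAT here, it can't describe a GstMemory */
    GST_DEBUG ("memory %p: size %" G_GSIZE_FORMAT ", offset %"
        G_GSIZE_FORMAT ", maxsize %" G_GSIZE_FORMAT,
        mem, size, offset, maxsize);
  }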