We now take the maximum of 2*jitter and 0.5*packet_spacing for the extra
delay. If jitter is very low, this should prevent unnecessary retransmission
requests to some degree.
https://bugzilla.gnome.org/show_bug.cgi?id=748041
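A minimal sketch of that computation, assuming illustrative names (jitter and packet_spacing are not necessarily the actual rtpjitterbuffer fields):

#include <gst/gst.h>

/* Sketch only: names are illustrative. Values are GstClockTime
 * nanoseconds. Take whichever is larger: twice the measured jitter
 * or half the packet spacing, so that with very low jitter the
 * spacing term dominates and retransmission requests are not sent
 * prematurely. */
static GstClockTime
calculate_extra_delay (GstClockTime jitter, GstClockTime packet_spacing)
{
  return MAX (2 * jitter, packet_spacing / 2);
}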
When we are in passthrough mode, the transform function doesn't run, so if
the passthrough check lives in that function it will never be deactivated.
Fix this by checking directly whenever a gain is changed.
Also set passthrough to TRUE at init: the gains default to 0, so we can stay
in passthrough until any gain property is changed.
https://bugzilla.gnome.org/show_bug.cgi?id=748068
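A hypothetical sketch of that pattern, assuming a made-up element MyFilter
with a gains array (none of these names are the actual element's symbols):

#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

#define NUM_BANDS 3

typedef struct {
  GstBaseTransform parent;
  gdouble gains[NUM_BANDS];
} MyFilter;

/* Re-evaluate passthrough whenever a gain property changes, instead
 * of inside the transform function (which never runs while the
 * element is already in passthrough). Called from set_property and
 * from init, where all gains are 0 and passthrough becomes TRUE. */
static void
my_filter_update_passthrough (MyFilter * self)
{
  gboolean passthrough = TRUE;
  gint i;

  for (i = 0; i < NUM_BANDS; i++) {
    if (self->gains[i] != 0.0) {
      passthrough = FALSE;
      break;
    }
  }
  gst_base_transform_set_passthrough (GST_BASE_TRANSFORM (self),
      passthrough);
}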
Prevents an extra unref of a GstBuffer when passing a non-icy stream through
icydemux with metadata-interval set to 0.
Reproducible with:
gst-launch-1.0 filesrc location=~/testsong.mp3 ! \
'application/x-icy,metadata-interval=(int)0' ! icydemux ! decodebin ! wavenc ! \
filesink location=~/testsong.wav
https://bugzilla.gnome.org/show_bug.cgi?id=748024
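The ownership rule at play here: gst_pad_push() consumes the caller's
reference to the buffer, so unreffing it again after pushing drops one
reference too many. A generic illustration (not the actual icydemux code):

#include <gst/gst.h>

static GstFlowReturn
forward_buffer (GstPad * srcpad, GstBuffer * buf)
{
  /* gst_pad_push() takes ownership of buf */
  GstFlowReturn ret = gst_pad_push (srcpad, buf);

  /* WRONG: gst_buffer_unref (buf); -- that would be the extra unref */
  return ret;
}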
This happens because _release_pad tries to release it from ctx->sinkpad,
which is multiqueue's sink pad, and currently fails because the probe is not
installed there.
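For context, a probe id is only meaningful on the pad it was installed on,
so add and remove must target the same pad. A generic sketch (names
assumed, not the actual code):

#include <gst/gst.h>

static gulong
install_probe (GstPad * pad, GstPadProbeCallback cb, gpointer user_data)
{
  return gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
      cb, user_data, NULL);
}

static void
release_probe (GstPad * pad, gulong probe_id)
{
  /* 'pad' must be the same pad that was passed to gst_pad_add_probe() */
  gst_pad_remove_probe (pad, probe_id);
}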
This also happens at the very beginning, when we receive the first packet;
a warning would be very confusing there. In all places where we should warn
about this, a warning has already been printed earlier.
Right above, we consider lost_packets packets, each of them having a
duration, as lost and trigger their timers immediately. Below, we use
expected_dts to schedule retransmission or lost timers for the packets
that come after expected_dts.
As we just triggered lost_packets packets as lost, there's no point in
scheduling new timers for them; we can simply skip over all the lost packets.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
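A minimal sketch of the skip, with assumed variable names (expected,
expected_dts, lost_packets and duration are illustrative, not the actual
rtpjitterbuffer code):

#include <gst/gst.h>

/* The lost_packets packets were already timed out above, so advance
 * straight past them instead of scheduling timers that would only
 * fire for data we already declared lost. */
static void
skip_lost (guint16 * expected, GstClockTime * expected_dts,
    guint lost_packets, GstClockTime duration)
{
  *expected += lost_packets;          /* 16-bit seqnum wraps naturally */
  *expected_dts += lost_packets * duration;
}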
Resetting the jitterbuffer drops all pending packets and other state, and
causes a discontinuity in the packets received by the depayloaders. They
should now also flush anything they have pending, as the new data will start
at a different position.
https://bugzilla.gnome.org/show_bug.cgi?id=739868
When doing a key unit seek, qtdemux calls gst_qtdemux_adjust_seek
to get the proper offset. This offset is then set on
segment.position and segment.time in gst_qtdemux_perform_seek, but
segment.start is not updated.
When the application afterwards sends a segment query, qtdemux sets the
start and stop values on the query using gst_segment_to_stream_time. Due
to the wrong value in segment.start, the reported stop position is smaller
than it should be.
https://bugzilla.gnome.org/show_bug.cgi?id=746822
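A hypothetical sketch of the fix: after snapping the requested position to
a keyframe offset, keep start, position and time consistent so a later
segment query converts correctly.

#include <gst/gst.h>

static void
apply_keyframe_seek (GstSegment * segment, guint64 keyframe_offset)
{
  /* All three fields must agree, otherwise
   * gst_segment_to_stream_time() reports a too-small stop position */
  segment->start = keyframe_offset;
  segment->position = keyframe_offset;
  segment->time = keyframe_offset;
}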
We always write the CTTS atom in qtmux. Ideally we only want to do that
for streams that need DTS; this should be indicated in the track
information rather than decided based on each buffer.
As qt uses durations, the absolute values don't matter; only the
difference between consecutive buffers is important. Also, collectpads
already replaces PTS/DTS with their running times.
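A worked illustration (made-up shift s, not actual code) of why a constant
running-time offset cancels out:

/* stts entry  = DTS[n+1] - DTS[n] : (d1 + s) - (d0 + s) = d1 - d0
 * ctts offset = PTS[n]   - DTS[n] : (p  + s) - (d  + s) = p  - d
 * Both atoms store differences, so the shift s introduced by the
 * running-time conversion disappears. */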
Instead of checking various state variables around the muxer,
track the current muxing mode in a single 'mux_mode' enum.
Add some implementation notes about the different mux modes.
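A hypothetical sketch of such an enum; the names are illustrative, not the
actual qtmux symbols:

typedef enum
{
  MUX_MODE_MOOV_AT_END,   /* write mdat first, moov when finalizing */
  MUX_MODE_FAST_START,    /* buffer media, then write moov before mdat */
  MUX_MODE_FRAGMENTED     /* streamable moof/mdat fragments */
} MuxMode;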
gst_segment_do_seek() does that for us already, and doing it twice
will break non-flushing seeks in interesting ways. Leftover from 1.0
porting.
Also copy over segment offset and applied_rate, just in case.
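For reference, a sketch of relying on gst_segment_do_seek() to update the
segment (the wrapper name is assumed):

#include <gst/gst.h>

static gboolean
handle_seek (GstSegment * segment, gdouble rate, GstFormat format,
    GstSeekFlags flags, GstSeekType start_type, guint64 start,
    GstSeekType stop_type, guint64 stop)
{
  gboolean update;

  /* gst_segment_do_seek() already fills in start/stop/position/time,
   * so nothing needs to be applied to the segment a second time */
  return gst_segment_do_seek (segment, rate, format, flags,
      start_type, start, stop_type, stop, &update);
}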