Previously the sequence number kept track of by GstRTPBasePayload would
only be set when going from READY to PAUSED state. This meant that a
downstream element that tried to configure the basepayloader by setting
seqnum-offset, e.g. in its sinkpad's caps template, could not do so
reliably. The reason was that the caps carrying the desired
seqnum-offset value did not reach the basepayloader until caps
negotiation took place, significantly later than the transition from
READY to PAUSED.
The result after this patch is that the default value of the
seqnum-offset property, or any value set later on this property, still
takes effect when going from READY to PAUSED, as before. In addition,
an arriving caps event now also updates the basepayloader's configured
sequence number as soon as the event arrives.
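As an illustration only (not code from the patch; the helper name and the
value 1234 are made up), caps carrying such an offset could be built like
this:

#include <gst/gst.h>

/* Hedged sketch: application/x-rtp caps that carry a fixed seqnum-offset,
 * as a downstream element might expose on its sinkpad. With this patch the
 * basepayloader applies such an offset when the caps arrive, not only at
 * READY to PAUSED. */
static GstCaps *
make_rtp_caps_with_seqnum_offset (void)
{
  return gst_caps_new_simple ("application/x-rtp",
      "seqnum-offset", G_TYPE_UINT, 1234, NULL);
}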
The payload type field in an RTP packet header is 7 bits wide, hence the
boundary values ought to be 0x00 and 0x7f, not the previously stated
values 0x00 and 0x80.
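For reference, and not part of the change itself: the payload type sits in
the low 7 bits of the second RTP header octet (the top bit is the marker),
so a sketch of reading and validating it looks like this (function names
are made up):

#include <glib.h>

/* Hedged sketch: the RTP payload type is the low 7 bits of the second
 * header octet, so valid values are 0x00..0x7f. */
static guint
pt_from_rtp_header (const guint8 * header)
{
  return header[1] & 0x7f;
}

static gboolean
pt_is_valid (guint pt)
{
  return pt <= 0x7f;
}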
If we are using an adaptive stream demuxer, which outputs a non-container
stream, we put another multiqueue after the *parser* that follows the
adaptive stream demuxer. We do not want to add another instance of the
same parser right after this multiqueue.
Otherwise we would end up emitting buffering messages not just from the
last multiqueue but also from previous multiqueues, confusing the
application with different percentages during pre-rolling.
For an adaptive streaming demuxer we insert a multiqueue after the
demuxer. This multiqueue will get one fragment per buffer.
Now for the case where we have a container stream inside these
buffers, another demuxer will be plugged and after this second
demuxer there will be a second multiqueue. This second multiqueue
will get smaller buffers and will be the one emitting buffering
messages.
If we don't have a container stream inside the fragment buffers, we'll
insert a multiqueue right after the next element following the adaptive
streaming demuxer. This is going to be a parser or decoder, which will
output smaller buffers.
With adaptive streams the data is downloaded inside the demuxer itself,
so we want to use the multiqueue's buffering messages to control the
pipeline flow and avoid losing sync when download rates are low.
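For context, an application-side sketch (not code from this patch): the
usual way an application reacts to those buffering messages is to pause
below 100% and resume at 100%, roughly like this:

#include <gst/gst.h>

/* Hedged sketch of standard buffering handling in an application;
 * "pipeline" is assumed to be the application's pipeline passed as
 * user_data to gst_bus_add_watch(). */
static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  GstElement *pipeline = user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_BUFFERING) {
    gint percent = 0;

    gst_message_parse_buffering (msg, &percent);
    gst_element_set_state (pipeline,
        percent < 100 ? GST_STATE_PAUSED : GST_STATE_PLAYING);
  }
  return TRUE;
}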
https://bugzilla.gnome.org/show_bug.cgi?id=707636
When seeking back to the original state after duration seeks, let
upstream know that we want the whole file, including the last byte
that wasn't requested by the duration seeks.
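For illustration only (an application-level sketch, not the
demuxer-internal change itself): a byte seek whose stop is -1 asks
upstream for everything up to and including the last byte:

#include <gst/gst.h>

/* Hedged sketch: re-request the complete file in bytes. A stop of -1
 * means "no stop", i.e. up to and including the last byte, unlike the
 * bounded ranges used while probing the duration. */
static gboolean
seek_whole_file (GstElement * element)
{
  return gst_element_send_event (element,
      gst_event_new_seek (1.0, GST_FORMAT_BYTES, GST_SEEK_FLAG_FLUSH,
          GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_SET, -1));
}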
https://bugzilla.gnome.org/show_bug.cgi?id=724633
Two new functions have been added,
gst_rtsp_connection_set_tls_database() and
gst_rtsp_connection_get_tls_database(). The certificate database will be
used when a certificate can't be verified with the default database.
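A possible use, sketched here with a hypothetical CA file path and helper
name, is loading extra trust anchors with GIO and handing them to the
connection:

#include <gio/gio.h>
#include <gst/rtsp/gstrtspconnection.h>

/* Hedged sketch: "conn" is assumed to be an existing GstRTSPConnection,
 * "ca_file" a PEM file with extra trust anchors. */
static gboolean
set_extra_ca (GstRTSPConnection * conn, const gchar * ca_file)
{
  GError *err = NULL;
  GTlsDatabase *db = g_tls_file_database_new (ca_file, &err);

  if (db == NULL) {
    g_printerr ("failed to load CA file: %s\n", err->message);
    g_clear_error (&err);
    return FALSE;
  }

  gst_rtsp_connection_set_tls_database (conn, db);
  g_object_unref (db);
  return TRUE;
}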
https://bugzilla.gnome.org/show_bug.cgi?id=724393
This was a regression introduced by f52fd7a68, where we started using
the stride to encode the dimensions in tiles. This patch simply updates
the offset and size calculations as described in the documentation,
part-mediatype-video-raw.txt.
Otherwise there's an interesting race condition when we destroy
the inputselector (actually it will be destroyed later when its state
change message gets destroyed) and afterwards release its sinkpad.
This is the code path when the last channel is removed from the
input selector.
This sometimes gave the following warning, for chained oggs or whenever
else we change decode groups:
GStreamer-CRITICAL **: Padname '':sink_0 does not belong to element inputselector0 when removing
The MONO and NONE positions are the same, for example, but in general
there isn't much to do here for such a conversion.
Fixes a problem in audioconvert, which would end up using a mixmatrix
when converting between different mono formats because it thinks MONO
positioning is different from unpositioned channels, which is not true
in this special case. The mixmatrix would end up being all 0.0, so
audioconvert would convert to silence samples.
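As an illustration (format and rate values are arbitrary), the two mono
variants involved differ only in the channel-mask field:

#include <gst/gst.h>

/* Hedged sketch: mono with the default MONO position versus mono with an
 * empty channel-mask (unpositioned, NONE). After the fix, converting
 * between the two no longer goes through a mixmatrix. */
static void
make_mono_caps (GstCaps ** positioned, GstCaps ** unpositioned)
{
  *positioned = gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, "S16LE",
      "rate", G_TYPE_INT, 44100,
      "channels", G_TYPE_INT, 1, NULL);

  *unpositioned = gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, "F32LE",
      "rate", G_TYPE_INT, 44100,
      "channels", G_TYPE_INT, 1,
      "channel-mask", GST_TYPE_BITMASK, (guint64) 0, NULL);
}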
https://bugzilla.gnome.org/show_bug.cgi?id=724509
If the text pads do not go away we just set the overlay to silent, which
allows us to immediately re-enable subtitles again later. However, before
this change we also released the streamsynchronizer text pads, which
deadlocked because there was still dataflow going on. Only do this if we
remove the complete chain.
https://bugzilla.gnome.org/show_bug.cgi?id=683504
Storing a 64-bit integer in a 32-bit integer and then checking for the
error cases might not be ideal.
error: comparison of constant -9223372036854775808 with
expression of type 'guint' (aka 'unsigned int') is always true
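A minimal sketch of the pattern behind the warning (names are made up):

#include <glib.h>

/* Hedged sketch: truncating a 64-bit value into a guint makes the later
 * comparison against the 64-bit error constant meaningless; the compiler
 * points out that it is always true. Keeping the value 64-bit avoids it. */
static void
example (gint64 value_from_api)
{
  guint truncated = value_from_api;   /* upper 32 bits are lost */
  gint64 kept = value_from_api;       /* keeps the full width */

  if (truncated != G_MININT64)        /* always true for a guint */
    g_print ("error case silently missed\n");

  if (kept != G_MININT64)             /* meaningful check */
    g_print ("ok\n");
}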
Video time uses the 'segment' and text time should use the
'text_segment'.
If different segments are used for video and text it would lead to
out-of-sync video and subtitles.