Call reset-sync on the rtpbin before we go to playing. This makes us require SR
packets for all streams again before we attempt to sync them. If we don't reset,
it might be that we combine SR packets from before and after the PAUSED/PLAYING
state change and end up with huge bogus offsets.
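For illustration only (not part of the change itself), emitting rtpbin's
reset-sync action signal before going back to PLAYING could look roughly
like this, assuming we hold references to the pipeline and the rtpbin:

    #include <gst/gst.h>

    /* Sketch: clear the SR data rtpbin accumulated so inter-stream sync
     * is recomputed from fresh SR packets after we resume. */
    static void
    resume_playback (GstElement * pipeline, GstElement * rtpbin)
    {
      g_signal_emit_by_name (rtpbin, "reset-sync");
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
    }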
Don't throw away the first RTCP packet if it arrives before the first
RTP packet but remember and use it to signal sync once we get the
RTP packet.
See https://bugzilla.gnome.org/show_bug.cgi?id=691400
When we go to paused, we first flush the connection and then send the pause
command. As a result of the flushing, the scheduled paused command can get
lost. Wait until the connection is completely flushed and the rtsp task is
waiting before issuing the paused or playing request.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=702705
This can only reliably work if demuxers have a
separate streaming thread per srcpad. This should be
done in a demuxer base class, which integrates parts
of multiqueue
https://bugzilla.gnome.org/show_bug.cgi?id=701856
bug #700505
Following a representation change that causes a resolution change,
the video decoder fails to decode correctly. Dashdemux detects the
representation change and pushes a new caps event and an
initialization segment (a new moov atom) to the downstream qtdemux,
but qtdemux doesn't handle this new moov yet; it only parses the
first one it receives.
This commit changes qtdemux to accept a new moov in a dash bitstream
switching scenario.
Lock the state of all our elements and manage their states ourselves.
Because we are working async, we can't rely on the state change function
to set the state at the right time or to return the right return value.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=702046
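For reference, locking an element's state so the bin no longer manages it,
and then setting the state explicitly, is a sketch along these lines (element
and target state are placeholders):

    #include <gst/gst.h>

    /* Sketch: keep the element out of the parent's state handling and
     * drive its state ourselves once the async work is done. */
    static void
    manage_state_ourselves (GstElement * element, GstState target)
    {
      gst_element_set_locked_state (element, TRUE);
      /* ... later, when we decide it is the right time ... */
      gst_element_set_state (element, target);
    }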
Use special variants for the case when we don't change the panorama (pan=0.0).
Simplify the processing functions by passing the panorama value directly instead
of the instance. Use orc for clearing buffers too.
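A rough sketch of the dispatch idea, with made-up names (this is not the
actual audiopanorama code): pick a cheap variant when pan is 0.0 and hand
the pan value to the worker directly:

    #include <string.h>

    typedef void (*ProcessFunc) (const float *in, float *out, int samples,
        float pan);

    /* Hypothetical variant for pan == 0.0: plain copy, no panning math. */
    static void
    process_none (const float *in, float *out, int samples, float pan)
    {
      (void) pan;
      memcpy (out, in, samples * sizeof (float));
    }

    /* Hypothetical general variant that applies the panorama value. */
    static void
    process_pan (const float *in, float *out, int samples, float pan)
    {
      int i;

      for (i = 0; i < samples; i++)
        out[i] = in[i] * (1.0f - pan);      /* placeholder math */
    }

    static ProcessFunc
    select_process_func (float pan)
    {
      return (pan == 0.0f) ? process_none : process_pan;
    }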
When the segment start is not 0, this created a situation where
the output_end_time is less than the output_start_time, and the duration
of the next buffer ended up underflowing.
https://bugzilla.gnome.org/show_bug.cgi?id=701385
This reverts commit 61666898cf.
This commit changed what the set_sps_pps() function does; now it doesn't
set caps anymore (and should have been renamed). The main problem is that
not all call sites were updated and thus leak the string.
It is not needed to do a state change from the _play() function on
ourselves. The state change function already did that and we don't want to
interfere with that (or use hacks to avoid interference).
Also send stream-start and segment event on the RTCP pad.
We don't need to send anything on the sync_src pad because we
already forwarded all incoming events.
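Pushing those sticky events on an RTCP source pad could look roughly like
this (pad and stream-id are placeholders, error handling omitted):

    #include <gst/gst.h>

    /* Sketch: send stream-start and segment before any RTCP data. */
    static void
    send_initial_events (GstPad * rtcp_srcpad)
    {
      GstSegment segment;

      gst_pad_push_event (rtcp_srcpad,
          gst_event_new_stream_start ("example-rtcp-stream-id"));

      gst_segment_init (&segment, GST_FORMAT_TIME);
      gst_pad_push_event (rtcp_srcpad, gst_event_new_segment (&segment));
    }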
The previous implementation had the formatting of SDP attributes happen
in each RTP payloader; now the constituent values are propagated as caps
fields instead. This allows applications to do SDP offer/answer based on
caps negotiation.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=700748
The previous implementation had the formatting of SDP attributes happen
in each RTP payloader; now the constituent values are propagated as caps
fields instead. This allows applications to do SDP offer/answer based on
caps negotiation.
Keep parsing a-framerate, x-framerate and x-dimensions in rtpjpegdepay
to be backwards compatible with previous payloaders.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=700748
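As an illustration of the depayloader side, reading such SDP-derived fields
from the caps might look like this (the exact parsing here is a sketch; only
the field names come from the change):

    #include <gst/gst.h>
    #include <stdio.h>

    /* Sketch: pull a-framerate and x-dimensions out of the sink caps. */
    static void
    parse_sdp_fields (GstCaps * caps, gdouble * framerate, gint * width,
        gint * height)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);
      const gchar *str;

      if ((str = gst_structure_get_string (s, "a-framerate")) != NULL)
        *framerate = g_ascii_strtod (str, NULL);

      if ((str = gst_structure_get_string (s, "x-dimensions")) != NULL)
        sscanf (str, "%d,%d", width, height);
    }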
There is no reason to send a flush-stop when receiving a seek event.
In the case of a flushing seek, we could eventually want to, but in
the code path where we check if the seek is "flushing", we have the
following comment that makes sense:
"we can't send FLUSH_STOP here since upstream could start pushing data
after we unlock mix->collect.
We set flush_stop_pending to TRUE instead and send FLUSH_STOP after
forwarding the seek upstream or from gst_videomixer_collected,
whichever happens first."
https://bugzilla.gnome.org/show_bug.cgi?id=684237
In case qtdemux is handling an MSS stream, do not mark the stream to wait
for EOS after a segment, even if it seems to be the last one according to
the current stream information.
MSS handling is different here because there is another demuxer driving
the pipeline.
The samplerate field in the STSD atom is not right for some ALAC files
(usually when audio is 96kHz/24bits), so the audio caps must be
extracted from the codec data.
https://bugzilla.gnome.org/show_bug.cgi?id=700382
I took the opportunity to simplify that code a bit. We now use
gst_buffer_make_writable() to make the buffer writable and map the same
buffer twice, with the first map being read/write and the second read only.
This gets rid of the critical:
GStreamer-CRITICAL **: gst_structure_set_name: assertion `IS_MUTABLE
https://bugzilla.gnome.org/show_bug.cgi?id=700044
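In sketch form, the approach described above:

    #include <gst/gst.h>

    /* Sketch: make the buffer writable, then keep a read/write map and a
     * read-only map of the same buffer while processing. */
    static GstBuffer *
    process_buffer (GstBuffer * buffer)
    {
      GstMapInfo wmap, rmap;

      buffer = gst_buffer_make_writable (buffer);

      gst_buffer_map (buffer, &wmap, GST_MAP_READWRITE);
      gst_buffer_map (buffer, &rmap, GST_MAP_READ);

      /* ... read via rmap.data while writing through wmap.data ... */

      gst_buffer_unmap (buffer, &rmap);
      gst_buffer_unmap (buffer, &wmap);

      return buffer;
    }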
There exists one case where we end up with the original caps in ret, in
which case we are not guaranteed to have writable caps. Simply ensure the
caps are writable before entering the loop.
https://bugzilla.gnome.org/show_bug.cgi?id=700044
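Sketch of the fix, tying in with the critical quoted above (the structure
name set in the loop is just a placeholder):

    #include <gst/gst.h>

    /* Sketch: make the caps writable once, then the structures inside can
     * safely be modified in the loop. */
    static GstCaps *
    fixup_caps (GstCaps * ret)
    {
      guint i;

      ret = gst_caps_make_writable (ret);

      for (i = 0; i < gst_caps_get_size (ret); i++) {
        GstStructure *s = gst_caps_get_structure (ret, i);

        gst_structure_set_name (s, "video/x-raw");   /* placeholder */
      }

      return ret;
    }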
This reverts commit 84ae670ab4.
Actually this is not how it is supposed to work. videomixer
creates a [0,-1] segment and then puts frames of the different
streams there based on their running times in their own segments.
For receiving video data via RTSP when the video is sent via multicast
there is no way to specify the udpsrc buffer-size.
On Windows the native network buffer is not large, and with video I-frames
being huge the buffer is too small and you get I-frame corruption that looks
terrible, and there is no (easy) way to set the udpsrc buffer-size.
https://bugs.freedesktop.org/show_bug.cgi?id=52264
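With a standalone udpsrc the kernel receive buffer can be enlarged directly;
the point of this change is to make something equivalent possible when the
udpsrc is created internally for RTSP multicast. A sketch with an arbitrary
512 KiB value:

    #include <gst/gst.h>

    /* Sketch: enlarge the kernel receive buffer so large multicast
     * I-frames are not dropped. */
    static GstElement *
    make_udpsrc (void)
    {
      GstElement *src = gst_element_factory_make ("udpsrc", NULL);

      g_object_set (src, "buffer-size", 512 * 1024, NULL);
      return src;
    }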
Whenever the demuxer has new caps on a stream, it should set the new_caps
variable to TRUE so that a new caps event is pushed before the next buffer.
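The idea in sketch form (the stream struct and field names are illustrative;
only new_caps comes from the change):

    #include <gst/gst.h>

    typedef struct
    {
      GstPad *pad;
      GstCaps *caps;
      gboolean new_caps;
    } Stream;

    /* Sketch: push a caps event before the next buffer when needed. */
    static GstFlowReturn
    push_stream_buffer (Stream * stream, GstBuffer * buf)
    {
      if (stream->new_caps) {
        gst_pad_push_event (stream->pad, gst_event_new_caps (stream->caps));
        stream->new_caps = FALSE;
      }
      return gst_pad_push (stream->pad, buf);
    }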
qtdemux takes its buffers from a GstAdapter. Those buffers are created
from the larger buffer that it obtained from upstream and they carry
the same flags, including DISCONT if it is set. In these cases, all
buffers that qtdemux is going to push would be marked as DISCONT.
This scenario can make parsers/decoders flush on every buffer, leading
to no decoding happening at all. This patch prevents this by unsetting
the flag when it shouldn't be set.
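Sketch of the flag fixup (the condition deciding whether a real discontinuity
happened is illustrative; buf is assumed writable):

    #include <gst/gst.h>

    /* Sketch: drop a DISCONT flag inherited from the upstream buffer
     * unless this buffer really follows a discontinuity. */
    static void
    fixup_discont (GstBuffer * buf, gboolean stream_has_discont)
    {
      if (!stream_has_discont &&
          GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DISCONT))
        GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DISCONT);
    }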
* Explicitly init variables for fragmented formats at init
* Do not use GstClockTime type if the variable isn't a timestamp
* Fix a style/readability issue at an if block
* Group 2 mss mode conditional blocks together to improve readability
Conflicts:
gst/isomp4/qtdemux.c
This can confuse downstream elements when they get a byte segment after
receiving the time segment that qtdemux naturally sends when it starts to
push buffers. This is especially the case with parsers that try to convert
the position from byte to time format and might miss the correct position
for playback to start.
Reset different variables on state changes to ready and when
handling a flush-stop. For handling flush stops we should check
if there is an upstream adaptive demuxer driving the pipeline as this
means that qtdemux will get a new moov atom. For 'standard' isomedia
streams this isn't true and qtdemux should keep the previous moov
information around.
Conflicts:
gst/isomp4/qtdemux.c
Whenever dashdemux switches bitrates it sends a new moov with the
new stream configuration. qtdemux should now handle this by splitting
the exposing and configuration of streams into separate functions. When
the stream is new it is configured and exposed, when it is a new bitrate
of an existing stream it is only reconfigured.
Conflicts:
gst/isomp4/qtdemux.c
smoothstreaming streams should be handled as a special kind of
fragmented isomedia. In MSS the fragments will not contain a
'moov' atom with the media descriptions; these have to be extracted
from the caps.
Additionally, there should be another demuxer upstream that is likely
going to be the one to answer/act on queries and events, so qtdemux has
to forward those upstream.
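Forwarding to the upstream demuxer boils down to going through the sink pad;
a sketch (no fallback handling shown):

    #include <gst/gst.h>

    /* Sketch: let the element upstream of qtdemux answer the query or
     * act on the (upstream) event. */
    static gboolean
    forward_query_upstream (GstPad * sinkpad, GstQuery * query)
    {
      return gst_pad_peer_query (sinkpad, query);
    }

    static gboolean
    forward_event_upstream (GstPad * sinkpad, GstEvent * event)
    {
      return gst_pad_push_event (sinkpad, event);
    }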