Previously, the system clock was provided via the rtpjitterbuffer if no
clock was provided directly by rtspsrc. This behaviour was removed by
f8013487c9 and results in rtspsrc no longer providing the system clock
via the rtpjitterbuffer.
As a result, if another element, such as an audio sink, provides a clock,
the pipeline will select that one (when going to PAUSED/PLAYING again
later). Audio clocks usually don't progress in PAUSED, and thus our live
source can't use the clock to produce data, the sink never prerolls and
everything gets stuck.
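As an aside, a hedged application-level workaround sketch (not the rtspsrc
fix itself) is to pin the pipeline to the system clock so that a
non-advancing audio clock can never be selected:

    /* Workaround sketch: force the system clock as the pipeline clock so an
     * audio clock that does not progress in PAUSED cannot be picked. */
    GstClock *sysclock = gst_system_clock_obtain ();
    gst_pipeline_use_clock (GST_PIPELINE (pipeline), sysclock);
    gst_object_unref (sysclock);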
... unless the muxer uses the same audio pad template name as
splitmuxsink. We can't request a pad called "audio_0" on a muxer that
wants pads to be "sink_%d".
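A hedged sketch of the kind of template lookup that avoids hard-coding the
pad name (only the "audio_%u" and "sink_%d" template names are assumed here):

    /* Sketch: request from whichever pad template the muxer actually exposes
     * instead of assuming splitmuxsink's own "audio_%u" naming. */
    GstElementClass *klass = GST_ELEMENT_GET_CLASS (muxer);
    GstPadTemplate *tmpl =
        gst_element_class_get_pad_template (klass, "audio_%u");

    if (tmpl == NULL)
      tmpl = gst_element_class_get_pad_template (klass, "sink_%d");
    if (tmpl != NULL)
      mux_pad = gst_element_request_pad (muxer, tmpl, NULL, NULL);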
In push mode we process as much as possible in the adapter. When we receive
a DISCONT buffer which we can't match to an actual sample (based on the existing
sample table) and there is still data remaining in the incoming adapter, one
of two cases is happening:
1) We are doing reverse playback, in which case we should flush out all pending
data
2) We have leftover data from the previous incoming buffer... which we can't do
anything about.
For the second case, make sure we flush out the remaining data so that we can start
parsing again from scratch.
https://bugzilla.gnome.org/show_bug.cgi?id=781319
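A hedged sketch of the flush for that second case (variable names are
illustrative, not the exact qtdemux fields):

    /* Sketch: a DISCONT buffer that matches no known sample, while not doing
     * reverse playback, means the leftover bytes are useless; drop them so
     * parsing restarts cleanly from the new data. */
    if (GST_BUFFER_IS_DISCONT (inbuf) && !matched_sample &&
        demux->segment.rate > 0.0 &&
        gst_adapter_available (demux->adapter) > 0)
      gst_adapter_clear (demux->adapter);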
Ideally they should have the same timescale as the video track, which we
can't guarantee here, as in theory the timecode configuration and the video
framerate could be different. However, we should set a correct timescale
based on the framerate given in the timecode configuration, and not just
use the framerate numerator.
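A hedged sketch of such a derivation (the config fields come from
GstVideoTimeCodeConfig; the rest is illustrative):

    /* Sketch: take both fps_n and fps_d from the timecode configuration, e.g.
     * 30000/1001 gives a timescale of 30000 with 1001 ticks per frame, rather
     * than treating the numerator alone as an integer framerate. */
    guint32 timescale = tc->config.fps_n;    /* ticks per second */
    guint32 frame_ticks = tc->config.fps_d;  /* ticks per frame  */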
Make sure offset and neededbytes are properly reset when all
streams are EOS in push-mode.
Avoids cases where some data might still be pushed by upstream (because
it hasn't seen the resulting GST_FLOW_EOS yet) and qtdemux gets
completely lost.
https://bugzilla.gnome.org/show_bug.cgi?id=781266
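A hedged sketch of that reset (the names and the 16-byte header size are
illustrative):

    /* Sketch: once every stream is EOS in push mode, reset the parsing state
     * so stray buffers from upstream cannot desynchronise the offset
     * tracking. */
    if (all_streams_eos) {
      demux->offset = 0;
      demux->neededbytes = 16;  /* wait for the next atom header */
      gst_adapter_clear (demux->adapter);
    }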
buf is the current pad->last_buf value. If it ever gets copied or unreffed,
we need to make sure to write the new pointer back to the last_buf
variable.
Fixes using wrong pointer values in the case of a decreasing DTS value.
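A hedged sketch of the pattern (the DTS clamping line is only illustrative):

    /* Sketch: gst_buffer_make_writable() may unref `buf` and hand back a
     * copy, so the new pointer must be written back to pad->last_buf. */
    buf = gst_buffer_make_writable (buf);
    GST_BUFFER_DTS (buf) = pad->last_dts;  /* e.g. clamp a decreasing DTS */
    pad->last_buf = buf;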
Before pushing a sample, check if there was a change in the current
stsd entry. This patch also assumes that the first stsd entry is
used as the default for the first sample. It might cause an unneeded
caps renegotiation when this isn't the case.
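A hedged sketch of that check (field names mirror the idea rather than the
exact qtdemux members):

    /* Sketch: before pushing a sample, switch caps if it references a
     * different stsd entry than the one currently active on the pad. */
    if (sample->stsd_entry_index != stream->cur_stsd_entry_index) {
      QtDemuxStreamStsdEntry *entry =
          &stream->stsd_entries[sample->stsd_entry_index];

      gst_pad_set_caps (stream->pad, entry->caps);
      stream->cur_stsd_entry_index = sample->stsd_entry_index;
    }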
stsd can have multiple format entries, parse them all.
This is required to play the DVB DASH profile, which uses multiple entries
to identify the different available bitrates/options of DASH streams.
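A hedged sketch of walking every entry (setup_stsd_entry is a hypothetical
helper; the offsets follow the ISO BMFF stsd layout):

    /* Sketch: the stsd atom stores its entry count at offset 12 and the
     * sample entries from offset 16 on; parse all of them, not just the
     * first. */
    static void
    parse_all_stsd_entries (QtDemuxStream * stream, const guint8 * stsd,
        guint size)
    {
      guint32 n_entries = GST_READ_UINT32_BE (stsd + 12);
      const guint8 *entry = stsd + 16;
      guint32 i;

      for (i = 0; i < n_entries && entry + 8 <= stsd + size; i++) {
        guint32 entry_size = GST_READ_UINT32_BE (entry);
        guint32 fourcc = GST_READ_UINT32_LE (entry + 4);

        if (entry_size < 8)
          break;
        setup_stsd_entry (stream, i, fourcc, entry, entry_size);
        entry += entry_size;
      }
    }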
The stream format-specific data is not stored into QtDemuxStreamStsdEntry
Instead of using the stsd as a base pointer, use the actual stsd
entry as the stsd can have multiple entries. This is rarely used
for file playback but is a possible profile within the DVB DASH specs.
This still doesn't support stsd with multiple entries but makes it
easier to do so.
This is needed for the V4L2_OUTPUT interface and is harmless for
V4L2_CAPTURE interfaces. This will fix timestamps in cases like:
v4l2src io-mode=dmabuf ! v4l2videoNenc output-io-mode=dmabuf-import ! ...
The same applies to userptr.
https://bugzilla.gnome.org/show_bug.cgi?id=781119
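A hedged sketch of the idea at the GstBuffer level (variable names are
illustrative):

    /* Sketch: when importing a DMABUF/USERPTR buffer on an OUTPUT (to-device)
     * queue, carry the timestamps over so the driver and the downstream side
     * of the mem2mem device see the original PTS/DTS. */
    gst_buffer_copy_into (internal_buf, imported_buf,
        GST_BUFFER_COPY_TIMESTAMPS, 0, -1);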
Running `gst-validate-launcher -t validate.file.playback.change_state_intensive.vorbis_vp8_1_webm`
on odroid XU4 (s5p-mfc v4l2 driver) often leads to:
ERROR:../subprojects/gst-plugins-good/sys/v4l2/gstv4l2videodec.c:215:gst_v4l2_video_dec_stop: assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
This happens when the following race occurs:
- T0: Main thread
- T1: Upstream streaming thread
- T2: v4l2dec processing thread
[The decoder is in PAUSED state]
T0: The validate scenario runs `Executing (36/40) set-state: state=null repeat=40`
T1: The decoder handles a frame
T2: A decoded frame is pushed downstream
T2: Downstream returns FLUSHING as it is already flushing due to the state change
T2: The decoder stops its processing thread and sets `->processing = FALSE`
T1: The decoder handles another frame
T1: `->processing` is FALSE so the decoder restarts its processing thread
T0: In v4l2dec->stop() the processing thread is stopped
NOTE: At this point the processing thread loop never started.
T0: assertion failed: (g_atomic_int_get (&self->processing) == FALSE)
Here I am removing the whole ->processing logic to base it all on the
GstTask state to avoid duplicating the knowledge.
https://bugzilla.gnome.org/show_bug.cgi?id=778830
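A hedged sketch of relying on the task state (assuming
gst_pad_get_task_state(), added in GStreamer 1.12; the loop function name is
hypothetical):

    /* Sketch: derive "is the decoding loop running?" from the srcpad's
     * GstTask instead of a separate self->processing flag that can race with
     * handle_frame(). */
    if (gst_pad_get_task_state (decoder->srcpad) != GST_TASK_STARTED)
      gst_pad_start_task (decoder->srcpad,
          (GstTaskFunction) decode_loop, decoder, NULL);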
AudioSpecificConfig is used in a variety of AAC streams but was
being parsed differently. Instead, make everyone use the same parsing.
* Remove unused 'bits' field (it was always set to 0 if present)
* Add proper GAConfig parsing (to know the number of samples per frame
if present).
Fixes wrong rate/channels configuration in streams coming from qtdemux
https://bugzilla.gnome.org/show_bug.cgi?id=780966
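A hedged sketch of the samples-per-frame part of GAConfig (the preceding
AudioSpecificConfig fields are assumed to have been read already):

    #include <gst/base/gstbitreader.h>

    /* Sketch: GASpecificConfig starts with a frameLengthFlag selecting 1024
     * or 960 samples per frame, then dependsOnCoreCoder and its optional
     * 14-bit coreCoderDelay. */
    static gboolean
    read_ga_config_frame_len (GstBitReader * br, guint * samples_per_frame)
    {
      guint8 frame_length_flag, depends_on_core_coder;

      if (!gst_bit_reader_get_bits_uint8 (br, &frame_length_flag, 1))
        return FALSE;
      *samples_per_frame = frame_length_flag ? 960 : 1024;

      if (!gst_bit_reader_get_bits_uint8 (br, &depends_on_core_coder, 1))
        return FALSE;
      if (depends_on_core_coder) {
        guint16 core_coder_delay;

        if (!gst_bit_reader_get_bits_uint16 (br, &core_coder_delay, 14))
          return FALSE;
      }
      /* extensionFlag and object-specific bits follow; omitted here. */
      return TRUE;
    }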
Without a specified framerate from the sink, the decoder frame interval
should be set using the framerate of the encoded video stream.
Therefore, the v4l2object should be able to change the framerate on the
output if the V4L2 device accepts it.
This is also necessary for mem2mem encoders so that their bitrate
calculation code may work correctly and they may report the correct
frame duration on the capture queue.
https://bugzilla.gnome.org/show_bug.cgi?id=779466
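A hedged sketch of pushing the frame interval to the OUTPUT queue (the
multi-planar buffer type is assumed):

    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Sketch: hand the encoded stream's framerate to the driver through the
     * OUTPUT queue's streaming parameters; the driver may refuse it. */
    static int
    set_output_frame_interval (int fd, unsigned fps_n, unsigned fps_d)
    {
      struct v4l2_streamparm parm = { 0 };

      parm.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
      parm.parm.output.timeperframe.numerator = fps_d;    /* e.g. 1001  */
      parm.parm.output.timeperframe.denominator = fps_n;  /* e.g. 30000 */

      return ioctl (fd, VIDIOC_S_PARM, &parm);
    }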
If the duration of the v4l2object is GST_CLOCK_TIME_NONE, because the
sink did not specify a framerate in the caps and the driver accepts the
framerate, the decoder element uses GST_CLOCK_TIME_NONE to calculate and
set the element latency.
While this is a bug of the capture driver, the decoder element should
not use the invalid duration to calculate a latency, but print a warning
instead.
https://bugzilla.gnome.org/show_bug.cgi?id=779466
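A hedged sketch of guarding the latency computation (the buffer count factor
is illustrative):

    /* Sketch: only derive a latency from the frame duration when it is
     * valid; otherwise warn instead of propagating GST_CLOCK_TIME_NONE. */
    if (GST_CLOCK_TIME_IS_VALID (frame_duration)) {
      GstClockTime latency = frame_duration * num_capture_buffers;

      gst_video_decoder_set_latency (decoder, latency, latency);
    } else {
      GST_WARNING_OBJECT (decoder,
          "Unknown frame duration, not setting latency");
    }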
The correct behaviour of anything stuck in the ->render() function
between ->unlock() and ->unlock_stop() is to call
gst_base_sink_wait_preroll() and only return an error if this returns an
error; otherwise, it must continue where it left off!
https://bugzilla.gnome.org/show_bug.cgi?id=774945
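A hedged sketch of that pattern in a sink's render() implementation (the
blocking helper is hypothetical):

    /* Sketch: if the blocking work is interrupted by unlock(), wait for
     * preroll and, on GST_FLOW_OK, resume where we left off instead of
     * returning an error. */
    static GstFlowReturn
    my_sink_render (GstBaseSink * bsink, GstBuffer * buf)
    {
      GstFlowReturn ret;

    again:
      ret = my_sink_blocking_write (bsink, buf);  /* hypothetical helper */
      if (ret == GST_FLOW_FLUSHING) {
        ret = gst_base_sink_wait_preroll (bsink);
        if (ret == GST_FLOW_OK)
          goto again;
      }
      return ret;
    }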