When switching the splitmuxsrc state back to NULL quickly, it
can encounter deadlocks shutting down the part readers that
are still starting up, or encounter a crash if the splitmuxsrc
cleaned up the parts before the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use finer-grained locks for that.
The queued time includes the duration of the last queued frame
(i.e., the new keyframe), so the condition check should not be inclusive.
Note that the new fragment will be cut excluding the last frame,
and therefore if the condition is inclusive,
the fragment might have a one-frame-shorter duration for all-keyframe
streams such as JPEG or all-intra video streams.
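A minimal sketch of the intended comparison (variable names here are illustrative, not the actual splitmuxsink code):
```
/* The queued time already includes the duration of the newly queued
 * keyframe, so only split when the threshold is strictly exceeded;
 * ">=" would cut all-keyframe streams one frame short. */
if (queued_time > threshold_time) {
  start_new_fragment (splitmux);
}
```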
Since commit 94bb76b6b9, splitmuxsink
splits fragments based on the queued time and its threshold,
so there is no need to store the next timecode for the split decision.
In addition to the requested keyframe time, the queued size should be
a criterion for the split decision in timecode-based mode
(the same as in the max-size-time based split case).
This is a concept that only applies when a buffer arrives in the chain
function, and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superflous, and this buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, causing a lot of the packets
in that timer to be unaccounted for, creating a situation where
the jitterbuffer will never again push out another packet.
This patch solves the problem by not scheduling those lost packets
as another timer; instead it asks to have the lost-event pushed straight
out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effects, presented in
later commits. It completely removes the concept of "already-lost", which
is why that test has been disabled in this commit, to be
removed later.
This should result in no worse accuracy than the base parse element, and may
result in better accuracy. In particular, the number of bytes processed at any
given point, as accumulated by baseparse, can only be accurate to
(1 / # of frames) bytes per second, and if we try to seek immediately after
pausing the pipeline to a large offset, this small inaccuracy can propagate to
something noticeable.
The use case that prompted this patch is a 45-minute MPEG-1 layer 3 file, which
has a constant bit rate but no seek tables. Trying to seek the pipeline
immediately after pauisng it, without the ACCURATE flag, to a location 41
minutes in, yields a location that is, even with <https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/374>,
still audibly incorrect. This patch yields a much closer position, no longer
audibly incorrect, and likely within a frame of the most correct position.
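For illustration, a rough sketch of the constant-bitrate time-to-byte conversion this relies on (hypothetical names, not the actual mpegaudioparse code):
```
/* With a known constant bitrate, a target time maps directly to a byte
 * offset instead of relying on baseparse's accumulated byte counts:
 * offset = time * bitrate / 8, with time in nanoseconds. */
static guint64
time_to_offset (GstClockTime time, guint bitrate /* bits per second */)
{
  return gst_util_uint64_scale (time, bitrate, 8 * GST_SECOND);
}
```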
By the time sink_event is called, the pad's current caps have
already been updated. To address this, implement sink_event_pre_queue,
and check if the pad can be renegotiated there.
Fixes #707
- Refactored the planar transform method to support all video formats
that are stored planar, independent of the subsampling used
- Added support for Y444
gst_buffer_map () results in memory copies when a GstBuffer contains
more than one GstMemory and AVC (length-prefixed) alignment is used.
This has quite an impact on performance on systems with a limited amount of
resources. With this patch the whole GstBuffer is no longer mapped at once;
instead, each individual GstMemory is iterated and mapped separately.
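A sketch of the per-memory iteration, assuming read-only access (not the exact patch):
```
/* Iterate and map each GstMemory separately instead of mapping the
 * whole buffer, which would merge the memories into one copy. */
guint i, n_mem = gst_buffer_n_memory (buffer);

for (i = 0; i < n_mem; i++) {
  GstMemory *mem = gst_buffer_peek_memory (buffer, i);
  GstMapInfo map;

  if (gst_memory_map (mem, &map, GST_MAP_READ)) {
    /* process map.data / map.size for this memory only */
    gst_memory_unmap (mem, &map);
  }
}
```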
When calling gst_rtp_jitter_buffer_reset you pass in a seqnum.
This is considered the starting-point for a new stream.
However, the old behavior would unref this buffer, basically lying to
the thread that is pushing out buffers, telling it that it can expect
this buffer when it would never arrive. The resulting effect was that no
more buffers were pushed out of the jitterbuffer, and it would buffer
incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
the start of each fragment PTS to 0 to align it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
In this mode each field is carried using its own buffer.
Allow deinterlace to negotiate caps with the Interlaced feature and
adjust the algorithm fetching lines.
Fixes #620
Output frames used to have their interlace mode set to the same one as
the input. This breaks their field and comp heights when deinterlacing
an alternate stream.
Each FLAC metadata block starts with a flag denoting whether it is the
last metadata block. The existing flacparse code moves any existing
VORBISCOMMENT block to immediately follow the STREAMINFO block without
changing any block's last-metadata-block flag. If no VORBISCOMMENT block
exists, it creates one with the last-metadata-block flag set to true.
This results in gstflacdec sometimes giving bad headers to libflac when
trying to play perfectly valid FLAC files depending on the file's
metadata ordering. Depending on the contents of the other metadata
blocks, current versions of libflac may or may not return
FLAC__STREAM_DECODER_ERROR_STATUS_BAD_HEADER when given this broken
metadata. This is most noticeable with files that have a large cover art
image attached where VORBISCOMMENT is the very last metadata block with
no PADDING afterwards.
This patch changes that behavior so that:
1. For FLAC files that already have a VORBISCOMMENT block, the metadata
order is preserved.
2. For FLAC files that do not have a VORBISCOMMENT block, the generated
dummy VORBISCOMMENT is placed immediately after STREAMINFO and
inherits the last-metadata-block flag from STREAMINFO.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/484
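For reference, a sketch of the header bit-twiddling involved (each FLAC metadata block header is 4 bytes: a 1-bit last-metadata-block flag, a 7-bit block type and a 24-bit length; the buffer handling here is illustrative):
```
/* Read the flag and type from the first header byte. */
gboolean is_last = (header[0] & 0x80) != 0;
guint8 block_type = header[0] & 0x7f;  /* 0 = STREAMINFO, 4 = VORBISCOMMENT */

/* When generating a dummy VORBISCOMMENT right after STREAMINFO,
 * inherit STREAMINFO's last-metadata-block flag and clear it there. */
vorbiscomment[0] = (streaminfo[0] & 0x80) | 0x04;
streaminfo[0] &= 0x7f;
```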
Sometimes the headers contain useless, wrong or zero values for e.g. the
sample size with these formats. There's only a single valid value for
them, so let's set that instead.
We do not have a way to know the format modifiers to use with string
functions provided by the system. G_GUINT64_FORMAT and other string
modifiers only work for glib string formatting functions. We cannot
use them for string functions provided by the stdlib. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
```
../gst/rtpmanager/gstrtpjitterbuffer.c: In function 'gst_jitter_buffer_sink_parse_caps':
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: unknown conversion type character 'l' in format [-Werror=format=]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
In file included from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/rtp/gstrtpbuffer.h:27,
from ../gst/rtpmanager/gstrtpjitterbuffer.c:108:
/home/nirbheek/cerbero/build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: too many arguments for format [-Werror=format-extra-args]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
```
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/379
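One portable approach is to avoid the system sscanf() for 64-bit values entirely and parse with GLib; a sketch (not necessarily the exact fix):
```
guint64 clock_offset = 0;
gboolean parsed = FALSE;

if (g_str_has_prefix (mediaclk, "direct=")) {
  gchar *endptr;

  clock_offset = g_ascii_strtoull (mediaclk + 7, &endptr, 10);
  /* valid only if at least one digit was consumed */
  parsed = (endptr != mediaclk + 7);
}
```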
Previously we saved the buffer_timestamp straight into
mux->cluster_time. Since the cluster time saved into the file does not
have as high a precision as GstClockTime (depending on the timecodescale),
the rounding of relative_timestamp was invalid, as mux->cluster_time,
which it was calculated relative to, was not equal to the cluster time
written to the matroska file.
Example of "mkvinfo -v" of how it looks before and after this change in
an scenario where previously timestamps got out of order because of this
issue.
Notice the timestamp of the SimpleBlock right before and right after the
Cluster now being in order. The consequence of this however is that the
cluster timestamp is not necessarily the same as the timestamp of the
first buffer in the cluster however (in case it's rounded up).
Before
| + SimpleBlock (track number 1, 1 frame(s), timecode 126.922s = 00:02:06.922)
| + Frame with size 432
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.933s = 00:02:06.933)
| + Frame with size 329
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.955s = 00:02:06.955)
| + Frame with size 333
|+ Cluster
| + Cluster timecode: 126.954s
| + Cluster previous size: 97344
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 126.954s = 00:02:06.954)
| + Frame with size 61239
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.975s = 00:02:06.975)
| + Frame with size 338
After
| + SimpleBlock (track number 1, 1 frame(s), timecode 135.456s = 00:02:15.456)
| + Frame with size 2260
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.468s = 00:02:15.468)
| + Frame with size 332
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 335
|+ Cluster
| + Cluster timecode: 135.489s
| + Cluster previous size: 158758
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 88070
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.511s = 00:02:15.511)
| + Frame with size 336
Comparing gst_rtspsrc_loop_interleaved and gst_rtspsrc_loop_udp while investigating timeout issues, it appears that a piece of code was originally copied from the UDP function into the interleaved one. The timeout variable is never used inside the interleaved one. No side effect has been observed from removing the function calls.
The debug message removed is pointless, as the timeout used is "src->tcp_timeout", which is fixed.
The presence of the two timeouts led my team to investigate whether the reference to tcp_timeout was correct (it is). Hence we removed the misleading reference to the local timeout variable.
The VP Codec Configuration Box (vpcC) contains vp9 profile and
colorimetry information. Especially the profile information might
be useful for downstream to select capable decoder element.
For live streams, if we keep the stream running for a long time, the timestamp
will become larger than max_uint32. In that case, the timestamp should be handled
as a rollover timestamp rather than a backward timestamp.
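A sketch of the usual wraparound heuristic (names are illustrative): a backward jump spanning more than half the 32-bit range is treated as a rollover.
```
if (timestamp < last_timestamp &&
    last_timestamp - timestamp > G_MAXUINT32 / 2) {
  /* Rollover: extend into a 64-bit running timestamp instead of
   * treating this as a timestamp going backwards. */
  timestamp_base += (guint64) G_MAXUINT32 + 1;
}
```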
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
Instead of having chunks with one sample per raw audio sample, have
chunks with a single sample that contains lots of raw audio samples. If
necessary these are still split again later when reading the stream.
With this we are allocating a lot less memory for the parsed sample
tables and can play files that previously triggered our limit of 200MB
for the sample table. For example, one file here would previously
allocate 3.5GB for the sample table and now only allocates 70KB.
Outputting 48000 buffers per second is not a good idea performance-wise.
If a container sample is less than 1024 raw audio frames, combine
multiple samples to get at least 1024 raw audio samples, as long as
they're stored contiguously in the file.
For the other direction, if a container sample contains more than 4096
samples there is already code for splitting them up.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=692750
When the server replies with a range "now-", it is presumed to
be a "live" stream and we should request a similar range.
This was the case prior to my refactoring to make use of
gst_rtsp_range_to_string in 5f1a732bc7;
this commit restores the behaviour for that case.
The proper way of capping on max-streams is to do it in rtpssrcdemux.
This patch uses the newly introduced property on rtpssrcdemux. The previous
behavior would not prevent rtpssrcdemux from spawning new pads for every new
SSRC, potentially causing performance trouble during teardown.
When used for processing bundled media streams within rtpbin, the rtpssrcdemux element may
receive bad RTP and RTCP packets; these should not be treated as a fatal error.
The property is useful against attacks where the sender changes the SSRC for
every RTP packet. The property with the same name introduced in rtpbin
was not enough, because we still can end up with thousands of pads
allocated in rtpssrcdemux.
gstrtspsrc uses a queue, set_get_param_q, to store set param and get
param requests. The requests are put on the queue by calling
get_parameters() and set_parameter(). A thread which executes in
gst_rtspsrc_thread() then pops requests from the queue and processes
them. The crash occurred because the queue became empty and a NULL
request object was then used. The reason that the queue became empty
is that it was popped even when the thread was NOT processing a get
parameter or set parameter command. The fix is to make sure that the
queue is ONLY popped when the command being processed is a set
parameter or get parameter command.
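A sketch of the guarded pop (command constants and helper are illustrative):
```
/* Only consume a request when actually processing a parameter command,
 * so the queue can never be popped while empty. */
if (cmd == CMD_GET_PARAMETER || cmd == CMD_SET_PARAMETER) {
  ParameterRequest *req = g_queue_pop_head (&src->set_get_param_q);

  if (req != NULL)
    handle_param_request (src, req);
}
```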
If the sinks are not configured via the "location" property, this can be
useful to know for which sink the fragment was actually opened/closed,
especially if finalization of the fragments happens asynchronously.
When connected to an upstream rtpfunnel element, payload-type,
ssrc and clock-rate will not be present in the received caps.
rtprtxsend can already deal with only the clock rate being
present there. A new property is exposed to allow users to
provide a payload-type -> clock-rate map; this enables the
use of the max-size-time property for bundled streams.
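A usage sketch, assuming the new property takes a GstStructure mapping payload-type names to clock rates (the structure name and values here are illustrative):
```
GstStructure *map = gst_structure_new ("application/x-rtp-pt-map",
    "96", G_TYPE_UINT, 90000,   /* e.g. H.264 video */
    "97", G_TYPE_UINT, 48000,   /* e.g. Opus audio */
    NULL);

g_object_set (rtxsend, "clock-rate-map", map, NULL);
gst_structure_free (map);
```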
ffmpeg is doing the same and various files in the wild have bogus
information in the sample description if the same information is also
duplicated afterwards in the v1/v2 sound sample description.
Previously we only did this for non-raw audio due to
https://bugzilla.gnome.org/show_bug.cgi?id=374914
but this specific file is already worked around differently. It still
works after this change.
Also remove ad-hoc GST_READ_DOUBLE_BE re-implementation and move the
switch for legacy audio formats after reading all the sample
descriptions as we want to override the values from there.
By default imagefreeze will still reject new buffers after the first one
and immediately return GST_FLOW_EOS, but the new allow-replace property
allows changing this.
Whenever updating the buffer we now also keep track of the configured
caps of the buffer and from the source pad task negotiate correctly
based on the potentially updated caps.
Negotiation of a framerate with downstream is only performed the very
first time; afterwards only the caps themselves, apart from the framerate,
are updated.
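A hypothetical pipeline exercising the new property:
```
gst-launch-1.0 v4l2src ! videoconvert ! imagefreeze allow-replace=true ! autovideosink
```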
Elements emitting frames through several srcpads should use a
flow combiner to aggregate the chain returns and therefore only return
GST_FLOW_NOT_LINKED to upstream when all the downstream pads have
received GST_FLOW_NOT_LINKED.
In addition to that, in order to handle pads being relinked downstream,
the flow combiner should be reset in response to RECONFIGURE events.
This ensures that both srcpads process a chain operation before a
GST_FLOW_NOT_LINKED can be propagated upstream (which would usually stop
the pipeline).
Otherwise, in a configuration with two srcpads, only one linked at a
time, after the relink the element could chain data through the now
unlinked pad and the flow combiner would resolve as GST_FLOW_NOT_LINKED
(stopping the pipeline) just because the now linked pad has not been
chained yet to update the flow combiner.
This patch adds handling of RECONFIGURE events to qtdemux. Also, since
this event handling causes the flow combiner to be used from a thread
other than the qtdemux streaming thread, usages of the flow combiner
have been guarded by the object lock.
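A sketch of the event handling described above (integration details are illustrative):
```
static gboolean
gst_qtdemux_handle_src_event (GstPad * pad, GstObject * parent,
    GstEvent * event)
{
  GstQTDemux *qtdemux = GST_QTDEMUX_CAST (parent);

  if (GST_EVENT_TYPE (event) == GST_EVENT_RECONFIGURE) {
    /* Called from a downstream thread: guard with the object lock. */
    GST_OBJECT_LOCK (qtdemux);
    gst_flow_combiner_reset (qtdemux->flowcombiner);
    GST_OBJECT_UNLOCK (qtdemux);
  }
  return gst_pad_event_default (pad, parent, event);
}
```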
The key is to make sure the jitterbuffer is set to NULL *before* the
ptdemux.
The race that existed would basically happen when ptdemux had reached
READY, and the jitterbuffer would then push a buffer, triggering a new
pad with a new payloadtype being added and ghosted to the rtpbin itself.
However, the srcpad of the ptdemux would now be inactive, and all the
sticky-event pushed on it would be swallowed, not allowing any to reach
the ghost-pad. Then the buffer in-flight would come to the ghostpad,
and we would assert that a buffer arrived before the necessary
events.
By simply re-ordering the state-changes, we ensure that there will be
no buffer racing into the ptdemux while its state is being changed,
and the problem disappears completely.
Notice also that there is no point in disconnecting the signals on the
ptdemux before this point, since we need the push-thread to settle
down before we can do this in a non-racy way.
Applications might handle locations and generally configuration of the
sink by themselves instead of having splitmuxsink set the location on
the sink. Nonetheless it makes sense to increment the fragment_id that
is passed to the signal so that applications know which fragment is
requested.
Add parsed=true to output caps, as we always output
whole frames, timestamped and all. Means also that
the output can be decoded by avdec_mjpeg without
plugging an extra parser (which has no rank).
Add a property that makes it possible for an application to set the
DateUTC header field in matroska files. This is useful for live feeds,
where the DateUTC header can be set to a UTC timestamp, matching the
beginning of the file.
Needs gstreamer!323
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/481
An application might try to access splitmuxsink from a sync message handler
via g_object_{get,set}, which also takes the lock. In general, we don't
take the lock around message handlers.
By passing `NULL` to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
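For example (signal name and parameters are illustrative):
```
/* Passing NULL as the C marshaller lets GLib pick the matching
 * built-in marshaller and also set its valist variant, avoiding
 * GValue boxing and libffi in the common emission path. */
signals[SIG_NEW_PAD] =
    g_signal_new ("new-pad", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    NULL /* marshaller */,
    G_TYPE_NONE, 1, GST_TYPE_PAD);
```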
The do_expected_timeout() function may release the JBUF_LOCK, so we need
to check if nothing wanted the timer thread to exit after this call.
The side effect was that we may end up going back into waiting for a timer,
which will cause arbitrary delays on tear down (or a deadlock when the test
clock is used).
Fixes #653
JBUF_WAIT_QUEUE drops the JBUF_LOCK, which means the stop condition
for the chain function may have changed (change_state to NULL). Check
this immediately after the wait so that we don't delay shutting down.
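A sketch of the pattern (the queue-full check and label are illustrative; the macro names are from the element):
```
while (queue_is_full (priv))
  JBUF_WAIT_QUEUE (priv);

/* JBUF_LOCK was released during the wait: re-check the stop condition
 * immediately, in case a state change to NULL asked us to shut down. */
if (priv->srcresult != GST_FLOW_OK)
  goto out_flushing;
```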
When in-place, running an allocation is not useful since videocrop
is not implicated in the allocation. So only force the allocation
query for the case it was in passthrough. This is needed since the
change in the crop region will likely pull us out of this mode. For the
case where we were neither in passthrough nor in-place, the allocation query
is already run by the baseclass, so nothing special is needed.
This fixes performance issues when changing the crop region per frame.
This was reproduced using videocrop2-test.
One of the jitterbuffer's functions is to try and make sense of weird
network behavior.
It is quite unhelpful for the jitterbuffer to start dropping packets
itself when what you are trying to achieve is better network resilience.
In the case of a skew, this could often mean the sender has restarted
in some fashion, and then dropping the very first buffer of this "new"
stream could often mean missing valuable information, like in the case
of video and I-frames.
This patch simply reverts back to the old behavior, prior to 8d955fc32b
and includes the simplest test I could write to demonstrate the behavior,
where a single packet arrives "perfectly", then a 50ms gap happens,
and then two more packets arrive in perfect order after that.
The memory leak occurs in the case when the buffer has been
added to the fragment_buffers array of the current pad and
never been sent because of the push failure of the previous
buffers: moof or mdat header or fragmented buffer(s).
This commit adds custom element messages for when gstrtpjitterbuffer
drops an incoming RTP packet, due to, for example, arriving too late.
Applications can listen to these messages on the bus, which enables
actions to be taken when packets are dropped due to, for example, high
network jitter.
Two properties have been added: one to enable posting drop messages and
one to set a minimum time between each message, to enable throttling the
posting of messages at high drop rates.
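A usage sketch for listening to these messages; the message name ("drop-msg") is an assumption here:
```
static gboolean
on_bus_message (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT) {
    const GstStructure *s = gst_message_get_structure (msg);

    if (s != NULL && gst_structure_has_name (s, "drop-msg"))
      g_print ("jitterbuffer dropped a packet\n");
  }
  return TRUE;
}
```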
There are (mp4) streams in the wild that basically contain no tracks
but do have a redirect info[0], in which case, we won't be able
to expose any pad (there are no tracks) so we can't post anything but
an error on the bus, as:
- it can't send EOS downstream, it has no pad,
- posting an EOS message will be useless as PAUSED state can't be
reached and there is no sink in the pipeline meaning GstBin will
simply ignore it
The approach here is to add details to the ERROR message with a
`redirect-location` field which elements like playbin handle and use right
away.
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov
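A sketch of how an application can pick the detail out of the ERROR message:
```
GError *err = NULL;
gchar *dbg = NULL;
const GstStructure *details = NULL;

gst_message_parse_error (msg, &err, &dbg);
gst_message_parse_error_details (msg, &details);
if (details != NULL) {
  const gchar *loc = gst_structure_get_string (details, "redirect-location");

  if (loc != NULL)
    g_print ("redirecting to: %s\n", loc);
}
g_clear_error (&err);
g_free (dbg);
```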
When the queue is full (and adding more packets would risk a seqnum
roll-over), the best approach is to just start pushing out packets
from the other side. Just pushing out the packets results in the
timers being left hanging with old seqnums, so it's safer to just
execute them immediately in this case. It does limit the timer space
to the time it takes to receive about 32k packets, but without
extended sequence numbers, this is the best RTP can do.
This also results in the test no longer needing timeouts or
timers, as pushing packets in drives everything.
Fixes #619
This basically adds the ability to choose between inserting from the head,
the tail or in-place, in order to try and minimize the distance to walk
through in the timer queue. This removes an overhead we had seen at high drop rates.
The timer passed to update_timers may be from the stats timer queue. At the
moment, we could end up rescheduling (reusing) that timer onto the normal
timer queue, unscheduling it as if it was from the normal timer queue, or
duplicating it into the stats timer queue again. This was previously
protected by the fact that the stats timer didn't have a valid idx.
As the offset is already applied now, we need to update and reschedule
all timers each time the offset is changed. I'm not sure who expects this
to be retro-actively applied, but there was a unit test for it.
If the jitterbuffer head changes, there is no need to systematically
wake up the timer thread. The timer thread will only be woken up if
an earlier timeout has been pushed. This prevents some more spurious
wakeups when the system is loaded. As a side effect, cranking the clock
may set the clock at an earlier position.
In this patch we now make use of the new RtpTimerQueue instead of the
old GArray. This required a lot of changes all over the place, some of
the important changes are that `timer->timeout` is no longer a PTS but
the actual timeout. This was required to get the RtpTimerQueue sorting
right. The applied offset is saved as `timer->offset`, which allows
retrieving back the PTS when needed.
The clockid updates only happen once per incoming packet. If the
currently scheduled timer is before the earliest timer in the queue, we
no longer wake up the thread. This way, if other timers get set up in the
meantime, this will reduce the number of wakeups.
The timer loop code has been mostly rewritten, though the behaviour of
running the lost timers first has been kept (even though there is no
test to show what would be the side effect of doing this differently).
Fixes #608
Implement a single timer queue for all timers. The goal is to always use
ordered queues for storing timers. This way, extracting timers for
execution becomes O(1). This also allows separating the clock wait
scheduling from the timer itself and ensures that we only wake up the
timer thread when strictly needed.
The new data structure is still O(n) on insertions and reschedules,
but we now use a proximity optimization so that normal cases should be
really fast. The GList structure is also embedded into the RtpTimer
structure to reduce the number of allocations.
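A sketch of the embedding trick (field list abridged): keeping the GList node as the first member means a timer doubles as its own queue node, so no separate node allocation is needed and a node pointer can be cast back to the timer.
```
typedef struct {
  GList list;             /* must stay first: doubles as the queue node */
  guint16 seqnum;
  GstClockTime timeout;   /* the actual timeout, no longer a PTS */
  GstClockTimeDiff offset;
} RtpTimer;
```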
This moves the RtpJitterBufferStructure type, alloc and free into the
rtpjitterbuffer.c/h implementation. jitterbuffer.c strictly relies on
the fact this structure is compatible with GList, so it makes more
sense to keep it encapsulated. Also, anything that could possibly
reduce the amount of code in the element is a win.
In order to support that move, a function pointer to free the data
was added. This also allows making the free function optional when
flushing the jitterbuffer.
This helps in understanding which functions modify the TimerData
and which ones do not. This is not always obvious from the helper
name, considering recalculate_timer() does not.
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:55,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/tag/tag.h:25,
from ../gst/isomp4/qtdemux.c:56:
In function ‘qtdemux_inspect_transformation_matrix’,
inlined from ‘qtdemux_parse_trak’ at ../gst/isomp4/qtdemux.c:10676:5,
inlined from ‘qtdemux_parse_tree’ at ../gst/isomp4/qtdemux.c:14210:5:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:645:5: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
645 | gst_debug_log ((cat), (level), __FILE__, GST_FUNCTION, __LINE__, \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
646 | (GObject *) (object), __VA_ARGS__); \
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:1062:35: note: in expansion of macro ‘GST_CAT_LEVEL_LOG’
1062 | #define GST_DEBUG_OBJECT(obj,...) GST_CAT_LEVEL_LOG (GST_CAT_DEFAULT, GST_LEVEL_DEBUG, obj, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c:10294:5: note: in expansion of macro ‘GST_DEBUG_OBJECT’
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c: In function ‘qtdemux_parse_tree’:
../gst/isomp4/qtdemux.c:10294:64: note: format string is defined here
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~
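The usual fix for this class of warning is to guard possibly-NULL strings handed to %s; a sketch (the variable name is illustrative):
```
GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
    GST_STR_NULL (rotation_tag));
```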
Add a property which explicitly maps splitmuxsink pads to the
muxer pads they should connect to, overriding the implicit logic
that tries to match pads but yields arbitrary names.
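A usage sketch, assuming the property takes a GstStructure mapping splitmuxsink pad names to muxer pad names (all names here are illustrative):
```
GstStructure *pad_map = gst_structure_new ("x-pad-map",
    "video", G_TYPE_STRING, "video_0", NULL);

g_object_set (splitmuxsink, "muxer-pad-map", pad_map, NULL);
gst_structure_free (pad_map);
```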
When running in async-finalize mode, request new pads from the muxer
using the same names as old pads, instead of letting the muxer assign
new ones based on the pad template name.
The output segment is only used in ONVIF mode.
The previous behaviour was to output a segment computed from
the Range response sent by the server.
In ONVIF mode, servers will start serving from the appropriate
synchronization point (keyframe), and the Range in response will
start at that position.
This means rtspsrc can now perform truly accurate seeks in that
mode, by clipping the output segment to the values requested in
the seek. The decoder will then discard out of segment buffers
and playback will start without artefacts at the exact requested
position, similar to the behaviour of a demuxer when an accurate
seek is requested.
In push mode (streaming), if the audio size is smaller than the segment buffer size, it would be ignored.
This happens because when the plugin receives an EOS signal while a single audio chunk smaller than the segment buffer size is buffered, it does not
flush this chunk. The fix is to flush the data chunk when it receives an EOS signal and has a single (first) chunk buffered.
How to reproduce:
1. Run gst-launch with tcp source
```
gst-launch-1.0 tcpserversrc port=3000 ! wavparse ignore-length=0 ! audioconvert ! filesink location=bug.wav
```
2. Send a wav file with unspecified data chunk length (0). Attached a test file
```
cat test.wav | nc localhost 3000
```
3. Compare the length of the source file and output file
```
ls -l test.wav bug.wav
-rw-rw-r-- 1 amr amr 0 Aug 15 11:07 bug.wav
-rwxrwxr-x 1 amr amr 3564 Aug 15 11:06 test.wav
```
The expected length of the result of the gst-launch pipeline should be the same as the test file minus the headers (44), which is ```3564 - 44 = 3520```, but the actual output length is ```0```
After the fix:
```
ls -l test.wav fix.wav
-rw-rw-r-- 1 amr amr 3520 Aug 15 11:09 fix.wav
-rwxrwxr-x 1 amr amr 3564 Aug 15 11:06 test.wav
```
If VP8 is not encoded with error resilience enabled, then any packet loss
causes very bad artefacts when decoding; waiting for the next
keyframe instead improves the user experience considerably.
Various audio formats require an audio lead-in to decode properly.
Most parsers would take care of it, but when a container like matroska is
involved, the demuxer handles the seeking, and without its own lead-in
handling it would never even pass the lead-in data to the parser.
This commit provides an initial implementation of that for audio/mpeg,
audio/x-ac3 and audio/x-eac3 by calculating the worst-case lead-in time
needed from the known samplerate, the potential lead-in frames needed and the
maximum blocksize possible for the format (as we don't parse that out
exactly in matroskademux), and seeking that much earlier in case of
accurate seeks. This is especially important for NLE use-cases with GES.
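A sketch of the worst-case computation (names illustrative): e.g. for MP3 with 30 lead-in frames and a maximum block of 1152 samples at 44100 Hz, this seeks back roughly 0.78 s.
```
GstClockTime lead_in =
    gst_util_uint64_scale (lead_in_frames * max_blocksize, GST_SECOND, rate);

if (accurate_seek)
  seek_pos = (seek_pos > lead_in) ? seek_pos - lead_in : 0;
```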
When accurately seeking to a position that happens to have a video keyframe,
it'll go back to an earlier keyframe than needed, but with typical
video files that's the best we can do anyway without falling back to
scanning the clusters, as typically only keyframes are indexed in the
Cueing Data.
If the media doesn't have a CUE, then we bisect for the cluster to seek
to with the same modified time as well in case of accurate seeking,
ensuring sufficient lead-in. This code path is typically hit only with
(suboptimal) audio-only matroska files, e.g. when created with ffmpeg,
which doesn't add a CUE for audio-only mkv muxing.
The send path in rtpsession processes the buffer list along the way,
sharing info and stats between packets in the same list, because it
assumes that all packets in a buffer list are from the same frame.
However, in the receiving path packets can arrive in all sorts of
arrangements:
- different sources,
- different frames (different timestamps),
- different types (multiplexed RTP and RTCP, invalid RTP packets).
so a more general approach should be used to correctly support buffer
lists in the receive path.
It turns out that it's simpler and more robust to process buffers
individually inside the rtpsession element even if they come in a buffer
list, and then reassemble a new buffer list when pushing the buffers
downstream.
This avoids complicating the existing code to make all functions
buffer-list-aware, with the risk of introducing regressions.
To support buffer lists in the receive path and reduce the "push
overhead" in the pipeline, a new private field named processed_list is
added to GstRtpSessionPrivate, it is set in the chain_list handler and
used in the process_rtp callback; this is to achieve the following:
- iterate over the incoming buffer list;
- process the packets one by one;
- add the valid ones to a new buffer list;
- push the new buffer list downstream.
The processed_list field is reset before pushing a buffer list to be on
the safe side in case a single buffer was to be pushed by upstream
at some later point.
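A rough sketch of the chain_list flow (function and field names are illustrative):
```
static GstFlowReturn
gst_rtp_session_chain_recv_rtp_list (GstPad * pad, GstObject * parent,
    GstBufferList * list)
{
  GstRtpSession *rtpsession = GST_RTP_SESSION (parent);
  guint i, n = gst_buffer_list_length (list);

  rtpsession->priv->processed_list = gst_buffer_list_new ();

  /* process packets one by one; the process_rtp callback appends
   * each valid packet to priv->processed_list */
  for (i = 0; i < n; i++)
    process_packet (rtpsession, gst_buffer_list_get (list, i));

  return gst_pad_push_list (rtpsession->recv_rtp_src,
      rtpsession->priv->processed_list);
}
```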
NOTE:
The proposed modifications do not change the behavior of the send path.
The process_rtp callback is called in rtpsource.c by the push_rtp
callback (via source_push_rtp) only when the source is not internal.
So even though push_rtp is also called in the send path, it won't end up
using process_rtp in this case because the source would be internal in
the send path.
The reasoning from above may suggest a future refactoring: push_rtp
might be split to better differentiate the send and receive path.
In push mode (streaming), if the last audio payload chunk is less than the segment rate buffer size, it would be ignored, since the plugin waits until it has at least a segment rate buffer size of audio.
The fix is to introduce a flushing flag that indicates that no more audio will be available, so that the plugin can recognize this condition and flush the data it has, even if it is less
than the desired segment rate buffer size.
This is useful to support the ONVIF case: when is-live is set to
FALSE and onvif-rate-control is no, the client can control the
rate of delivery and arrange for the server to block and still
keep sending when unblocked, without requiring back and forth
PAUSE / PLAY requests. This enables, amongst other things, fast
frame stepping on the client side.
When is-live is FALSE, we don't use a manager at all. This case
was actually already pretty well handled by the current code. The
standard manager, rtpbin, is simply no longer needed in this case.
Applications can instantiate a downloadbuffer after rtspsrc if
needed.
Refactor the code for parsing and generating the Range, taking
advantage of existing API in GstRtspTimeRange.
Only use the TCP protocol in that mode, as per the specification.
Generate an accurate segment when in that mode, and signal to the
depayloader that it should not generate its own segment, through
the "onvif-mode" field in the caps, see
<https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/merge_requests/328>
for more information.
Translate trickmode seek flags to their ONVIF representation
Expose an onvif-rate-control property
Forwarding a single segment event from the pad that first gets
chained is incorrect: when that first event was sent by an element
such as x264enc, with its offset start, we end up pushing out-of-segment
buffers for the other pad(s).
Instead, every time the active pad changes, forward the appropriate
segment event.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/issues/1028
When it is not clear yet if a packet relative to a source should be
pushed, the packet is put into a queue, this happens in two cases:
- the source is still in probation;
- there is a large jump in seqnum, and it is not clear what
the cause is, future packets will help making a guess.
In either case, stats about received packets are not updated at all; and
even if they were, when init_seq() is called it resets all receiver
stats, effectively losing any possible stats about previously received
packets.
Fix this by taking into account the queued packets and update the stats
when calling init_seq().
Since commit c971d1a9a (rtpsource: refactor bitrate estimation,
2010-03-02) the bytes_received field in RTPSourceStats is set but then never
used again; expose it so that it can be used by user code to verify how
many bytes have been received.
According to RFC3550 lower-level headers should be considered for
bandwidth calculation.
See https://tools.ietf.org/html/rfc3550#section-6.2 paragraph 4:
Bandwidth calculations for control and data traffic include
lower-layer transport and network protocols (e.g., UDP and IP) since
that is what the resource reservation system would need to know.
Fix the source data to accommodate that.
Assume UDPv4 over IP for now, this is a simplification but it's good
enough for now.
While at it define a constant and use that instead of a magic number.
NOTE: this change basically reverts the logic of commit 529f443a6
(rtpsource: use payload size to estimate bitrate, 2010-03-02)
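A sketch of the accounting change (constant per RFC 3550 §6.2, assuming UDPv4 over IP):
```
/* 20-byte IPv4 header + 8-byte UDP header, counted once per packet. */
#define UDP_IP_HEADER_OVERHEAD (20 + 8)

stats->bytes_received += payload_len + UDP_IP_HEADER_OVERHEAD;
```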
adjust/port from rtph264pay and allow sending the configuration data at
every IDR
The payloader was stripping the configuration data when the
config-interval was set to 0. The code was written in such a way, !(a >
0), that it also stripped the config when it was set to -1 (send config_data
as soon as possible).
This resulted in some MPEG4 streams, where no GOP/VOP-I was detected,
being sent out without configuration.
In reverse playback, we don't want to rely on the position of the current
keyframe to decide a stream is EOS: the last GOP we push will start with
a keyframe, whose position is likely to be outside of the segment.
Instead, let the normal seek_to_previous_keyframe mechanism do its job,
it works just fine.
If a key unit seek is performed with a time position that matches
the offset of a keyframe, but not its actual PTS, we need to
adjust the segment nevertheless.
For example consider the following case:
* stream starts with a keyframe at 0 nanosecond, lasting 40 milliseconds
* user does a key unit seek at 20 milliseconds
* we don't adjust the segment as the time position is "over" a keyframe
* we push a segment that starts at 20 milliseconds
* we push a buffer with PTS == 0
* an element downstream (eg rtponviftimestamp) tries to calculate the
stream time of the buffer, fails to do so and drops it
When the seek event contains a (newly-added) trickmode interval,
and TRICKMODE_KEY_UNITS was requested, only let through keyframes
separated with the required interval
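A usage sketch: a key-unit trickmode seek requesting keyframes at least 3 seconds apart.
```
GstEvent *seek = gst_event_new_seek (1.0, GST_FORMAT_TIME,
    GST_SEEK_FLAG_FLUSH | GST_SEEK_FLAG_TRICKMODE |
    GST_SEEK_FLAG_TRICKMODE_KEY_UNITS,
    GST_SEEK_TYPE_SET, 0, GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);

gst_event_set_seek_trickmode_interval (seek, 3 * GST_SECOND);
gst_element_send_event (pipeline, seek);
```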
The primary video stream is used to select fragment cut points
at keyframe boundaries. Auxiliary video streams may be
broken up at any packet, so fragments may not start with a keyframe
for those streams.
The time_position field of the stream is offset by the media_start
of its QtDemuxSegment compared to the start of the GstSegment of
the demuxer, take it into account when making comparisons.
If the conflict is detected when sending a packet, then also send an
upstream event to tell the source to reconfigure itself.
Also ignore the collision if we see more than one collision from the same
remote source to avoid problems on loops.
Add a new property "do-aggregate"* to the H.264 RTP payloader which
enables STAP-A aggregation as per [RFC-6184][1]. With aggregation enabled,
packets are bundled instead of sent immediately, up until the MTU size.
Bundles also end at access unit boundaries or when packets have to be
fragmented.
*: The property-name is kept generic since it might apply more widely,
e.g. STAP-B or MTAP.
[1]: https://tools.ietf.org/html/rfc6184#section-5.7
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/434
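A hypothetical pipeline enabling aggregation:
```
gst-launch-1.0 videotestsrc ! x264enc ! rtph264pay do-aggregate=true mtu=1400 ! udpsink host=127.0.0.1 port=5004
```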
If an rtx packet arrives that hasn't been requested (it might
have been requested from prior to a reset), ignore it so that
it doesn't inadvertently trigger a clock skew.
In the case of reordered packets, calculating skew would cause
pts values to be off. Only calculate skew when packets come
in as expected. Also, late RTX packets should not trigger
clock skew adjustments.
Fixes #612
mpegaudioparse suggests MP3 needs 10 or 30 frames of lead-in (depending on
mpegaudioversion, which we don't know here), thus provide at least 30 frames
lead-in for such cases as a followup to commit cbfa4531ee.
The pre_push_frame default clipping behaviour was introduced in 2010
with commit 30be03004e and modified with commit 4163969a24 in 2011,
when most parsers didn't implement a pre_push_frame yet. Not having it
meant that clipping was done by default. Those that did implement a
pre_push_frame (flacparse and mpegaudioparse) at the time, had the flag
adjusted as part of the 2011 refactor work.
All other parsers got a pre_push_frame vfunc implementation only in
2013, but seem to have forgot to keep the clipping behaviour, as
was done automatically when a pre_push_frame implementation doesn't
exist for the parser. aacparse lost it with commit 91d4abcea in
July 2013; the others in Dec 2013 as part of AUDIO_CODEC tag posting
in commits 6f89b430e, d2ab5199b, 29f2cae12, 753d3c23a and 292780574.
Otherwise it can happen that we receive a caps event, then another caps
event and only then buffers. We would then send out the first caps event
in the stream but mark buffers with the caps version of the second caps
event.
Otherwise it can happen that we already collected 7 caps, miss the 8th
caps packet (packet loss) and then re-use the 1st caps for the following
buffers instead of the 8th caps which will likely cause errors further
downstream unless both caps are accidentally the same.
Keeping old caps around does not seem to have any value other than
potentially causing errors. We would always receive new caps whenever
they change (even if they were previous ones) and it's very unlikely
that they happen to be exactly the same as the previous ones.
Also after having received new caps or a buffer with a next caps
version, no buffers with old caps version will arrive anymore.
Make sure to clear any master clock on the media_clock
before unreffing it to release the timer callback that's
updating the clock and keeping it reffed.
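A sketch of the teardown order this implies:
```
/* Detach the master clock first so the slaving timer releases its ref,
 * then drop our own reference. */
gst_clock_set_master (priv->media_clock, NULL);
gst_object_unref (priv->media_clock);
priv->media_clock = NULL;
```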
Clear the mastering_display_info_present field explicitly
after reallocating the track context into a video context
to avoid uninitialised warnings in valgrind
If, say, an rtx-packet arrives really late, this can have a dramatic
effect on the jitterbuffer clock-skew logic, having it being reset
and losing track of the current dts-to-pts calculations, directly affecting
the packets that arrive later.
This is demonstrated in the test, where a RTX packet is pushed in really
late, and without this patch the last packet will have its PTS affected
by this, whereas a late RTX packet should be redundant information and
not affect anything.
This patch corrects the delay set on EXPECTED timers that are added when
processing gaps. Previously the delay could be too small so that
'timeout + delay' was much less than 'now', causing the following retries
to be scheduled too early. (They were sent earlier than
rtx-retry-timeout after the previous timeout.)
Turns out that the "big-gap"-logic of the jitterbuffer has been horribly
broken.
For people using lost-events, an RTP stream with a gap in sequence numbers
would produce exactly that many lost-events immediately.
So if your sequence-numbers jumped 20000, you would get 20000 lost-events
in your pipeline...
The test that looks after this logic, "test_push_big_gap", basically
incremented the DTS of the buffer equal to the gap that was introduced,
so that in fact this would be more of a "large pause" test, than an
actual gap/discontinuity in the sequencenumbers.
Once the test was modified to not increment DTS (buffer arrival time) with
a similar gap, all sorts of crazy started happening, including adding
thousands of timers, and the logic that should have kicked in, the
"handle_big_gap_buffer"-logic, was not called at all, why?
Because the number max_dropout is calculated using the packet-rate, and
the packet-rate logic would, in this particular test, report that
the new packet rate was over 400000 packets per second!!!
I believe the right fix is to not try to update the packet-rate if
there are any jumps in the sequence numbers, and to only do these calculations
for nice, sequential streams.
gst_splitmux_src_activate_part() configures the pad information
before starting the pad task, but occasionally the changes it makes
to the pad are not seen in the pad task because they're not
protected by the right locking. Use the pad's object lock to
protect those variables.
Fix a deadlock around the pads list by using an RW lock to
allow simultaneous readers. The pad list doesn't really change
except at startup and shutdown.
Make the debug output less confusing by not mentioning a src
pad when doing calculations on the sink pad side.
Improve the debug output around why a GOP is considered to overflow a fragment
AAC and various other audio codecs need a couple of frames of lead-in to
decode properly. Parser elements like aacparse take care of it
via gst_base_parse_set_frame_rate, but when inside a container, the
demuxer is doing the seek segment handling and never gives lead-in
data downstream.
Handle this similar to going back to a keyframe with video, in the
same place. Without a lead-in, the start of the segment is silence when
it shouldn't be, which becomes especially evident in NLE use cases.
In this change we now protect the internal srcpads list using the
stream lock and limit usage of the internal stream lock to
preventing data flowing on the other src pad type while creating
and signalling the new pad.
This fixes a deadlock with the RTPBin shutdown lock. These two locks would
end up being taken in two different orders, which caused a deadlock. More
generally, we should not rely on a stream lock when handling out-of-band
data, so as a side effect, we should not take a stream lock when
iterating internal links.
This means we can use some newer features and get rid of some
boilerplate code using the G_DECLARE_* macros.
As discussed on IRC, 2.44 is old enough by now to start depending on it.
It must be accurate for all samples to work in Final Cut properly, so
the best we can do is to assume that all samples are the same as the
first. Bigger samples are truncated, smaller samples are padded.
This takes the timestamp of the earliest stream and offsets it so that
it starts at 0. Some software (VLC, ffmpeg-based) does not properly
handle Matroska files that start at timestamps much bigger than zero.
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/449
There is only a single sink element in async-finalize mode, and we would
keep the running time from previous fragments set in that case. As we
don't ever set the running time for the very last fragment on EOS, this
would mean that the closing time reported for the very last fragment is
the same as the closing time of the previous fragment.
This is a tiny clarification, as the storage was loosely named "storage".
This change clarifies that the storage is specifically used for received RTP
packets. This is unlike the storage found in rtprtxsend, which stores a
backlog of sent RTP packets.
We recently added code to remove outdated NACKs to avoid using bandwidth
for packets that have no chance of arriving on time. Though, this had a
side effect, which is that it was possible to get an early RTCP packet with
no feedback in it. This was pretty useless, but it also had another side
effect, which is that the RTX RTT value would never be updated. So when we
started having late RTX requests due to high RTT, we'd never manage to recover.
This fixes the regression by making sure we keep at least one NACK in
this situation. This is really light on the bandwidth and allows for a
quick recovery after the RTT has spiked higher than the jitterbuffer
capacity.
The second udpsrc (rtcp) might not have seen the segment event if it was
not enabled or if rtcp is not available on the server. So if the
application tries to send an EOS event, it will try to set an invalid
seqnum on the event.
Right now, we may call on-new-ssrc after we have processed the first
RTP packet. This prevents properly configuring the source, as some
properties like "probation" are copied internally for use as a
decreasing counter. For this specific property, it prevents the
application from disabling probation on auxiliary sparse streams.
Probation is harmful on sparse streams, since the probation algorithm
assumes frequent and contiguous RTP packets.
Scaletempo doesn't support non-interleaved layout. Not explicitly stating this
would trigger critical warnings and a caps negotiation failure when scaletempo
is used as playbin audio-filter.
Patch suggested by George Kiagiadakis <george.kiagiadakis@collabora.com>.
Fixes #591
Fix doc chunks to not use that syntax for links that have the
url as description, it will be put verbatim into the xml/*.xml
file and then the expat parser will throw a syntax error like:
File "../../common/mangle-db.py", line 71, in <module>
main()
File "../../common/mangle-db.py", line 69, in main
patch (details.replace("-details", ""), os.path.basename(details))
File "../../common/mangle-db.py", line 20, in patch
doc = xml.dom.minidom.parse(related)
File "/usr/lib/python2.7/xml/dom/minidom.py", line 1918, in parse
return expatbuilder.parse(file)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 924, in parse
result = builder.parseFile(fp)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 207, in parseFile
parser.Parse(buffer, 0)
xml.parsers.expat.ExpatError: not well-formed (invalid token): line 84, column 7
If the incoming frame buffer has GST_BUFFER_FLAG_DISCONT set this should
be preserved and set for the first output buffer too, like other
payloaders do.
Spotted with gst-validate-1.0 when adding integration tests for
rtpsession, a minimal test to reproduce the issue is:
$ gst-validate-1.0 videotestsrc num-buffers=1 ! rtpvrawpay ! identity ! fakesink
Starting pipeline
Pipeline started
warning : Buffer didn't have expected DISCONT flag333 speed: 1.000000 />
Detected on <identity0:sink>
Detected on <identity0:src>
Detected on <fakesink0:sink>
Description : Buffers after SEGMENT and FLUSH must have a DISCONT flag
Issues found: 1
=======> Test PASSED (Return value: 0)
This introduces a new signal on RTPSession: on-sending-nacks is emitted
right before the list of seqnums to be nacked is processed and
transformed into FB Nacks. This allows implementing custom nack
handling through another mechanism, with APP feedback.
In order to do that, we now split the nacks registration from the actual
FB nack packet construction. We then try and add as many FB Nacks as
possible into the active packets and leave the remaining seqnums in the
RTPSource. In order to avoid sending outdated NACKs later on, we save the
seqnums' calculated deadlines and clean up the outdated seqnums before the
next RTCP send.
Fixes #583
Calling rtp_session_send_rtcp before marking the source as requiring a
pli/fir/nack meant the rtcp_thread could be scheduled and start running
before the source was updated. This meant the request would not be sent
early but instead was transmitted with the next regular RTCP packet.
Add test for nack generation.
If the current time is equal to the early RTCP time deadline, there is
no need to schedule a timer. This ensures that immediate feedback is
really immediate and simplifies implementing unit tests with the test
clock, which stops perfectly on the timeout time.
This fix has been extracted from Pexip feature patch called
"rtpsession: Allow instant transmission of RTCP packets"
When used in combination with a rtponviftimestamp element
downstream, forwarding this flag ensures it gets correctly
serialized in the ONVIF header extension.
A missing colon after a G_DEFINE_TYPE declaration was confusing gst-indent
and causing problems in the pre-commit hook.
Add the missing colon and fix the following function declaration to
follow the normal GStreamer style.
One comment in gst_rtp_session_chain_send_rtp_common() refers to
groups in a buffer list; however, this concept of "group" comes from
GStreamer 0.10 and does not exist anymore in GStreamer 1.0, so update the
comment to refer to buffers instead.
The update_receiver_stats() function is also called when sending packets
in rtp_source_send_rtp(), and sending packets may happen using a buffer
list rather than individual buffers.
So update the stats using the actual number of packets sent.
NOTE: this is fine for the receive path too (rtp_process_send_rtp)
because the receive path does not support buffer lists and
pinfo->packets would always be equal to 1 in this case.
This is needed for the case where you don't know in advance all the sessions
you will be using, but would like to place all the related AUX elements
in the same GstBin. As per the current implementation, each time a sender
AUX bin is requested and returned, RTPBin will walk the src pads and
create sessions for these pads.
In the current implementation, if a src pad already has a session, it
returns an error and stops. As a side effect, if an AUX bin is reused in
a following AUX bin request, it can only work if the pads are created on
the last request.
This change simply relaxes the restriction in order to keep walking, and
just ensures that all newly created pads have a session.
Need to respect return of gst_video_guess_framerate() to ensure
non-zero denominator.
This patch fixes the below error with an abnormal (but valid-frame) file.
(gst-play-1.0:17940): GStreamer-CRITICAL **: passed '0' as denominator for `GstFraction'
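A sketch of the guarded use (the caps handling is illustrative):
```
gint fps_n, fps_d;

/* Only trust the guess when it succeeds, so a zero denominator can
 * never be handed to GstFraction. */
if (gst_video_guess_framerate (frame_duration, &fps_n, &fps_d)) {
  gst_caps_set_simple (caps, "framerate", GST_TYPE_FRACTION,
      fps_n, fps_d, NULL);
}
```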
This problem was found in Test. 2 of the YouTube 2018 EME
tests[1]. The code was accidentally not finding an mp4a's esds atom in
the sample description table when the stream was encrypted. It assumed
that if the stream is protected, then only an enca atom will be found
here. What happens with YouTube is they often provide protected
content with a few seconds of clear content, and then switch to the
encrypted stream.
The failure case here was an incorrect codec_data field being sent
into aacparse. The advertisement of stereo audio @ 44.1kHz for the
mp4a (unprotected) stream was incorrect. As usual, the esds contained
the real values here which were mono at 22050 Hz.
Here's what the MP4 tree looks like for these types of files,
demonstrating why the code was making a wrong assumption (or maybe
YouTube is being unusual),
[ftyp] size=8+16
...
[moov] size=8+1571
...
[trak] size=8+559
...
[stsd] size=12+234
entry-count = 2
[enca] size=8+147
channel_count = 2
sample_size = 16
sample_rate = 44100
[esds] size=12+27
...
...
[mp4a] size=8+67
channel_count = 2
sample_size = 16
sample_rate = 44100
[esds] size=12+27
...
In addition to fixing this, the checks for esds atoms in mp4a and mp4v
have been made symmetrical. While I haven't seen a test case for video
with the same problem, it seemed better to make the same checks. This
also fixes a crash reported from another user[2], they also noted the
asymmetry with mp4v and mp4a.
[1] https://yt-dash-mse-test.commondatastorage.googleapis.com/unit-tests/2018.html?test_type=encryptedmedia-test
[2] https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/398
This can happen in various error cases that could happen between the
creation of the element in question and the adding to the rtspsrc.
It causes an ugly critical warning right now but is otherwise harmless.
The imagefreeze element can be handy for benchmarking downstream
elements because it re-uses the same buffer memory and introduces less
overhead compared to always creating new frames with videotestsrc.
However it's not possible to make imagefreeze send EOS when using
gst-launch-1.0.
Add a num-buffers property to make it look more like a source in the
above scenario.
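A hypothetical benchmarking pipeline that now terminates by itself:
```
gst-launch-1.0 videotestsrc num-buffers=1 ! imagefreeze num-buffers=1000 ! fakesink
```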
In commit 28e5f9098 (rtpbin: use PacketInfo for the sender, 2013-09-13)
the rtp_source_send_rtp signature changed but the documentation was not
adjusted to match the new one.
Update the documentation to match the function signature.
Some functions now accept a generic 'gpointer data' parameter because
they can work either on a single buffer or a buffer list.
However the comments were still referring to the old 'GstBuffer *buffer'
parameter, so update the comments to match the actual functions
signature.
So far we assumed that if all sources are BYE, this meant we needed to
send an EOS on the RTCP sink. The problem is that this case may happen
if we only had one internal source and it detected a collision.
So now we limit the EOS forwarding to when there is a send_rtp_sink pad
and that pad has received EOS. We don't check the recv_rtp_sink,
since the code does not wait for the BYE to be sent before sending EOS
to the RTCP src pad.
RF64 encode support was added to wavenc quite some time
ago, but not declared in wavparse. It seems wavparse can
decode it though, so add it to the sink pad.
The RF64 support was added in
https://bugzilla.gnome.org/show_bug.cgi?id=735627
This is useful when implementing custom retransmission mechanisms like
RIST, to prevent RTCP from being produced for the retransmitted SSRC.
This would also be used in general for various purposes when customizing
an RTP base pipeline.
If it goes over 2^15 packets, it will think it has rolled over
and start dropping all packets. So make sure the seqnum distance is not too big.
But let's not limit it to a number that is too small either, to avoid
emptying it needlessly if there is a spurious huge sequence number;
allow at least 10k packets in any case.
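A hedged sketch of the kind of guard this implies (names and the exact
bounds are illustrative, not the element's actual code):
```
#define MIN_ALLOWED_GAP 10000 /* always tolerate at least 10k packets */

/* Treat a seqnum jump as suspect only when it exceeds the threshold,
 * and never use a threshold below the 10k floor. */
static gboolean
seqnum_gap_too_big (guint16 expected, guint16 seqnum, guint threshold)
{
  guint16 gap = seqnum - expected; /* wraps modulo 2^16 */

  return gap > MAX (threshold, MIN_ALLOWED_GAP);
}
```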
Recent changes in ccextractor were attaching timecode meta to the closed
caption track. We shouldn't write timecode information for the closed
caption trak.
And give it the opportunity to get its other pad linked
Example:
```
$ gst-launch-1.0 uridecodebin uri=file:///home/thiblahute/gst-validate.save/gst-integration-testsuites/testsuites/../medias/defaults/flv/819290236.flv caps=audio/x-raw expose-all-streams=FALSE ! fakesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstFlvDemux:flvdemux0: Internal data stream error.
Additional debug info:
../subprojects/gst-plugins-good/gst/flv/gstflvdemux.c(2760): gst_flv_demux_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin0/GstFlvDemux:flvdemux0:
streaming stopped, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
```
Modify the caps string to allow width and height greater than 4096.
There is no need to restrict it since the matroska format allows the
width and height values to be up to eight bytes long, and this also
applies to the webm subset of the format.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/550
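A hedged sketch of what the relaxed caps range looks like (media type
and exact bounds are illustrative; 2147483647 is G_MAXINT):
```
GST_STATIC_CAPS ("video/x-raw, "
    "width = (int) [ 1, 2147483647 ], "
    "height = (int) [ 1, 2147483647 ]")
```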
When multiple NALs are aggregated, the marker bit should be associated
only with the last NAL of the packet. Otherwise we may break rendering
with AU alignment.
And also add a property for setting this. By default it has the same
value as the metadatacreator metadata.
Various software uses encoder instead of metadatacreator, while others
use both for different purposes. As such, it's useful to support
setting both here.
Blocking in change_state() is a recipe for disaster, even more so if
we wait for another thread that also calls into various element API,
which can then lead to deadlocks on e.g. the state lock.
CEA608 closed caption tracks are a bit special in that each sample
can contain CCs for multiple frames, and CCs can be omitted and have to
be inferred from the duration of the sample in that case.
As such we take the framerate from the (first) video track here for
CEA608 as there must be one CC byte pair for every video frame
according to the spec.
For CEA708 all is fine and there is one sample per frame.
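A minimal sketch of that inference, assuming one CEA-608 byte pair per
video frame (function and variable names are illustrative):
```
/* Number of frames, and thus CEA-608 byte pairs, covered by one CC
 * sample, derived from its duration and the video framerate. */
static guint64
cc_pairs_for_sample (GstClockTime duration, gint fps_n, gint fps_d)
{
  return gst_util_uint64_scale (duration, fps_n,
      (guint64) fps_d * GST_SECOND);
}
```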
The duration field, being a uint64, is stored in 8 bytes, not 4. So the
offset of the following field, the language code, needs to be updated
accordingly so that the parsed language code is not garbage.
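A hedged sketch of the fix (macros in the style of qtdemux; the offsets
are relative and illustrative, not the exact code):
```
/* version 1 mdhd: duration is 64-bit, so language follows 8 bytes
 * after it rather than 4 */
guint64 duration = QT_UINT64 (data + offset);
guint16 language = QT_UINT16 (data + offset + 8);
```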
The documentation of "port-range" implies that passing NULL should be
valid, but currently it is not. Without this check, the sscanf() call
will crash.
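A minimal sketch of the guard, assuming a "min-max" string parsed with
sscanf() (the actual property setter in rtspsrc is more involved):
```
static gboolean
parse_port_range (const gchar * str, guint * min, guint * max)
{
  /* NULL is documented as "unset"; sscanf() would crash on it */
  if (str == NULL)
    return FALSE;

  return sscanf (str, "%u-%u", min, max) == 2;
}
```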
This reverts commit dcd3ce9751.
This functionality was implemented for gstopenwebrtc, but it
turned out this was not actually needed for webrtc bundling
support, as shown in webrtcbin. It also doesn't correspond
to any standards.
This is an API break, but nothing should actually depend on
this, at least not for its initial purpose.
Changes in rtpbin.c were reverted manually, to preserve some
refactoring that had occurred in the original commit.
Fixes#537
This macro is no longer used. It was secretly checking whether the NAL
was a slice, and was so confusingly named that one might think it was
checking whether the NAL is an AUD.
The code was reading the timestamp from the adapter before pushing the
new buffer into it. As a side effect, if the adapter was empty, we'd end
up using an older timestamp. In alignment=au, it means that all
timestamps were likely one frame in the past, while in alignment=nal,
with multiple slices per frame, the first slice would have the timestamp
of the previous one.
The marker bit is used for efficient decoding. The assumption that
it should be set on the AUD is wrong, since the AUD conceptually
starts the frame, while the marker is meant to indicate its end.
So properly set the marker bit as soon as we know we are ending an
AU, and also whenever upstream has set the GST_BUFFER_FLAG_MARKER
flag.
The code was reading the timestamp from the adapter before pushing the
new buffer into it. As a side effect, if the adapter was empty, we'd end
up using an older timestamp. In alignment=au, it means that all
timestamps were likely one frame in the past, while in alignment=nal,
with multiple slices per frame, the first slice would have the timestamp
of the previous one.
The marker bit is used for efficient decoding. The assumption that
it should be set on the AUD is wrong, since the AUD conceptually
starts the frame, while the marker is meant to indicate its end.
So properly set the marker bit as soon as we know we are ending an
AU, and also whenever upstream has set the GST_BUFFER_FLAG_MARKER
flag.
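A hedged sketch of the rule (the payloader's real end-of-AU bookkeeping
is more involved; 'end_of_au' and 'inbuf' are illustrative names):
```
/* Set the RTP marker on the packet that ends the access unit, or when
 * upstream already flagged the input buffer as a marker. */
if (end_of_au || GST_BUFFER_FLAG_IS_SET (inbuf, GST_BUFFER_FLAG_MARKER))
  gst_rtp_buffer_set_marker (&rtp, TRUE);
```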
Don't allow external encoders to use one of the reserved NAL types
implicated in NAL aggregation. These out-of-spec NAL types, if passed
in from the outside world, will lead to an invalid RTP payload being
created.
When the EOS event is received, run all timers immediately and avoid
pushing the EOS downstream before they have been run. This ensures that
the lost packet statistics are accurate.
After EOS is received, it is pointless to wait for further events,
especially waiting on timers. This patch fixes two cases where we could
wait instead of returning GST_FLOW_EOS, and triggers a spin of the loop
function when EOS is queued, regardless of whether this EOS is the queue
head or not.
stream.segment should be updated with the values of the current edit
list, also when a new `moov` is received. Unfortunately this was not
the case because of an early return.
As a consequence of this bug, no end-of-movie clipping was being
performed on the new moov and no segment event was being emitted.
When performing stream switching (e.g. in MSE) the new moov may have a
different edit list. This is often the case when switching between
baseline H.264 (which lacks B-frames) and more demanding profiles. For
this reason it's important to emit a new segment in order to be able
to get matching stream times.
This patch moves the initialization of QtDemuxStream.segment from
gst_qtdemux_add_stream() to _create_stream(). This ensures the segment
is always initialized when the stream is created.
Otherwise the segment format is left as GST_FORMAT_UNDEFINED in the case
where a track is reparsed and qtdemux_reuse_and_configure_stream() is
called instead of gst_qtdemux_add_stream(). (See
qtdemux_expose_streams() in the non streams-aware case.)
This is an extra internal recursive lock used to avoid having to take
both sink pads' stream locks all the time. This patch renames it to
INTERNAL_STREAM_LOCK/UNLOCK() to avoid confusion with possible upstream
GST_PAD API.
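A minimal sketch of such a lock (assuming a GRecMutex member; the field
name is illustrative):
```
#define INTERNAL_STREAM_LOCK(obj)   g_rec_mutex_lock (&(obj)->istream_lock)
#define INTERNAL_STREAM_UNLOCK(obj) g_rec_mutex_unlock (&(obj)->istream_lock)
```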
This reverts "6f3734c305 rtpssrcdemux: Only forward stick events while
holding the sinkpad stream lock" and actually hold on the internal
stream lock. This prevents in some needed case having a second
streaming thread poping in and messing up event ordering.
While forwarding serialized events, we use the gst_pad_forward()
function. In the forward callback (GstPadForwardFunction) we always
return TRUE. Returning TRUE there stops the dispatching procedure. As a
side effect, only one pad receives the events. This breaks sending EOS
from the application, and it also breaks the latency tracer.
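A minimal sketch of the direction of the fix (gst_pad_forward() stops
iterating as soon as the callback returns TRUE):
```
static gboolean
forward_event_cb (GstPad * pad, gpointer user_data)
{
  GstEvent *event = user_data;

  gst_pad_push_event (pad, gst_event_ref (event));

  /* Return FALSE so gst_pad_forward() keeps dispatching the event to
   * the remaining pads instead of stopping at the first one. */
  return FALSE;
}
```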
This patch enables matroskademux to receive seeks before it reaches
GST_MATROSKA_READ_STATE_DATA.
Closes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/514
This also enables receiving seeks in the element READY state.
When such a seek is received, it is stored to be later handled when
GST_MATROSKA_READ_STATE_DATA is reached.
Reset RTPSession when rtpsession changes state from PAUSED to READY.
Without this change, a stored last_rtptime in RTPSource could interfere
with RTP timestamp generation in RTCP Sender Report.
Fixes#510