As we override the GLib list item with our own structure, we cannot use
any function from GList or GQueue that would try to free an
RTPJitterBufferItem.
In this patch, we move away from g_queue_new(), which forces using
g_queue_free(). That function may call g_slice_free() on any items left
in the queue, and passing the wrong size to GSlice can cause data
corruption and crashes.
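A minimal sketch of the resulting pattern (the Item type below is a
stand-in, not the real RTPJitterBufferItem): embed the list node in the
item and the queue in the owning structure, and flush by popping the links
ourselves so that GLib never frees a node with the wrong size.

    #include <gst/gst.h>

    typedef struct
    {
      GList node;              /* must be first: the item *is* the list node */
      GstClockTime pts;        /* payload fields follow */
    } Item;

    static GQueue queue = G_QUEUE_INIT;  /* no g_queue_new(), no g_queue_free() */

    static void
    push_item (GstClockTime pts)
    {
      Item *item = g_slice_new0 (Item);

      item->pts = pts;
      g_queue_push_tail_link (&queue, (GList *) item);
    }

    static void
    flush_items (void)
    {
      GList *link;

      /* Pop the links ourselves and free them with the item's real size,
       * instead of letting g_queue_free()/g_list_free() treat them as
       * plain GList nodes. */
      while ((link = g_queue_pop_head_link (&queue)))
        g_slice_free (Item, (Item *) link);
    }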
A better approach would be to use a proper intrusive linked list
implementation but that's left as an exercise for the next person
running into crashes caused by this.
Be aware that this regression was introduced 6 years ago in the following
commit [0]; the call to flush() looked useless, as there was a
g_queue_free() afterward.
Signed-off-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
[0] 479c7642fd
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/573>
The calculated timecode threshold may vary depending on
"max-size-timecode" and the framerate.
For instance, with framerate 29.97 (30000/1001) and
"max-size-timecode=00:02:00;02", every fragment will have an identical
number of frames, 3598. However, with "max-size-timecode=00:02:00;00",
the next keyframe calculated via gst_video_time_code_add_interval()
can differ per fragment, but this is the nature of timecode.
To compensate for such timecode drift, we should keep track of the
expected timecode of the next fragment based on the observed timecode.
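A hedged sketch of the idea (observed_tc is assumed to be the timecode of
the first frame of the fragment just started; the helper name is made up):

    #include <gst/video/video.h>

    /* Derive the next split target from the timecode actually observed at
     * the fragment start, so per-fragment rounding inside
     * gst_video_time_code_add_interval() cannot accumulate into a drift. */
    static GstVideoTimeCode *
    next_split_target (const GstVideoTimeCode * observed_tc,
        const gchar * max_size_timecode)
    {
      GstVideoTimeCodeInterval *iv;
      GstVideoTimeCode *target;

      iv = gst_video_time_code_interval_new_from_string (max_size_timecode);
      if (iv == NULL)
        return NULL;

      target = gst_video_time_code_add_interval (observed_tc, iv);
      gst_video_time_code_interval_free (iv);

      /* Caller frees the result with gst_video_time_code_free(). */
      return target;
    }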
The problem was this:
Due to the highly irregular arrival of RTX packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence numbers (62 in the
test) you end up sending RTX requests for some of those packets, and then,
if the sender answers those requests, you are going to get a bunch of
RTX packets arriving (-13 and then 5 more packets in the test).
Now, if max-misorder has been pushed very low at this point, these RTX
packets will trigger the handle_big_gap_buffer() logic, and because they
arrive so neatly in order (as they would, since they were requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX packets will be pushed into the chain() function
However, at this point these RTX packets are no longer valid: the
jitterbuffer has already pushed lost-events for them, so they will now
be dropped on the floor and never make it to the waiting loop-function.
And, since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is now stalled forever, and will
not push out another buffer.
The proposed fixes:
1. Don't use RTX packets in the packet-rate calculation.
2. Don't use RTX packets in the large-gap logic, as they are likely to be dropped (see the sketch below).
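A rough sketch of fix 2 (the helper name is made up; the real check sits
inside the jitterbuffer's chain handling):

    #include <gst/rtp/gstrtpbuffer.h>

    /* RTX packets are flagged upstream with
     * GST_RTP_BUFFER_FLAG_RETRANSMISSION; they should feed neither the
     * packet-rate estimator nor the big-gap/reset heuristics, since they
     * may well be dropped as already-lost packets. */
    static gboolean
    counts_for_gap_logic (GstBuffer * buffer)
    {
      return !GST_BUFFER_FLAG_IS_SET (buffer,
          GST_RTP_BUFFER_FLAG_RETRANSMISSION);
    }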
This patch now sets the maximum number of buffers to 32, allowing the pool
to grow for drivers that support that, while still respecting the minimum
number of buffers reported by the driver. This was done to fix a stall
with the virtio CODEC driver.
Fixes #672
If the start of the GOP is >= the requested running time, put it into a
new fragment. That is, split-at-running-time would always ensure that a
split happens as early as possible after the given running time.
Previously it was comparing against the current incoming timestamp,
which does not tell us what we actually want to know as it has no direct
relation to the GOP start/end.
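Illustrative comparison only (the variable names are not the actual
splitmuxsink fields):

    /* Split as soon as the GOP itself starts at or after the requested
     * running time, not when the current incoming timestamp does. */
    if (gop_start_running_time >= requested_split_running_time)
      start_new_fragment ();   /* hypothetical helper */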
There are often only two buffers queued in the kernel, so no new buffers
are requested.
With every qbuf, the kernel receives a new DMABUF for the specified index.
This most likely differs from the last DMABUF and the old cached entry is
released. This results in a lot of map/unmap overhead if the kernel driver
needs a mapping for the buffer.
With a larger queue, it's quite likely that both the old and the new
DMABUF are also mapped for another index, so the map/unmap is skipped
because the mapping is reference counted.
The corresponding allocated buffers don't contain any actual memory, so
allocating them is quite cheap. The log message is updated to clarify
this.
This is disabled by default as it unnecessarily creates bigger headers,
but it is something that is required by some applications, most notably
the Apple ProRes spec.
Request pads can be released at any time, so make sure to hold
the object lock when iterating the element's sinkpads list where
that's safe, or use other safe pad iteration patterns in
other places.
When choosing a best pad, return a reference to the pad to make sure it
stays alive for output in the aggregator srcpad task.
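One safe pattern, sketched with made-up helpers (is_better_pad() is not
the actual flvmux comparison): hold the object lock while walking the
sinkpads list and take a reference on the chosen pad before dropping the
lock.

    #include <gst/gst.h>

    /* Hypothetical comparison; the real criterion is element-specific. */
    static gboolean
    is_better_pad (GstPad * pad, GstPad * current_best)
    {
      return current_best == NULL;
    }

    static GstPad *
    find_best_pad (GstElement * element)
    {
      GstPad *best = NULL;
      GList *l;

      GST_OBJECT_LOCK (element);
      for (l = element->sinkpads; l != NULL; l = l->next) {
        GstPad *pad = GST_PAD (l->data);

        if (is_better_pad (pad, best))
          gst_object_replace ((GstObject **) &best, GST_OBJECT (pad));
      }
      GST_OBJECT_UNLOCK (element);

      /* 'best' now holds a reference that keeps the pad alive even if the
       * request pad is released concurrently; unref it when done. */
      return best;
    }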
Should fix a spurious valgrind error in the CI flvmux tests and some
other potential problems if the request sink pads are released while
the element is running.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/714
Even if timecode trak is officially unsupported in non-mov flavors,
some software still supports it, e.g. Final Cut Pro X:
https://developer.apple.com/library/archive/technotes/tn2174/_index.html
The user might still expect to see the timecode information in the
non-mov file despite it being officially unsupported, because other
software, e.g. QuickTime, will create a timecode trak even in mp4 files.
Furthermore, software that supports timecode trak in non-mov flavors
will also display the file duration in "timecode units" instead of real
clock time, which is not necessarily the same for 29.97 fps and friends.
This might confuse users, who see a different duration for the same
framerate and amount of frames depending on whether the container is mp4
or mov.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/512
gst_buffer_map() results in memcpy()ing when a GstBuffer contains
more than one GstMemory.
This has quite an impact on performance on systems with a limited amount
of resources. With this patch the whole GstBuffer is no longer mapped at
once; instead each individual GstMemory is iterated and mapped
separately.
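A minimal sketch of the per-memory approach:

    #include <gst/gst.h>

    static void
    process_buffer (GstBuffer * buf)
    {
      guint i, n = gst_buffer_n_memory (buf);

      /* Map each GstMemory on its own instead of gst_buffer_map()ing the
       * whole buffer, which would merge multiple memories into one copy. */
      for (i = 0; i < n; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buf, i);
        GstMapInfo map;

        if (gst_memory_map (mem, &map, GST_MAP_READ)) {
          /* consume map.data / map.size for this chunk */
          gst_memory_unmap (mem, &map);
        }
      }
    }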
There is a use-case for a server to re-payload opus that is going through
it. The problem was that the payloader requires channels in the caps, but
this is not something the depayloader can parse out of the stream, meaning
caps negotiation would fail.
Removing the channels requirement from the template caps fixes this.
RTP session starts a new thread for RTCP and names it
"rtpsession-rtcp-thread", which happens to be longer than the maximum of
16 bytes allowed by pthread_setname_np() and causes the naming to fail.
See docs for more details.
This commit simply shortens the thread's name so it can actually be set.
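For illustration only (glibc restricts the name to 16 bytes including the
terminating NUL, i.e. 15 visible characters):

    #define _GNU_SOURCE
    #include <pthread.h>

    static void
    name_current_thread (void)
    {
      /* Too long (22 characters): fails with ERANGE, the thread stays unnamed. */
      pthread_setname_np (pthread_self (), "rtpsession-rtcp-thread");

      /* Short enough: succeeds. */
      pthread_setname_np (pthread_self (), "rtp-rtcp");
    }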
Mainly generalize all the latest tests that have found various stalls
in the jitterbuffer, so that they only consist of a series of packets
with various seqnum/rtptime/rtx combinations, arriving at a specific time.
This means future tests can be more easily written to prove certain
behavior does not cause stalls.
Also fix the warning on Windows:
warning C4244: 'initializing': conversion from 'double' to 'gint', possible loss of data
Change the types from boolean to guint due to the ++ operator used on
them, and only call JBUF_SIGNAL_QUEUE after settling down,
or else you end up signaling the waiting code in chain() for every buffer
pushed out.
We haven't done any blending ourselves for a while now.
Note that this is a regression in "supported" formats: previously
ARGB64 was supported, for example, but in practice it caused blending to
not take place at all.
When switching the splitmuxsrc state back to NULL quickly, it
can encounter deadlocks shutting down the part readers that
are still starting up, or encounter a crash if the splitmuxsrc
cleaned up the parts before the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use some finer grained locks for that.
The queued time includes the duration of the last queued frame
(i.e., the new keyframe), so the condition check should not be inclusive.
Note that the new fragment will be cut excluding the last frame,
and therefore, if the condition were inclusive,
the fragment might end up one frame shorter for all-keyframe
streams such as jpeg or other all-intra video streams.
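Illustrative snippet only (the names are not the actual splitmuxsink
code):

    /* queued_time already includes the duration of the newly queued
     * keyframe, so the comparison must be strict to avoid cutting the
     * fragment one frame short. */
    if (queued_time > threshold)      /* not >= */
      start_new_fragment ();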
Since commit 94bb76b6b9, splitmuxsink
splits fragments based on the queued time and its threshold,
so there is no need to store the next timecode for the split decision.
When using this mode, each frame is split into two fields, each one being
transferred in its own buffer.
This is implemented with the V4L2_FIELD_ALTERNATE field format in v4l2.
This mode is enabled using a caps filter such as
"v4l2src ! video/x-raw\(format:Interlaced\)"
Here are the main changes related to this feature:
- use the INTERLACED caps feature with this mode.
- in this mode both fields of a given frame have the same sequence/offset
so adjust the algorithm checking for lost field/frame accordingly.
- double the pool's min number of buffers as each frame requires 2 buffers.
Fix #504
Co-authored-by: Zeeshan Ali <zeenix@collabora.co.uk>
Use GST_VIDEO_INFO_FIELD_HEIGHT() instead of GST_VIDEO_INFO_HEIGHT()
when we actually want the field height rather than the frame height.
So far both are equal, but that will no longer be the case when
implementing the alternate interlace mode.
Not only the requested keyframe time but also the queued size should be
a criterion for the split decision in timecode-based mode
(same as the max-size-time based split case).
Without a qmlglsink, we need to retrieve the window system display
ourselves rather than relying solely on qmlglsink to have priority on
the choice of display.
Deleting a QOpenGLFramebufferObject needs to occur on the same thread it
was created on in order to actually free the relevant resources
immediately. Otherwise, they will be queued for deletion and not freed
until the associated QOpenGLContext is destroyed.