Since commit 94bb76b6b9, splitmuxsink splits fragments based on the queued
time and its threshold, so there is no longer any need to store the next
timecode for the split decision.
When using this mode, each frame is split into two fields, each one being
transferred in its own buffer.
This is implemented with the V4L2_FIELD_ALTERNATE field format in v4l2.
This mode is enabled using a caps filter such as
"v4l2src ! video/x-raw\(format:Interlaced\)"
Here are the main changes related to this feature:
- use the INTERLACED caps feature with this mode.
- in this mode both fields of a given frame have the same sequence/offset,
so adjust the algorithm checking for lost fields/frames accordingly.
- double the pool's minimum number of buffers, as each frame now requires
2 buffers.
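Going back to the caps filter above, here is a minimal sketch of building
the equivalent caps programmatically; the "format:Interlaced" feature
string is the one from the launch line, and the helper name is only for
illustration:

    #include <gst/gst.h>

    static GstCaps *
    make_alternate_field_caps (void)
    {
      GstCaps *caps = gst_caps_new_empty_simple ("video/x-raw");

      /* the Interlaced caps feature is what selects the
       * one-field-per-buffer (alternate) transfer mode */
      gst_caps_set_features (caps, 0,
          gst_caps_features_new ("format:Interlaced", NULL));

      return caps;
    }

Such caps can then be set on a capsfilter placed right after v4l2src,
matching the launch line above.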
Fixes #504
Co-authored-by: Zeeshan Ali <zeenix@collabora.co.uk>
Use GST_VIDEO_INFO_FIELD_HEIGHT() instead of GST_VIDEO_INFO_HEIGHT()
when we actually want the field height rather than the frame height.
So far both are equal, but that will no longer be the case when
implementing the alternate interlace mode.
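A tiny illustrative sketch of the intent (the wrapper function is
hypothetical; GST_VIDEO_INFO_FIELD_HEIGHT() is the macro referred to
above):

    #include <gst/video/video.h>

    /* For progressive or interleaved video this equals the frame height;
     * once the alternate interlace mode is in use, each buffer carries a
     * single field, and the field height is what per-buffer size
     * calculations need. */
    static gint
    buffer_height (const GstVideoInfo * info)
    {
      return GST_VIDEO_INFO_FIELD_HEIGHT (info);
    }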
The queued size, and not only the requested keyframe time, should be a
criterion for the split decision in the timecode-based mode (just as in
the max-size-time based split case).
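A hypothetical sketch of the intended check (the names and the exact
condition are illustrative only, not the actual splitmuxsink code):

    #include <gst/gst.h>

    static gboolean
    need_new_fragment (GstClockTime running_time,
        GstClockTime requested_keyframe_time,
        guint64 queued_bytes, guint64 max_size_bytes)
    {
      /* split on the requested keyframe time, but also when the queued
       * data already exceeds the configured size threshold */
      return running_time >= requested_keyframe_time ||
          (max_size_bytes != 0 && queued_bytes > max_size_bytes);
    }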
Without a qmlglsink in the pipeline, we need to retrieve the window system
display ourselves rather than relying solely on qmlglsink to take priority
in the choice of display.
Deleting a QOpenGLFramebufferObject needs to occur on the same thread it
was created on in order to actually free the relevant resources
immediately. Otherwise, they will be queued for deletion and not freed
until the associated QOpenGLContext is destroyed.
This is a concept that only applies when a buffer arrives in the chain
function after it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superfluous; such a buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today: a single timer is scheduled for a
series of lost packets, and if the first packet in that series then
arrives, it causes that timer to be rescheduled from a "multi"-timer to a
single-timer. That leaves many of the packets covered by the timer
unaccounted for, creating a situation where the jitterbuffer will never
again push out another packet.
This patch solves the problem by not scheduling those lost packets as
another timer, but by instead asking for that lost-event to be pushed
straight out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effects that are presented in
later commits. It completely removes the concept of "already-lost", which
is why that test has been disabled in this commit, to be removed later.
This should result in no worse accuracy than the base parse element, and
may result in better accuracy. In particular, the number of bytes
processed at any given point, as accumulated by baseparse, can only be
accurate to (1 / # of frames) bytes per second, and if we try to seek to a
large offset immediately after pausing the pipeline, this small inaccuracy
can propagate into something noticeable.
The use case that prompted this patch is a 45-minute MPEG-1 layer 3 file,
which has a constant bit rate but no seek tables. Trying to seek the
pipeline to a location 41 minutes in, immediately after pausing it and
without the ACCURATE flag, yields a position that is, even with
<https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/374>,
still audibly incorrect. This patch yields a much closer position, no
longer audibly incorrect, and likely within a frame of the most correct
position.
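As a rough illustration of the constant-bit-rate mapping involved (a
simplified sketch, not the actual parser code):

    #include <gst/gst.h>

    /* With a known constant bitrate, a target time converts directly to a
     * byte offset instead of going through the average bitrate that
     * baseparse accumulates frame by frame. */
    static guint64
    cbr_time_to_offset (GstClockTime target, guint bitrate_bps)
    {
      return gst_util_uint64_scale (target, bitrate_bps / 8, GST_SECOND);
    }

At 128 kbps, for example, a 41-minute position maps to
41 * 60 * 16000 = 39,360,000 bytes, so even a 0.1% error in an estimated
average bitrate shifts the result by roughly 39 kB, i.e. about 2.5 seconds
of audio.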
By the time sink_event is called, the pad's current caps have
already been updated. To address this, implement sink_event_pre_queue,
and check if the pad can be renegotiated there.
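A minimal sketch of the idea, assuming a GstAggregator-based element and
the usual G_DEFINE_TYPE boilerplate (the function name and the
renegotiation check itself are illustrative):

    #include <gst/base/gstaggregator.h>

    static GstFlowReturn
    my_element_sink_event_pre_queue (GstAggregator * agg,
        GstAggregatorPad * pad, GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
        /* The pad's current caps still reflect the old configuration at
         * this point, so this is the place to decide whether the pad can
         * be renegotiated. */
      }

      return GST_AGGREGATOR_CLASS (parent_class)->sink_event_pre_queue (agg,
          pad, event);
    }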
Fixes #707