The calculated threshold for timecode can vary depending on
"max-size-timecode" and the framerate.
For instance, with framerate 29.97 (30000/1001) and
"max-size-timecode=00:02:00;02", every fragment will have an identical
number of frames, 3598. With "max-size-timecode=00:02:00;00", however,
the next keyframe calculated via gst_video_time_code_add_interval()
can differ per fragment, but this is the nature of timecode.
To compensate for such timecode drift, we should keep track of the
expected timecode of the next fragment based on the observed timecode.
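As a rough sketch, the varying step can be observed with the
GstVideoTimeCode API (the loop and its scaffolding below are only for
illustration):

    #include <gst/video/video.h>

    /* Advance a 29.97 drop-frame timecode by the "max-size-timecode"
     * interval and count the frames per step. */
    GstVideoTimeCode *tc = gst_video_time_code_new (30000, 1001, NULL,
        GST_VIDEO_TIME_CODE_FLAGS_DROP_FRAME, 0, 0, 0, 0, 0);
    GstVideoTimeCodeInterval *iv =
        gst_video_time_code_interval_new_from_string ("00:02:00;00");
    guint64 prev = gst_video_time_code_frames_since_daily_jam (tc);
    gint i;

    for (i = 0; i < 5; i++) {
      GstVideoTimeCode *next = gst_video_time_code_add_interval (tc, iv);
      guint64 now = gst_video_time_code_frames_since_daily_jam (next);

      /* With ";02" every step is 3598 frames; with ";00" the step
       * varies because of the drop-frame adjustment. */
      g_print ("fragment length: %" G_GUINT64_FORMAT " frames\n",
          now - prev);
      prev = now;
      gst_video_time_code_free (tc);
      tc = next;
    }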
If the start of the GOP is >= the requested running time, put it into a
new fragment. That is, split-at-running-time would always ensure that a
split happens as early as possible after the given running time.
Previously it was comparing against the current incoming timestamp,
which does not tell us what we actually want to know, as it has no
direct relation to the GOP start/end.
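In a simplified sketch (variable and helper names are assumed), the
corrected check is:

    /* Split on the GOP start's running time, not on the current
     * incoming timestamp: */
    if (gop_start_running_time >= requested_split_running_time) {
      /* This GOP begins the new fragment. */
      start_new_fragment ();
    }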
When quickly switching the splitmuxsrc state back to NULL, it
can deadlock while shutting down part readers that are still
starting up, or crash if splitmuxsrc cleaned up the parts before
the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use some finer grained locks for that.
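A minimal sketch of the pattern; the dedicated msg_lock field is an
assumption for illustration, not the actual member name:

    /* Post async-start under a dedicated lock instead of the state
     * lock, so a concurrent state change cannot deadlock against us. */
    g_mutex_lock (&splitmux->msg_lock);
    gst_element_post_message (GST_ELEMENT (splitmux),
        gst_message_new_async_start (GST_OBJECT (splitmux)));
    g_mutex_unlock (&splitmux->msg_lock);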
The queued time includes the duration of the last queued frame
(i.e., the new keyframe), so the condition check should not be
inclusive. Note that the new fragment will be cut excluding the last
frame, so with an inclusive condition the fragment could end up one
frame shorter for all-keyframe streams such as JPEG, or other
all-intra video streams.
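Sketch of the intended comparison (names assumed):

    /* queued_time already includes the duration of the newly queued
     * keyframe, so the comparison must be strict: */
    if (queued_time > threshold)        /* not ">=" */
      request_fragment_split ();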
Since commit 94bb76b6b9, splitmuxsink splits fragments based on the
queued time and its threshold, so we no longer need to store the next
timecode for the split decision.
Not only the requested keyframe time but also the queued size should
be a criterion for the split decision in timecode-based mode (same as
in the max-size-time based split case).
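A combined check might look like this (names assumed for
illustration):

    /* In timecode mode, honour the queued-size limits too: */
    if (timecode_threshold_reached || queued_bytes > max_size_bytes)
      request_fragment_split ();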
In order to concatenate fragments, splitmuxsrc offsets the PTS at
the start of each fragment to 0 to align it with the previous file.
This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
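A minimal sketch of the idea; the margin value is an assumption:

    /* Shift all outgoing timestamps by a fixed positive offset so
     * DTS stays non-negative after PTS is re-based to 0. */
    const GstClockTime fixed_offset = GST_SECOND;

    GST_BUFFER_PTS (buf) += fixed_offset;
    if (GST_BUFFER_DTS_IS_VALID (buf))
      GST_BUFFER_DTS (buf) += fixed_offset;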
If the sinks are not configured via the "location" property, it can
be useful to know which sink the fragment was actually opened/closed
for, especially if finalization of the fragments happens asynchronously.
Applications might handle locations, and generally the configuration
of the sink, by themselves instead of having splitmuxsink set the
location on the sink. Nonetheless it makes sense to increment the
fragment_id that is passed to the signal so that applications know
which fragment is requested.
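For example, an application that picks its own file names still gets
a usable id. The callback below is illustrative; "format-location" is
the real splitmuxsink signal:

    static gchar *
    on_format_location (GstElement * splitmux, guint fragment_id,
        gpointer user_data)
    {
      /* The application chooses the location itself; fragment_id
       * still increments for every fragment that is opened. */
      return g_strdup_printf ("/tmp/frag-%05u.mp4", fragment_id);
    }

    g_signal_connect (splitmuxsink, "format-location",
        G_CALLBACK (on_format_location), NULL);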
An application might try to access splitmuxsink from a sync message
handler via g_object_{get,set}(), which also takes the lock. In
general, locks should not be held while a message handler runs.
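For illustration, a handler like the following must not run while
splitmuxsink is still holding its own lock:

    static GstBusSyncReply
    on_sync_message (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      GstElement *splitmuxsink = user_data;
      gchar *location = NULL;

      /* g_object_get() takes the splitmuxsink lock; if the element
       * posted this message while holding that lock, we would
       * deadlock here. */
      g_object_get (splitmuxsink, "location", &location, NULL);
      g_free (location);
      return GST_BUS_PASS;
    }

    gst_bus_set_sync_handler (bus, on_sync_message, splitmuxsink, NULL);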
Add a property which explicitly maps splitmuxsink pads to the
muxer pads they should connect to, overriding the implicit logic
that tries to match pads but yields arbitrary names.
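Usage sketch; the pad names are assumptions for illustration:

    /* Feed splitmuxsink's "video" pad into the muxer's "video_0" pad
     * and "audio_0" into "audio_99", bypassing the implicit matching. */
    GstStructure *map = gst_structure_new ("x-pad-map",
        "video", G_TYPE_STRING, "video_0",
        "audio_0", G_TYPE_STRING, "audio_99", NULL);

    g_object_set (splitmuxsink, "muxer-pad-map", map, NULL);
    gst_structure_free (map);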
When running in async-finalize mode, request new pads from the muxer
using the same names as old pads, instead of letting the muxer assign
new ones based on the pad template name.
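Sketch of the idea (the template and pad variables are assumed to be
in scope):

    /* Request the replacement pad under the old pad's concrete name
     * instead of the template name, so e.g. "video_1" stays "video_1"
     * across fragments. */
    GstPad *new_pad = gst_element_request_pad (muxer, templ,
        GST_PAD_NAME (old_pad), NULL);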
The primary video stream is used to select fragment cut points
at keyframe boundaries. Auxiliary video streams may be broken up
at any packet, so fragments may not start with a keyframe for
those streams.
gst_splitmux_src_activate_part() configures the pad information
before starting the pad task, but occasionally the changes it makes
to the pad are not seen in the pad task because they're not
protected by the right locking. Use the pad's object lock to
protect those variables.
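Sketch (the field names are assumed):

    /* Configure the part pad under its object lock; the pad task takes
     * the same lock, so it is guaranteed to see consistent values. */
    GST_OBJECT_LOCK (pad);
    part_pad->reader = reader;
    part_pad->is_eos = FALSE;
    GST_OBJECT_UNLOCK (pad);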
Fix a deadlock around the pads list by using an RW lock to allow
simultaneous readers. The pad list doesn't really change except at
startup and shutdown.
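Sketch of the locking scheme; the pads_rwlock field and the
inspect_pad() helper are assumptions for illustration:

    /* Readers (e.g. event and query handlers) may iterate the list
     * concurrently: */
    g_rw_lock_reader_lock (&splitmux->pads_rwlock);
    for (GList *l = splitmux->pads; l != NULL; l = l->next)
      inspect_pad (l->data);
    g_rw_lock_reader_unlock (&splitmux->pads_rwlock);

    /* The writer lock is only taken where pads are added or removed,
     * i.e. at startup and shutdown. */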
Make the debug output less confusing by not mentioning a src
pad when doing calculations on the sink pad side.
Improve the debug output around why a GOP is considered to overflow
a fragment.
There is only a single sink element in async-finalize mode, and we
would keep the running time from previous fragments set in that case.
As we never set the running time for the very last fragment on EOS,
the closing time reported for the very last fragment would be the
same as the closing time of the previous fragment.
Blocking in change_state() is a recipe for disaster, even more so if
we wait for another thread that also calls into various element API and
could then lead to deadlocks on e.g. the state lock.
Apart from the obvious drawbacks of hardcoding, the drawback here was
that, if we subtracted 2 frames (instead of 2.6) from the target running
time, we'd request the next keyframe a bit too far into the future,
which would make our files split at the wrong position.
https://bugzilla.gnome.org/show_bug.cgi?id=797293