When switching the splitmuxsrc state back to NULL quickly, it
can encounter deadlocks shutting down the part readers that
are still starting up, or crash if splitmuxsrc has already
cleaned up the parts before the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use some finer grained locks for that.
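A minimal sketch of the pattern (illustrative only, not the actual splitmuxsrc code; the dedicated mutex is an assumption standing in for the finer-grained lock):
```
#include <gst/gst.h>

/* Post async-start / async-done while holding a dedicated mutex instead of
 * the element's state lock, so a concurrent state change to NULL cannot
 * deadlock against the message post. "msg_lock" is a hypothetical
 * finer-grained lock. */
static void
post_async_start (GstElement * element, GMutex * msg_lock)
{
  g_mutex_lock (msg_lock);
  gst_element_post_message (element,
      gst_message_new_async_start (GST_OBJECT (element)));
  g_mutex_unlock (msg_lock);
}

static void
post_async_done (GstElement * element, GMutex * msg_lock)
{
  g_mutex_lock (msg_lock);
  gst_element_post_message (element,
      gst_message_new_async_done (GST_OBJECT (element), GST_CLOCK_TIME_NONE));
  g_mutex_unlock (msg_lock);
}
```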
The queued time includes the duration of the last queued frame
(i.e., the new keyframe), so the condition check should not be inclusive.
Note that the new fragment will be cut excluding the last frame,
and therefore, if the condition were inclusive,
the fragment might end up one frame shorter for all-keyframe
streams such as JPEG or all-intra video streams.
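As a sketch of the intent (hypothetical helper, not the actual splitmuxsink code):
```
#include <gst/gst.h>

/* queued_time already includes the duration of the pending keyframe, so a
 * new fragment is only started when the queue strictly exceeds the
 * threshold; with '>=' an all-intra stream would lose one frame's worth of
 * duration per fragment. */
static gboolean
need_new_fragment (GstClockTime queued_time, GstClockTime threshold_time)
{
  if (threshold_time == 0 || !GST_CLOCK_TIME_IS_VALID (queued_time))
    return FALSE;

  return queued_time > threshold_time;  /* strictly greater, not '>=' */
}
```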
Since commit 94bb76b6b9, splitmuxsink
splits fragments based on the queued time and its threshold,
so there is no need to store the next timecode for the split decision.
Not only the requested keyframe time but also the queued size should be
a criterion for the split decision in timecode-based mode
(as in the max-size-time based split case).
This is a concept that only applies when a buffer arrives in the chain
function, and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superflous, and this buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, causing a lot of the packets
in that timer to be unaccounted for, creating a situation in which
the jitterbuffer will never again push out another packet.
This patch solves the problem by not scheduling those lost packets
as another timer, but instead asking to have the lost-event pushed
straight out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
This change has some interesting knock-on effects, presented in
later commits. It completely removes the concept of "already-lost",
which is why that test has been disabled in this commit, to be
removed later.
This should result in no worse accuracy than the base parse element, and may
result in better accuracy. In particular, the number of bytes processed at any
given point, as accumulated by baseparse, can only be accurate to
(1 / # of frames) bytes per second, and if we try to seek immediately after
pausing the pipeline to a large offset, this small inaccuracy can propagate to
something noticeable.
The use case that prompted this patch is a 45-minute MPEG-1 layer 3 file, which
has a constant bit rate but no seek tables. Trying to seek the pipeline
immediately after pausing it, without the ACCURATE flag, to a location 41
minutes in, yields a location that is, even with <https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/374>,
still audibly incorrect. This patch yields a much closer position, no longer
audibly incorrect, and likely within a frame of the most correct position.
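For reference, the constant-bitrate conversion this relies on boils down to something like the following sketch (illustrative, not the parser's actual code):
```
#include <gst/gst.h>

/* Convert a target playback time to a byte offset for a constant-bitrate
 * stream, using 64-bit scaling to avoid overflow and rounding drift. */
static guint64
cbr_time_to_offset (GstClockTime time, guint bitrate /* bits per second */)
{
  /* bytes = time * bitrate / 8 / GST_SECOND */
  return gst_util_uint64_scale (time, bitrate, 8 * GST_SECOND);
}
```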
By the time sink_event is called, the pad's current caps have
already been updated. To address this, implement sink_event_pre_queue,
and check if the pad can be renegotiated there.
Fixes #707
- Refactored the planar transform method to support all video formats
that are stored planar, independent of the subsampling used
- Added support for Y444
gst_buffer_map () results in copying memory when a GstBuffer contains
more than one GstMemory and AVC (length-prefixed) alignment is used.
This has quite an impact on performance on systems with a limited amount
of resources. With this patch the whole GstBuffer will not be mapped at
once; instead, each individual GstMemory will be iterated and mapped
separately.
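The approach looks roughly like this (a sketch, not the exact element code):
```
#include <gst/gst.h>

/* Walk the GstMemory chunks of a buffer and map each one individually,
 * instead of gst_buffer_map()ing the whole buffer, which merges (and thus
 * copies) the memories when there is more than one. */
static void
process_buffer_memories (GstBuffer * buffer)
{
  guint i, n = gst_buffer_n_memory (buffer);

  for (i = 0; i < n; i++) {
    GstMemory *mem = gst_buffer_peek_memory (buffer, i);
    GstMapInfo map;

    if (!gst_memory_map (mem, &map, GST_MAP_READ))
      continue;

    /* ... parse map.data / map.size here ... */

    gst_memory_unmap (mem, &map);
  }
}
```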
When calling gst_rtp_jitter_buffer_reset you pass in a seqnum.
This is considered the starting-point for a new stream.
However, the old behavior would unref this buffer, basically lying to
the thread that is pushing out buffers by telling it that it can expect
this buffer, when it would never arrive. The resulting effect was that no
more buffers were pushed out of the jitterbuffer, and it would buffer
incoming data indefinitely.
By instead inserting the buffer in the gap_packets queue, the _reset()
function will take responsibility for using that as the first buffer
of the new stream.
Fixes #703
In order to concatenate fragments, splitmuxsrc offsets
the start of each fragment PTS to 0 to align it with the
previous file. This means that DTS can go negative for
the first fragment, with really bad results.
Add a fixed offset to outgoing timestamp ranges to
avoid that.
In this mode each field is carried using its own buffer.
Allow deinterlace to negotiate caps with the Interlaced feature and
adjust the line-fetching algorithm accordingly.
Fix #620
Output frames used to have their interlace mode set to the same one as
the input. This breaks their field and comp heights when deinterlacing
an alternate stream.
Each FLAC metadata block starts with a flag denoting whether it is the
last metadata block. The existing flacparse code moves any existing
VORBISCOMMENT block to immediately follow the STREAMINFO block without
changing any block's last-metadata-block flag. If no VORBISCOMMENT block
exists, it creates one with the last-metadata-block flag set to true.
This results in gstflacdec sometimes giving bad headers to libflac when
trying to play perfectly valid FLAC files depending on the file's
metadata ordering. Depending on the contents of the other metadata
blocks, current versions of libflac may or may not return
FLAC__STREAM_DECODER_ERROR_STATUS_BAD_HEADER when given this broken
metadata. This is most noticeable with files that have a large cover art
image attached where VORBISCOMMENT is the very last metadata block with
no PADDING afterwards.
This patch changes that behavior so that:
1. For FLAC files that already have a VORBISCOMMENT block, the metadata
order is preserved.
2. For FLAC files that do not have a VORBISCOMMENT block, the generated
dummy VORBISCOMMENT is placed immediately after STREAMINFO and
inherits the last-metadata-block flag from STREAMINFO.
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/484
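For context, the flag in question lives in the 4-byte header that precedes every FLAC metadata block; a minimal parsing sketch (not the flacparse code):
```
#include <glib.h>

typedef struct
{
  gboolean is_last;   /* bit 7 of the first byte: last-metadata-block flag */
  guint8 type;        /* bits 6..0: 0 = STREAMINFO, 4 = VORBIS_COMMENT, ... */
  guint32 length;     /* 24-bit big-endian length of the block payload */
} FlacBlockHeader;

static FlacBlockHeader
parse_flac_block_header (const guint8 header[4])
{
  FlacBlockHeader h;

  h.is_last = (header[0] & 0x80) != 0;
  h.type = header[0] & 0x7f;
  h.length = ((guint32) header[1] << 16) | (header[2] << 8) | header[3];

  return h;
}
```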
Sometimes the headers contain useless, wrong or zero values for e.g. the
sample size with these formats. There's only a single valid value for
them, so let's set that instead.
We do not have a way to know the format modifiers to use with string
functions provided by the system. G_GUINT64_FORMAT and the other format
modifiers only work with GLib's string formatting functions. We cannot
use them with string functions provided by the stdlib. See:
https://developer.gnome.org/glib/stable/glib-Basic-Types.html#glib-Basic-Types.description
```
../gst/rtpmanager/gstrtpjitterbuffer.c: In function 'gst_jitter_buffer_sink_parse_caps':
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: unknown conversion type character 'l' in format [-Werror=format=]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
In file included from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/gtypes.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib/galloca.h:32,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/glib-2.0/glib.h:30,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/gst.h:27,
from /home/nirbheek/cerbero/build/dist/windows_x86/include/gstreamer-1.0/gst/rtp/gstrtpbuffer.h:27,
from ../gst/rtpmanager/gstrtpjitterbuffer.c:108:
/home/nirbheek/cerbero/build/dist/windows_x86/lib/glib-2.0/include/glibconfig.h:69:28: note: format string is defined here
#define G_GUINT64_FORMAT "llu"
^
../gst/rtpmanager/gstrtpjitterbuffer.c:1523:32: error: too many arguments for format [-Werror=format-extra-args]
|| sscanf (mediaclk, "direct=%" G_GUINT64_FORMAT, &clock_offset) != 1)
^~~~~~~~~~
```
See also: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/379
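One portable way around this, sketched below (an illustration of the idea, not necessarily the exact change made), is to parse the 64-bit value with GLib instead of handing G_GUINT64_FORMAT to the C runtime's sscanf():
```
#include <glib.h>
#include <string.h>

static gboolean
parse_direct_offset (const gchar * mediaclk, guint64 * clock_offset)
{
  const gchar *p = g_strstr_len (mediaclk, -1, "direct=");

  if (p == NULL)
    return FALSE;

  /* g_ascii_strtoull() does not depend on the platform's printf/scanf
   * format characters */
  *clock_offset = g_ascii_strtoull (p + strlen ("direct="), NULL, 10);
  return TRUE;
}
```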
Previously we saved the buffer_timestamp straight into
mux->cluster_time. Since the cluster time stored in the file does not
have as high a precision as GstClockTime (depending on the
timecodescale), the rounding of relative_timestamp was invalid:
mux->cluster_time, which it was calculated relative to, was not equal
to the cluster time actually written to the matroska file.
Example "mkvinfo -v" output of how it looks before and after this
change, in a scenario where timestamps previously got out of order
because of this issue.
Notice that the timestamps of the SimpleBlocks right before and right
after the Cluster are now in order. The consequence of this, however,
is that the cluster timestamp is not necessarily the same as the
timestamp of the first buffer in the cluster (in case it was rounded
up).
Before
| + SimpleBlock (track number 1, 1 frame(s), timecode 126.922s = 00:02:06.922)
| + Frame with size 432
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.933s = 00:02:06.933)
| + Frame with size 329
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.955s = 00:02:06.955)
| + Frame with size 333
|+ Cluster
| + Cluster timecode: 126.954s
| + Cluster previous size: 97344
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 126.954s = 00:02:06.954)
| + Frame with size 61239
| + SimpleBlock (track number 2, 1 frame(s), timecode 126.975s = 00:02:06.975)
| + Frame with size 338
After
| + SimpleBlock (track number 1, 1 frame(s), timecode 135.456s = 00:02:15.456)
| + Frame with size 2260
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.468s = 00:02:15.468)
| + Frame with size 332
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 335
|+ Cluster
| + Cluster timecode: 135.489s
| + Cluster previous size: 158758
| + SimpleBlock (key, track number 1, 1 frame(s), timecode 135.490s = 00:02:15.490)
| + Frame with size 88070
| + SimpleBlock (track number 2, 1 frame(s), timecode 135.511s = 00:02:15.511)
| + Frame with size 336
Comparing gst_rtspsrc_loop_interleaved and gst_rtspsrc_loop_udp while investigating timeout issues, it looks like a piece of code was originally copied from the udp function into the interleaved one. The timeout variable is never used inside the interleaved one, and no side effect has been seen from removing the function calls.
The debug message that is removed is pointless, as the timeout used is "src->tcp_timeout", which is fixed.
The presence of the two timeouts led my team to investigate whether the reference to tcp_timeout was correct (it is). Hence we removed the misleading reference to the local timeout variable.
The VP Codec Configuration Box (vpcC) contains vp9 profile and
colorimetry information. The profile information in particular might
be useful for downstream to select a capable decoder element.
For live streams, if we keep the stream for a long time, the timestamp
will become larger than max_uint32. In that case, the timestamp should be
handled as a rollover rather than as a timestamp going backwards.
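The rollover handling amounts to something like this sketch (hypothetical helper, names are illustrative):
```
#include <glib.h>

/* Extend a 32-bit timestamp into a 64-bit one: a jump backwards of more
 * than half the 32-bit range is treated as a wrap-around rather than a
 * timestamp going backwards. The caller keeps prev/rollover_count around
 * and updates prev to cur after each call. */
static guint64
extend_timestamp (guint * rollover_count, guint32 prev, guint32 cur)
{
  if (cur < prev && (prev - cur) > G_MAXUINT32 / 2)
    (*rollover_count)++;

  return ((guint64) *rollover_count << 32) | cur;
}
```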
* Organize GstRtpFunnelPad and GstRtpFunnel separately
* Use G_GNUC_UNUSED instead of (void) casts
* Don't call an event "caps"
* Use semicolons after GST_END_TEST (helps gst-indent)
Instead of having chunks with one sample per raw audio sample, have
chunks with a single sample that contains lots of raw audio samples. If
necessary these are still split again later when reading the stream.
With this we are allocating a lot less memory for the parsed sample
tables and can play files that previously triggered our limit of 200MB
for the sample table. For example, one file here would previously
allocate 3.5GB for the sample table and now only allocates 70KB.
Outputting 48000 buffers per second is not a good idea performance-wise.
If a container sample is less than 1024 raw audio frames, combine
multiple samples to get at least 1024 raw audio samples as long as
they're stored contiguously in the file.
For the other direction, if a container sample contains more than 4096
samples there is already code for splitting them up.
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=692750
When the server replies with a range "now-", it is presumed to
be a "live" stream and we should request a similar range.
This was the case prior to my refactoring to make use of
gst_rtsp_range_to_string in 5f1a732bc7,
this commit restores the behaviour for that case.
The proper way of capping on max-streams is to do it in rtpssrcdemux.
This patch uses the newly introduced property on rtpssrcdemux. Previous
behavior would not prevent rtpssrcdemux from spawning new pads for every
new ssrc, potentially causing performance trouble during teardown.
When used for processing bundled media streams within rtpbin, the rtpssrcdemux element may
receive bad RTP and RTCP packets; these should not be treated as a fatal error.
The property is useful against attacks where the sender changes the
SSRC for every RTP packet. The property with the same name introduced
in rtpbin was not enough, because we can still end up with thousands
of pads allocated in rtpssrcdemux.
gstrtspsrc uses a queue, set_get_param_q, to store set param and get
param requests. The requests are put on the queue by calling
get_parameters() and set_parameter(). A thread which executes in
gst_rtspsrc_thread() then pops requests from the queue and processes
them. The crash occurred because the queue became empty and a NULL
request object was then used. The reason that the queue became empty
is that it was popped even when the thread was NOT processing a get
parameter or set parameter command. The fix is to make sure that the
queue is ONLY popped when the command being processed is a set
parameter or get parameter command.
When the sinks are not configured via the "location" property, it can
be useful to know for which sink the fragment was actually
opened/closed, especially if finalization of the fragments happens
asynchronously.
When connected to an upstream rtpfunnel element, payload-type,
ssrc and clock-rate will not be present in the received caps.
rtprtxsend can already deal with only the clock rate being
present there; a new property is exposed to allow users to
provide a payload-type -> clock-rate map, which enables the
use of the max-size-time property for bundled streams.
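Usage would look roughly like the sketch below, assuming the new property takes a GstStructure mapping payload types to clock rates (the property name, structure name and values here are illustrative; check the element documentation for the exact ones):
```
#include <gst/gst.h>

static void
configure_rtx_clock_rates (GstElement * rtprtxsend)
{
  GstStructure *map = gst_structure_new ("application/x-rtp-clock-rates",
      "96", G_TYPE_UINT, 90000,   /* e.g. VP8 */
      "97", G_TYPE_UINT, 48000,   /* e.g. OPUS */
      NULL);

  g_object_set (rtprtxsend, "clock-rate-map", map, NULL);
  gst_structure_free (map);
}
```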
ffmpeg is doing the same and various files in the wild have bogus
information in the sample description if the same information is also
duplicated afterwards in the v1/v2 sound sample description.
Previously we only did this for non-raw audio due to
https://bugzilla.gnome.org/show_bug.cgi?id=374914
but this specific file is already worked around differently. It still
works after this change.
Also remove the ad-hoc GST_READ_DOUBLE_BE re-implementation and move the
switch for legacy audio formats to after reading all the sample
descriptions, as we want to override the values from there.
By default imagefreeze will still reject new buffers after the first one
and immediately return GST_FLOW_EOS, but the new allow-replace property
allows changing this.
Whenever the buffer is updated we now also keep track of its configured
caps, and from the source pad task we negotiate correctly
based on the potentially updated caps.
Negotiation of a framerate with downstream is only performed the very
first time; afterwards only the caps themselves, apart from the
framerate, are updated.
Elements emitting frames through several srcpads should use a
flow combiner to aggregate the chain returns and therefore only return
GST_FLOW_NOT_LINKED to upstream when all the downstream pads have
received GST_FLOW_NOT_LINKED.
In addition to that, in order to handle pads being relinked downstream,
the flow combiner should be reset in response to RECONFIGURE events.
This ensures that both srcpads process a chain operation before a
GST_FLOW_NOT_LINKED can be propagated upstream (which would usually stop
the pipeline).
Otherwise, in a configuration with two srcpads, only one linked at a
time, after the relink the element could chain data through the now
unlinked pad and the flow combiner would resolve as GST_FLOW_NOT_LINKED
(stopping the pipeline) just because the now linked pad has not been
chained yet to update the flow combiner.
This patch adds handling of RECONFIGURE events to qtdemux. Also, since
this event handling causes the flow combiner to be used from a thread
other than the qtdemux streaming thread, usages of the flow combiner
have been guarded by the object lock.
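The pattern described above looks roughly like this (a sketch, not qtdemux's exact code):
```
#include <gst/gst.h>
#include <gst/base/gstflowcombiner.h>

/* Combine per-pad chain results so NOT_LINKED is only returned upstream
 * once all source pads have returned it. */
static GstFlowReturn
push_and_combine (GstFlowCombiner * combiner, GstPad * srcpad, GstBuffer * buf)
{
  GstFlowReturn ret = gst_pad_push (srcpad, buf);

  return gst_flow_combiner_update_pad_flow (combiner, srcpad, ret);
}

/* Reset the combiner when a pad gets relinked downstream; this can run from
 * a downstream thread, hence the lock. Event forwarding is omitted here. */
static void
handle_reconfigure (GstFlowCombiner * combiner, GMutex * lock, GstEvent * event)
{
  if (GST_EVENT_TYPE (event) == GST_EVENT_RECONFIGURE) {
    g_mutex_lock (lock);
    gst_flow_combiner_reset (combiner);
    g_mutex_unlock (lock);
  }
}
```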
The key is to make sure the jitterbuffer is set to NULL *before* the
ptdemux.
The race that existed would basically happen when ptdemux had reached
READY, and the jitterbuffer would then push a buffer, triggering a new
pad with a new payloadtype being added and ghosted to the rtpbin itself.
However, the srcpad of the ptdemux would now be inactive, and all the
sticky-event pushed on it would be swallowed, not allowing any to reach
the ghost-pad. Then the buffer in-flight would come to the ghostpad,
and we would assert that a buffer arrived before the necessary
events.
By simply re-ordering the state-changes, we ensure that there will be
no buffer racing into the ptdemux while its state is being changed,
and the problem disappears completely.
Notice also that there is no point in disconnecting the signals on the
ptdemux before this point, since we need the push-thread to settle
down before we can do this in a non-racy way.
Applications might handle locations and generally configuration of the
sink by themselves instead of having splitmuxsink set the location on
the sink. Nonetheless it makes sense to increment the fragment_id that
is passed to the signal so that applications know which fragment is
requested.
Add parsed=true to output caps, as we always output
whole frames, timestamped and all. This also means that
the output can be decoded by avdec_mjpeg without
plugging an extra parser (which has no rank).
Add a property that makes it possible for an application to set the
DateUTC header field in matroska files. This is useful for live feeds,
where the DateUTC header can be set to a UTC timestamp, matching the
beginning of the file.
Needs gstreamer!323
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/481
An application might try to access splitmuxsink from a sync message
handler via g_object_{get,set}, which also takes the lock. In general,
we don't take the lock around the message handler.
By passing `NULL` to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box the arguments into `GValue` and call libffi in
the case of a generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
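For illustration (signal name and signature are made up):
```
#include <glib-object.h>

static guint example_signal;

static void
example_class_init (GObjectClass * klass)
{
  /* Passing NULL as the marshaller lets GLib pick an optimized (va_list
   * based) marshaller internally when one exists for this signature,
   * instead of boxing every argument into a GValue. */
  example_signal = g_signal_new ("example-signal",
      G_TYPE_FROM_CLASS (klass), G_SIGNAL_RUN_LAST, 0,
      NULL, NULL,               /* accumulator, accumulator data */
      NULL,                     /* marshaller: let GLib optimize */
      G_TYPE_NONE, 1, G_TYPE_UINT);
}
```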
The do_expected_timeout() function may release the JBUF_LOCK, so we need
to check whether anything asked the timer thread to exit after this call.
The side effect was that we may end up going back to waiting for a timer,
which causes an arbitrary delay on tear down (or a deadlock when the test
clock is used).
Fixes #653
JBUF_WAIT_QUEUE drops the JBUF_LOCK, which means the stop condition
for the chain function may have changed (change_state to NULL). Check
this immediately after the wait so that we don't delay shutting down.
When in-place, running an allocation query is not useful since
videocrop is not involved in the allocation. So only force the
allocation query for the case where we were in passthrough; this is
needed since the change in the crop region will likely pull us out of
that mode. For the case where we were neither in passthrough nor
in-place, the allocation query is already run by the base class, so
nothing special is needed.
This fixes performance issues when changing the crop region per frame.
This was reproduced using videocrop2-test.
One of the jitterbuffer's functions is to try and make sense of weird
network behavior.
It is quite unhelpful for the jitterbuffer to start dropping packets
itself when what you are trying to achieve is better network resilience.
In the case of a skew, this could often mean the sender has restarted
in some fashion, and then dropping the very first buffer of this "new"
stream could often mean missing valuable information, like in the case
of video and I-frames.
This patch simply reverts back to the old behavior, prior to 8d955fc32b
and includes the simplest test I could write to demonstrate the behavior,
where a single packet arrives "perfectly", then a 50ms gap happens,
and then two more packets arrive in perfect order after that.
The memory leak occurs when a buffer has been added to the
fragment_buffers array of the current pad but has never been sent,
because of a push failure of the previous buffers: the moof or mdat
header, or fragmented buffer(s).
This commit adds custom element messages for when gstrtpjitterbuffer
drops an incoming RTP packet, for example because it arrives too late.
Applications can listen to these messages on the bus, which enables
actions to be taken when packets are dropped due to, for example, high
network jitter.
Two properties have been added: one to enable posting drop messages and
one to set a minimum time between each message, to enable throttling
the posting of messages at high drop rates.
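An application could listen for them along these lines (a sketch; the message name "drop-msg" and its fields are assumptions here, check the element documentation for the actual names):
```
#include <gst/gst.h>

static gboolean
bus_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
{
  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT &&
      gst_message_has_name (msg, "drop-msg")) {
    const GstStructure *s = gst_message_get_structure (msg);
    gchar *str = gst_structure_to_string (s);

    g_print ("jitterbuffer drop message: %s\n", str);
    g_free (str);
    /* e.g. update statistics or adjust the configured latency here */
  }
  return TRUE;                  /* keep the bus watch installed */
}
```
The callback would be attached to the pipeline's bus with gst_bus_add_watch (bus, bus_cb, NULL).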
There are (mp4) streams in the wild that basically contain no tracks
but do have redirect info[0], in which case we won't be able
to expose any pad (there are no tracks), so we can't post anything but
an error on the bus, as:
- it can't send EOS downstream, it has no pad,
- posting an EOS message will be useless as PAUSED state can't be
reached and there is no sink in the pipeline meaning GstBin will
simply ignore it
The approach here is to add details to the ERROR message with a
`redirect-location` field, which elements like playbin handle and use
right away.
[0]: http://movietrailers.apple.com/movies/paramount/terminator-dark-fate/terminator-dark-fate-trailer-2_480p.mov
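Posting such an error could look like this sketch (illustrative, not the exact qtdemux code; the "redirect-location" field name comes from the description above):
```
#include <gst/gst.h>

static void
post_redirect_error (GstElement * element, const gchar * uri)
{
  GST_ELEMENT_ERROR_WITH_DETAILS (element, STREAM, DEMUX,
      ("This file contains no playable streams."),
      ("no tracks found, but a redirect location is available"),
      ("redirect-location", G_TYPE_STRING, uri, NULL));
}
```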
When the queue is full (and adding more packets would risk a seqnum
roll-over), the best approach is to just start pushing out packets
from the other side. Just pushing out the packets results in the
timers being left hanging with old seqnums, so it's safer to just
execute them immediately in this case. It does limit the timer space
to the time it takes to receive about 32k packets, but without
extended sequence numbers, this is the best RTP can do.
This also results in the test no longer needing timeouts or
timers, as pushing packets in drives everything.
Fixes #619
This basically adds the ability to choose between inserting from the
head, the tail or in-place, in order to try and minimize the distance to
walk through in the timer queue. This removes an overhead we had seen at
high drop rates.
The timer passed to update_timers may be from the stats timer. At the
moment, we could end up rescheduling (reusing) that timer onto the
normal timer queue, unscheduling it as if it were from the normal timer
queue, or duplicating it into the stats timer queue again. Previously
this was protected by the fact that the stats timer didn't have a valid
idx.
As the offset is already applied now, we need to update and reschedule
all timers each time the offset is changed. I'm not sure who expects
this to be retroactively applied, but there was a unit test for it.
If the jitterbuffer head changes, there is no need to systematically
wake up the timer thread. The timer thread will only be woken up if
an earlier timeout has been pushed. This prevents some more spurious
wakeups when the system is loaded. As a side effect, cranking the clock
may set the clock to an earlier position.
In this patch we now make use of the new RtpTimerQueue instead of the
old GArray. This required a lot of changes all over the place; one of
the important changes is that `timer->timeout` is no longer a PTS but
the actual timeout. This was required to get the RtpTimerQueue sorting
right. The applied offset is saved as `timer->offset`, which allows
retrieving the PTS back when needed.
The clockid update now only happens once per incoming packet. If the
currently scheduled timer is before the earliest timer in the queue, we
no longer wake up the thread. This way, if other timers get set up in
the meantime, the number of wakeups is reduced.
The timer loop code has been mostly rewritten, though the behaviour of
running the lost timers first has been kept (even though there is no
test to show what would be the side effect of doing this differently).
Fixes #608
Implement a single timer queue for all timers. The goal is to always use
ordered queues for storing timers. This way, extracting timers for
execution becomes O(1). This also allows separating the clock wait
scheduling from the timer itself and ensures that we only wake up the
timer thread when strictly needed.
The new data structure is still O(n) on insertion and reschedule,
but we now use a proximity optimization so that the normal cases should
be really fast. The GList node is also embedded into the RtpTimer
structure to reduce the number of allocations.
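A rough illustration of the idea (type and field names are made up, not the actual RtpTimer/RtpTimerQueue code; g_queue_insert_after_link() needs GLib >= 2.62):
```
#include <gst/gst.h>

typedef struct
{
  GList list;                   /* embedded node, no extra allocation */
  guint16 seqnum;
  GstClockTime timeout;
} ExampleTimer;

/* Keep the queue sorted by timeout so the head is always the next timer to
 * fire; walk from the tail since new timeouts are usually the latest. */
static void
example_timer_queue_insert (GQueue * queue, ExampleTimer * timer)
{
  GList *l;

  timer->list.data = timer;
  timer->list.prev = timer->list.next = NULL;

  for (l = queue->tail; l != NULL; l = l->prev) {
    ExampleTimer *t = l->data;

    if (t->timeout <= timer->timeout)
      break;
  }

  if (l == NULL)
    g_queue_push_head_link (queue, &timer->list);
  else
    g_queue_insert_after_link (queue, l, &timer->list);
}
```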
This moves the RtpJitterBufferStructure type, alloc, free into
rtpjitterbuffer.c/h implementation. jitterbuffer.c strictly relies on
the fact that this structure is compatible with GList, so it makes more
sense to keep it encapsulated there. Also, anything that could possibly
reduce the amount of code in the element is a win.
In order to support that move, a function pointer to free the data
was added. This also allows making the free function optional when
flushing the jitterbuffer.
This helps with understanding which functions modify the TimerData
and which ones do not. This is not always obvious from the helper
name, considering recalculate_timer() does not.
In file included from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gst.h:55,
from ../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/tag/tag.h:25,
from ../gst/isomp4/qtdemux.c:56:
In function ‘qtdemux_inspect_transformation_matrix’,
inlined from ‘qtdemux_parse_trak’ at ../gst/isomp4/qtdemux.c:10676:5,
inlined from ‘qtdemux_parse_tree’ at ../gst/isomp4/qtdemux.c:14210:5:
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:645:5: error: ‘%s’ directive argument is null [-Werror=format-overflow=]
645 | gst_debug_log ((cat), (level), __FILE__, GST_FUNCTION, __LINE__, \
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
646 | (GObject *) (object), __VA_ARGS__); \
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../../dist/linux_x86_64/include/gstreamer-1.0/gst/gstinfo.h:1062:35: note: in expansion of macro ‘GST_CAT_LEVEL_LOG’
1062 | #define GST_DEBUG_OBJECT(obj,...) GST_CAT_LEVEL_LOG (GST_CAT_DEFAULT, GST_LEVEL_DEBUG, obj, __VA_ARGS__)
| ^~~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c:10294:5: note: in expansion of macro ‘GST_DEBUG_OBJECT’
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~~~~~~~~~~~~~~~
../gst/isomp4/qtdemux.c: In function ‘qtdemux_parse_tree’:
../gst/isomp4/qtdemux.c:10294:64: note: format string is defined here
10294 | GST_DEBUG_OBJECT (qtdemux, "Transformation matrix rotation %s",
| ^~
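The usual fix for this class of warning, sketched here (not necessarily the exact change made), is to make sure the %s argument can never be NULL, e.g. with the GST_STR_NULL() helper from gstinfo.h:
```
#include <gst/gst.h>

static void
log_rotation (GstObject * obj, const gchar * rotation_tag /* may be NULL */ )
{
  GST_DEBUG_OBJECT (obj, "Transformation matrix rotation %s",
      GST_STR_NULL (rotation_tag));
}
```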
Add a property which explicitly maps splitmuxsink pads to the
muxer pads they should connect to, overriding the implicit logic
that tries to match pads but yields arbitrary names.
When running in async-finalize mode, request new pads from the muxer
using the same names as old pads, instead of letting the muxer assign
new ones based on the pad template name.
The output segment is only used in ONVIF mode.
The previous behaviour was to output a segment computed from
the Range response sent by the server.
In ONVIF mode, servers will start serving from the appropriate
synchronization point (keyframe), and the Range in response will
start at that position.
This means rtspsrc can now perform truly accurate seeks in that
mode, by clipping the output segment to the values requested in
the seek. The decoder will then discard out of segment buffers
and playback will start without artefacts at the exact requested
position, similar to the behaviour of a demuxer when an accurate
seek is requested.