We never run as a live element, even if upstream is live, and we never
output buffers with any latency, but immediately generate buffers as
fast as we can, according to the negotiated framerate.
Passing the query upstream would proxy whatever mode of operation
upstream has, which has nothing to do with how we produce buffers.
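To illustrate the intended behaviour, here is a minimal, hypothetical
sketch of a latency query handler that answers locally instead of
proxying upstream (names are made up; the element's actual code differs):

    #include <gst/gst.h>

    /* Hypothetical handler: answer the latency query ourselves instead
     * of forwarding it upstream. We are never live and add no latency. */
    static gboolean
    src_query (GstPad * pad, GstObject * parent, GstQuery * query)
    {
      switch (GST_QUERY_TYPE (query)) {
        case GST_QUERY_LATENCY:
          gst_query_set_latency (query, FALSE, 0, GST_CLOCK_TIME_NONE);
          return TRUE;
        default:
          return gst_pad_query_default (pad, parent, query);
      }
    }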
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/653>
Move the SVMI stereoscopic atom parsing out to a helper
function to shrink qtdemux_parse_trak a bit.
Add a bounds check that the received atom is large enough
before parsing it.
Add a note to the atom parser that svmi comes from the
MPEG-A spec, ISO/IEC 23000-11.
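As a rough illustration of the added bounds check (sizes and names here
are hypothetical, not the actual qtdemux code):

    #include <glib.h>

    /* Hypothetical parser: refuse atoms smaller than the fixed-size
     * fields we are about to read, instead of reading past the end. */
    static gboolean
    parse_svmi_atom (const guint8 * data, gsize length)
    {
      const gsize min_size = 4 + 2;   /* e.g. version/flags + stereo fields */

      if (data == NULL || length < min_size)
        return FALSE;

      /* ... safe to read the fixed-size fields now ... */
      return TRUE;
    }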
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/634>
Until now, do_expected_timeout() would briefly drop the JBUF_LOCK in order
to push RTX events without causing a deadlock. As a side effect, a CPU hang
could happen, as the timer queue would get filled while looping over the
due timers. To mitigate this, we were processing the lost timers first and
placing the remaining timers into a queue to be processed later.
In the gap caused by an unlock, we could end up receiving one of the seqnums
present in the pending timers. In that case, the timer would not be found and
a new one was created. When we then updated the expected timer, the seqnum
would already exist and the updated timer would be lost.
In this patch we remove the unlock from do_expected_timeout() and place all
pending RTX events into a queue (instead of pending timers). Then, as soon as
we have selected a timer to wait on (or found there is no timer to wait for),
we send all the upstream RTX events. As we no longer unlock, we no longer
need to pop more than one timer from the queue, and we do so with the lock
held, which blocks any new colliding timers from being created.
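A minimal sketch of the resulting pattern, with hypothetical names (the
real rtpjitterbuffer code differs): RTX events are collected under the
lock and only pushed once, at a safe point, with a single unlock:

    #include <gst/gst.h>

    /* Hypothetical helper: drain the queued RTX events with the lock
     * dropped exactly once, after timer selection is done, instead of
     * unlocking per-event inside do_expected_timeout(). */
    static void
    push_pending_rtx_events (GstPad * sinkpad, GQueue * pending,
        GMutex * lock)
    {
      GstEvent *event;

      g_mutex_unlock (lock);
      while ((event = g_queue_pop_head (pending)) != NULL)
        gst_pad_push_event (sinkpad, event);    /* upstream RTX request */
      g_mutex_lock (lock);
    }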
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/616>
When a GST_EVENT_FLUSH_START reaches the jitterbuffer, there is a chance that
our task is currently blocked waiting for a timer.
There were two problems:
* That wait wasn't checking for flushing situations
* The flushing handling wasn't waking up that condition (so it could check
  whether it should abort)
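A hedged sketch of the shape of the fix (field names hypothetical): the
wait checks a flushing flag, and the FLUSH_START handling sets the flag
and wakes the condition:

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;
      GCond cond;
      gboolean flushing;
    } WaitState;

    /* Wait until the deadline or until a flush aborts the wait.
     * Returns FALSE when we are flushing and the task should bail out. */
    static gboolean
    wait_for_timer (WaitState * s, gint64 deadline)
    {
      gboolean flushing;

      g_mutex_lock (&s->lock);
      while (!s->flushing && g_cond_wait_until (&s->cond, &s->lock, deadline))
        ;                           /* woken up: re-check the flag */
      flushing = s->flushing;
      g_mutex_unlock (&s->lock);

      return !flushing;
    }

    static void
    on_flush_start (WaitState * s)
    {
      g_mutex_lock (&s->lock);
      s->flushing = TRUE;
      g_cond_signal (&s->cond);     /* wake the blocked task */
      g_mutex_unlock (&s->lock);
    }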
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/608>
In some algorithms (like yadif), the Y plane has to be handled differently
from the UV plane. Therefore, the planar_y functions are now called for
the Y plane, and the nv12/nv21 functions handle only the UV/VU
planes respectively.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/444>
Switching the deinterlacing mode on-the-fly from disabled to
auto used to work, but was broken by commit 1f21747c some
years ago.
Force re-negotiation with downstream when the mode or
fields properties are changed, otherwise deinterlace
never switches out of the passthrough mode.
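The shape of such a fix, as a hypothetical sketch (the actual deinterlace
code differs): a property change marks the element for renegotiation,
which is checked before the next output buffer:

    #include <gst/gst.h>

    typedef struct {
      GstElement element;
      gint mode;                    /* the "mode" property */
      gboolean need_renegotiation;  /* checked before the next buffer */
    } Deinterlacer;

    static void
    set_mode (Deinterlacer * self, gint new_mode)
    {
      GST_OBJECT_LOCK (self);
      if (self->mode != new_mode) {
        self->mode = new_mode;
        self->need_renegotiation = TRUE;  /* leave passthrough if needed */
      }
      GST_OBJECT_UNLOCK (self);
    }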
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/584>
First of all, get rid of the atomic seeking boolean, which was only ever
set and never read. Replace it with a flushing boolean that is used in
the loop function to distinguish between no buffer because of flushing and
no buffer because of an error, as otherwise we could end up in a
GST_FLOW_ERROR case during flushing.
Also only reset the state of imagefreeze in flush-stop when all
processing is stopped instead of doing it as part of flush-start.
And last, take a reference to the imagefreeze buffer at the very beginning
of the loop function and work from that, as otherwise it could in theory
be replaced or set to NULL in the meantime, since we release and re-take the
mutex a couple of times during the loop function.
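A minimal sketch of that last point, with hypothetical names: take the
reference once, under the mutex, and work from it for the rest of the
loop iteration:

    #include <gst/gst.h>

    /* Hypothetical helper: grab a ref to the stored buffer while holding
     * the lock; the stored pointer may be replaced or cleared whenever
     * the lock is released again later in the loop function. */
    static GstBuffer *
    take_current_buffer (GMutex * lock, GstBuffer ** stored)
    {
      GstBuffer *buffer = NULL;

      g_mutex_lock (lock);
      if (*stored != NULL)
        buffer = gst_buffer_ref (*stored);
      g_mutex_unlock (lock);

      return buffer;    /* caller works from this ref and unrefs it */
    }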
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/580>
Backwards timestamps confuse librtmp, even if they're only backwards
relative to the other stream. If the timestamp of a stream is going
backwards relative to the other stream, this property allows the muxer to
skip a few buffers until it reaches the timestamp of the other stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/572>
Allows us to request pads after writing the header for streamable FLVs.
For non-streamable FLV it doesn't make sense to request a new pad after
writing the header, because the headers have been written already and we
can't add the new stream. But for streamable FLV, any clients that connect
after the new pad has been added will be able to see both streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/572>
As we override the GLib item with our own structure, we cannot use any
function from GList or GQueue that would try to free an RTPJitterBufferItem.
In this patch, we move away from g_queue_new(), which forces using
g_queue_free(). That function would call g_slice_free() on any items left
in the queue, and passing the wrong size to GSlice may cause data corruption
and crashes.
A better approach would be to use a proper intrusive linked list
implementation but that's left as an exercise for the next person
running into crashes caused by this.
Beware that this regression was introduced 6 years ago in the following
commit [0]; the call to flush() looked useless, as there was a g_queue_free()
afterward.
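The safe pattern looks roughly like this (simplified, hypothetical item
type; the real RTPJitterBufferItem differs): pop the links ourselves and
never hand a non-empty queue to GLib:

    #include <glib.h>

    /* Our items double as the queue links (first fields mimic GList),
     * pushed with g_queue_push_tail_link(); they must never reach
     * GLib's link allocator. */
    typedef struct {
      GList list;                   /* must be first */
      guint seqnum;
      /* ... payload ... */
    } Item;

    static void
    queue_clear (GQueue * queue, void (*free_item) (Item *))
    {
      GList *link;

      while ((link = g_queue_pop_head_link (queue)) != NULL)
        free_item ((Item *) link);
      /* The GQueue itself is embedded and set up with g_queue_init(),
       * so there is no g_queue_free() call for GLib to misuse. */
    }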
Signed-off-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
[0] 479c7642fd
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/573>
The calculated threshold for timecode might vary depending on
"max-size-timecode" and the framerate.
For instance, with framerate 29.97 (30000/1001) and
"max-size-timecode=00:02:00;02", every fragment will have an identical
number of frames, 3598. However, when "max-size-timecode=00:02:00;00",
the next keyframe calculated via gst_video_time_code_add_interval()
can be different per fragment, but this is the nature of timecode.
To compensate for such timecode drift, we should keep track of the expected
timecode of the next fragment based on the observed timecode.
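A minimal sketch of the idea, with a hypothetical helper name: derive the
next expected timecode from the one actually observed, so per-fragment
rounding cannot accumulate:

    #include <gst/video/video.h>

    /* Hypothetical helper: base the next expected split point on the
     * observed timecode rather than on the previous expectation. */
    static GstVideoTimeCode *
    next_fragment_timecode (const GstVideoTimeCode * observed,
        const GstVideoTimeCodeInterval * interval)
    {
      /* Returns a newly allocated timecode; free it with
       * gst_video_time_code_free() when done. */
      return gst_video_time_code_add_interval (observed, interval);
    }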
The problem was this:
Due to the highly irregular arrival of RTX-packets, the max-misorder
variable could be pushed very low (-10).
If you then at some point get a big gap in the sequence-numbers (62 in the
test), you end up sending RTX-requests for some of those packets, and then,
if the sender answers those requests, you are going to get a bunch of
RTX-packets arriving (-13 and then 5 more packets in the test).
Now, if max-misorder is pushed very low at this point, these RTX-packets
will trigger the handle_big_gap_buffer() logic, and because they arrive
so neatly in order (as they would, since they have been requested like
that), gst_rtp_jitter_buffer_reset() will be called, and two things
will happen:
1. priv->next_seqnum will be set to the first RTX packet
2. the 5 RTX-packets will be pushed into the chain() function
However, at this point, these RTX-packets are no longer valid: the
jitterbuffer has already pushed lost-events for them, so they will now
be dropped on the floor and never make it to the waiting loop-function.
And, since we now have a priv->next_seqnum that will never arrive
in the loop-function, the jitterbuffer is now stalled forever, and will
not push out another buffer.
The proposed fixes (sketched below):
1. Don't use RTX packets in the calculation of the packet-rate.
2. Don't use RTX packets in the large-gap logic, as they are likely to be
dropped.
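As a sketch of fix 1, with a hypothetical context struct (not the actual
jitterbuffer code): retransmitted packets simply never feed the estimator:

    #include <glib.h>

    typedef struct {
      guint32 packets;        /* packets counted towards the rate */
    } RateEstimator;

    /* RTX bursts reflect the timing of our requests, not the sender's
     * pacing, so they must not influence the packet-rate (and with it
     * max-misorder) or the big-gap detection. */
    static void
    rate_estimator_update (RateEstimator * est, gboolean is_rtx)
    {
      if (is_rtx)
        return;
      est->packets++;
    }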
If the start of the GOP is >= the requested running time, put it into a
new fragment. That is, split-at-running-time would always ensure that a
split happens as early as possible after the given running time.
Previously it was comparing against the current incoming timestamp,
which does not tell us what we actually want to know as it has no direct
relation to the GOP start/end.
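In other words, with hypothetical names, the decision becomes:

    #include <gst/gst.h>

    /* Decide on the GOP's own start running time, not on whatever
     * timestamp happens to be arriving right now. */
    static gboolean
    gop_starts_new_fragment (GstClockTime gop_start_running_time,
        GstClockTime requested_running_time)
    {
      return gop_start_running_time >= requested_running_time;
    }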
This is disabled by default, as it unnecessarily creates bigger headers,
but it is something that is required by some applications, most notably
the Apple ProRes spec.
Request pads can be released at any time, so make sure to hold
the object lock when iterating the element's sinkpads list where
that's safe, or use other safe pad iteration patterns in
other places.
When choosing a best pad, return a reference to the pad to make sure it
stays alive for output in the aggregator srcpad task.
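A sketch of the locking pattern (simplified; the actual flvmux code
differs): iterate under the object lock and ref the chosen pad before
unlocking:

    #include <gst/gst.h>

    static GstPad *
    find_best_pad (GstElement * element)
    {
      GstPad *best = NULL;
      GList *l;

      GST_OBJECT_LOCK (element);
      for (l = element->sinkpads; l != NULL; l = l->next) {
        GstPad *pad = l->data;

        /* ... compare queued buffers/timestamps to pick the best pad;
         * here we just take the first one for the sketch ... */
        if (best == NULL)
          best = pad;
      }
      if (best != NULL)
        gst_object_ref (best);      /* keep it alive after unlocking */
      GST_OBJECT_UNLOCK (element);

      return best;                  /* caller must gst_object_unref() */
    }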
Should fix a spurious valgrind error in the CI flvmux tests and some
other potential problems if the request sink pads are released while
the element is running.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/714
Even if timecode trak is officially unsupported in non-mov flavors,
some software still supports it, e.g. Final Cut Pro X:
https://developer.apple.com/library/archive/technotes/tn2174/_index.html
The user might still expect to see the timecode information in the
non-mov file despite it being officially unsupported, because other
software, e.g. QuickTime, will create a timecode trak even in mp4 files.
Furthermore, software that supports timecode trak in non-mov flavors
will also display the file duration in "timecode units" instead of real
clock time, which is not necessarily the same for 29.97 fps and friends.
This might confuse users, who see a different duration for the same
framerate and amount of frames depending on whether the container is mp4
or mov.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/issues/512
gst_buffer_map() results in memory copies when a GstBuffer contains
more than one GstMemory.
This has quite an impact on performance on systems with a limited amount
of resources. With this patch the whole GstBuffer will not be mapped at
once; instead, each individual GstMemory will be iterated and mapped
separately.
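The resulting pattern, sketched with real GstBuffer/GstMemory API but a
hypothetical processing function:

    #include <gst/gst.h>

    /* Map and consume each GstMemory on its own, so GStreamer never has
     * to merge (and therefore copy) the memories into one mapping. */
    static gboolean
    process_buffer (GstBuffer * buffer)
    {
      guint i, n = gst_buffer_n_memory (buffer);

      for (i = 0; i < n; i++) {
        GstMemory *mem = gst_buffer_peek_memory (buffer, i);
        GstMapInfo map;

        if (!gst_memory_map (mem, &map, GST_MAP_READ))
          return FALSE;

        /* ... consume map.data / map.size ... */

        gst_memory_unmap (mem, &map);
      }
      return TRUE;
    }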
There is a use-case for a server to re-payload Opus streams going through
it. The problem was that the payloader requires channels in the caps, but
this is not something the depayloader can parse out of the stream, meaning
caps negotiation would fail.
Removing the requirement of channels in the template caps fixes this.
RTP session starts a new thread for RTCP and names it
"rtpsession-rtcp-thread", which happens to be longer than the maximum of
16 bytes (including the terminating NUL) allowed by pthread_setname_np()
and causes the naming to fail. See the docs for more details.
This commit simply shortens the thread's name so it can actually be set.
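For illustration (the shortened name below is hypothetical, not
necessarily the one chosen):

    #define _GNU_SOURCE
    #include <pthread.h>

    /* Linux limits thread names to 16 bytes including the terminating
     * NUL, i.e. 15 visible characters; longer names make the call fail.
     * "rtpsession-rtcp-thread" is 22 characters, so it never stuck. */
    static void
    name_rtcp_thread (void)
    {
      pthread_setname_np (pthread_self (), "rtp-rtcp");
    }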
Change the types from boolean to guint due to the ++ operator used on
them, and only call JBUF_SIGNAL_QUEUE after settling down,
or else you end up signaling the waiting code in chain() for every buffer
pushed out.
When switching the splitmuxsrc state back to NULL quickly, it
can encounter deadlocks shutting down the part readers that
are still starting up, or encounter a crash if the splitmuxsrc
cleaned up the parts before the async callback could run.
Taking the state lock to post async-start / async-done messages can
deadlock if the state change function is trying to shut down the
element, so use some finer-grained locks for that.
The queued time includes the duration of the last queued frame
(i.e., new keyframe) so the condition check should not be inclusive.
Note that the new fragment will be cut excluding the last frame,
and therefore, if the condition were inclusive,
the fragment might have a one-frame-shorter duration for all-keyframe
streams such as JPEG or all-intra video streams.
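Expressed as a sketch with hypothetical names:

    #include <gst/gst.h>

    /* 'queued' already includes the duration of the last queued frame
     * (the keyframe that will open the *next* fragment), so the check
     * must be exclusive; with ">=" an all-keyframe stream could produce
     * fragments that are one frame short. */
    static gboolean
    fragment_is_full (GstClockTime queued, GstClockTime threshold)
    {
      return queued > threshold;    /* deliberately not ">=" */
    }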
Since commit 94bb76b6b9, splitmuxsink
will split fragments based on the queued time and its threshold,
so we no longer need to store the next timecode for the split decision.
Not only the requested keyframe time but also the queued size should be
a criterion for the split decision in timecode-based mode
(same as in the max-size-time based split case).
This is a concept that only applies when a buffer arrives in the chain
function, and it has already been scheduled as part of a "multi"-lost
timer.
However, "multi"-lost timers are now a thing of the past, making this
whole concept superfluous, and this buffer is now simply counted as "late",
having already been pushed out (albeit as a lost-event).
There is a problem with the code today, where a single timer will
be scheduled for a series of lost packets, and then if the first packet
in that series arrives, it will cause a rescheduling of that timer, going
from a "multi"-timer to a single-timer, causing a lot of the packets
in that timer to be unaccounted for, and creating a situation where
the jitterbuffer will never again push out another packet.
This patch solves the problem: instead of scheduling those lost packets
as another timer, it asks to have the lost-event pushed straight
out.
This very much goes with the intent of the code here: These packets are
so desperately late that no cure exists, and we might as well get the
lost-event out of the way and get on with it.
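A sketch of the shape of that behaviour (simplified; the helper name is
illustrative, not the real rtpjitterbuffer function):

    #include <gst/gst.h>

    /* Push the lost-event for a hopelessly late packet straight out,
     * instead of parking it on a timer that a late arrival could
     * reschedule away. */
    static void
    push_lost_event_now (GstPad * srcpad, guint seqnum,
        GstClockTime timestamp, GstClockTime duration)
    {
      GstEvent *lost = gst_event_new_custom (GST_EVENT_CUSTOM_DOWNSTREAM,
          gst_structure_new ("GstRTPPacketLost",
              "seqnum", G_TYPE_UINT, seqnum,
              "timestamp", G_TYPE_UINT64, timestamp,
              "duration", G_TYPE_UINT64, duration, NULL));

      gst_pad_push_event (srcpad, lost);
    }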
This change has some interesting knock-on effects, presented in
later commits. It completely removes the concept of "already-lost",
which is why that test has been disabled in this commit, to be
removed later.
This should result in no worse accuracy than the base parse element, and may
result in better accuracy. In particular, the number of bytes processed at any
given point, as accumulated by baseparse, can only be accurate to
(1 / # of frames) bytes per second, and if we try to seek to a large offset
immediately after pausing the pipeline, this small inaccuracy can propagate
to something noticeable.
The use case that prompted this patch is a 45-minute MPEG-1 layer 3 file, which
has a constant bit rate but no seek tables. Trying to seek the pipeline
immediately after pausing it, without the ACCURATE flag, to a location 41
minutes in yields a location that is, even with
<https://gitlab.freedesktop.org/gstreamer/gstreamer/merge_requests/374>,
still audibly incorrect. This patch yields a much closer position, no longer
audibly incorrect, and likely within a frame of the most correct position.
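A back-of-the-envelope illustration with assumed numbers (not
measurements from that file) of how a small rate-estimate error grows
over a long seek:

    bytes/s at a constant 128 kbit/s   : 16000
    0.5 % error in an early estimate   : 80 bytes/s
    byte drift at 41 min (2460 s)      : 80 * 2460 = 196800 bytes
    heard as an offset of              : 196800 / 16000 = ~12.3 s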
By the time sink_event is called, the pad's current caps have
already been updated. To address this, implement sink_event_pre_queue,
and check if the pad can be renegotiated there.
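A hedged sketch of what that can look like for a GstAggregator subclass
(the checks themselves are elided; chaining up assumes the usual
parent_class provided by G_DEFINE_TYPE):

    #include <gst/base/gstaggregator.h>

    /* Inspect CAPS events before they are queued, while
     * gst_pad_get_current_caps() still returns the old caps. */
    static GstFlowReturn
    sink_event_pre_queue (GstAggregator * agg, GstAggregatorPad * pad,
        GstEvent * event)
    {
      if (GST_EVENT_TYPE (event) == GST_EVENT_CAPS) {
        GstCaps *new_caps;

        gst_event_parse_caps (event, &new_caps);
        /* ... decide here whether the pad may be renegotiated,
         * comparing new_caps against the pad's current caps ... */
      }

      return GST_AGGREGATOR_CLASS (parent_class)->sink_event_pre_queue (agg,
          pad, event);
    }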
Fixes #707