Making it less random and fixing a race in a GES test where we have
the following pipeline:
```
videotestsrc ! output-selector name=s ! input-selector name=i s. ! timecodestamper ! i.
```
which we seek. The seek can reach videotestsrc without going through
timecodestamper, and videotestsrc may generate a buffer before
timecodestamper even gets the seek. This means that timecodestamper's
internal state is wrong compared to the data stream it gets, and it
attaches wrong timecode metas.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/485>
Identity was ignoring seek and flush events even when using
a single segment. In the end it means that we couldn't compute
buffers' running-time and stream-time after seeks.
This commit adds support for flushing seeks only, as I have no idea
what to do for non-flushing ones.
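As a minimal sketch of the scenario this enables (the pipeline string and
seek position are arbitrary, not from the actual test):
```
GstElement *pipeline = gst_parse_launch (
    "videotestsrc ! identity single-segment=true ! fakesink sync=true", NULL);

gst_element_set_state (pipeline, GST_STATE_PLAYING);

/* a flushing seek; with this change identity updates its single segment so
 * running-time and stream-time can still be computed afterwards */
gst_element_seek_simple (pipeline, GST_FORMAT_TIME,
    GST_SEEK_FLAG_FLUSH, 5 * GST_SECOND);
```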
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/450>
In reverse playback, buffers are played back from buffer.stop
(buffer.pts + buffer.duration) to buffer.pts running times, which
means that we need to use the buffer end running time as the buffer
timestamp, not the buffer pts, when using a single segment in reverse
playback.
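A rough sketch of the computation (assuming a TIME segment with a negative
rate; the variable names are illustrative, not the actual code):
```
/* in reverse playback, the buffer "starts" at its end position */
GstClockTime end_pts =
    GST_BUFFER_PTS (buffer) + GST_BUFFER_DURATION (buffer);
GstClockTime end_rt =
    gst_segment_to_running_time (&segment, GST_FORMAT_TIME, end_pts);

/* use the end running time as the single-segment timestamp instead of
 * the running time of buffer.pts */
GST_BUFFER_PTS (buffer) = end_rt;
```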
This is now being tested in
`validate.test.identity.reverse_single_segment`
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/450>
downloadbuffer's source pad pushes the first buffer before pushing the
Stream Start and Segment events when working in push mode.
Fix: push Stream Start and Segment after coming out of the wait for
data, and before pushing the buffer to the next element.
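Roughly, the ordering on the source pad becomes the following (a sketch
with a hypothetical stream-id; the segment is assumed to be configured
already):
```
GstFlowReturn ret;

/* make sure the sticky events precede the first buffer in push mode */
gst_pad_push_event (srcpad,
    gst_event_new_stream_start ("hypothetical-stream-id"));
gst_pad_push_event (srcpad, gst_event_new_segment (&segment));

/* only now push the buffer to the next element */
ret = gst_pad_push (srcpad, buffer);
```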
Fixes #534
The returned "stats" structure contains, for now, one array called
"queues" with one GstStructure per internal queue, containing said
queue's current level of bytes, buffers, and time.
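For illustration, reading and dumping the property could look like this
(a sketch; it assumes the "queues" array is a GstValueArray and that
`element` is the element instance exposing the property):
```
GstStructure *stats = NULL;
const GValue *queues;
guint i, n;

g_object_get (element, "stats", &stats, NULL);

queues = gst_structure_get_value (stats, "queues");
n = gst_value_array_get_size (queues);
for (i = 0; i < n; i++) {
  const GstStructure *q =
      gst_value_get_structure (gst_value_array_get_value (queues, i));
  gchar *str = gst_structure_to_string (q);

  g_print ("queue %u: %s\n", i, str);
  g_free (str);
}

gst_structure_free (stats);
```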
It is not explicitly specified anywhere in the docs that 0% buffering is
at low-watermark and 100% buffering is at high-watermark. It was
specified only in the sources.
current_buf_mem_idx stands for the index of the memory of the
corresponding buffer which is scheduled to be written in the next
iteration.
If all memory objects were scheduled to be written in the current
iteration, reset the index to zero so that writing starts from the
first memory object of the next buffer.
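A rough sketch of the bookkeeping (only current_buf_mem_idx comes from the
actual code; `buffer`, `n_vecs` and the iovec handling are illustrative):
```
guint n_mem = gst_buffer_n_memory (buffer);

/* schedule the remaining memories of the current buffer for this write */
while (current_buf_mem_idx < n_mem && n_vecs < IOV_MAX) {
  GstMemory *mem = gst_buffer_peek_memory (buffer, current_buf_mem_idx);

  /* ... map mem and append it to the iovec array ... */
  current_buf_mem_idx++;
  n_vecs++;
}

/* everything of this buffer was scheduled: restart at the first memory
 * of the next buffer in the next iteration */
if (current_buf_mem_idx == n_mem)
  current_buf_mem_idx = 0;
```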
Previously the default and full modes were the same. Now the default
mode is like before: it accumulates all buffers in a buffer list until
the threshold is reached and then writes them all out, potentially in
multiple writes.
The new full mode works by always copying memory to a single memory area
and writing everything out with a single write once the threshold is
reached.
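For example, assuming the mode is selected through filesink's buffer-mode
property, the new behaviour can be requested explicitly (a hypothetical
invocation):
```
gst-launch-1.0 audiotestsrc num-buffers=100 ! filesink buffer-mode=full buffer-size=65536 location=/tmp/out.raw
```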
Previously, writing buffer lists with too many buffers could cause a
stack overflow because an amount of memory linear in the number of
GstMemory objects was allocated on the stack. This could happen for
example when filesink is configured with a very big buffer size.
Instead, move the buffer and buffer list writing into the helper
functions and write at most IOV_MAX memories at once. Anything bigger
than that wouldn't be passed to writev() anyway and was written
differently in the previous code, so this also potentially speeds up
writing for these cases.
For example the following pipeline would crash with a stack overflow:
```
gst-launch-1.0 audiotestsrc ! filesink buffer-size=1073741824 location=/dev/null
```
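The chunking itself boils down to something like this (an illustrative
sketch, not the actual helper; `fd`, `vecs` and `n_vecs` are assumed to
come from the mapping step, and partial writes are glossed over):
```
/* write at most IOV_MAX iovecs per writev() call */
while (n_vecs > 0) {
  guint n = MIN (n_vecs, IOV_MAX);
  gssize written;

  do {
    written = writev (fd, vecs, n);
  } while (written < 0 && errno == EINTR);

  if (written < 0)
    return GST_FLOW_ERROR;

  vecs += n;
  n_vecs -= n;
}
```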
The clocksync element is a generic element that can be
placed in a pipeline to synchronise passing buffers to the
clock at that point. This is similar to 'identity sync=true',
but because it isn't GstBaseTransform-based, it can process
GstBufferLists without breaking them into separate GstBuffers.
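For example (an arbitrary pipeline), it can be dropped in wherever
'identity sync=true' would have been used:
```
gst-launch-1.0 videotestsrc ! clocksync ! fakesink
```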
When the user sets filters, we should not trace ref counts of objects
that are not traced. This optimizes the tracer by potentially avoiding
the generation of useless backtraces.
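For example (the filter values are arbitrary, and check-refs is assumed
here to be the parameter that enables refcount tracing), only GstEvent and
GstMessage refcounts would be traced with:
```
GST_TRACERS='leaks(filters="GstEvent,GstMessage",check-refs=true)' GST_DEBUG=GST_TRACER:7 gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink
```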
According to [1], EINTR is a possible errno for fsync() and it happens
in reality on Linux (video writing via splitmuxsink with robust muxing
enabled on a CIFS-mounted network share), so handle it like all the
other EINTR cases (do/while (errno == EINTR)).
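The retry pattern, as a sketch (`fd` is assumed and the error label is
hypothetical):
```
int ret;

do {
  ret = fsync (fd);
} while (ret < 0 && errno == EINTR);

if (ret < 0)
  goto flush_failed;
```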
Fixes:
```
GError.message: Error while writing to file "vidoe_001.mp4". GError.domain: 2372 GError.code: 10 from: FileSink debug: gstfilesink.c(849): gst_file_sink_render (): /GstPipeline:Pipeline/GstSplitMuxSink:SplitMuxSink/GstBin:QueueBin/GstFileSink:FileSink: Interrupted system call
```
Signed-off-by: Peter Seiderer <ps.report@gmx.net>
`g_object_notify()` actually takes a global lock to look up the
`GParamSpec` that corresponds to the given property name. It's not a
huge performance hit, but it's easily avoidable by using the
`_by_pspec()` variant.
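For illustration (the type, property and function names are made up),
the pattern looks like this:
```
enum { PROP_0, PROP_EXAMPLE, N_PROPS };
static GParamSpec *properties[N_PROPS];

static void
my_element_class_init (MyElementClass * klass)
{
  GObjectClass *gobject_class = G_OBJECT_CLASS (klass);

  properties[PROP_EXAMPLE] =
      g_param_spec_int ("example", "Example", "A hypothetical property",
      0, G_MAXINT, 0, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS);
  g_object_class_install_properties (gobject_class, N_PROPS, properties);
}

static void
my_element_set_example (MyElement * self, gint value)
{
  self->example = value;
  /* no property-name lookup, no global lock */
  g_object_notify_by_pspec (G_OBJECT (self), properties[PROP_EXAMPLE]);
}
```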
This reverts a96002bb28, which is not
necessary anymore. If we release the pad after removing it then none of
the deactivation code will actually be called because the pad has no
parent anymore, and we require a parent on the pad for deactivation to
happen.
This can then, among other things, cause a streaming thread to remain
stuck in a pad probe because the pad was never flushed, waiting there
forever because now the pad will actually never be flushed anymore.
If a pad is currently being released we don't want to forward the
FLUSHING flow return but instead consider it as NOT_LINKED. FLUSHING
would also cause upstream to be FLUSHING.
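Conceptually (a tiny sketch; the release check is hypothetical, not the
actual code):
```
if (ret == GST_FLOW_FLUSHING && pad_is_being_released)
  ret = GST_FLOW_NOT_LINKED;
```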
This part was missed in a3c4a3201a and
resulted in a different (and wrong) workaround in
a96002bb28.
Otherwise we're not guaranteed to read the very latest value that
another thread might've written in there when the pad was released, and
could instead work with an old value.
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
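For illustration (the signals array, signal name and parameters are made
up):
```
/* passing NULL instead of e.g. g_cclosure_marshal_VOID__OBJECT lets GLib
 * also set the matching valist marshaller internally */
signals[SIG_EXAMPLE] =
    g_signal_new ("example-signal", G_TYPE_FROM_CLASS (klass),
    G_SIGNAL_RUN_LAST, 0, NULL, NULL,
    NULL /* marshaller */,
    G_TYPE_NONE, 1, GST_TYPE_BUFFER);
```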
If the element before the sink needs $n buffers to produce one output
buffer, we were reffing $n events and unreffing only one.
Prevent this by using g_object_set_qdata_full() to handle the event
unreffing so we're sure no ref will be lost.
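Roughly (the object, event and quark name are made up), the idea is to tie
the event's unref to the qdata lifetime:
```
/* the ref taken here is released automatically when the qdata is replaced
 * or the object is finalized, so no ref can be lost */
g_object_set_qdata_full (G_OBJECT (obj),
    g_quark_from_static_string ("example-pending-event"),
    gst_event_ref (event), (GDestroyNotify) gst_event_unref);
```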
The records are static and so appear as false positives when using those
tracers with the leaks tracer as well.
The leaks tracer was already setting this flag on its record so let's
set it on the other ones as well.
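A sketch, assuming the flag in question is GST_OBJECT_FLAG_MAY_BE_LEAKED:
```
/* mark the static tracer record so the leaks tracer ignores it */
GST_OBJECT_FLAG_SET (record, GST_OBJECT_FLAG_MAY_BE_LEAKED);
```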
In gst_download_buffer_wait_for_data(), when a seek is made with
perform_seek_to_offset() the `qlock` is released temporarily. Therefore,
the flushing condition can be set during this period and should be
checked.
This was not being checked before, causing occasional deadlocks when
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() was called.
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() assumes that the caller has already
checked that we're not flushing before, since this is done when
acquiring the lock; so if we release it temporarily somewhere, we need
to check for flush again.
Without that check, the function would keep waiting for the condition
variable to be notified before checking for flushing condition again,
and that may very well never happen. This was reproduced during pad
deactivation when running WebKit in gdb.
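The general shape of the problem, as a generic sketch (not downloadbuffer's
actual code; `qlock`, `cond`, `flushing` and `do_seek` are placeholders):
```
g_mutex_lock (&qlock);
if (flushing)
  goto out_flushing;            /* checked once when taking the lock */

do_seek ();                     /* temporarily releases and re-takes qlock */

/* qlock was dropped above, so flushing may have been set in the meantime;
 * without this re-check we could wait on the condition forever */
if (flushing)
  goto out_flushing;

while (!data_available && !flushing)
  g_cond_wait (&cond, &qlock);
```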
The sync=TRUE implementation changes the latency query of a non-live
upstream into live, but it wrongly set the upstream max latency to 0.
As non-live sources won't lose data if we wait longer, this should have
been reported as having no max latency limit (-1).
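Conceptually, the query handling now looks like this (a simplified sketch,
not the exact code):
```
gboolean live;
GstClockTime min, max;

gst_query_parse_latency (query, &live, &min, &max);
if (!live) {
  /* waiting on the clock makes us behave like a live source ... */
  live = TRUE;
  /* ... but a non-live upstream has no max latency limit */
  max = GST_CLOCK_TIME_NONE;
}
gst_query_set_latency (query, live, min, max);
```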
In the hotdoc inspector for example, pads are instantiated with
g_object_new; other code paths to get/set properties already make
that check.
And update doc cache
This feature was previously available only through the SIGUSR2 signal,
which meant it wasn't available on platforms that don't have UNIX
signals, such as Windows, or with applications that already use
SIGUSR1 for something else.
Now we have action-signals for doing the same. These action signals
can also be used for fetching the checkpoint information
programmatically instead of printing to the debug log.
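For illustration, the programmatic path could look like this (a sketch;
gst_tracing_get_active_tracers() and the signal names follow the leaks
tracer documentation for recent releases and should be treated as
assumptions, to be checked against your GStreamer version):
```
GList *tracers = gst_tracing_get_active_tracers ();
GList *l;

for (l = tracers; l != NULL; l = l->next) {
  GstTracer *tracer = GST_TRACER (l->data);

  /* crude match on the tracer's type name */
  if (g_str_has_prefix (G_OBJECT_TYPE_NAME (tracer), "GstLeaks")) {
    g_signal_emit_by_name (tracer, "activity-start-tracking");
    /* ... exercise the code you want to inspect ... */
    g_signal_emit_by_name (tracer, "activity-log-checkpoint");
    g_signal_emit_by_name (tracer, "activity-stop-tracking");
  }
}
g_list_free_full (tracers, (GDestroyNotify) gst_object_unref);
```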