current_buf_mem_idx is the index of the memory in the corresponding
buffer that is scheduled to be written in the next iteration.
If all memory objects were scheduled to be written in the current
iteration, reset the index to zero so that the next buffer starts
from its first memory object.
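A minimal sketch of that index handling, assuming hypothetical names
(advance_current_buf_mem_idx, n_scheduled) rather than the element's
actual code:

#include <gst/gst.h>

/* Hypothetical helper: advance the per-buffer memory index by the number
 * of memories scheduled in this iteration, and reset it once every memory
 * of the buffer has been scheduled. */
static void
advance_current_buf_mem_idx (GstBuffer * buf, guint * current_buf_mem_idx,
    guint n_scheduled)
{
  *current_buf_mem_idx += n_scheduled;

  /* All memories of this buffer were scheduled in the current iteration:
   * start again from the first memory object of the next buffer. */
  if (*current_buf_mem_idx >= gst_buffer_n_memory (buf))
    *current_buf_mem_idx = 0;
}
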
Previously, writing buffer lists with too many buffers could cause a
stack overflow, because memory linear in the number of GstMemory
objects was allocated on the stack. This could happen for example
when filesink is configured with a very big buffer size.
Instead, move the buffer and buffer list writing into the helper
functions and write at most IOV_MAX memories at once (a sketch of the
chunking follows the example below). Anything bigger than that wouldn't
have been passed to writev() anyway and was written differently in the
previous code, so this also potentially speeds up writing for these
cases.
For example, the following pipeline would crash with a stack overflow:
gst-launch-1.0 audiotestsrc ! filesink buffer-size=1073741824 location=/dev/null
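A minimal sketch of the chunking, assuming a plain file descriptor and a
hypothetical helper name; the real helpers in filesink differ, and short
writes and EINTR are not handled here:

#include <gst/gst.h>
#include <sys/uio.h>
#include <limits.h>
#include <unistd.h>

#ifndef IOV_MAX
#define IOV_MAX 1024            /* conservative fallback */
#endif

static gboolean
write_buffer_chunked (gint fd, GstBuffer * buf)
{
  guint n_mem = gst_buffer_n_memory (buf);
  guint i = 0;

  while (i < n_mem) {
    guint chunk = MIN (n_mem - i, (guint) IOV_MAX);
    struct iovec vecs[IOV_MAX];
    GstMapInfo maps[IOV_MAX];
    gboolean ok = TRUE;
    guint j, mapped;

    /* Map at most IOV_MAX memories and build the matching iovecs, so the
     * stack usage stays bounded no matter how many memories the buffer
     * carries. */
    for (mapped = 0; mapped < chunk; mapped++) {
      GstMemory *mem = gst_buffer_peek_memory (buf, i + mapped);

      if (!gst_memory_map (mem, &maps[mapped], GST_MAP_READ)) {
        ok = FALSE;
        break;
      }
      vecs[mapped].iov_base = maps[mapped].data;
      vecs[mapped].iov_len = maps[mapped].size;
    }

    if (ok && writev (fd, vecs, mapped) < 0)
      ok = FALSE;

    for (j = 0; j < mapped; j++)
      gst_memory_unmap (gst_buffer_peek_memory (buf, i + j), &maps[j]);

    if (!ok)
      return FALSE;

    i += chunk;
  }

  return TRUE;
}
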
This seems to happen when another client is accessing the file at the
same time, and retrying after a short amount of time solves it.
Sometimes partial data has already been written at that point, but we
have no idea how much it is, or whether what was written is correct (it
sometimes isn't), so we always first seek back to the current position
and repeat the whole failed write.
It happens at least on Linux and macOS on SMB/CIFS and NFS file systems.
Between write attempts that failed with EACCES we wait 10ms, and after
enough consecutive tries that failed with EACCES we simply time out.
In theory a valid EACCES for files to which we simply have no access
should've happened already during the call to open(), except for NFS
(see open(2)).
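The retry logic roughly amounts to the following sketch; function and
parameter names are illustrative, not filesink's actual code, and short
writes and EINTR are ignored:

#include <glib.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative: repeat the whole writev() from the remembered offset while
 * it keeps failing with EACCES, waiting 10ms between attempts, until the
 * given timeout (in microseconds) has elapsed. */
static gboolean
write_with_eacces_retry (gint fd, const struct iovec * vecs, gint n_vecs,
    gint64 timeout_us)
{
  off_t start = lseek (fd, 0, SEEK_CUR);
  gint64 deadline = g_get_monotonic_time () + timeout_us;

  for (;;) {
    if (writev (fd, vecs, n_vecs) >= 0)
      return TRUE;

    if (errno != EACCES || g_get_monotonic_time () > deadline)
      return FALSE;

    /* Some data may already have been written and may even be wrong, so
     * seek back to where this write started and repeat it completely. */
    g_usleep (10 * 1000);       /* 10ms between attempts */
    if (lseek (fd, start, SEEK_SET) < 0)
      return FALSE;
  }
}
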
This can be enabled with the new max-transient-error-timeout property.
A new o-sync boolean property was also added to open the file in O_SYNC
mode, as without that it's not guaranteed that we get EACCES for the
actual writev() call that failed; we might only get it at a later time.
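For example, both properties could be used together like this (the
timeout value and location are only illustrative):
gst-launch-1.0 audiotestsrc ! filesink location=/mnt/nfs/out.raw o-sync=true max-transient-error-timeout=5000
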
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/305
The correct behaviour of anything stuck in the ->render() function
between ->unlock() and ->unlock_stop() is to call
gst_base_sink_wait_preroll() and only return an error if that returns an
error; otherwise, it must continue where it left off!
https://bugzilla.gnome.org/show_bug.cgi?id=773912
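A minimal sketch of that pattern in a sink's render() implementation;
my_sink_do_blocking_write() is a hypothetical stand-in for whatever
blocking work the element actually does:

#include <gst/gst.h>
#include <gst/base/gstbasesink.h>

/* Hypothetical stand-in for the element's real blocking work; assume it
 * returns GST_FLOW_FLUSHING when ->unlock() interrupted it. */
static GstFlowReturn
my_sink_do_blocking_write (GstBaseSink * bsink, GstBuffer * buffer)
{
  (void) bsink;
  (void) buffer;
  return GST_FLOW_OK;
}

static GstFlowReturn
my_sink_render (GstBaseSink * bsink, GstBuffer * buffer)
{
  GstFlowReturn ret;

again:
  ret = my_sink_do_blocking_write (bsink, buffer);

  if (ret == GST_FLOW_FLUSHING) {
    /* We were unlocked: wait for preroll and only bail out if that itself
     * returns an error; otherwise continue where we left off. */
    ret = gst_base_sink_wait_preroll (bsink);
    if (ret != GST_FLOW_OK)
      return ret;
    goto again;
  }

  return ret;
}
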
If we receive more than UIO_MAXIOV (typically 1024) buffers
in a single writev call, fall back to consolidating them
into one output buffer or multiple write calls.
This could be made more optimal, but let's wait until it's
ever a bottleneck for someone.
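A minimal sketch of the simpler of the two fallbacks (multiple write
calls), assuming the caller already found that gst_buffer_list_length()
exceeds UIO_MAXIOV; short writes and EINTR are again ignored:

#include <gst/gst.h>
#include <unistd.h>

static gboolean
write_list_one_by_one (gint fd, GstBufferList * list)
{
  guint i, len = gst_buffer_list_length (list);

  for (i = 0; i < len; i++) {
    GstBuffer *buf = gst_buffer_list_get (list, i);
    GstMapInfo map;
    gboolean ok;

    if (!gst_buffer_map (buf, &map, GST_MAP_READ))
      return FALSE;
    /* One plain write() per buffer instead of a single oversized writev(). */
    ok = write (fd, map.data, map.size) == (gssize) map.size;
    gst_buffer_unmap (buf, &map);

    if (!ok)
      return FALSE;
  }

  return TRUE;
}
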
Avoid relocations, and refactor so that we don't recalculate the
maximum string size every time; it is fixed and known at compile time.
Also skip the mini object flags, which we are not going to print anyway.
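A minimal sketch of the idea, not the actual gstinfo code: keep the
names in a flat char array (no pointer table that needs relocations) and
derive the worst-case string size as a compile-time constant; the flag
list here is only illustrative.

#include <gst/gst.h>

/* Flat array of names: its size is a compile-time constant and it needs
 * no relocations, unlike an array of string pointers. */
static const gchar buffer_flag_names[][16] = {
  "live", "decode-only", "discont", "resync", "corrupted",
  "marker", "header", "gap", "droppable", "delta-unit",
};

/* Worst case: every name plus a separating space, plus the terminator. */
#define FLAGS_STRING_MAX \
    (G_N_ELEMENTS (buffer_flag_names) * sizeof (buffer_flag_names[0]) + 1)

static void
print_buffer_flags (guint flags)
{
  gchar str[FLAGS_STRING_MAX];
  gsize pos = 0;
  guint i;

  for (i = 0; i < G_N_ELEMENTS (buffer_flag_names); i++) {
    /* Buffer flags start above the GstMiniObject flags, which we skip. */
    if (flags & (GST_MINI_OBJECT_FLAG_LAST << i))
      pos += g_snprintf (str + pos, sizeof (str) - pos, "%s ",
          buffer_flag_names[i]);
  }
  str[pos] = '\0';
  g_print ("%s\n", str);
}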