Increase the number of output buffers by the number of buffers requested
downstream.
This prevents buffer starvation if downstream is going to use dynamic
buffer mode on its input.
https://bugzilla.gnome.org/show_bug.cgi?id=795746
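A minimal sketch of that adjustment, assuming the GST_OMX_INIT_STRUCT
helper and the comp/port fields used in gst-omx; nbuffers_downstream
stands for whatever buffer count downstream reported in its allocation
query:

    OMX_PARAM_PORTDEFINITIONTYPE port_def;

    GST_OMX_INIT_STRUCT (&port_def);
    port_def.nPortIndex = port->index;
    OMX_GetParameter (comp->handle, OMX_IndexParamPortDefinition, &port_def);

    /* Reserve extra buffers so we don't starve while downstream holds
     * on to some of ours in dynamic buffer mode. */
    port_def.nBufferCountActual += nbuffers_downstream;
    OMX_SetParameter (comp->handle, OMX_IndexParamPortDefinition, &port_def);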
The OMX spec states that the nBufferCountActual of a port has to default
to its nBufferCountMin. If we don't change nBufferCountActual we rely
purely on this default. But in some cases, OMX may change nBufferCountMin
before we allocate buffers. For example, when configuring the input port
with the actual format, it may decrease the minimum number of buffers
required. This method checks for that and updates nBufferCountActual if
needed, so we'll use fewer buffers than the worst case in such scenarios.
SetParameter() needs to be called when the port is either disabled or
the component is in the Loaded state.
Don't do this for the decoder output as
gst_omx_video_dec_allocate_output_buffers() already checks
nBufferCountMin when computing the number of output buffers.
On some platforms, such as the rpi, the default nBufferCountActual is
much higher than nBufferCountMin, so this is only enabled using a
specific gst-omx hack.
https://bugzilla.gnome.org/show_bug.cgi?id=791211
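A sketch of the check described above (the function name is illustrative
and error handling is elided); it must run while the port is disabled or
the component is in the Loaded state:

    static OMX_ERRORTYPE
    update_buffer_count_actual (GstOMXComponent * comp, GstOMXPort * port)
    {
      OMX_PARAM_PORTDEFINITIONTYPE port_def;

      GST_OMX_INIT_STRUCT (&port_def);
      port_def.nPortIndex = port->index;
      OMX_GetParameter (comp->handle, OMX_IndexParamPortDefinition,
          &port_def);

      /* nBufferCountMin may have dropped once the component knew the
       * actual format; follow it down instead of keeping the
       * worst-case default. */
      if (port_def.nBufferCountActual != port_def.nBufferCountMin) {
        port_def.nBufferCountActual = port_def.nBufferCountMin;
        return OMX_SetParameter (comp->handle,
            OMX_IndexParamPortDefinition, &port_def);
      }

      return OMX_ErrorNone;
    }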
0xffffffff is the magic number in gst-omx meaning 'the default value
defined in OMX'. This works fine with OMX parameters, which are only set
once when starting the component, but not with configs, which can be
changed while PLAYING.
Save the actual OMX default bitrate so we can restore it later if the
user sets the property back to 0xffffffff.
Add GST_OMX_PROP_OMX_DEFAULT so we stop hardcoding this magic number
everywhere.
https://bugzilla.gnome.org/show_bug.cgi?id=794998
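An illustrative sketch of the idea (field and property names are
approximate):

    #define GST_OMX_PROP_OMX_DEFAULT 0xffffffff

    OMX_VIDEO_PARAM_BITRATETYPE bitrate_param;

    GST_OMX_INIT_STRUCT (&bitrate_param);
    bitrate_param.nPortIndex = self->enc_out_port->index;
    OMX_GetParameter (comp->handle, OMX_IndexParamVideoBitrate,
        &bitrate_param);

    /* Remember what the implementation picked as its default... */
    self->default_target_bitrate = bitrate_param.nTargetBitrate;

    /* ...so setting the property back to the magic value later can
     * restore it, even while PLAYING. */
    if (self->target_bitrate == GST_OMX_PROP_OMX_DEFAULT)
      bitrate_param.nTargetBitrate = self->default_target_bitrate;
    else
      bitrate_param.nTargetBitrate = self->target_bitrate;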
The OMX spec defines 8 headers that implementations can use to define
their custom extensions. We were checking for and including 3 of them
while ignoring the others.
https://bugzilla.gnome.org/show_bug.cgi?id=792043
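As a sketch, the guarded includes might look like this (the HAVE_*
macro names are illustrative):

    #ifdef HAVE_OMX_AUDIO_EXT
    #include <OMX_AudioExt.h>
    #endif
    #ifdef HAVE_OMX_INDEX_EXT
    #include <OMX_IndexExt.h>
    #endif
    #ifdef HAVE_OMX_VIDEO_EXT
    #include <OMX_VideoExt.h>
    #endif
    /* ...and likewise for the remaining extension headers the spec
     * defines. */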
This hack tries to pass as much information as possible from caps to the
decoder before it receives any buffer. This information can be used by
the OMX decoder to, for example, pre-allocate its internal buffers
before starting to decode, and so reduce its initial latency.
This mechanism is currently supported by the zynqultrascaleplus decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792040
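A rough sketch of the idea, not the actual hack: push what the caps
already tell us into the input port definition before the first buffer
arrives (the GstVideoInfo 'info' and the surrounding fields are
illustrative):

    OMX_PARAM_PORTDEFINITIONTYPE port_def;

    GST_OMX_INIT_STRUCT (&port_def);
    port_def.nPortIndex = self->dec_in_port->index;
    OMX_GetParameter (comp->handle, OMX_IndexParamPortDefinition, &port_def);

    port_def.format.video.nFrameWidth = info->width;
    port_def.format.video.nFrameHeight = info->height;
    if (info->fps_d != 0)       /* xFramerate is in Q16 */
      port_def.format.video.xFramerate = (info->fps_n << 16) / info->fps_d;

    OMX_SetParameter (comp->handle, OMX_IndexParamPortDefinition, &port_def);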
The Zynq UltraScale+ encoder implements a custom OMX extension to
directly import dmabufs, removing the need to map input buffers.
This can be used with either 'v4l2src io-mode=dmabuf' or an OMX video
decoder upstream.
https://bugzilla.gnome.org/show_bug.cgi?id=792361
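A hypothetical sketch of how such an extension gets wired up; the
extension name and parameter struct below are placeholders, not the
real Zynq UltraScale+ definitions, and only OMX_GetExtensionIndex
itself is standard:

    OMX_INDEXTYPE dmabuf_index;

    if (OMX_GetExtensionIndex (comp->handle,
            (OMX_STRING) "OMX.vendor.InputDmaBuf" /* placeholder */,
            &dmabuf_index) == OMX_ErrorNone) {
      OMX_CONFIG_BOOLEANTYPE enable;    /* placeholder struct choice */

      GST_OMX_INIT_STRUCT (&enable);
      enable.bEnabled = OMX_TRUE;
      /* The encoder now imports the dmabuf fd instead of mapping the
       * buffer contents. */
      OMX_SetParameter (comp->handle, dmabuf_index, &enable);
    }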
OMX 1.2.0 introduced a third way to manage buffers by allowing
components to allocate only buffer headers during their initialization
and change their pBuffer pointer at runtime.
This new feature can save us a copy between GStreamer and OMX for each
input buffer.
This patch adds API to allocate and use such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=787093
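A sketch of this mode, assuming an already-mapped GstBuffer (map_info)
and illustrative variable names:

    OMX_BUFFERHEADERTYPE *hdr = NULL;

    /* Initialization: allocate only the buffer header, no data. */
    OMX_UseBuffer (comp->handle, &hdr, port->index,
        NULL /* pAppPrivate */, buffer_size, NULL /* pBuffer */);

    /* Runtime: point the header at the GStreamer memory instead of
     * copying into an OMX-allocated buffer. */
    hdr->pBuffer = map_info.data;
    hdr->nAllocLen = map_info.size;
    hdr->nFilledLen = map_info.size;
    hdr->nOffset = 0;
    OMX_EmptyThisBuffer (comp->handle, hdr);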
This information can be useful to zynqultrascaleplus decoders. They may
use it to reduce startup latency by configuring themselves before
receiving the first frames.
We also have a custom OMX extension allowing the decoder to report its
latency. The profile/level information helps it report a more accurate
latency earlier.
https://bugzilla.gnome.org/show_bug.cgi?id=783114
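A sketch of passing that information down, assuming the caps
profile/level strings have already been mapped to OMX enums
(omx_profile/omx_level are illustrative):

    OMX_VIDEO_PARAM_PROFILELEVELTYPE param;

    GST_OMX_INIT_STRUCT (&param);
    param.nPortIndex = self->dec_in_port->index;
    param.eProfile = omx_profile;   /* e.g. OMX_VIDEO_AVCProfileHigh */
    param.eLevel = omx_level;       /* e.g. OMX_VIDEO_AVCLevel42 */
    OMX_SetParameter (comp->handle,
        OMX_IndexParamVideoProfileLevelCurrent, &param);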
egl_render seems to have a bug and signals EOS before it has finished
pushing out all data; this hack simply makes acquire_buffer() wait
a bit more before signalling EOS, in case egl_render decides to spit
out some more data.
https://bugzilla.gnome.org/show_bug.cgi?id=741856
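A sketch of such a grace-period wait (names and the exact timeout are
illustrative):

    gint64 deadline = g_get_monotonic_time () + G_TIME_SPAN_SECOND / 2;

    g_mutex_lock (&comp->messages_lock);
    while (g_queue_is_empty (&port->pending_buffers)) {
      /* Don't trust the EOS flag right away; give egl_render a bit
       * more time to push out remaining buffers. */
      if (!g_cond_wait_until (&comp->messages_cond,
              &comp->messages_lock, deadline))
        break;                  /* timed out: accept the EOS */
    }
    g_mutex_unlock (&comp->messages_lock);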
No mutex is locked while calling any OpenMAX functions anymore;
everything coming from the OpenMAX callbacks is inserted into a message
queue and handled from outside the callbacks.
There is now only a single mutex and condition variable per component
for handling anything from OpenMAX callbacks, and a single mutex
for keeping our component/port state sane.
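A simplified sketch of that pattern: callbacks only enqueue and signal,
and all real work happens on our own threads (structure and field names
are approximate):

    static OMX_ERRORTYPE
    event_handler (OMX_HANDLETYPE handle, OMX_PTR app_data,
        OMX_EVENTTYPE event, OMX_U32 data1, OMX_U32 data2,
        OMX_PTR event_data)
    {
      GstOMXComponent *comp = app_data;
      GstOMXMessage *msg = g_slice_new0 (GstOMXMessage);

      msg->type = GST_OMX_MESSAGE_EVENT;
      msg->content.event.event = event;
      msg->content.event.data1 = data1;
      msg->content.event.data2 = data2;

      /* No OMX call is made and no component/port state is touched
       * here; just queue the message and wake up any waiter. */
      g_mutex_lock (&comp->messages_lock);
      g_queue_push_tail (&comp->messages, msg);
      g_cond_broadcast (&comp->messages_cond);
      g_mutex_unlock (&comp->messages_lock);

      return OMX_ErrorNone;
    }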
According to the OMX specification, implementations are allowed to call
callbacks in the context of their function calls. However, our callbacks
take locks, and this causes deadlocks if the underlying OMX implementation
uses this kind of in-context call.
A solution to the problem would be a recursive mutex. However, a normal
recursive mutex does not fix the problem because it is not guaranteed
that the callbacks are called from the same thread. What we see in Broadcom's
implementation for example is:
- OMX_Foo is called
- OMX_Foo waits on a condition
- A callback is executed in a different thread
- When the callback returns, its calling function
signals the condition that OMX_Foo waits on
- OMX_Foo wakes up and returns
The solution I came up with here is to take a second lock inside the callback,
but only if recursion is expected to happen. Therefore, all calls to OMX
functions are guarded by calls to gst_omx_rec_mutex_begin_recursion() / _end_recursion(),
which effectively tells the mutex that at this point we want to allow calls
to _recursive_lock() to succeed, although we are still holding the master lock.
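A usage sketch of the scheme; only begin/end_recursion and
_recursive_lock() are named above, the surrounding entry points are
assumed:

    /* Calling side: hold the master lock and open the recursion
     * window for the duration of the OMX call. */
    gst_omx_rec_mutex_lock (&comp->lock);
    gst_omx_rec_mutex_begin_recursion (&comp->lock);
    err = OMX_SendCommand (comp->handle, OMX_CommandStateSet,
        OMX_StateIdle, NULL);
    gst_omx_rec_mutex_end_recursion (&comp->lock);
    gst_omx_rec_mutex_unlock (&comp->lock);

    /* Callback side: may run in another thread while OMX_SendCommand
     * is still blocked; only succeeds inside the recursion window. */
    gst_omx_rec_mutex_recursive_lock (&comp->lock);
    /* update state, signal the condition OMX_SendCommand waits on */
    gst_omx_rec_mutex_recursive_unlock (&comp->lock);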
This happens on the Galaxy Nexus, and causes the pipeline to hang waiting
endlessly for a drain. The hack replaces the wait with a wait + 500ms timeout.
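A sketch of the change (names approximate):

    gint64 deadline = g_get_monotonic_time () + 500 * G_TIME_SPAN_MILLISECOND;

    g_mutex_lock (&self->drain_lock);
    while (self->draining) {
      /* Previously an unbounded g_cond_wait(); the timeout keeps a
       * broken implementation from hanging the pipeline forever. */
      if (!g_cond_wait_until (&self->drain_cond, &self->drain_lock,
              deadline)) {
        GST_WARNING_OBJECT (self, "Drain timed out");
        break;
      }
    }
    g_mutex_unlock (&self->drain_lock);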