The OMX spec defines 8 headers that implementations can use to define
their custom extensions. We were only checking for and including 3 of
them, ignoring the others.
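As a hedged illustration, the missing headers can be included behind
configure-time guards; the HAVE_* macro names below are placeholders,
not the project's actual build definitions:

    /* Sketch of guarded includes for the Khronos extension headers.
     * The HAVE_* macros are hypothetical stand-ins for whatever the
     * build system defines after probing for each header. */
    #ifdef HAVE_OMX_VIDEOEXT_H
    #include <OMX_VideoExt.h>
    #endif
    #ifdef HAVE_OMX_AUDIOEXT_H
    #include <OMX_AudioExt.h>
    #endif
    #ifdef HAVE_OMX_INDEXEXT_H
    #include <OMX_IndexExt.h>
    #endif
    #ifdef HAVE_OMX_COREEXT_H
    #include <OMX_CoreExt.h>
    #endif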
https://bugzilla.gnome.org/show_bug.cgi?id=792043
This hack tries to pass as much information as possible from the caps
to the decoder before it receives any buffers. This information can be
used by the OMX decoder to, for example, pre-allocate its internal
buffers before it starts decoding, reducing its initial latency.
This mechanism is currently supported by the zynqultrascaleplus decoder.
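A minimal sketch of the idea, assuming an already set-up component
handle; the helper below is illustrative, not the actual gst-omx code:

    #include <string.h>
    #include <gst/video/video.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>

    /* Sketch: copy the video geometry from the caps into the input port
     * definition before any buffer is submitted, so the decoder can size
     * its internal pools early. Error handling trimmed for brevity. */
    static OMX_ERRORTYPE
    push_caps_to_decoder (OMX_HANDLETYPE comp, GstCaps * caps, OMX_U32 port)
    {
      GstVideoInfo info;
      OMX_PARAM_PORTDEFINITIONTYPE def;

      if (!gst_video_info_from_caps (&info, caps))
        return OMX_ErrorBadParameter;

      memset (&def, 0, sizeof (def));
      def.nSize = sizeof (def);
      def.nVersion.s.nVersionMajor = 1;  /* assumption: OMX IL 1.x */
      def.nPortIndex = port;
      OMX_GetParameter (comp, OMX_IndexParamPortDefinition, &def);

      def.format.video.nFrameWidth = GST_VIDEO_INFO_WIDTH (&info);
      def.format.video.nFrameHeight = GST_VIDEO_INFO_HEIGHT (&info);
      if (GST_VIDEO_INFO_FPS_D (&info) != 0)
        def.format.video.xFramerate =  /* Q16.16 fixed point */
            (GST_VIDEO_INFO_FPS_N (&info) << 16) / GST_VIDEO_INFO_FPS_D (&info);

      return OMX_SetParameter (comp, OMX_IndexParamPortDefinition, &def);
    }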
https://bugzilla.gnome.org/show_bug.cgi?id=792040
The Zynq UltraScale+ encoder implements a custom OMX extension to
directly import dmabufs, saving the need to map input buffers.
This can be used with either 'v4l2src io-mode=dmabuf' or an OMX video
decoder upstream.
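A heavily hedged sketch of the import path; the extension name, the
parameter struct and the fd-in-pBuffer convention are assumptions based
on the commit text, not the real vendor API:

    #include <glib.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>

    /* Hypothetical parameter struct for the vendor extension; the real
     * definition lives in the vendor's extension header. */
    typedef struct {
      OMX_U32 nSize;
      OMX_VERSIONTYPE nVersion;
      OMX_U32 nPortIndex;
      OMX_BOOL bEnabled;
    } VendorDMABufParam;

    static OMX_ERRORTYPE
    enable_dmabuf_import (OMX_HANDLETYPE comp, OMX_U32 port)
    {
      OMX_INDEXTYPE idx;
      OMX_ERRORTYPE err;
      VendorDMABufParam param =
          { sizeof (param), { { 1, 0, 0, 0 } }, port, OMX_TRUE };

      /* Resolve the vendor extension to an index at runtime;
       * the name string is a placeholder. */
      err = OMX_GetExtensionIndex (comp,
          (OMX_STRING) "OMX.vendor.param.DMABufImport", &idx);
      if (err != OMX_ErrorNone)
        return err;
      return OMX_SetParameter (comp, idx, &param);
    }

    /* Once enabled, a dmabuf would be queued by handing over its fd
     * instead of a mapped pointer (assumption):
     *   omx_buf->pBuffer = (OMX_U8 *) GUINT_TO_POINTER (dmabuf_fd);
     *   OMX_EmptyThisBuffer (comp, omx_buf);
     */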
https://bugzilla.gnome.org/show_bug.cgi?id=792361
OMX 1.2.0 introduced a third way to manage buffers: components may
allocate only the buffer headers during their initialization and change
their pBuffer pointer at runtime.
This new feature can save us a copy between GStreamer and OMX for each
input buffer.
This patch adds API to allocate and use such buffers.
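A minimal sketch of the pattern in plain IL calls, assuming the 1.2.0
convention of passing a NULL pBuffer to OMX_UseBuffer at init time:

    /* OMX 1.2.0 "dynamic" buffer mode: only the header is allocated at
     * init time; the payload pointer is filled in per buffer at runtime. */
    OMX_BUFFERHEADERTYPE *header = NULL;
    GstMapInfo map;

    /* Init time: pBuffer is NULL, no payload memory is committed yet. */
    OMX_UseBuffer (comp, &header, port_index, NULL, buffer_size, NULL);

    /* Runtime, once per input buffer: point pBuffer at the GStreamer
     * memory instead of copying into an OMX-owned buffer. */
    gst_buffer_map (gst_buf, &map, GST_MAP_READ);
    header->pBuffer = map.data;
    header->nFilledLen = map.size;
    header->nOffset = 0;
    OMX_EmptyThisBuffer (comp, header);
    /* gst_buffer_unmap() once EmptyBufferDone fires for this header. */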
https://bugzilla.gnome.org/show_bug.cgi?id=787093
This information can be useful to zynqultrascaleplus decoders, which
may use it to reduce startup latency by configuring themselves before
receiving the first frames.
We also have a custom OMX extension allowing the decoder to report its
latency. The profile/level information helps it report a more accurate
latency earlier.
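For illustration, the standard profile/level parameter could be pushed
to the component as below; the caps-to-enum mapping is elided and the
values are hard-coded as an example:

    #include <string.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>
    #include <OMX_Video.h>

    /* Sketch: tell the decoder the stream's profile/level up front, here
     * hard-coded for H.264 High@4 as an example of values parsed from caps. */
    static OMX_ERRORTYPE
    set_profile_level (OMX_HANDLETYPE comp, OMX_U32 port)
    {
      OMX_VIDEO_PARAM_PROFILELEVELTYPE pl;

      memset (&pl, 0, sizeof (pl));
      pl.nSize = sizeof (pl);
      pl.nVersion.s.nVersionMajor = 1;  /* assumption: OMX IL 1.x */
      pl.nPortIndex = port;
      pl.eProfile = OMX_VIDEO_AVCProfileHigh;  /* from caps "profile" field */
      pl.eLevel = OMX_VIDEO_AVCLevel4;         /* from caps "level" field */

      return OMX_SetParameter (comp,
          OMX_IndexParamVideoProfileLevelCurrent, &pl);
    }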
https://bugzilla.gnome.org/show_bug.cgi?id=783114
egl_render seems to have a bug and signals EOS before it has finished
pushing out all data; this hack simply makes acquire_buffer() wait
a bit more before signalling EOS, in case egl_render decides to spit
out some more data.
https://bugzilla.gnome.org/show_bug.cgi?id=741856
No mutex is locked anymore while calling any OpenMAX function, and
everything coming from the OpenMAX callbacks is inserted into a message
queue and handled outside the callbacks.
Also there's only a single mutex and condition variable per component
now for handling anything from OpenMAX callbacks and a single mutex
for keeping our component/port state sane.
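A sketch of the callback-to-message-queue pattern; the structures below
are illustrative, not the actual gst-omx ones:

    #include <glib.h>
    #include <OMX_Core.h>

    /* Illustrative component state; the real gst-omx structures differ. */
    typedef struct {
      GMutex messages_lock;
      GCond messages_cond;
      GQueue messages;  /* of GstOMXMessage* */
    } GstOMXComponent;

    typedef struct {
      OMX_EVENTTYPE event;
      OMX_U32 data1, data2;
    } GstOMXMessage;

    static OMX_ERRORTYPE
    event_handler (OMX_HANDLETYPE comp, OMX_PTR app_data,
        OMX_EVENTTYPE event, OMX_U32 data1, OMX_U32 data2, OMX_PTR event_data)
    {
      GstOMXComponent *self = app_data;
      GstOMXMessage *msg = g_new0 (GstOMXMessage, 1);

      msg->event = event;
      msg->data1 = data1;
      msg->data2 = data2;

      /* The callback only touches the message queue: no OMX call is made
       * and no component/port state is modified from the callback context. */
      g_mutex_lock (&self->messages_lock);
      g_queue_push_tail (&self->messages, msg);
      g_cond_broadcast (&self->messages_cond);
      g_mutex_unlock (&self->messages_lock);

      return OMX_ErrorNone;
    }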
According to the OMX specification, implementations are allowed to call
callbacks in the context of their function calls. However, our callbacks
take locks, and this causes deadlocks if the underlying OMX
implementation uses this kind of in-context call.
A solution to the problem would be a recursive mutex. However, a normal
recursive mutex does not fix the problem because it is not guaranteed
that the callbacks are called from the same thread. What we see in Broadcom's
implementation for example is:
- OMX_Foo is called
- OMX_Foo waits on a condition
- A callback is executed in a different thread
- When the callback returns, its calling function
signals the condition that OMX_Foo waits on
- OMX_Foo wakes up and returns
The solution I came up with here is to take a second lock inside the callback,
but only if recursion is expected to happen. Therefore, all calls to OMX
functions are guarded by calls to gst_omx_rec_mutex_begin_recursion() / _end_recursion(),
which effectively tells the mutex that at this point we want to allow calls
to _recursive_lock() to succeed, although we are still holding the master lock.
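A usage sketch of that mutex; begin_recursion()/end_recursion() and
_recursive_lock() come from the text above, while the lock/unlock
companions and the surrounding fields are assumed names:

    /* Around an OMX call that may re-enter us via a callback: announce
     * that recursion is allowed for the duration of the call. */
    gst_omx_rec_mutex_lock (&comp->lock);
    gst_omx_rec_mutex_begin_recursion (&comp->lock);
    err = OMX_SendCommand (comp->handle, OMX_CommandStateSet,
        OMX_StateIdle, NULL);
    gst_omx_rec_mutex_end_recursion (&comp->lock);
    gst_omx_rec_mutex_unlock (&comp->lock);

    /* Inside the callback, take the recursive lock instead of the master
     * lock; it only succeeds between begin_recursion and end_recursion,
     * even when the callback runs in a different thread. */
    gst_omx_rec_mutex_recursive_lock (&comp->lock);
    /* ... update component state ... */
    gst_omx_rec_mutex_recursive_unlock (&comp->lock);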
This happens on the Galaxy Nexus and causes the pipeline to hang,
waiting endlessly for a drain. The hack replaces the indefinite wait
with a wait bounded by a 500ms timeout.
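A minimal sketch of the bounded wait, assuming a GCond/GMutex pair
guarding a draining flag (names illustrative):

    /* Wait for the drain to finish, but give up after 500ms instead of
     * blocking forever if the component never signals completion. */
    gint64 deadline = g_get_monotonic_time () + 500 * G_TIME_SPAN_MILLISECOND;

    g_mutex_lock (&self->drain_lock);
    while (self->draining) {
      if (!g_cond_wait_until (&self->drain_cond, &self->drain_lock, deadline))
        break;  /* timed out: proceed anyway */
    }
    g_mutex_unlock (&self->drain_lock);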