egl_render seems to have a bug and signals EOS before it has finished
pushing out all data; this hack simply makes acquire_buffer() wait
a bit more before signalling EOS, in case egl_render decides to spit
out some more data.
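The idea, as a rough sketch (the names, the goto targets and the 250ms value are assumptions, not the literal patch):

  /* Sketch: before accepting EOS in acquire_buffer(), give egl_render a
   * short grace period in case it still delivers buffers. */
  if (port->eos && g_queue_is_empty (&port->pending_buffers)) {
    /* hypothetical timed variant of _wait_message(); FALSE means timeout */
    if (!gst_omx_component_wait_message (comp, 250 * GST_MSECOND))
      goto eos;        /* nothing arrived in time, really signal EOS */
    goto retry;        /* something happened, re-run all the checks */
  }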
https://bugzilla.gnome.org/show_bug.cgi?id=741856
gstomx.c:1376:42: error: implicit conversion from enumeration type 'GstOMXAcquireBufferReturn' to different enumeration type 'OMX_ERRORTYPE'
(aka 'enum OMX_ERRORTYPE') [-Werror,-Wenum-conversion]
g_return_val_if_fail (!port->tunneled, GST_OMX_ACQUIRE_BUFFER_ERROR);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
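A plausible fix, assuming the surrounding function indeed returns OMX_ERRORTYPE, is to make the guard return a value of that type (OMX_ErrorBadParameter is used here as an example, not necessarily what the actual patch picked):

  g_return_val_if_fail (!port->tunneled, OMX_ErrorBadParameter);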
https://bugzilla.gnome.org/show_bug.cgi?id=775112
There is one rare case where calling handle_messages() more than once can cause a deadlock
in the video decoder element:
- sink pad thread starts the src pad task (gst_omx_video_dec_loop())
- _video_dec_loop() calls gst_omx_port_acquire_buffer() on dec_out_port
- blocks in gst_omx_component_wait_message() releasing comp->lock and comp->messages_lock
(initially, there are no buffers configured on that port, so it waits for OMX_EventPortSettingsChanged)
- the sink pad thread pushes a buffer to the decoder with gst_omx_port_release_buffer()
- _release_buffer() grabs comp->lock and sends the buffer to OMX, which consumes it immediately
- EmptyBufferDone gets called at this point, which signals _wait_message() to unblock
- the message from EmptyBufferDone is processed in gst_omx_component_handle_messages()
called from gst_omx_port_release_buffer()
- gst_omx_port_release_buffer releases comp->lock
- the src pad thread now gets to run, grabbing comp->lock while it exits from _wait_message()
- _acquire_buffer() calls the _handle_messages() on the next line after _wait_message(),
which does nothing (no pending messages)
- then it goes to "retry:" and calls _handle_messages() again, which also does nothing
(still no pending messages)
- scheduler switches to a VideoCore thread that calls EventHandler, informing us about the
OMX_EventPortSettingsChanged event that just arrived
- EventHandler grabs comp->messages_lock, but not comp->lock, so it can run in parallel at
this point just fine.
- scheduler switches back to the src pad thread (which is in the middle of _acquire_buffer())
- the next _handle_messages() which is right before if (g_queue_is_empty (&port->pending_buffers))
processes the OMX_EventPortSettingsChanged
- the buffer queue is still empty, so that thread blocks again in _wait_message()
- the sink pad thread tries to acquire the next input port buffer
- _acquire_buffer() also blocks this thread in:
if (comp->pending_reconfigure_outports) { ... _wait_message() ... }
- DEADLOCK. GStreamer is waiting for OMX to do something, and OMX is waiting for GStreamer to do something.
By removing those extra _handle_messages() calls, we can ensure that all the checks of
_acquire_buffer() will re-run. In the above case, after the scheduler switches back to
the middle of _acquire_buffer(), the code will enter _wait_message(), which will see that
there are pending messages and will return immediately, going back to "retry:" and
re-doing all the checks properly.
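The pattern the fixed code relies on can be illustrated with plain GLib (a simplified analogue, not the actual gst-omx source; gst-omx uses separate comp->lock/messages_lock mutexes, collapsed into a single lock here):

  #include <glib.h>

  typedef struct {
    GMutex lock;             /* protects everything below                  */
    GCond cond;              /* signalled whenever a callback queues a msg */
    GQueue messages;         /* pending events from the OMX callbacks      */
    GQueue pending_buffers;  /* buffers ready to be acquired               */
    gboolean reconfigure;    /* like comp->pending_reconfigure_outports    */
  } Component;

  static void
  handle_messages (Component * comp)
  {
    gpointer msg;

    /* Drain the queue; called from exactly one place per iteration.
     * In gst-omx this is where pending_buffers gets filled and the
     * reconfigure flag gets set or cleared. */
    while ((msg = g_queue_pop_head (&comp->messages)) != NULL)
      g_free (msg);
  }

  static gpointer
  acquire_buffer (Component * comp)
  {
    gpointer buf;

    g_mutex_lock (&comp->lock);

  retry:
    handle_messages (comp);   /* the single message-processing point */

    if (comp->reconfigure) {
      g_cond_wait (&comp->cond, &comp->lock);
      goto retry;             /* re-run *all* checks after every wakeup */
    }

    if (g_queue_is_empty (&comp->pending_buffers)) {
      g_cond_wait (&comp->cond, &comp->lock);
      goto retry;
    }

    buf = g_queue_pop_head (&comp->pending_buffers);
    g_mutex_unlock (&comp->lock);
    return buf;
  }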
https://bugzilla.gnome.org/show_bug.cgi?id=741854
Provides omxanalogaudiosink and omxhdmiaudiosink elements on
the Raspberry Pi.
- omxanalogaudiosink is capable of rendering raw mono or stereo audio
  through the jack output.
- omxhdmiaudiosink is capable of rendering raw audio with up to 8 channels
  and transmitting AC-3/DTS (IEC 61937) through the HDMI output.
- sinks provide a clock derived from rendered samples
- sinks support the GstStreamVolume interface by implementing
the volume and mute properties.
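A minimal usage sketch (the pipeline and the 0.5 volume value are just examples):

  #include <gst/gst.h>
  #include <gst/audio/streamvolume.h>

  int
  main (int argc, char **argv)
  {
    GstElement *pipeline, *sink;

    gst_init (&argc, &argv);

    /* Render a test tone through the HDMI output */
    pipeline = gst_parse_launch ("audiotestsrc ! audioconvert ! "
        "audioresample ! omxhdmiaudiosink name=sink", NULL);
    sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");

    /* The sinks implement GstStreamVolume through their volume/mute properties */
    gst_stream_volume_set_volume (GST_STREAM_VOLUME (sink),
        GST_STREAM_VOLUME_FORMAT_LINEAR, 0.5);

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));
    return 0;
  }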
https://bugzilla.gnome.org/show_bug.cgi?id=728962
Fix printf formats again, so that gst-omx compiles warning-
free on the Raspberry Pi as well. Unfortunately OMX_UINT32
may be typedefed to uint32_t or unsigned long, which
doesn't work well with our debugging printf format strings,
so just use %u for those and cast to guint.
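For example (illustrative only, not a specific line from the patch):

  GST_DEBUG_OBJECT (comp->parent, "Port %u has %u buffers of %u bytes",
      (guint) port_def.nPortIndex,
      (guint) port_def.nBufferCountActual,
      (guint) port_def.nBufferSize);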
It exports eglIntOpenMAXILDoneMarker(), which is also
exported by libopenmaxil.so... but we need the version
from libopenmaxil.so as the other one is just a stub.
No mutex is locked while calling any OpenMAX functions anymore
and everything from the OpenMAX callbacks is inserted into a message
queue and handled from outside the callbacks.
Also there's only a single mutex and condition variable per component
now for handling anything from OpenMAX callbacks and a single mutex
for keeping our component/port state sane.
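The callback side then reduces to queueing and signalling; roughly (the struct and member names here are assumptions, not the real gst-omx types):

  #include <glib.h>
  #include <OMX_Core.h>

  typedef struct {
    GMutex messages_lock;   /* the only lock ever taken inside a callback */
    GCond messages_cond;    /* wakes up threads waiting for messages      */
    GQueue messages;        /* drained later, outside the callbacks       */
  } Component;

  static OMX_ERRORTYPE
  event_handler (OMX_HANDLETYPE handle, OMX_PTR app_data, OMX_EVENTTYPE event,
      OMX_U32 data1, OMX_U32 data2, OMX_PTR event_data)
  {
    Component *comp = app_data;
    guint *msg = g_new (guint, 3);   /* just enough to replay the event later */

    msg[0] = event;
    msg[1] = data1;
    msg[2] = data2;

    g_mutex_lock (&comp->messages_lock);
    g_queue_push_tail (&comp->messages, msg);
    g_cond_broadcast (&comp->messages_cond);
    g_mutex_unlock (&comp->messages_lock);

    /* no component state is touched and no OpenMAX function is called here */
    return OMX_ErrorNone;
  }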
Between lock() and begin_recursion() it was possible for another thread to
try to do a recursive_lock(). This would block because the mutex was already
locked, but not ready for recursive locking yet. unlock() would never
happen in the original thread because it was waiting for the other thread
to finish first.
Happened on the Raspberry Pi.
According to the OMX specification, implementations are allowed to call
callbacks in the context of their function calls. However, our callbacks
take locks and this causes deadlocks if the underlying OMX implementation
uses this kind of in-context calls.
A solution to the problem would be a recursive mutex. However, a normal
recursive mutex does not fix the problem because it is not guaranteed
that the callbacks are called from the same thread. What we see in Broadcom's
implementation for example is:
- OMX_Foo is called
- OMX_Foo waits on a condition
- A callback is executed in a different thread
- When the callback returns, its calling function
signals the condition that OMX_Foo waits on
- OMX_Foo wakes up and returns
The solution I came up with here is to take a second lock inside the callback,
but only if recursion is expected to happen. Therefore, all calls to OMX
functions are guarded by calls to gst_omx_rec_mutex_begin_recursion() / _end_recursion(),
which effectively tells the mutex that at this point we want to allow calls
to _recursive_lock() to succeed, although we are still holding the master lock.
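In practice OMX calls end up bracketed like this (sketch; only _begin_recursion()/_end_recursion() are named above, the lock/unlock calls are inferred):

  gst_omx_rec_mutex_lock (&comp->lock);
  /* ... prepare the command while holding the master lock ... */

  gst_omx_rec_mutex_begin_recursion (&comp->lock);
  /* during this window a callback running on another thread may take
   * _recursive_lock() even though the master lock is still held */
  err = OMX_SendCommand (comp->handle, OMX_CommandStateSet, state, NULL);
  gst_omx_rec_mutex_end_recursion (&comp->lock);

  gst_omx_rec_mutex_unlock (&comp->lock);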
This happens on the Galaxy Nexus, and causes the pipeline to hang waiting
endlessly for a drain. The hack replaces the wait with a wait + 500ms timeout.
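Roughly (sketch; the field names are assumptions):

  /* Wait for the drain to finish, but give up after 500ms so a component
   * that never delivers the EOS buffer cannot hang the pipeline forever. */
  gint64 end_time = g_get_monotonic_time () + 500 * G_TIME_SPAN_MILLISECOND;

  while (self->draining) {
    if (!g_cond_wait_until (&self->drain_cond, &self->drain_lock, end_time)) {
      GST_WARNING_OBJECT (self, "Drain timed out, continuing anyway");
      break;
    }
  }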
gst_omx_port_set_flushing() calls OMX_FillThisBuffer at the end of a flush
without releasing the port lock, and this can cause a deadlock with the
EventHandler. This patch fixes this by dropping the lock for the duration of
the fill buffer call.
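The fix boils down to this pattern (sketch; the lock name is an assumption):

  /* Never hold the port lock across the call back into OMX, so that the
   * EventHandler can make progress while OMX_FillThisBuffer() runs. */
  g_mutex_unlock (&port->lock);
  err = OMX_FillThisBuffer (comp->handle, buf->omx_buf);
  g_mutex_lock (&port->lock);

  if (err != OMX_ErrorNone)
    GST_ERROR ("FillThisBuffer failed: 0x%08x", (guint) err);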