This information can be useful to zynqultrascaleplus decoders, which may
use it to reduce startup latency by configuring themselves before
receiving the first frames.
We also have a custom OMX extension allowing the decoder to report the
latency. The profile/level information helps it report a more
accurate latency earlier.
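As a rough sketch of the idea (not the actual gst-omx code), a decoder
can pull these fields from the negotiated sink caps before any frame
data arrives:

    static void
    configure_from_caps (GstCaps * caps)
    {
      GstStructure *s = gst_caps_get_structure (caps, 0);
      const gchar *profile = gst_structure_get_string (s, "profile");
      const gchar *level = gst_structure_get_string (s, "level");

      if (profile != NULL && level != NULL) {
        /* e.g. size internal buffers / report latency here instead
         * of waiting for the first codec headers to be parsed */
      }
    }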
https://bugzilla.gnome.org/show_bug.cgi?id=783114
It is always safe to ignore this error. Tizonia raises it to pass
some Khronos conformance tests. The problem is that gst-omx saves this
error in comp->last_error, and gst_omx_port_set_enabled then errors
out early, which fails the pipeline.
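A minimal sketch of the intended handling inside the OMX EventHandler;
OMX_ErrorFoo is a placeholder, since the actual error code is not
named here:

    case OMX_EventError:
      if ((OMX_ERRORTYPE) nData1 == OMX_ErrorFoo)
        break;    /* drop it, do not store it in comp->last_error */
      /* ... handle real errors as before ... */
      break;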
https://bugzilla.gnome.org/show_bug.cgi?id=782800
Previously the omx plugin was blacklisted if GST_OMX_CONFIG_DIR
pointed to an invalid path or if gstomx.conf contained no valid
components.
The problem is that once the plugin is blacklisted, a rescan is not
triggered when the env var or the gstomx.conf file changes, despite
the dependency being set up with gst_plugin_add_dependency().
This also makes it more consistent with other plugins that
auto-generate features, for example gst-{ffmpeg,libav},
gstreamer-vaapi and the v4l2 video decoders.
To clarify the diff: the plugin_init func will now return TRUE even if
g_key_file_get_groups returns 0 elements and even if
g_key_file_load_from_dirs fails.
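A simplified sketch of the resulting control flow (config_dirs and the
per-component feature registration are elided):

    static gboolean
    plugin_init (GstPlugin * plugin)
    {
      GKeyFile *config = g_key_file_new ();
      gchar **components = NULL;
      gsize n = 0;

      if (g_key_file_load_from_dirs (config, "gstomx.conf",
              (const gchar **) config_dirs, NULL, G_KEY_FILE_NONE,
              NULL))
        components = g_key_file_get_groups (config, &n);

      /* register one feature per component found, possibly none */

      g_strfreev (components);
      g_key_file_free (config);

      return TRUE;              /* never blacklist the plugin */
    }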
https://bugzilla.gnome.org/show_bug.cgi?id=782867
egl_render seems to have a bug and signals EOS before it has finished
pushing out all data; this hack simply makes acquire_buffer() wait
a bit more before signalling EOS, in case egl_render decides to spit
out some more data.
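The hack boils down to something like the following inside
acquire_buffer(); the 50 ms timeout and the comp->eos flag are
illustrative, not the literal values from the patch:

    /* component signalled EOS but egl_render may still push data */
    if (comp->eos && g_queue_is_empty (&port->pending_buffers)) {
      gint64 end_time = g_get_monotonic_time () +
          50 * G_TIME_SPAN_MILLISECOND;
      g_cond_wait_until (&comp->messages_cond, &comp->messages_lock,
          end_time);
    }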
https://bugzilla.gnome.org/show_bug.cgi?id=741856
gstomx.c:1376:42: error: implicit conversion from enumeration type 'GstOMXAcquireBufferReturn' to different enumeration type 'OMX_ERRORTYPE'
(aka 'enum OMX_ERRORTYPE') [-Werror,-Wenum-conversion]
g_return_val_if_fail (!port->tunneled, GST_OMX_ACQUIRE_BUFFER_ERROR);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
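Since the surrounding function returns OMX_ERRORTYPE, the fix is to
pass an error code of that type instead, along these lines (the exact
constant chosen may differ):

    g_return_val_if_fail (!port->tunneled, OMX_ErrorBadParameter);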
https://bugzilla.gnome.org/show_bug.cgi?id=775112
There is one rare case where calling handle_messages() more than once can cause a deadlock
in the video decoder element:
- sink pad thread starts the src pad task (gst_omx_video_dec_loop())
- _video_dec_loop() calls gst_omx_port_acquire_buffer() on dec_out_port
- blocks in gst_omx_component_wait_message() releasing comp->lock and comp->messages_lock
(initially, there are no buffers configured on that port, so it waits for OMX_EventPortSettingsChanged)
- the sink pad thread pushes a buffer to the decoder with gst_omx_port_release_buffer()
- _release_buffer() grabs comp->lock and sends the buffer to OMX, which consumes it immediately
- EmptyBufferDone gets called at this point, which signals _wait_message() to unblock
- the message from EmptyBufferDone is processed in gst_omx_component_handle_messages()
called from gst_omx_port_release_buffer()
- gst_omx_port_release_buffer releases comp->lock
- the src pad thread now gets to run, grabbing comp->lock while it exits from _wait_message()
- _acquire_buffer() calls the _handle_messages() on the next line after _wait_message(),
which does nothing (no pending messages)
- then it goes to "retry:" and calls _handle_messages() again, which also does nothing
(still no pending messages)
- scheduler switches to a videocore thread that calls EventHandler, informing us about the
OMX_EventPortSettingsChanged event that just arrived
- EventHandler grabs comp->messages_lock, but not comp->lock, so it can run in parallel at
  this point just fine.
- scheduler switches back to the src pad thread (which is in the middle of _acquire_buffer())
- the next _handle_messages() which is right before if (g_queue_is_empty (&port->pending_buffers))
processes the OMX_EventPortSettingsChanged
- the buffer queue is still empty, so that thread blocks again in _wait_message()
- the sink pad thread tries to acquire the next input port buffer
- _acquire_buffer() also blocks this thread in:
if (comp->pending_reconfigure_outports) { ... _wait_message() ... }
- DEADLOCK: GStreamer is waiting for OMX to do something, and OMX is waiting for GStreamer to do something.
By removing those extra _handle_messages() calls, we can ensure that all the checks of
_acquire_buffer() will re-run. In the above case, after the scheduler switches back to
the middle of _acquire_buffer(), the code will enter _wait_message(), which will see that
there are pending messages and will return immediately, going back to "retry:" and
re-doing all the checks properly.
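In simplified form, the fixed loop looks like this (signatures
abbreviated); messages are only drained in _wait_message(), and every
check is re-evaluated after each wakeup:

    retry:
      /* check last_error, flushing, pending port reconfiguration,
       * pending_buffers, ... */

      if (g_queue_is_empty (&port->pending_buffers)) {
        /* returns immediately if messages are already queued */
        gst_omx_component_wait_message (comp);
        goto retry;             /* re-do all the checks */
      }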
https://bugzilla.gnome.org/show_bug.cgi?id=741854
Provides omxanalogaudiosink and omxhdmiaudiosink elements on
the Raspberry Pi.
- omxanalogaudiosink can render raw mono or stereo audio
through the analog jack output.
- omxhdmiaudiosink can render raw audio with up to 8 channels
and transmit AC3/DTS (IEC 61937) through the HDMI output.
- sinks provide a clock derived from rendered samples
- sinks support the GstStreamVolume interface by implementing
the volume and mute properties.
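A hypothetical usage sketch; the element and property names are the
ones introduced by this commit, while the pipeline itself is only
illustrative:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstElement *pipeline;

      gst_init (&argc, &argv);
      pipeline = gst_parse_launch ("audiotestsrc ! audioconvert ! "
          "omxhdmiaudiosink volume=0.5 mute=false", NULL);
      gst_element_set_state (pipeline, GST_STATE_PLAYING);
      g_usleep (5 * G_USEC_PER_SEC);    /* play a test tone for 5s */
      gst_element_set_state (pipeline, GST_STATE_NULL);
      gst_object_unref (pipeline);
      return 0;
    }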
https://bugzilla.gnome.org/show_bug.cgi?id=728962
Fix printf formats again, so that gst-omx compiles warning-
free on the Raspberry Pi as well. Unfortunately OMX_UINT32
may be typedefed to uint32_t or unsigned long, which
doesn't work well with our debugging printf format strings,
so just use %u for those and cast to guint.
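A representative example of the pattern (not a specific line from the
patch):

    GST_DEBUG_OBJECT (comp->parent, "Port %u has %u buffers",
        (guint) port_def.nPortIndex,
        (guint) port_def.nBufferCountActual);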
It exports eglIntOpenMAXILDoneMarker(), which is also
exported by libopenmaxil.so... but we need the version
from libopenmaxil.so as the other one is just a stub.
No mutex is locked while calling any OpenMAX functions anymore
and everything from the OpenMAX callbacks is inserted into a message
queue and handled from outside the callbacks.
Also there's only a single mutex and condition variable per component
now for handling anything from OpenMAX callbacks and a single mutex
for keeping our component/port state sane.
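In outline, the scheme looks like this; names are simplified, and
message_new() is a hypothetical helper standing in for the real
message construction:

    static OMX_ERRORTYPE
    event_handler (OMX_HANDLETYPE hComponent, OMX_PTR pAppData,
        OMX_EVENTTYPE eEvent, OMX_U32 nData1, OMX_U32 nData2,
        OMX_PTR pEventData)
    {
      GstOMXComponent *comp = pAppData;
      GstOMXMessage *msg = message_new (eEvent, nData1, nData2);

      /* never call back into OpenMAX or touch component state here;
       * just queue the message and wake up any waiters */
      g_mutex_lock (&comp->messages_lock);
      g_queue_push_tail (&comp->messages, msg);
      g_cond_broadcast (&comp->messages_cond);
      g_mutex_unlock (&comp->messages_lock);

      return OMX_ErrorNone;
    }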
Between lock() and begin_recursion() it was possible for another thread to
try to do a recursive_lock(). This would block because the mutex was already
locked, but not yet ready for recursive locking. unlock() would never
happen in the original thread because it was waiting for the other thread
to finish first.
Happened on the Raspberry Pi.