When using an input buffer pool, the buffer may be released to the pool
when gst_omx_buffer_unmap() is called. We need buf->used to be unset at
this point, as the pool may use it to check the status of the buffer.
{Empty,Fill}BufferDone is called from OMX internal threads while
messages are handled from the GStreamer element's thread. It's best to
do all this when handling the message so we don't mess with the OMX
threads and keep the original thread/logic split.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
This is no longer needed since we implemented the close() vfuncs: the
encoder/decoder base classes already take care of calling close()
(which calls shutdown()) in their own change_state() implementation.
We also move the shutdown of the component from PAUSED_TO_READY to
READY_TO_NULL. By doing so, upstream will already have deactivated the
pool from the encoder and so won't prevent the OMX state change, as the
buffers will all have been released.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
When flushing we should wait both for OMX to send the flush command
complete event AND for all the ports' buffers to be released.
We were stopping as soon as one of those conditions was met.
Fix a race between FillThisBufferDone/EmptyBufferDone and the flush
EventCmdComplete messages. The OMX implementation is supposed to release
its buffers before posting the EventCmdComplete event but the ordering
isn't guaranteed as the FillThisBufferDone/EmptyBufferDone and
EventHandler callbacks can be called from different threads (cf 2.7
'Thread Safety' in the spec).
Only wait for buffers currently used by OMX as some buffers may not be
in the pending queue because they are held downstream.
https://bugzilla.gnome.org/show_bug.cgi?id=789475
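
For illustration, the resulting wait can be sketched with GLib
primitives; the field names below (flush_complete,
buffers_owned_by_omx) are hypothetical, not the actual gst-omx ones:

    g_mutex_lock (&comp->lock);
    /* Wait for BOTH conditions: the EventCmdComplete for the flush
     * command and all buffers currently owned by OMX being returned
     * via {Empty,Fill}BufferDone. Either callback signals the cond. */
    while (!port->flush_complete || port->buffers_owned_by_omx > 0)
      g_cond_wait (&comp->messages_cond, &comp->lock);
    g_mutex_unlock (&comp->lock);
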
As stated in the spec ("6.1.3 Seek Event Sequence") we should pause
before flushing.
We were pausing the decoder but not the encoder, so I just aligned the
two code paths.
https://bugzilla.gnome.org/show_bug.cgi?id=797038
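
Expressed directly with the IL API (the waits for the matching
OMX_EventCmdComplete notifications are omitted), the ordering from
section 6.1.3 is:

    /* 1. pause the component first... */
    OMX_SendCommand (handle, OMX_CommandStateSet, OMX_StatePause, NULL);
    /* ...wait for the state change to complete, then 2. flush */
    OMX_SendCommand (handle, OMX_CommandFlush, OMX_ALL, NULL);
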
According to the OMX spec (3.1.3.7.1), nFilledLen is meant to include
any padding. We used to include the horizontal padding (stride) but not
the vertical one when nSliceHeight is bigger than the actual height.
The calculated nFilledLen was therefore wrong, as it didn't include the
padding between planes.
https://bugzilla.gnome.org/show_bug.cgi?id=796749
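
For example, for an NV12 1920x1080 frame reported with nStride = 2048
and nSliceHeight = 1088 (values picked for illustration):

    OMX_U32 stride = 2048, slice_height = 1088;
    /* Y plane, padded both horizontally and vertically */
    OMX_U32 y_size = stride * slice_height;          /* 2228224 */
    /* interleaved UV plane of NV12: half the padded height */
    OMX_U32 uv_size = stride * slice_height / 2;     /* 1114112 */
    OMX_U32 filled_len = y_size + uv_size;           /* 3342336 */
    /* ignoring the vertical padding would give
     * 2048 * 1080 * 3 / 2 = 3317760 */
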
Increase the number of output buffers by the number of buffers requested
downstream.
This prevents buffer starvation if downstream is going to use dynamic
buffer mode on its input.
https://bugzilla.gnome.org/show_bug.cgi?id=795746
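
A minimal sketch of the idea in a decide_allocation-style handler,
assuming a port structure holding the OMX port definition (field paths
are illustrative):

    GstBufferPool *pool = NULL;
    guint size = 0, min = 0, max = 0;

    if (gst_query_get_n_allocation_pools (query) > 0)
      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size,
          &min, &max);
    if (pool)
      gst_object_unref (pool);

    /* Add what downstream asked for on top of what the OMX output
     * port needs so dynamic buffer mode downstream can't starve us. */
    self->dec_out_port->port_def.nBufferCountActual += min;
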
Tell upstream how many buffers we plan to use so it can adjust its own
number of buffers accordingly if needed.
Same logic as the existing gst_omx_video_enc_propose_allocation().
https://bugzilla.gnome.org/show_bug.cgi?id=795746
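
The same logic, sketched for a propose_allocation handler (port and
field names are illustrative):

    /* Advertise how many input buffers the OMX input port will hold
     * so upstream can size its own pool accordingly. */
    guint num_buffers = self->dec_in_port->port_def.nBufferCountActual;
    guint size = self->dec_in_port->port_def.nBufferSize;

    gst_query_add_allocation_pool (query, NULL, size, num_buffers, 0);
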
If for some reason something goes wrong and we stop the streaming loop,
we may end up with other threads still waiting on the drain cond.
No more buffers will be produced by the component, so they would be
waiting forever.
Fix this by always signalling this cond when stopping the streaming
loop.
https://bugzilla.gnome.org/show_bug.cgi?id=796207
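
A sketch of the fix, assuming a drain_lock/drain_cond pair protecting a
draining flag (names are illustrative):

    /* Called whenever the streaming loop stops, whatever the reason:
     * no more buffers will be produced, so wake up any waiter. */
    g_mutex_lock (&self->drain_lock);
    self->draining = FALSE;
    g_cond_broadcast (&self->drain_cond);
    g_mutex_unlock (&self->drain_lock);
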
The OMX spec states that the nBufferCountActual of a port has to default
to its nBufferCountMin. If we don't change nBufferCountActual we rely
purely on this default. But in some cases OMX may change nBufferCountMin
before we allocate buffers: for example, when configuring the input port
with the actual format, it may decrease the minimum number of buffers
required.
This method checks for this and updates nBufferCountActual if needed, so
we'll use fewer buffers than the worst case in such scenarios.
SetParameter() needs to be called when either the port is disabled or
the component is in the Loaded state.
Don't do this for the decoder output as
gst_omx_video_dec_allocate_output_buffers() already checks
nBufferCountMin when computing the number of output buffers.
On some platforms, like the rpi, the default nBufferCountActual is much
higher than nBufferCountMin, so this is only enabled using a specific
gst-omx hack.
https://bugzilla.gnome.org/show_bug.cgi?id=791211
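
A sketch of such a check with the IL API (error handling omitted):

    OMX_PARAM_PORTDEFINITIONTYPE def;

    memset (&def, 0, sizeof (def));
    def.nSize = sizeof (def);
    def.nVersion.s.nVersionMajor = 1;
    def.nVersion.s.nVersionMinor = 1;
    def.nPortIndex = port_index;

    OMX_GetParameter (handle, OMX_IndexParamPortDefinition, &def);
    if (def.nBufferCountActual != def.nBufferCountMin) {
      def.nBufferCountActual = def.nBufferCountMin;
      /* only allowed while the port is disabled or the component is
       * in the Loaded state */
      OMX_SetParameter (handle, OMX_IndexParamPortDefinition, &def);
    }
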
Setting the input format and the associated encoder/decoder settings
may also affect the nBufferCountMin of the input port.
Refresh the input port so we'll use up-to-date values in propose/decide
allocation.
https://bugzilla.gnome.org/show_bug.cgi?id=796445
gst_omx_component_get_state() used to return early if there was no
pending state change, so if the component raised an error it wasn't
considered to be in the invalid state until the next requested state
change.
Fix this by first checking whether we received an error.
https://bugzilla.gnome.org/show_bug.cgi?id=795874
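
A sketch of the new ordering (the component fields are hypothetical,
the real gst-omx structure differs):

    /* report the invalid state as soon as an error has been recorded,
     * even when no state change is pending */
    if (comp->last_error != OMX_ErrorNone)
      return OMX_StateInvalid;

    /* the old early return happened before the error check above */
    if (!comp->pending_state_change)
      return comp->state;

    /* ...otherwise wait for the pending state change as before... */
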
The OMX stack of the zynqultrascaleplus (the only one supporting
NV12_10LE32 and NV16_10LE32) will now pick the proper profile if none
has been requested. It's best to rely on its default rather than
hardcoding a specific one in gst-omx.
https://bugzilla.gnome.org/show_bug.cgi?id=794319
0xffffffff is the magic number in gst-omx meaning 'the default value
defined in OMX'. This works fine with OMX parameters, which are only set
once when starting the component, but not with configs, which can be
changed while PLAYING.
Save the actual OMX default bitrate so we can restore it later if the
user sets the property back to 0xffffffff.
Added GST_OMX_PROP_OMX_DEFAULT so we stop hardcoding magic numbers
everywhere.
https://bugzilla.gnome.org/show_bug.cgi?id=794998
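
A sketch of how the saved default can be applied when updating the
bitrate config while PLAYING (field names are illustrative,
GST_OMX_PROP_OMX_DEFAULT being the 0xffffffff value mentioned above):

    #define GST_OMX_PROP_OMX_DEFAULT 0xffffffff

    OMX_VIDEO_CONFIG_BITRATETYPE config;

    memset (&config, 0, sizeof (config));
    config.nSize = sizeof (config);
    config.nVersion.s.nVersionMajor = 1;
    config.nVersion.s.nVersionMinor = 1;
    config.nPortIndex = out_port_index;
    /* fall back to the bitrate the component reported at startup when
     * the property is (re)set to the 'use the OMX default' magic */
    config.nEncodeBitrate =
        (self->target_bitrate != GST_OMX_PROP_OMX_DEFAULT) ?
        self->target_bitrate : self->default_target_bitrate;

    OMX_SetConfig (handle, OMX_IndexConfigVideoBitrate, &config);
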
We weren't using the usual pattern when re-setting the bitrate:
- get parameters from OMX
- update only the fields different from 0xffffffff (OMX defaults)
- set parameters
Also added a comment explaining why we re-set this param.
https://bugzilla.gnome.org/show_bug.cgi?id=794998
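
The pattern, sketched with the IL API (property field names are
illustrative):

    OMX_VIDEO_PARAM_BITRATETYPE bitrate;

    /* get parameters from OMX */
    memset (&bitrate, 0, sizeof (bitrate));
    bitrate.nSize = sizeof (bitrate);
    bitrate.nVersion.s.nVersionMajor = 1;
    bitrate.nVersion.s.nVersionMinor = 1;
    bitrate.nPortIndex = out_port_index;
    OMX_GetParameter (handle, OMX_IndexParamVideoBitrate, &bitrate);

    /* update only the fields different from 0xffffffff */
    if (self->control_rate != 0xffffffff)
      bitrate.eControlRate = self->control_rate;
    if (self->target_bitrate != 0xffffffff)
      bitrate.nTargetBitrate = self->target_bitrate;

    /* set parameters */
    OMX_SetParameter (handle, OMX_IndexParamVideoBitrate, &bitrate);
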
- Report the error from OMX if any (OMX_EventError)
- If not, report the failure to the application (GST_ELEMENT_ERROR)
- Return GST_FLOW_ERROR rather than FALSE
- Don't leak @frame
https://bugzilla.gnome.org/show_bug.cgi?id=795352
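
The resulting error path looks roughly like this (message text and
helper usage are illustrative):

    OMX_ERRORTYPE err = gst_omx_component_get_last_error (self->enc);

    if (err != OMX_ErrorNone) {
      /* report the error coming from OMX (OMX_EventError) */
      GST_ELEMENT_ERROR (self, LIBRARY, ENCODE, (NULL),
          ("OMX component reported %s (0x%08x)",
           gst_omx_error_to_string (err), (guint) err));
    } else {
      /* otherwise report the failure itself to the application */
      GST_ELEMENT_ERROR (self, LIBRARY, ENCODE, (NULL),
          ("Failed to write input into the OMX buffer"));
    }
    gst_video_codec_frame_unref (frame);   /* don't leak @frame */
    return GST_FLOW_ERROR;                 /* not FALSE */
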
We already have the exact same message at the beginning of
gst_omx_video_enc_handle_frame(). Having it twice is confusing when
reading/grepping logs.
I kept the earlier one to keep the symmetry with
gst_omx_video_dec_handle_frame().
https://bugzilla.gnome.org/show_bug.cgi?id=794897