If buffers were released from the pool while
gst_omx_video_enc_handle_frame() was waiting for new buffers,
gst_omx_port_acquire_buffer() was never woken up, as the buffers weren't
released through OMX's messaging system.
GQueue isn't thread-safe, so also protect it with the lock mutex.
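A minimal sketch of the resulting release path (the struct and field names are illustrative stand-ins for the gst-omx ones, not the actual code):

    #include <glib.h>

    /* Illustrative port state; gst-omx keeps similar fields around
     * gst_omx_port_acquire_buffer(). */
    typedef struct {
      GMutex lock;
      GCond cond;     /* waited on while no buffer is available */
      GQueue pending; /* available buffers; GQueue is not thread-safe */
    } Port;

    static void
    port_release_buffer (Port *port, gpointer buf)
    {
      g_mutex_lock (&port->lock);
      /* The queue is only ever touched with the lock held... */
      g_queue_push_tail (&port->pending, buf);
      /* ...and the waiter is woken up explicitly, even when the release
       * didn't go through OMX's messaging system. */
      g_cond_broadcast (&port->cond);
      g_mutex_unlock (&port->lock);
    }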
We used to track the 'allocating' status on the pool. It was used while
allocating so output buffers weren't passed right away to OMX and input
ones weren't re-added to the pending queue.
This was causing a bug when exporting buffers to v4l2src. On start,
v4l2src acquires a buffer, reads its stride and releases it right away.
As no buffer had been received by the encoder element at this point,
'allocating' was still TRUE, so the buffer wasn't put back in the
pending queue and, as a result, was no longer available to the pool.
Fix this by checking the active status of the pool instead of tracking
it manually. The pool is considered active only at the very end of the
activation process, so we're covered when buffers are released during
activation.
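The check itself boils down to something like this (the function name is hypothetical; gst_buffer_pool_is_active() is the stock GstBufferPool API):

    #include <gst/gst.h>

    /* Sketch of the decision made when a buffer comes back to the pool. */
    static gboolean
    buffer_should_go_back_to_omx (GstBufferPool *pool)
    {
      /* gst_buffer_pool_is_active() only becomes TRUE at the very end of
       * the activation process, so a buffer released while activating
       * (e.g. v4l2src peeking at the stride) stays with the pool instead
       * of being handed to OMX or re-queued too early. */
      return gst_buffer_pool_is_active (pool);
    }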
The methods we call in the context of pushing a buffer are all
thread-safe. Holding a lock would prevent input buffers from being queued while
pushing.
https://bugzilla.gnome.org/show_bug.cgi?id=715192
The base class methods will lock this properly when needed; there seems
to be no need to lock it explicitly.
This allows the patch in gstvideodec that unlocks the stream lock when
pushing buffers out to work.
https://bugzilla.gnome.org/show_bug.cgi?id=715192
We already have code configuring the encoder stride and slice height
when receiving the first buffer from upstream.
We don't have an equivalent when the encoder is exporting its buffers to the
decoder.
There is no point adding one and making the code even more complex, as
we wouldn't gain anything by exporting from the encoder to the decoder:
dynamic buffer mode already ensures zero-copy between OMX components.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
Propose a pool upstream so input buffers can be allocated by the port and
exported as dmabuf.
The actual OMX buffers are allocated when the pool is activated, so we
don't end up doing useless allocations if the pool isn't used.
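A hedged sketch of the propose_allocation side (create_input_port_pool() and the size/count values are stand-ins, not the actual gst-omx code):

    #include <gst/video/gstvideoencoder.h>

    /* Hypothetical helper wrapping the input port in a GstBufferPool. */
    extern GstBufferPool *create_input_port_pool (GstVideoEncoder *enc);

    static gboolean
    my_enc_propose_allocation (GstVideoEncoder *enc, GstQuery *query)
    {
      GstBufferPool *pool = create_input_port_pool (enc);
      guint size = 0, min = 4; /* assumed values */

      /* Offer the pool to upstream; the OMX buffers themselves are only
       * allocated on pool activation, so nothing is wasted if upstream
       * picks another allocator. */
      gst_query_add_allocation_pool (query, pool, size, min, 0);
      gst_object_unref (pool);

      return TRUE;
    }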
https://bugzilla.gnome.org/show_bug.cgi?id=796918
The OMX state transition to Loaded won't complete until all buffers
have been freed. There is no point waiting, and timing out, if we know
that output buffers haven't been freed yet.
The typical scenario is output buffers still being used downstream and
freed later, when they are released back to the pool.
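The guard amounts to something like this ('pending' standing in for the port's outstanding-buffer count):

    #include <glib.h>

    /* Sketch: only start the (timed) wait for OMX_StateLoaded once every
     * buffer is back; otherwise we'd just sit there until the timeout. */
    static gboolean
    can_wait_for_loaded (guint pending)
    {
      return pending == 0;
    }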
https://bugzilla.gnome.org/show_bug.cgi?id=796918
Now that the pool is responsible for freeing the OMX buffers, we need
to ensure that the OMX component stays alive as long as the pool does,
since we rely on the component to free the buffers.
The GstOMXPort is owned by the component, so there is no need to ref it.
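One way to express this ownership is refcounting, which the self-contained stand-ins below model (Component and Pool are illustrative types, not the gst-omx structs):

    #include <glib.h>

    typedef struct { gint refcount; } Component; /* stand-in */

    static Component *
    component_ref (Component *comp)
    {
      g_atomic_int_inc (&comp->refcount);
      return comp;
    }

    static void
    component_unref (Component *comp)
    {
      if (g_atomic_int_dec_and_test (&comp->refcount))
        g_free (comp); /* last ref gone: tear the component down */
    }

    typedef struct {
      Component *component; /* strong ref: freeing buffers needs it */
      gpointer port;        /* borrowed: the port is owned by the component */
    } Pool;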
https://bugzilla.gnome.org/show_bug.cgi?id=796918
The pool is stopped when all the buffers have been released. Deallocate
when stopping so we are sure that the buffers aren't still used by
another element.
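A sketch of the stop path (MyPool and deallocate_omx_buffers() are illustrative; the real code lives in GstOMXBufferPool and chains up to the parent class):

    #include <gst/gst.h>

    typedef struct { GstBufferPool parent; gpointer port; } MyPool; /* stand-in */

    /* Hypothetical stand-in for the port deallocation call. */
    static void
    deallocate_omx_buffers (gpointer port)
    {
      /* give every OMX buffer back to the component */
    }

    /* GstBufferPool::stop only runs once all buffers have been released
     * back to the pool, so no other element can still be using them. */
    static gboolean
    my_pool_stop (GstBufferPool *bpool)
    {
      deallocate_omx_buffers (((MyPool *) bpool)->port);
      return TRUE;
    }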
https://bugzilla.gnome.org/show_bug.cgi?id=796918
When using an input buffer pool, the buffer may be released to the pool
when gst_omx_buffer_unmap() is called. buf->used needs to be unset by
this point, as the pool may use it to check the buffer's status.
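The ordering constraint, as a sketch (OmxBuf and buf_unmap() are stand-ins for GstOMXBuffer and gst_omx_buffer_unmap()):

    #include <glib.h>

    typedef struct { gboolean used; } OmxBuf; /* stand-in */

    /* Stand-in for gst_omx_buffer_unmap(): with an input pool attached,
     * unmapping may release the buffer straight back to the pool. */
    static void
    buf_unmap (OmxBuf *buf)
    {
      /* unmap; possibly hands the buffer back to the pool */
    }

    static void
    handle_empty_buffer_done (OmxBuf *buf)
    {
      buf->used = FALSE; /* clear first: the pool reads it on release */
      buf_unmap (buf);
    }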
{Empty,Fill}BufferDone is called from OMX internal threads, while
messages are handled from the GStreamer elements' thread. It's best to
do all this when handling the message, so we don't mess with OMX
threads and keep the original thread/logic split.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
This is no longer needed since we implemented the close() vfuncs: the
encoder/decoder base classes already take care of calling close() (which
calls shutdown()) in their own change_state implementations.
We also move the shutdown of the component from PAUSED_TO_READY to
READY_TO_NULL. By then upstream will already have deactivated the pool
from the encoder, so it won't be preventing the OMX state change, as the
buffers will all have been released.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
When flushing, we should wait for OMX to send the flush command
complete event AND for all the ports' buffers to be released.
We were stopping as soon as either of those conditions was met.
This fixes a race between the FillBufferDone/EmptyBufferDone and the
flush EventCmdComplete messages. The OMX implementation is supposed to
release its buffers before posting the EventCmdComplete event, but the
ordering isn't guaranteed, as the FillBufferDone/EmptyBufferDone and
EventHandler callbacks can be called from different threads (cf. section
2.7 'Thread Safety' in the spec).
Only wait for buffers currently used by OMX, as some buffers may not be
in the pending queue because they are held downstream.
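The corrected wait, sketched with stand-in names (the real state lives on the component and ports):

    #include <glib.h>

    static void
    wait_flush_done (GMutex *lock, GCond *cond,
        gboolean *flush_complete, guint *buffers_used_by_omx)
    {
      g_mutex_lock (lock);
      /* Both conditions must hold; the old code stopped on either one.
       * Only buffers still in OMX's hands are counted here: buffers held
       * downstream are deliberately left out. */
      while (!*flush_complete || *buffers_used_by_omx > 0)
        g_cond_wait (cond, lock);
      g_mutex_unlock (lock);
    }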
https://bugzilla.gnome.org/show_bug.cgi?id=789475
As stated in the spec ("6.1.3 Seek Event Sequence"), we should pause
before flushing.
We were pausing the decoder but not the encoder, so I just aligned the
two code paths.
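In raw OMX terms the sequence looks like this (error handling and the completion waits are elided):

    #include <OMX_Core.h>

    static void
    pause_then_flush (OMX_HANDLETYPE handle)
    {
      /* "6.1.3 Seek Event Sequence": pause first... */
      OMX_SendCommand (handle, OMX_CommandStateSet, OMX_StatePause, NULL);
      /* ...wait for the state change EventCmdComplete, then flush. */
      OMX_SendCommand (handle, OMX_CommandFlush, OMX_ALL, NULL);
    }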
https://bugzilla.gnome.org/show_bug.cgi?id=797038
According to the OMX spec (3.1.3.7.1) nFilledLen is meant to include any
padding. We used to include the horizontal padding (stride) but not
the vertical one when nSliceHeight is bigger than the actual height.
The calculated nFilledLen was wrong, as it didn't include the padding
between planes.
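A worked example for I420 (the helper is illustrative; stride and slice height come from the port definition):

    #include <glib.h>

    /* nFilledLen per OMX IL 3.1.3.7.1 must count padding: horizontal via
     * nStride and vertical via nSliceHeight.  E.g. with height = 1080 and
     * nSliceHeight = 1088, the 8 padding rows between planes count too. */
    static gsize
    i420_nfilledlen (gsize stride, gsize slice_height)
    {
      gsize y_plane = stride * slice_height;
      gsize chroma  = (stride / 2) * (slice_height / 2); /* per U/V plane */

      return y_plane + 2 * chroma;
    }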
https://bugzilla.gnome.org/show_bug.cgi?id=796749
Increase the number of output buffers by the number of buffers requested
downstream.
This prevents buffer starvation if downstream is going to use dynamic
buffer mode on its input.
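Roughly, on the decide_allocation side (the query parsing is the stock GStreamer API; applying the count to the port's nBufferCountActual is the gst-omx side):

    #include <gst/gst.h>

    /* Read how many buffers downstream asked for, to be added on top of
     * the output port's own nBufferCountActual. */
    static guint
    downstream_extra_buffers (GstQuery *query)
    {
      guint size = 0, min = 0, max = 0;

      if (gst_query_get_n_allocation_pools (query) > 0)
        gst_query_parse_nth_allocation_pool (query, 0, NULL, &size, &min, &max);

      return min; /* buffers downstream may hold on to */
    }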
https://bugzilla.gnome.org/show_bug.cgi?id=795746
Tell upstream how many buffers we plan to use, so it can adjust its
own number of buffers accordingly if needed.
Same logic as the existing gst_omx_video_enc_propose_allocation().
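Sketched, with 'port_buffer_count' standing in for the port's nBufferCountActual:

    #include <gst/gst.h>

    /* Advertise our planned buffer usage without forcing a pool (NULL),
     * so upstream can grow its own allocation accordingly. */
    static void
    advertise_buffer_count (GstQuery *query, guint size,
        guint port_buffer_count)
    {
      gst_query_add_allocation_pool (query, NULL, size, port_buffer_count, 0);
    }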
https://bugzilla.gnome.org/show_bug.cgi?id=795746
If something goes wrong and we stop the streaming loop, we may end up
with other threads still waiting on the drain cond. No more buffers
will be produced by the component, so they would be waiting forever.
Fix this by always signalling this cond when stopping the streaming
loop.
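A sketch of the unconditional wake-up (field names are illustrative):

    #include <glib.h>

    static void
    stop_streaming_loop (GMutex *drain_lock, GCond *drain_cond,
        gboolean *drained)
    {
      g_mutex_lock (drain_lock);
      /* No more buffers will ever arrive to satisfy a waiter, so wake
       * them all up unconditionally. */
      *drained = TRUE;
      g_cond_broadcast (drain_cond);
      g_mutex_unlock (drain_lock);
    }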
https://bugzilla.gnome.org/show_bug.cgi?id=796207