Our encoder implementation currently supports only a small subset of
the formats supported by the decoder. Those are the formats for which we
have a copy path in gst_omx_video_enc_fill_buffer() and which are not
filtered out in filter_supported_formats().
These debug messages have proved very helpful when debugging timestamp
issues, which are often caused by gst-omx picking the wrong frame when
trying to map from OMX.
We used to manually track an 'allocating' flag on the pool. It was set
while allocating so that output buffers weren't passed right away to
OMX and input ones weren't re-added to the pending queue.
This was causing a bug when exporting buffers to v4l2src. On start,
v4l2src acquires a buffer, reads its stride and releases it right away.
As the encoder element had not received any buffer at that point,
'allocating' was still TRUE, so the buffer wasn't put back into the
pending queue and, as a result, was no longer available to the pool.
Fix this by checking the active status of the pool instead of tracking
it manually. The pool is considered active only at the very end of the
activation process, so we're safe when buffers are released during
activation.
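A rough sketch of the idea (not the actual gst-omx code; the pending
queue and the fallback path are placeholders):

    #include <gst/gst.h>

    /* decide what to do with a released buffer based on the pool's own
     * active state instead of a hand-rolled 'allocating' flag */
    static void
    handle_released_buffer (GstBufferPool * pool, GQueue * pending,
        GstBuffer * buffer)
    {
      /* gst_buffer_pool_is_active() only returns TRUE once the
       * activation process has fully completed, so a buffer released
       * while v4l2src probes the stride during activation lands here
       * and stays available to the pool */
      if (!gst_buffer_pool_is_active (pool)) {
        g_queue_push_tail (pending, buffer);
        return;
      }

      /* once active, buffers follow the normal OMX release path
       * (placeholder) */
      gst_buffer_unref (buffer);
    }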
The base class methods will lock this properly when needed; there
seems to be no need to lock it explicitly.
This allows the gstvideodec patch that unlocks the stream lock when
pushing buffers out to work.
https://bugzilla.gnome.org/show_bug.cgi?id=715192
We already have code configuring the encoder stride and slice height
when receiving the first buffer from upstream.
We don't have an equivalent when the encoder is exporting its buffers to the
decoder.
There is no point adding it and making the code even more complex, as
we wouldn't gain anything by exporting from the encoder to the decoder:
the dynamic buffer mode already ensures zero-copy between OMX
components.
https://bugzilla.gnome.org/show_bug.cgi?id=796918
The OMX state transition to Loaded won't complete until all buffers
have been freed. There is no point waiting, and timing out, if we know
that output buffers haven't been freed yet.
The typical scenario is output buffers still being used downstream and
freed later, when they are released back to the pool.
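A sketch of the check, loosely based on gst-omx internals (the port
and field names are assumptions):

    /* only block waiting for OMX_StateLoaded when the output buffers
     * have actually been freed; otherwise the wait can only time out */
    if (self->dec_out_port->buffers && self->dec_out_port->buffers->len > 0) {
      GST_DEBUG_OBJECT (self,
          "output buffers still allocated, not waiting for Loaded");
    } else {
      gst_omx_component_get_state (self->dec, 5 * GST_SECOND);
    }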
https://bugzilla.gnome.org/show_bug.cgi?id=796918
The pool is stopped once all the buffers have been released.
Deallocate them when stopping, so we are sure the buffers are no longer
used by another element.
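A sketch of the pool's stop() vfunc (helper and field names follow
gst-omx but are assumptions, as is the usual parent_class from
G_DEFINE_TYPE):

    static gboolean
    gst_omx_buffer_pool_stop (GstBufferPool * bpool)
    {
      GstOMXBufferPool *pool = GST_OMX_BUFFER_POOL (bpool);

      /* stop() only runs once every buffer has been released back to
       * the pool, so nothing else can still be using them */
      gst_omx_port_deallocate_buffers (pool->port);

      return GST_BUFFER_POOL_CLASS (parent_class)->stop (bpool);
    }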
https://bugzilla.gnome.org/show_bug.cgi?id=796918
This is no longer needed since we implemented the close() vfuncs: the
encoder/decoder base classes already take care of calling close()
(which calls shutdown()) in their own change_state implementations.
We also move the shutdown of the component from PAUSED_TO_READY to
READY_TO_NULL. By doing so, upstream will already have deactivated the
pool from the encoder, and so won't prevent the OMX state change as all
the buffers will have been released.
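A skeleton of the mechanism (names follow gst-omx but treat them as
illustrative): implement the close() vfunc and let the base class
invoke it during READY_TO_NULL.

    static gboolean
    gst_omx_video_dec_close (GstVideoDecoder * decoder)
    {
      /* GstVideoDecoder calls close() during READY_TO_NULL; by then
       * upstream has deactivated the pool, so all buffers have been
       * released and the OMX transition to Loaded can complete */
      return gst_omx_video_dec_shutdown (GST_OMX_VIDEO_DEC (decoder));
    }

    /* in class_init: */
    video_decoder_class->close = GST_DEBUG_FUNCPTR (gst_omx_video_dec_close);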
https://bugzilla.gnome.org/show_bug.cgi?id=796918
Increase the number of output buffers by the number of buffers
requested downstream.
This prevents buffer starvation if downstream is going to use dynamic
buffer mode on its input.
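A sketch of the adjustment in decide_allocation (the port/field names
follow gst-omx but are assumptions; the real code also pushes the
updated definition back to the component):

    GstBufferPool *pool = NULL;
    guint size = 0, min = 0, max = 0;

    if (gst_query_get_n_allocation_pools (query) > 0)
      gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min,
          &max);

    /* keep 'min' extra buffers on the OMX side so downstream holding
     * its share (e.g. in dynamic buffer mode) can't starve the
     * component */
    self->dec_out_port->port_def.nBufferCountActual += min;

    if (pool)
      gst_object_unref (pool);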
https://bugzilla.gnome.org/show_bug.cgi?id=795746
Tell upstream how many buffers we plan to use so it can adjust its own
number of buffers accordingly if needed.
Same logic as the existing gst_omx_video_enc_propose_allocation().
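A sketch of such a propose_allocation on the decoder (field names
follow gst-omx's port definition structs; treat the details as
assumptions):

    static gboolean
    gst_omx_video_dec_propose_allocation (GstVideoDecoder * decoder,
        GstQuery * query)
    {
      GstOMXVideoDec *self = GST_OMX_VIDEO_DEC (decoder);
      guint size = self->dec_in_port->port_def.nBufferSize;
      guint num_buffers = self->dec_in_port->port_def.nBufferCountActual;

      /* advertise how many input buffers we'll hold; no pool is
       * provided, upstream only uses the count and size as hints */
      gst_query_add_allocation_pool (query, NULL, size, num_buffers, 0);

      /* chain up so the base class fills in the defaults */
      return GST_VIDEO_DECODER_CLASS (parent_class)->propose_allocation
          (decoder, query);
    }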
https://bugzilla.gnome.org/show_bug.cgi?id=795746
If for some reason something goes wrong and we stop the streaming
loop, we may end up with other threads still waiting on the drain cond.
No more buffers will be produced by the component, so they would be
waiting forever.
Fix this by always signalling this cond when stopping the streaming
loop.
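A minimal sketch of the fix (the drain lock/cond fields follow
gst-omx's video decoder but are assumptions here):

    /* run whenever the streaming loop stops, whatever the reason */
    g_mutex_lock (&self->drain_lock);
    if (self->draining) {
      /* no more buffers will come: wake up the waiting thread instead
       * of letting it block on the cond forever */
      self->draining = FALSE;
      g_cond_broadcast (&self->drain_cond);
    }
    g_mutex_unlock (&self->drain_lock);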
https://bugzilla.gnome.org/show_bug.cgi?id=796207
The OMX spec states that the nBufferCountActual of a port has to
default to its nBufferCountMin. If we don't change nBufferCountActual
we rely purely on this default. But in some cases OMX may change
nBufferCountMin before we allocate buffers; for example, configuring
the input port with the actual format may decrease the number of
minimal buffers required. This method checks this and updates
nBufferCountActual if needed, so we'll use fewer buffers than the
worst case in such scenarios.
SetParameter() needs to be called while the port is disabled or the
component is in the Loaded state.
Don't do this for the decoder output, as
gst_omx_video_dec_allocate_output_buffers() already checks
nBufferCountMin when computing the number of output buffers.
On some platforms, like the rpi, the default nBufferCountActual is much
higher than nBufferCountMin, so only enable this using a specific
gst-omx hack.
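The check boils down to the standard OMX get/set dance on the port
definition; a sketch, assuming 'handle' and 'port_index' for the
component handle and port:

    #include <string.h>
    #include <OMX_Core.h>
    #include <OMX_Component.h>

    OMX_PARAM_PORTDEFINITIONTYPE def;

    memset (&def, 0, sizeof (def));
    def.nSize = sizeof (def);
    def.nVersion.s.nVersionMajor = 1;  /* fill in the IL version in use */
    def.nPortIndex = port_index;

    OMX_GetParameter (handle, OMX_IndexParamPortDefinition, &def);

    /* follow nBufferCountMin if it changed after the format was
     * configured; only legal while the port is disabled or the
     * component is still in OMX_StateLoaded */
    if (def.nBufferCountActual != def.nBufferCountMin) {
      def.nBufferCountActual = def.nBufferCountMin;
      OMX_SetParameter (handle, OMX_IndexParamPortDefinition, &def);
    }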
https://bugzilla.gnome.org/show_bug.cgi?id=791211
Setting the input format and the associated encoder/decoder settings
may also affect the nBufferCountMin of the input port.
Refresh the input port so we use up-to-date values in propose/decide
allocation.
https://bugzilla.gnome.org/show_bug.cgi?id=796445
- Report the error from OMX if any (OMX_EventError)
- Otherwise, report the failure to the application (GST_ELEMENT_ERROR)
- Return GST_FLOW_ERROR rather than FALSE
- Don't leak @frame (error path sketched below)
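An illustrative shape of that error path (the helper names loosely
follow gst-omx; treat them as assumptions):

    component_error:
      {
        OMX_ERRORTYPE err = gst_omx_component_get_last_error (self->enc);

        if (err != OMX_ErrorNone) {
          /* an OMX_EventError was received: report that error */
          GST_ELEMENT_ERROR (self, LIBRARY, FAILED, (NULL),
              ("OMX component in error state %s (0x%08x)",
                  gst_omx_error_to_string (err), err));
        } else {
          /* no OMX error: still report the failure to the application */
          GST_ELEMENT_ERROR (self, STREAM, ENCODE, (NULL),
              ("Failed to write input into the OMX buffer"));
        }

        /* don't leak the frame handed to us by the base class */
        gst_video_codec_frame_unref (frame);

        return GST_FLOW_ERROR;  /* a GstFlowReturn, not a boolean */
      }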
https://bugzilla.gnome.org/show_bug.cgi?id=795352
This hack tries to pass as much information as possible from the caps
to the decoder before it receives any buffer. This information can be
used by the OMX decoder to, for example, pre-allocate its internal
buffers before it starts decoding, and so reduce its initial latency.
This mechanism is currently supported by the zynqultrascaleplus decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792040
Ignore the framerate change if it is less than 1%.
A dynamic format change should not happen when the resolution does not
change and the framerate only changes very slightly, e.g. from
50000/1677=29.81 to 89/3=29.66, a "percentage change" of less than 1%
(i.e. 100*(29.81-29.66)/29.66 = 0.50 < 1). In that case just ignore it
to avoid an unnecessary renegotiation.
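The check is simple arithmetic; a sketch (the function name is made
up):

    /* TRUE if the framerate change is big enough (>= 1%) to warrant a
     * renegotiation */
    static gboolean
    framerate_changed_significantly (gint old_n, gint old_d,
        gint new_n, gint new_d)
    {
      gdouble old_fps = (gdouble) old_n / old_d;
      gdouble new_fps = (gdouble) new_n / new_d;

      /* e.g. 50000/1677 = 29.81 vs 89/3 = 29.66:
       * 100 * (29.81 - 29.66) / 29.66 = 0.50 < 1 -> not significant */
      return 100.0 * ABS (old_fps - new_fps) / new_fps >= 1.0;
    }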
https://bugzilla.gnome.org/show_bug.cgi?id=759043
Some live streams can set the framerate to 50000/1677 (=29.81).
Computing GstVideoInfo.fps_n << 16 is wrong if fps_n is greater than
32767 (e.g. 50000), as the shift overflows a signed 32-bit integer.
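One safe way to build the Q16 value (the actual fix may differ) is to
scale in 64 bits, e.g. with gst_util_uint64_scale_int(), where 'info'
is the GstVideoInfo:

    /* 50000 << 16 = 3276800000, which doesn't fit in a signed 32-bit
     * int; do the maths in 64 bits, the Q16 result itself is small */
    OMX_U32 xframerate =
        gst_util_uint64_scale_int (info->fps_n, 1 << 16, info->fps_d);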
https://bugzilla.gnome.org/show_bug.cgi?id=759043
The usual pattern when setting OMX params is to first get the current
param struct, override the values we want to change, and then set the
updated param.
We were not doing this with OMX_IndexParamVideoPortFormat and so were
resetting some fields, such as OMX_VIDEO_PARAM_PORTFORMATTYPE.xFramerate.
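The pattern, as applied to this index (GST_OMX_INIT_STRUCT and the
component helpers are gst-omx's, written here from memory):

    OMX_VIDEO_PARAM_PORTFORMATTYPE param;
    OMX_ERRORTYPE err;

    GST_OMX_INIT_STRUCT (&param);
    param.nPortIndex = port->index;

    /* 1) get the current values first... */
    err = gst_omx_component_get_parameter (comp,
        OMX_IndexParamVideoPortFormat, &param);
    if (err != OMX_ErrorNone)
      return err;

    /* 2) ...override only what we mean to change... */
    param.eColorFormat = OMX_COLOR_FormatYUV420SemiPlanar;

    /* 3) ...and write the whole struct back: xFramerate and friends
     * keep the values the component reported */
    err = gst_omx_component_set_parameter (comp,
        OMX_IndexParamVideoPortFormat, &param);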
https://bugzilla.gnome.org/show_bug.cgi?id=790979
Partially revert 1b7d0b8 ("omxvideodec: handle IL 1.2 behavior for
OMX_SetParameter").
It turned out the problem was in the decoder, which was not updating
some local variables upon SetParameter.
For the record, the parallel port disable was introduced by
"00be69f omxvideodec: Disable output port when setting a new format"
and then replicated to videoenc, audiodec and audioenc.
A 'parallel' disable is only required if buffers are shared between
ports. But for decoders and encoders the input and output buffers are
by definition of a different nature (bitstream vs images), so they
cannot be shared.
Also, starting from IL 1.2.0 the spec states that a parallel disable is
not allowed and will return an error, except when buffers are shared.
Again, here we know in advance that they are not shared, so let's
always do a sequential disable.
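A sequential disable then looks roughly like this (the port helpers
follow gst-omx's API; the timeouts are arbitrary):

    /* fully tear down the input port first */
    gst_omx_port_set_enabled (in_port, FALSE);
    gst_omx_port_wait_buffers_released (in_port, 5 * GST_SECOND);
    gst_omx_port_deallocate_buffers (in_port);
    gst_omx_port_wait_enabled (in_port, 1 * GST_SECOND);

    /* ...and only then the output port, so the two
     * OMX_CommandPortDisable commands never overlap, which IL 1.2
     * forbids unless buffers are shared between the ports */
    gst_omx_port_set_enabled (out_port, FALSE);
    gst_omx_port_wait_buffers_released (out_port, 1 * GST_SECOND);
    gst_omx_port_deallocate_buffers (out_port);
    gst_omx_port_wait_enabled (out_port, 1 * GST_SECOND);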
Tested on Desktop, rpi and zynqultrascaleplus.
https://bugzilla.gnome.org/show_bug.cgi?id=786348