Increase the number of output buffers by the number of buffers requested
downstream.
Prevent buffer starvation if downstream is going to use dynamic buffer
mode on its input.
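A rough sketch of the idea (assuming the allocation query is available
when the output buffers are allocated; 'port' stands for the gst-omx
output port object):

  /* Bump the OMX output buffer count by the number of buffers
   * downstream asked for in the allocation query. */
  guint size = 0, min = 0, max = 0;
  GstBufferPool *pool = NULL;

  if (gst_query_get_n_allocation_pools (query) > 0) {
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    if (pool)
      gst_object_unref (pool);
  }

  /* OMX still needs its own minimum; downstream buffers come on top */
  port->port_def.nBufferCountActual = port->port_def.nBufferCountMin + min;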
https://bugzilla.gnome.org/show_bug.cgi?id=795746
Tell upstream how many buffers we plan to use so it can adjust its own
number of buffers accordingly if needed.
Same logic as the existing gst_omx_video_enc_propose_allocation().
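A minimal sketch of that logic on the decoder side (type and field names
follow the gst-omx conventions, from memory):

  static gboolean
  gst_omx_video_dec_propose_allocation (GstVideoDecoder * decoder,
      GstQuery * query)
  {
    GstOMXVideoDec *self = GST_OMX_VIDEO_DEC (decoder);
    guint size = self->dec_in_port->port_def.nBufferSize;

    /* Advertise how many buffers the OMX input port will hold so
     * upstream can size its own pool accordingly. */
    gst_query_add_allocation_pool (query, NULL, size,
        self->dec_in_port->port_def.nBufferCountActual, 0);

    return GST_VIDEO_DECODER_CLASS
        (gst_omx_video_dec_parent_class)->propose_allocation (decoder, query);
  }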
https://bugzilla.gnome.org/show_bug.cgi?id=795746
If for some reason something goes wrong and we stop the streaming loop,
we may end up with other threads still waiting on the drain cond.
No more buffers will be produced by the component, so they would wait
forever.
Fix this by always signalling this cond when stopping the streaming
loop.
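A sketch of the fix (assuming the usual gst-omx drain_lock/drain_cond
pair):

  /* When leaving the streaming loop, wake up anyone waiting for the
   * drain to complete: no more buffers will ever be produced. */
  g_mutex_lock (&self->drain_lock);
  self->draining = FALSE;
  g_cond_broadcast (&self->drain_cond);
  g_mutex_unlock (&self->drain_lock);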
https://bugzilla.gnome.org/show_bug.cgi?id=796207
The OMX spec states that the nBufferCountActual of a port has to default
to its nBufferCountMin. If we don't change nBufferCountActual we purely
rely on this default. But in some cases, OMX may change nBufferCountMin
before we allocate buffers. For example, when configuring the input port
with the actual format, it may decrease the minimal number of buffers
required.
This method checks for this and updates nBufferCountActual if needed so
we'll use fewer buffers than the worst case in such scenarios.
SetParameter() needs to be called when the port is either disabled or
the component is in the Loaded state.
Don't do this for the decoder output as
gst_omx_video_dec_allocate_output_buffers() already checks
nBufferCountMin when computing the number of output buffers.
On some platforms, like the rpi, the default nBufferCountActual is much
higher than nBufferCountMin, so only enable this using a specific gst-omx
hack.
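A sketch of what such a check could look like (the function name is
illustrative, not necessarily the one used in gst-omx):

  static OMX_ERRORTYPE
  gst_omx_port_update_buffer_count_actual (GstOMXPort * port)
  {
    OMX_PARAM_PORTDEFINITIONTYPE def;
    OMX_ERRORTYPE err;

    GST_OMX_INIT_STRUCT (&def);
    def.nPortIndex = port->index;

    err = gst_omx_component_get_parameter (port->comp,
        OMX_IndexParamPortDefinition, &def);
    if (err != OMX_ErrorNone)
      return err;

    /* nBufferCountMin may have shrunk since the port was configured */
    if (def.nBufferCountActual != def.nBufferCountMin) {
      def.nBufferCountActual = def.nBufferCountMin;
      err = gst_omx_component_set_parameter (port->comp,
          OMX_IndexParamPortDefinition, &def);
    }

    return err;
  }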
https://bugzilla.gnome.org/show_bug.cgi?id=791211
Setting the input format and the associated encoder/decoder settings
may also affect the nBufferCountMin of the input port.
Refresh the input port so we'll use up-to-date values in propose/decide
allocation.
https://bugzilla.gnome.org/show_bug.cgi?id=796445
gst_omx_component_get_state() used to return early if there was no
pending state change. So if the component raised an error it wasn't
considered to be in the invalid state until the next requested state
change.
Fix this by first checking whether we received an error.
https://bugzilla.gnome.org/show_bug.cgi?id=795874
The OMX stack of the zynqultrascaleplus (the only one supporting
NV12_10LE32 and NV16_10LE32) will now pick the proper profile if none
has been requested. Better to rely on its default than to hardcode a
specific one in gst-omx.
https://bugzilla.gnome.org/show_bug.cgi?id=794319
0xffffffff is the magic number in gst-omx meaning 'the default value
defined in OMX'. This works fine with OMX parameters which are only set
once when starting the component, but not with configs, which can be
changed while PLAYING.
Save the actual OMX default bitrate so we can restore it later if the
user sets the property back to 0xffffffff.
Added GST_OMX_PROP_OMX_DEFAULT so we stop hardcoding magic numbers
everywhere.
https://bugzilla.gnome.org/show_bug.cgi?id=794998
We weren't using the usual pattern when re-setting the bitrate:
- get parameters from OMX
- update only the fields different from 0xffffffff (OMX defaults)
- set parameters
Also added a comment explaining why we re-set this param.
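The pattern looks roughly like this (gst-omx field names from memory):

  OMX_VIDEO_PARAM_BITRATETYPE param;
  OMX_ERRORTYPE err;

  GST_OMX_INIT_STRUCT (&param);
  param.nPortIndex = self->enc_out_port->index;

  /* get the current values first... */
  err = gst_omx_component_get_parameter (self->enc,
      OMX_IndexParamVideoBitrate, &param);
  if (err == OMX_ErrorNone) {
    /* ...only override the fields the user explicitly set... */
    if (self->control_rate != 0xffffffff)
      param.eControlRate = self->control_rate;
    if (self->target_bitrate != 0xffffffff)
      param.nTargetBitrate = self->target_bitrate;

    /* ...then write the updated param back */
    err = gst_omx_component_set_parameter (self->enc,
        OMX_IndexParamVideoBitrate, &param);
  }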
https://bugzilla.gnome.org/show_bug.cgi?id=794998
- Report the error from OMX if any (OMX_EventError)
- If not, report the failure to the application (GST_ELEMENT_ERROR)
- Return GST_FLOW_ERROR rather than FALSE
- Don't leak @frame
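A sketch of the resulting error path in handle_frame() (helper names as
in gst-omx, from memory):

  component_error:
    {
      GST_ELEMENT_ERROR (self, LIBRARY, FAILED, (NULL),
          ("OpenMAX component in error state %s (0x%08x)",
              gst_omx_component_get_last_error_string (self->enc),
              gst_omx_component_get_last_error (self->enc)));
      gst_video_codec_frame_unref (frame);
      return GST_FLOW_ERROR;
    }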
https://bugzilla.gnome.org/show_bug.cgi?id=795352
We already have the exact same message at the beginning of
gst_omx_video_enc_handle_frame(). Having it twice is confusing when
reading/grepping logs.
I kept the earlier one to keep the symmetry with
gst_omx_video_dec_handle_frame().
https://bugzilla.gnome.org/show_bug.cgi?id=794897
Check input buffers for ROI metas and pass them to the encoder using
zynqultrascaleplus's custom OMX extension. Also add a new
"default-roi-quality" property to tell the encoder which quality level
should be applied to ROIs by default.
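A sketch of the meta lookup on the GStreamer side (the OMX structure used
to hand the ROI over to the encoder is vendor specific and not shown):

  gpointer state = NULL;
  GstMeta *meta;

  while ((meta = gst_buffer_iterate_meta_filtered (frame->input_buffer,
              &state, GST_VIDEO_REGION_OF_INTEREST_META_API_TYPE))) {
    GstVideoRegionOfInterestMeta *roi =
        (GstVideoRegionOfInterestMeta *) meta;

    /* pass roi->x, roi->y, roi->w, roi->h and the configured
     * default-roi-quality to the encoder through the custom OMX config */
  }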
https://bugzilla.gnome.org/show_bug.cgi?id=793696
The 'target-bitrate' property can be changed while PLAYING
(GST_PARAM_MUTABLE_PLAYING). Make it thread-safe to prevent concurrent
accesses between the application and streaming thread.
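For instance, a minimal sketch of the property setter side using the
object lock (the streaming thread takes the same lock before reading the
value):

  case PROP_TARGET_BITRATE:
    GST_OBJECT_LOCK (self);
    self->target_bitrate = g_value_get_uint (value);
    if (self->enc) {
      /* component already running: push the new value with
       * OMX_SetConfig() / OMX_IndexConfigVideoBitrate
       * (error handling omitted) */
    }
    GST_OBJECT_UNLOCK (self);
    break;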
https://bugzilla.gnome.org/show_bug.cgi?id=793458
I spent quite some time figuring out why the performance of my pipeline
was terrible. It turned out output frames were being copied because of a
stride/offset mismatch.
Add a PERFORMANCE DEBUG message to make this easier to spot and debug
from logs.
https://bugzilla.gnome.org/show_bug.cgi?id=793637
The OMX spec defines 8 headers that implementations can use to define
their custom extensions. We were checking for and including 3 of them
and ignoring the others.
https://bugzilla.gnome.org/show_bug.cgi?id=792043
We are now always checking which files are present or not, even when using our
internal copy of OMX, rather than hardcoding the ones present in it.
https://bugzilla.gnome.org/show_bug.cgi?id=792043
It seems cleaner to use the proper meson tools to include this path
rather than manually tweak the build flags.
This also allows us to simplify the OMX extensions detection code. We
are now always checking which files are present, even when using our
internal copy of OMX, rather than hardcoding the ones present in it.
https://bugzilla.gnome.org/show_bug.cgi?id=792043
This hack tries to pass as much information as possible from caps to the
decoder before it receives any buffer. This information can be used by
the OMX decoder to, for example, pre-allocate its internal buffers
before starting to decode and so reduce its initial latency.
This mechanism is currently supported by the zynqultrascaleplus decoder.
https://bugzilla.gnome.org/show_bug.cgi?id=792040
I find it confusing when debugging that OMX calls returning an error
were not logged as GST_LEVEL_ERROR, making them harder to spot.
Fix this by introducing simple log macros checking the return value of
the OMX call and logging failures as errors.
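A minimal sketch of such a macro (the name and exact message are
illustrative):

  /* Log failing OMX calls as ERROR so they stand out in the logs */
  #define CHECK_ERR(obj, call, what) G_STMT_START {                   \
    OMX_ERRORTYPE __err = (call);                                     \
    if (__err != OMX_ErrorNone)                                       \
      GST_ERROR_OBJECT (obj, "Failed to %s: %s (0x%08x)", what,       \
          gst_omx_error_to_string (__err), (guint) __err);            \
  } G_STMT_END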
https://bugzilla.gnome.org/show_bug.cgi?id=791069
The Zynq UltraScale+ encoder implements a custom OMX extension to
import dmabufs directly, removing the need to map input buffers.
This can be used with either 'v4l2src io-mode=dmabuf' or an OMX video
decoder upstream.
https://bugzilla.gnome.org/show_bug.cgi?id=792361
Make use of the new GstVideoEncoder QoS API to drop late input frames. This may
help a live pipeline catch up if it's running late and all frames end up
being dropped at the sink.
https://bugzilla.gnome.org/show_bug.cgi?id=792783
If something goes wrong while trying to manually copy the input buffer,
the 'break' was moving us out of the 'for' loop but not out of the switch block.
So we ended up calling gst_video_frame_unmap() a second time (raising
assertions) and returning TRUE rather than FALSE.
Reproduced with a WIP zynqultrascaleplus OMX branch reporting wrong
buffer sizes and so triggering this bug.
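A stripped-down illustration of the control flow (copy_plane() is a
placeholder, not the actual gst-omx code):

  switch (acquire_mode) {
    case COPY_BUFFER:
      for (i = 0; i < n_planes; i++) {
        if (!copy_plane (&frame, i)) {
          gst_video_frame_unmap (&frame);
          break;              /* only leaves the 'for' loop... */
        }
      }
      /* ...so on error we still reach this point, unmap a second time
       * and report success. The fix is to jump to a common error label
       * instead. */
      gst_video_frame_unmap (&frame);
      done = TRUE;
      break;
  }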
https://bugzilla.gnome.org/show_bug.cgi?id=792167
If the framerate change is less than 1%.
The dynamic format change should not happen when the resolution does not
change and the framerate only changes very slightly, e.g. from
50000/1677 (=29.81) to 89/3 (=29.66), a percentage change of less than
1% (100*(29.81-29.66)/29.66 = 0.50 < 1). In that case just ignore it to
avoid an unnecessary renegotiation.
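In C the check boils down to something like:

  /* Ignore framerate-only changes below 1% to avoid a renegotiation */
  gdouble old_fps = (gdouble) old_fps_n / old_fps_d; /* 50000/1677 = 29.81 */
  gdouble new_fps = (gdouble) new_fps_n / new_fps_d; /* 89/3       = 29.66 */
  gdouble diff = 100.0 * ABS (old_fps - new_fps) / new_fps;

  if (diff < 1.0)
    return TRUE;   /* keep the current configuration */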
https://bugzilla.gnome.org/show_bug.cgi?id=759043
If the OMX component supports dynamic buffer mode and the input buffers
are properly aligned, avoid copying each input frame between OMX and
GStreamer.
Tested on zynqultrascaleplus and rpi (without dynamic buffers).
https://bugzilla.gnome.org/show_bug.cgi?id=787093
OMX 1.2.0 introduced a third way to manage buffers by allowing
components to only allocate buffer headers during their initialization
and change their pBuffer pointer at runtime.
This new feature can save us a copy between GStreamer and OMX for each
input buffer.
This patch adds API to allocate and use such buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=787093
Some live streams can set the framerate to 50000/1677 (=29.81).
GstVideoInfo.fps_n << 16 is wrong if the fps_n is 50000
(i.e. greater than 32767).
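The fix is to do the Q16 conversion with 64-bit arithmetic, e.g.:

  /* xFramerate is fps in 16.16 fixed point; 50000 << 16 overflows a
   * signed 32-bit int, so scale in 64 bits instead. */
  port_def.format.video.xFramerate =
      gst_util_uint64_scale_int (1 << 16, info->fps_n, info->fps_d);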
https://bugzilla.gnome.org/show_bug.cgi?id=759043
The usual pattern when setting OMX params is to first get the struct
param, override the values we want to set and then set the updated
param.
We were not doing this with OMX_IndexParamVideoPortFormat and so were
resetting some fields, such as OMX_VIDEO_PARAM_PORTFORMATTYPE.xFramerate.
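For example, with the port format parameter (sketch):

  OMX_VIDEO_PARAM_PORTFORMATTYPE format;

  GST_OMX_INIT_STRUCT (&format);
  format.nPortIndex = port->index;

  /* get the current values so untouched fields (like xFramerate)
   * keep their value... */
  if (gst_omx_component_get_parameter (comp,
          OMX_IndexParamVideoPortFormat, &format) == OMX_ErrorNone) {
    /* ...override only what we mean to change... */
    format.eCompressionFormat = OMX_VIDEO_CodingAVC;
    /* ...and set the updated param back */
    gst_omx_component_set_parameter (comp,
        OMX_IndexParamVideoPortFormat, &format);
  }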
https://bugzilla.gnome.org/show_bug.cgi?id=790979
As stated in the existing comment, when flushing we should wait for OMX
to send the flush command complete event AND for all buffers to be
released. We were stopping as soon as one of those conditions was met.
Fix a race between FillThisBufferDone/EmptyBufferDone and the flush
EventCmdComplete messages. The OMX implementation is supposed to release
its buffers before posting the EventCmdComplete event but the ordering
isn't guaranteed as the FillThisBufferDone/EmptyBufferDone and
EventHandler callbacks can be called from different threads (cf 2.7
'Thread Safety' in the spec).
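An illustration of the intended end condition (the predicate names are
illustrative, not the actual gst-omx variables):

  /* Keep waiting while no error and no timeout occurred, and either
   * the flush command has not completed yet or the component still
   * holds some of our buffers. */
  while (!got_error && !timed_out
      && (!flush_complete || !all_buffers_released)) {
    wait_for_next_omx_message (comp, timeout);
    /* re-evaluate got_error / timed_out / flush_complete /
     * all_buffers_released from the message just received */
  }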
https://bugzilla.gnome.org/show_bug.cgi?id=789475
No semantic change so far, I just made the 'while' end condition easier
to understand as a first step before changing it.
- move the error/timeout checks inside the loop to make it clearer what
we are actually waiting for.
- group the port->buffers checks together with parentheses as they are
part of the same conceptual check: waiting for all buffers to be released.
https://bugzilla.gnome.org/show_bug.cgi?id=789475
Allegro's HEVC implementation defines a superset of the profiles and
enums from the Android implementation.
Properly cast to fix -Wenum-conversion warnings from clang.
https://bugzilla.gnome.org/show_bug.cgi?id=789057
OMX allows 3rd parties to define extensions using their own enums
(like OMX_VIDEO_CODINGEXTTYPE) which are then used as the general
ones (like OMX_VIDEO_CODINGTYPE).
Properly cast those to fix -Wenum-conversion warnings from some
compilers such as clang.
https://bugzilla.gnome.org/show_bug.cgi?id=789057
The OMX spec doesn't support HEVC but the OMX stack of the
zynqultrascaleplus adds it as a custom extension.
It uses the same API as Android's OMX stack.
I used the H264 encoder code as a template.
https://bugzilla.gnome.org/show_bug.cgi?id=785434
Partially revert 1b7d0b8:
omxvideodec: handle IL 1.2 behavior for OMX_SetParameter
It turned out it was a problem in the decoder which was
not updating some local variables upon SetParameter.
https://bugzilla.gnome.org/show_bug.cgi?id=783976
For the record, the parallel port disabling was introduced by:
"00be69f omxvideodec: Disable output port when setting a new format"
and then replicated to videoenc, audiodec and audioenc.
Doing the disable in 'parallel' is only required if buffers are shared
between ports. But for decoders and encoders the input and output
buffers are of a different nature by definition (bitstream vs images),
so they cannot be shared.
Also, starting from IL 1.2.0, the spec states that the parallel disable
is not allowed and will return an error, except when buffers are shared.
Again, here we know in advance that they are not shared, so let's always
do a sequential disable.
Tested on Desktop, rpi and zynqultrascaleplus.
https://bugzilla.gnome.org/show_bug.cgi?id=786348
The OMX specification doesn't provide any API to expose the latency
introduced by encoders and decoders. We implemented this as a custom
extension as declaring the latency is needed for live pipelines like
video conferencing.
https://bugzilla.gnome.org/show_bug.cgi?id=785125
Use the stride and slice height information from the first buffer's
metadata to adjust the settings of the input port.
This will ensure that the OMX input buffers match the GStreamer ones
and so save us from having to copy each one line-by-line.
This is also the first step to allow the OMX encoder to receive dmabuf.
Tested on rpi and zynqultrascaleplus.
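A sketch of how the port settings could be derived from the first buffer
(using GstVideoMeta; the exact gst-omx plumbing is omitted):

  GstVideoMeta *meta = gst_buffer_get_video_meta (buffer);

  if (meta) {
    port_def.format.video.nStride = meta->stride[0];
    /* slice height: distance in lines between the first two planes */
    if (meta->n_planes > 1)
      port_def.format.video.nSliceHeight =
          (meta->offset[1] - meta->offset[0]) / meta->stride[0];
    else
      port_def.format.video.nSliceHeight =
          port_def.format.video.nFrameHeight;
    port_def.nBufferSize = gst_buffer_get_size (buffer);
  }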
https://bugzilla.gnome.org/show_bug.cgi?id=785967
No significant change for now. Just delay the input port configuration
of the buffer size related fields (stride, slice height, buffer size)
until the component is activated.
This will allow us to use the actual stride/height of the first input
and so avoid the buffer copying code path in most cases.
Tested on rpi and zynqultrascaleplus.
https://bugzilla.gnome.org/show_bug.cgi?id=785967
Allocating OMX component buffers in set_format() is too early.
Doing it when receiving the first buffers will allow the element to use
the information from the allocation query and/or the first incoming
buffer to pick the best allocation mode.
Tested on raspberry pi with dynamic resolution changes on decoder and
encoder input.
https://bugzilla.gnome.org/show_bug.cgi?id=785967
Makes the code simpler as we no longer need to restart the thread in
gst_omx_video_enc_flush(), and it's more symmetric with the omxvideodec
implementation.
I'm also going to move the enabling of the OMX component into
handle_frame(), and the src pad thread needs to be started after it.
https://bugzilla.gnome.org/show_bug.cgi?id=785967
No semantic change, just factor out the code enabling and disabling the
component to their own functions.
Makes the code easier to read as the set_format() method was already
pretty big. Will also allow us to easily change the enabling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=785967
No semantic change, just factor out the code enabling and disabling the
component to their own functions.
Makes the code easier to read as the set_format() method was already
pretty big. Will also allow us to easily change the enabling logic.
https://bugzilla.gnome.org/show_bug.cgi?id=785967
The spec states that the buffer passed to OMX_FillThisBuffer() needs to be
empty. Some implementations may check that it actually is by looking at
its nFilledLen field, so it's best to reset it as well.
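In practice this just means (sketch):

  /* make sure the buffer really looks empty before handing it back */
  buf->omx_buf->nFilledLen = 0;
  err = OMX_FillThisBuffer (comp->handle, buf->omx_buf);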
https://bugzilla.gnome.org/show_bug.cgi?id=785623
Will be easier to maintain and enhance.
Tested with Tizonia on Desktop.
Also tested with Bellagio to make sure it does not crash when
calling OMX_UseEGLImage and indeed it returns NotImplemented.
gst-omx then falls back to OMX_UseBuffer if it can, and so on.
Also tested on rpi to make sure there is no regression.
https://bugzilla.gnome.org/show_bug.cgi?id=784365
On segment seek, unlike EOS, we drain, but we cannot expect a flush
later to reset the decoder state. As a side effect, the decoder would
remain in EOS state and ignore any new incoming buffers.
To fix this, we call _flush() inside the _drain() function, and
_finish() becomes what _drain() was before. This way, for _finish() (the
eos case) we only drain, for _drain() triggered by segment seek or new
caps, we also reset the decoder state so it's ready to accept buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=785237
The zynqultrascaleplus OMX implementation has a custom extension
allowing decoders to output dmabuf and so avoid buffer copies between OMX
and GStreamer.
Make use of this extension when built on the zynqultrascaleplus. The
buffer pool code should be re-usable for other platforms as well.
https://bugzilla.gnome.org/show_bug.cgi?id=784847
It triggers SettingsChanged on the other port and it is up to
the client to decide whether it should lead to a port reconfiguration.
Settings are propagated to the other port for the fields they have
in common, but this event is only triggered on the other port
if it actually changes a setting.
https://bugzilla.gnome.org/show_bug.cgi?id=783976