Handle the case where the encoder doesn't support rate control, which is
reported as VA_RC_NONE, and fail the element configuration if the requested
rate control mode is not supported by the GStreamer element.
Also log the max and target bitrate.
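For illustration, a minimal libva sketch of the idea (the helper name and flow
are assumptions, not the actual gstva code): query the driver's rate-control
attribute and bail out when the requested mode isn't advertised.
```
#include <va/va.h>
#include <glib.h>

/* Hypothetical helper: returns FALSE when the requested rate-control mode
 * can't be honored, so element configuration can fail early. */
static gboolean
rate_control_is_supported (VADisplay dpy, VAProfile profile,
    VAEntrypoint entrypoint, guint32 requested_rc)
{
  VAConfigAttrib attr = { .type = VAConfigAttribRateControl };

  if (vaGetConfigAttributes (dpy, profile, entrypoint, &attr, 1)
      != VA_STATUS_SUCCESS)
    return FALSE;

  /* No rate control at all: only VA_RC_NONE is acceptable. */
  if (attr.value == VA_ATTRIB_NOT_SUPPORTED || attr.value == VA_RC_NONE)
    return requested_rc == VA_RC_NONE;

  /* Otherwise the attribute is a bitmask of supported modes. */
  return (attr.value & requested_rc) != 0;
}
```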
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
The entrypoint is set when the encoder helper is constructed, yet it was also
passed as a parameter when opening, which is error-prone.
To simplify the code, only the entrypoint given at construction is honored.
But gst_va_encoder_has_profile_and_entrypoint() no longer relies on the
internal list of profiles, since that list only contains the profiles that
belong to the codec and entrypoint; instead it queries the VA driver directly.
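As a rough sketch of what querying the driver directly looks like with plain
libva (the function name here is illustrative, not the actual gstva API):
```
#include <va/va.h>
#include <glib.h>

/* Hypothetical check: ask the driver which entrypoints it exposes for a
 * profile instead of consulting a cached list. */
static gboolean
has_profile_and_entrypoint (VADisplay dpy, VAProfile profile,
    VAEntrypoint entrypoint)
{
  VAEntrypoint *entrypoints;
  int i, num = 0;
  gboolean found = FALSE;

  entrypoints = g_new (VAEntrypoint, vaMaxNumEntrypoints (dpy));
  if (vaQueryConfigEntrypoints (dpy, profile, entrypoints, &num)
      == VA_STATUS_SUCCESS) {
    for (i = 0; i < num; i++) {
      if (entrypoints[i] == entrypoint) {
        found = TRUE;
        break;
      }
    }
  }

  g_free (entrypoints);
  return found;
}
```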
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3063>
Fixes CI on coordinated merges when the user has more than 20 branches
in their fork, which will happen very easily since new forks will
always have all the branches of the original remote.
```
ci/gitlab/trigger_cerbero_pipeline.py:48: UserWarning: Calling a `list()` method without specifying `get_all=True` or `iterator=True` will return a maximum of 20 items. Your query returned 20 of 37 items. See https://python-gitlab.readthedocs.io/en/v3.9.0/api-usage.html#pagination for more details. If this was done intentionally, then this warning can be supressed by adding the argument `get_all=False` to the `list()` call. (python-gitlab: /usr/local/lib/python3.7/site-packages/gitlab/client.py:979)
if os.environ["CI_COMMIT_REF_NAME"] in [b.name for b in cerbero.branches.list()]:
```
Need to put the actual profile in the output caps, otherwise any
capsfilter after the encoder that is used to force the output
profile will fail, such as
fdkaacenc ! audio/mpeg,stream-format=adts,profile=he-aac-v1 ! ..
because we put profile=lc in there to match the profile signalled
in the ADTS header. That is expressed through base-profile=lc in the
GStreamer caps instead; the profile field needs to carry the
'real' profile.
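A minimal sketch of such output caps (illustrative only; the field values are
an example, not taken verbatim from fdkaacenc):
```
#include <gst/gst.h>

/* Hypothetical construction of the output caps: 'profile' carries the real
 * profile, while 'base-profile' reflects what the ADTS header signals. */
static GstCaps *
make_output_caps (void)
{
  return gst_caps_new_simple ("audio/mpeg",
      "mpegversion", G_TYPE_INT, 4,
      "stream-format", G_TYPE_STRING, "adts",
      "base-profile", G_TYPE_STRING, "lc",
      "profile", G_TYPE_STRING, "he-aac-v1",
      NULL);
}
```
With caps like these, a downstream capsfilter requesting profile=he-aac-v1
intersects successfully.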
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1785>
duplicate symbol '__invoke_on_main' in:
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstvulkan-1.0.a(cocoa_gstvkwindow_cocoa.m.o)
/Library/Frameworks/GStreamer.framework/Versions/1.0/lib/libgstgl-1.0.a(cocoa_gstglwindow_cocoa.m.o)
ld: 1 duplicate symbol for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
Also make the same change on iOS for consistency.
Continuation of https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/issues/1132
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3242>
Disable forwarding of segment events by the demuxer.
In order to play nicely with `ffmpeg`, demuxers in `gst-libav` have to make
buffers available to `ffmpeg` while taking the blocking I/O model in `ffmpeg`
into account, which results in buffers not being sent downstream until `ffmpeg`
has processed them in its separate thread.
In contrast, many `gstreamer` events are simply forwarded downstream.
Currently `GST_EVENT_SEGMENT` events are forwarded downstream without any
processing, which can potentially result in:
* `GST_EVENT_SEGMENT` events being out of sync with buffers
* `GST_EVENT_SEGMENT` events going out that are incorrect because they apply
to data seen by the demuxer, but not necessarily seen by downstream elements
I came across this bug when I was attempting to enable G723.1 demuxing/decoding
using the G723.1 demuxer and decoder provided by `ffmpeg`. I wrote tests to
verify support for the functionality, and found that, in push mode,
`GST_EVENT_SEGMENT` events pushed to the demuxer by the upstream `filesrc`
element would be forwarded to the decoder without modification, resulting in
an internal data streaming error. With this patch, tests work in both push and
pull mode.
This patch solves the problem by disabling the forwarding of
`GST_EVENT_SEGMENT` events downstream (an initial `GST_EVENT_SEGMENT` event is
still pushed downstream by the demuxer). It's possible there's a better way to
do this, but, having looked at how a few different `gstreamer` demuxers deal
with `GST_EVENT_SEGMENT` events, it seems like the processing is somewhat
specific to the demuxer implementation, whereas `gst-libav` has one general way
of handling the situation for any `ffmpeg` demuxer. Perhaps there's a better
way to solve this using the `ffmpeg` API to take advantage of specific demuxer
details. IDK.
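To make the approach concrete, a rough sketch of a sink-pad event handler that
swallows segment events (the names are illustrative, not the actual gst-libav
code):
```
#include <gst/gst.h>

/* Hypothetical sink-pad event function: drop incoming segment events, since
 * the demuxer pushes its own initial segment and buffers are emitted later
 * from ffmpeg's thread. */
static gboolean
demux_sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
{
  switch (GST_EVENT_TYPE (event)) {
    case GST_EVENT_SEGMENT:
      gst_event_unref (event);
      return TRUE;
    default:
      return gst_pad_event_default (pad, parent, event);
  }
}
```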
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3218>
Add Windows Graphics Capture (WGC) API based screen capture mode.
This mode is used in the following conditions:
* Explicitly requested by the user (capture-api property)
* To capture a specific window
* When the DXGI desktop duplication API does not work on hybrid graphics
  systems (e.g., multi-GPU laptops)
The full feature set of this implementation requires Windows 11, and the
Windows 11 SDK is required to build this feature.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3144>
When the output alignment is smaller than the input alignment, for
example when the output alignment is "FRAME" and the parser is likely
connected to a decoder, the current PTS setting for AV1 frames inside
a TU is not correct.
For example, a TU may begin with non-displayed frames and end with a
displayed frame. The current code assigns the PTS to the first
non-displayed frame, which is a decode-only frame, so that PTS is
discarded in the video decoder, while the last, displayed frame has an
invalid PTS, so the video decoder has to guess its PTS based on the
frame rate and the previous frame's PTS. This is neither precise nor
robust. More importantly, when the previous frames provide a DTS, the
video decoder will also guess the PTS based on the previous frames'
DTS and trigger a warning like:
gstvideodecoder.c:3147:gst_video_decoder_prepare_finish_frame: \
<vavp9dec0> decreasing timestame
which sets reordered_output and puts the decoder in free-run mode.
We should correct the PTS within a TU: let the non-displayed frames have
no PTS and set the correct PTS on the displayed one. Also, when the
AV1 stream has multiple spatial layers, there is more than one displayed
frame inside one TU, all with the same PTS.
Note: if the input alignment is not TU-aligned, we cannot know the
exact PTS of the TU, so we just clear the PTS of the decode-only
frames and leave the others unchanged.
We also correct all the PTS if the output is OBU-aligned: their
PTS and DTS are all set to the input buffer's PTS.
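A simplified sketch of the per-frame rule described above (illustrative only,
not the actual av1parse code; the frame classification is assumed to come
from the parsed frame header):
```
#include <gst/gst.h>

/* Hypothetical helper: within a TU, only displayed (shown) frames keep the
 * TU's PTS; decode-only frames get no PTS so the downstream decoder doesn't
 * pick up a bogus timestamp. */
static void
assign_frame_pts (GstBuffer * frame_buf, gboolean shown_frame,
    GstClockTime tu_pts)
{
  if (shown_frame)
    GST_BUFFER_PTS (frame_buf) = tu_pts;
  else
    GST_BUFFER_PTS (frame_buf) = GST_CLOCK_TIME_NONE;
}
```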
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>
When the incoming data has a bigger alignment than the output, we do not need
to call finish_frame() and exit the current handle_frame() for each split
frame. We can push them all in one shot within one handle_frame(), which
may improve performance and helps us find the edge of the TU.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3182>