Pass the current frame to the duplicate_picture callback. This makes it easier
to set the frame's output_buffer if we already have one available. Also
document that, unlike VP9, implementing this is not optional, as the picture
will populate the DPB if it is a key-frame. To ensure this, remove the
default implementation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1992>
The system_frame_number is notably used by the V4L2 decoder as a unique
identifier for the frame that was decoded. This value is used to tell the
driver which frame to reference, as V4L2 does not have an efficient mechanism
to otherwise pass the frames back.
For this reason, and because it is more logical, copy the original
system_frame_number into the duplicate picture instead of using the current
frame's.
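A minimal sketch of what a subclass's duplicate_picture implementation could
look like with the new signature; the subclass name and the driver-specific
body are hypothetical, and the field names follow the codecs library:

    static GstAV1Picture *
    gst_foo_av1_dec_duplicate_picture (GstAV1Decoder * decoder,
        GstVideoCodecFrame * frame, GstAV1Picture * picture)
    {
      GstAV1Picture *new_picture = gst_av1_picture_new ();

      /* Keep the identifier of the frame that was actually decoded, not the
       * current frame's, so the driver can reference the right buffer. */
      new_picture->system_frame_number = picture->system_frame_number;

      /* frame->output_buffer can be set here if one is already available;
       * the rest is driver-specific duplication logic. */

      return new_picture;
    }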
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1992>
Show-existing keyframes have special meaning in AV1: all the reference
frames will be refreshed with the original keyframe's information. The refresh
process (7.20) is implemented by saving data from the frame_header into the
state. To fix this special case, load all the relevant information into the
frame_header.
As there is nothing happening between this and the loading of the key-frame
into the state, this patch also removes the separate API function, using it
internally instead.
Fixes#1090
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1971>
We bind the transceivers' fec_percentage property to the FEC encoder's
percentage property, and with the binding being bidirectional a deadlock
was introduced by the latest changes from !1762:
We take hold of the transceiver's object lock, then add the binding
and set the property to its initial value on the encoder, which causes
set_property to deadlock in the transceiver when the binding kicks in.
Changing the binding type to DEFAULT (source to target) is enough
to address the deadlock and still serves the original intent.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1967>
Holding previously decoded but not yet outputted pictures even after
new_sequence is not a safe approach in various respects.
However, we cannot drain out the DPB on new_sequence() unconditionally,
because there is a case where the decoder should drop decoded pictures
if NoOutputOfPriorPicsFlag is set.
To detect NoOutputOfPriorPicsFlag before the new_sequence() call,
this patch splits the decoding process into two passes: first the NAL units
are parsed in order to detect NoOutputOfPriorPicsFlag, and then each NAL unit
is decoded.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1937>
1. Always set the corresponding GstVaH264EncFrame pointer when the
GstVideoCodecFrame pointer is assigned, which makes the logic safer.
2. Fix the forgotten change in _sort_by_frame_num: its input pointer is now
of GstVideoCodecFrame type.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1935>
Make it possible to configure the element to obtain the timestamps from
reference timestamp metadata instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the unix time epoch.
In addition the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence we had to modify a couple of files in
the rtsp-server as well.
Fixes#984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
On GstVideoDecoder::{drain,flush}, we send a null packet with the
CUVID_PKT_ENDOFSTREAM flag to drain out the decoder, which will
reset the CUVID parser as well.
To continue decoding after the drain, the next input buffer
should include sequence headers, otherwise the CUVID parser will
not report any decodable frame.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1911>
There could be a case where the new program has the same program number as the
previous one ... but is actually located on a PID previously used for an
elementary stream. In that case the program is guaranteed not to be an update
of the previous program but a completely new one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
We need to be able to look up programs by their PID also. Using a hash table
was a bit sub-par (and overkill) for storing a range of programs.
This is needed because there could potentially be two programs with the same
program id but different PMT PIDs (while one is being deactivated the new one
would "exist").
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1893>
Instead of using GstMiniObject to hold the H264 frame, it now uses a plain
structure. Besides, instead of holding a reference to the
GstVideoCodecFrame, the H264 frame structure is set as
GstVideoCodecFrame user data.
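Roughly, this relies on the standard GstVideoCodecFrame user-data API; a
sketch, where the free function name is hypothetical:

    /* Sketch: tie the encoder-private frame's lifetime to the codec frame.
     * gst_va_enc_frame_free() is a hypothetical free function. */
    static void
    attach_enc_frame (GstVideoCodecFrame * frame, GstVaH264EncFrame * enc_frame)
    {
      gst_video_codec_frame_set_user_data (frame, enc_frame,
          (GDestroyNotify) gst_va_enc_frame_free);
    }

    static GstVaH264EncFrame *
    lookup_enc_frame (GstVideoCodecFrame * frame)
    {
      return gst_video_codec_frame_get_user_data (frame);
    }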
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1856>
And use the output segment position for the outgoing timestamp while it
is. This is needed to delay the calculation of `output_ts_offset` until
we actually have a usable timestamp, as tsmux will output a few initial
packets while `last_ts` is still unset.
Without this, the calculation would use the initial `0` value, which did
not have the intended effect of making VBR mode behave like CBR mode,
but always calculated an offset equal to the selected start time.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1884>
When there is vpp scaling downstream, we need to make sure SFC is not
triggered, because vpp may fall into passthrough mode, which causes
the decoder negotiation to create src caps with vpp-scaled width/height.
This patch includes the bitstream's original size in the first query with
downstream in gst_msdkdec_src_caps, the same as what we do for the
color format in this query. This ensures SFC scaling starts to
work only when downstream directly asks for a different size instead of
going through vpp.
Note that here SFC scaling follows the same behavior as msdkvpp:
if the user only changes the width or height, e.g. dec ! video/x-raw,width=xx !,
the height will be modified to a value which keeps the original DAR.
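A hedged illustration of that DAR-preserving adjustment (variable names are
made up; the real code may differ):

    /* Keep the original display aspect ratio when only the width is given. */
    guint new_height =
        (guint) gst_util_uint64_scale_int (new_width, orig_height, orig_width);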
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1838>
* Hide GstCudaMemory member variables
* Make the GstCudaAllocator object independent of GstCudaContext
* Set the offset/stride of the memory correctly via video meta
* Drop GST_BUFFER_POOL_OPTION_VIDEO_ALIGNMENT support.
This implementation does not actually support custom alignment,
because we allocate device memory via cuMemAllocPitch(),
whose alignment is almost uncontrollable (see the sketch below).
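For reference, this is roughly what the pitched allocation looks like with
the CUDA driver API; the pitch, and hence the alignment, is chosen by the
driver rather than the caller (sketch only, no error handling):

    #include <cuda.h>

    CUdeviceptr ptr;
    size_t pitch;

    /* The driver returns the pitch it picked; we cannot request a
     * specific alignment here. */
    CUresult ret = cuMemAllocPitch (&ptr, &pitch, width_in_bytes, height,
        16 /* element size in bytes */);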
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
* cudaupload/download
- Specify only the formats the nvcodec elements can actually
handle, not all video formats
- Support CUDA output for download and CUDA input for upload in order
to make passthrough possible, like other upload/download elements.
* cudabasetransform
- Reset the conversion element if upstream CUDA memory
holds a different CUDA context and the element can accept it.
This is the same behavior as the corresponding d3d11 filter elements.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1834>
This was to support very old V4L2 kernels. As we moved to DMABuf and can now
detach buffers on renegotiation, the buffer issue it tries to fix no longer
exists. The risk of blocking the application indefinitely does still exist
though.
Fixes#1070
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1861>
When we negotiate with downstream, we should use the intersected
caps of the input and output to decide the alignment and stream format.
The current code just uses the input caps, which may lack the stream
format.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
The demux now outputs the AV1 stream in "tu" alignment, so we do not need
to detect the input alignment. But since the annex b stream format is not
recognized by the demux, we still need to detect that stream format for the
first input.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1837>
Decoders that required frame alignment and didn't have an associated
alpha decoder were skipped. This is because the parser was constructing
caps based on the software alpha decoder, which specifies super-frame
alignment.
Iterate over the caps to filter the ones that have a matching codec-alpha,
with the semantic that no codec-alpha field means codec-alpha=false. Then, if
everything was removed, fall back to the original caps, so that the first
non-alpha decoder will be picked.
Fixes#820
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1855>
Currently, to copy the coded buffer onto a GStreamer buffer, the
coded buffer is mapped twice: once for getting the size, and later
to do the actual copy. We can avoid this by doing the copy directly in the
element rather than in the general encoder object.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1845>
We need to always add the RTX/RED/ULPFEC elements as rtpbin will only
call us once to request aux/fec senders/receivers.
We also need to regenerate the media section of the SDP instead of
blindly copying from the previous offer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1762>
Remove the symbolic link `gst-uninstalled` which points to `gst-env`.
`uninstalled` is the old name, and the project should stick to a
single name for the procedure.
Remove the term from all the files, exceptions are variables from
dependencies like `uninstalled_variables` from pkgconfig and
`meson-uninstalled`.
Adjust mentions of the script in the documentation and README.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1743>
Do not maintain similar build instructions within each gst-plugins-*
subproject and the subproject/gstreamer subproject. Use the build
instructions from the mono-repository and link to them via hyperlink.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1743>
This is a new VA-API implementation of an H264 encoder.
It can control the GOP and parameter settings, while the MV searching,
VCL and the rate control algorithm are implemented by VA drivers and HW.
It supports most of the common usage options of H264, but still lacks
look ahead, field coding, B frame weighted prediction, etc.
Co-authored-by: Victor Jaquez <vjaquez@igalia.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1051>
The g_queue_clear_full() and g_array_copy() functions in GLib may not be
available with the GLib version we currently require, so we add helper
functions to wrap them.
These should be deleted after the GLib version requirement is bumped.
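A minimal sketch of such a compatibility wrapper, assuming the usual
GLIB_CHECK_VERSION guard (the wrapper name is hypothetical):

    #include <glib.h>

    #if !GLIB_CHECK_VERSION(2, 60, 0)
    /* g_queue_clear_full() only exists since GLib 2.60. */
    static void
    compat_queue_clear_full (GQueue * queue, GDestroyNotify free_func)
    {
      gpointer data;

      while ((data = g_queue_pop_head (queue)) != NULL)
        free_func (data);
    }
    #else
    #define compat_queue_clear_full g_queue_clear_full
    #endif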
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1051>
The fd has different meanings on Windows:
POSIX read and write use the fd as a file descriptor.
gst_poll uses the fd as a WSASocket.
This patch uses WSASocket as the default on Windows. This is a temporary
measure, because the IPC has many different implementations. There may be a
better way in the future.
See #1044
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1791>
We have the d3d11screencapturesrc element in the d3d11 plugin,
which is obviously better than this element in terms of performance
and design, so we don't need to confuse people with two separate elements.
Let's pick the better implementation and remove the unnecessary one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1750>
It was assumed that the kernel parameters would match the bitstream values,
but instead the author went with another set of values. Surprisingly, this
makes no difference to the resulting Fluster score.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1748>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
Note that the AYUV and AYUV64 formats will be used to expand format
support, especially since some packed YUV formats (e.g., Y410, YUY2)
are common DXGI formats used for hardware decoders/encoders on Windows
but cannot be used as a render target. We need to handle
them differently without pixel shader help, using a compute shader
for example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1699>
Remove all the d3d11 and dxgi header version dependent ifdefs
and bump the minimum requirement to d3d11_4.h and dxgi1_6.h.
We already no longer support old Visual Studio (Windows SDK, actually)
versions such as Visual Studio 2015. Note that our MinGW toolchain satisfies
the requirement.
From a runtime point of view, this change should be fine since
we check the OS version with IUnknown::QueryInterface()
everywhere in order to check API availability.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1684>
We make all MSDK encoders declare the "memory:VAMemory" feature. Then a
pipeline such as:
gst-launch-1.0 -vf filesrc location=xxx.h264 ! h264parse ! \
vah264dec ! msdkh265enc ! fakesink
will choose VA memory caps between the VA decoder and MSDK encoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK encoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, as other VA plugins need to query the VA display.
By the way, the current "gst.msdk.Context" query is also missing.
The other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK VPP's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, as other VA plugins need to query the VA display.
By the way, the current "gst.msdk.Context" query is also missing.
The other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK decoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, as other VA plugins need to query the VA display.
By the way, the current "gst.msdk.Context" query is also missing.
The other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
We can now use the gst va lib's display to create our MSDK context,
and use its helper functions to simplify our code. The improved logic
is:
1. Every MSDK element should use gst_msdk_context_find() to find a MSDK
context from a neighbour. If valid, reuse it.
2. Use gst_msdk_ensure_new_context(). It will first query neighbours
for a GstVaDisplay; if one is found (e.g. some VA element is connected),
use gst_msdk_context_from_external_display() to create a MSDK context.
3. Otherwise, create the MSDK context from scratch. This creates both the
display and the MSDK context.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The VA display object from the VA library is a commonly defined object which
contains all the display handling. It is easier to use and, more importantly,
we can share it with the other VA plugins and keep all the VA related
plugins working on the same GPU device.
We also delete the useless gst_msdk_context_get_fd() API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The current manner of deciding on a new temporal unit is based on the
temporal delimiter (TD) OBU: we only start a new temporal unit when
a TD comes.
But some streams do not have TDs at all, which makes the output "TU"
alignment fail to work. We now add a check based on the relationship
between the different layers, which can successfully detect the TU edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Some streams may have problematic OBUs at the beginning, which causes
the parser to fail to detect the alignment and return an error. For example,
there may be verbose OBUs before a valid sequence, which should be
discarded until we meet a valid sequence. We should let the parser
continue when we meet such cases, rather than just return an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Our D3D11/DXVA codec implementation has been verified
during the 1.18 and 1.20 development cycles and also via the Fluster
test framework. Similar to the case of nvdec and vtdec,
we can prefer hardware over software in most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1672>
Duplicating a picture that was already a duplicate was leading to a crash.
Rename the custom picture flag to HOLDS_BUFFER to make its meaning clear. Then
ref and store the picture as userdata, so it can be obtained when
duplicating. Finally, mark the duplicated picture as HOLDS_BUFFER to avoid
thinking it holds a request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1681>
This patch adds a new parameter, hdr-tone-mapping (same as in
vaapipostproc), available if the HDR capabilities are present in the driver,
and disabled by default.
If hdr-tone-mapping is enabled then the HDR fields in the sink caps are
processed, mapping frames from HDR to SDR and removing those HDR fields from
the source pad caps too.
hdr-tone-mapping is not enabled if a color conversion is also
requested, since that fails to process in the iHD driver, so far.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1258>
1. Use api_version variable rather than static string.
2. Remove pkgconfig generation since currently the library
is not installed, only used internally.
3. Rely on dependency "required" to abort compilation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
In commit e699aaeb we moved linking of libgudev to the plugin rather than
the library, because it's only used in the plugin. But the dependency
check was still done in the library.
This patch removes the dependency check in the library, and updates the
dependency check in the plugin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
A new implementation of Intel Quick Sync Video plugin.
This plugin supports both Windows and Linux but optimization for
VA/DMABuf is not implemented yet.
This new plugin has some notable differences compared with the existing
MSDK plugin.
* Encoder will expose formats which can be natively supported
without internal conversion. This will make the encoder
control/negotiation flow much simpler and cleaner than
that of the MSDK plugin.
* This plugin includes QSV specific library loading helper,
called dispatcher, with QSV SDK headers as a part of this plugin.
So, there will be no more SDK version dependent #ifdef in the code
and also there will be no more build-time MSDK/oneVPL SDK
dependency.
* Memory allocator interop between GStreamer and QSV is re-designed
and decoupled. Instead of implementing QSV specific allocator/bufferpool,
this plugin will make use of generic GStreamer memory
allocator/bufferpool (e.g., GstD3D11Allocator and GstD3D11BufferPool).
Specifically, GstQsvAllocator object will help interop between
GstMemory and mfxFrameAllocator memory abstraction layers.
Note that because of the design decision, VA/DMABuf support is not made
as a part of this initial commit. We can add the optimization for Linux
later once GstVA library exposes allocator/bufferpool implementation as
an API like GstD3D11.
* Initial encoder implementation supports interop with GstD3D11
infrastructure, including zero-copy encoding with upstream D3D11 element.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1408>
It's almost pointless and makes little sense, as a subclass might
want to modify the refcount of the object or similar. And all subclasses
are already casting them to the non-const version as well.
In general, we need to avoid passing refcounted objects
with a const qualifier.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1238>
The kCVPixelFormatType_64RGBALE enum is only available on macOS Big
Sur (11.3) and newer. We also cannot use that while configuring the
encoder or decoder on older macOS.
Define the symbol unconditionally, but only use it when we're running
on Big Sur with __builtin_available().
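The runtime guard looks roughly like this; the fallback format and the
surrounding code are illustrative, and when building against an SDK older
than 11.3 the constant itself still needs the unconditional definition
described above:

    OSType pixel_format;

    if (__builtin_available (macOS 11.3, *))
      pixel_format = kCVPixelFormatType_64RGBALE;
    else
      pixel_format = kCVPixelFormatType_4444AYpCbCr16; /* illustrative fallback */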
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1613>
Hotdoc should be able to extract and parse comments out of these. Just
need to be careful to only add the glob in directories that actually
contain *.m (objc) and *.mm (objcpp) files.
Also fix some doc comments and remove redundant ones.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1614>
This moves the ABI check to registration, so we don't expose
decoders with the wrong ABI or that are just broken somehow. It
also makes a few enhancements:
- Handle missing, but required, controls
- Print the control's macro name instead of its id
This should fix RK3399 support, with a currently released minor
regression in the Hantro driver that causes errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1599>
Intel drivers expose some colorbalance maximum values that are much
bigger than their minimum values, given their middle (default)
values. This means, in practice, that the real middle point between
the maximum and minimum values implies a major change in the color
balance, which is not expected by the GStreamer color balance logic.
This patch makes the given maximum value symmetrical to the minimum
value around the middle (default) value.
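In other words, the reported range is adjusted so that the default value sits
exactly in the middle; a hedged sketch of the arithmetic:

    static gfloat
    make_max_symmetric (gfloat min_value, gfloat default_value)
    {
      /* e.g. min 0.0, default 1.0, driver max 10.0 -> reported max 2.0 */
      return default_value + (default_value - min_value);
    }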
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1580>
Currently, we warn if the decoder has no supported src pixel format, but that
warning is emitted before the type (hence the debug category) is initialized.
Fix this by moving the debug category init out of the type initialization,
into the register functions.
This fixes an assertion that occurs in the register function and allows the
relevant log to be seen by users.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1588>
When fixating color, there might be "other caps" with color spaces not
supported by the caps features exposed in the vapostproc's source pad
caps template (perhaps it's a bug somewhere else in GStreamer).
This solution checks if the proposed format exists in the filter
within the caps feature associated with the proposed format.
The check is done with the new filter's function
gst_va_filter_has_video_format().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1559>
GstAudioRingBufferSpec::segsize has been configured using the
device period, but GstWasapi2RingBuffer was referencing the
buffer size returned by IAudioClient::GetBufferSize(),
which is most likely larger than the device period.
Fix it so the two are in sync.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1533>
If the `area_surface` got unmapped when changing to the `READY` or
`NULL` state, we currently don't remap it when playback resumes and
`wp_viewporter` is supported. Without `wp_viewporter` we do remap
it, but rather unintentionally and also when not wanted.
On Weston this has not been a big problem as it so far wrongly maps
subsurfaces of unmapped surfaces anyway - i.e. only the black
background was missing on resume. On other compositors and future
Weston this prevents the `video_surface` from getting remapped.
Shuffle things around to ensure `area_surface` is mapped in the
right situations and do some minor cleanup.
See also https://gitlab.freedesktop.org/wayland/weston/-/issues/426
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1483>
The latter, which applies damage in surface coordinates instead of buffer
coordinates, has been deprecated. The reason for that is that it
is more prone to bugs, both on the client and the compositor side,
especially when paired with buffer scale, `wp_viewporter` or
buffer transforms.
Unfortunately, on Weston this risks running into
https://gitlab.freedesktop.org/wayland/weston/-/issues/446
(which causes trouble for several other projects as well). However,
that bug only affects cases where we run in sync mode, i.e. only
during resizes. In practice I haven't been able to observe the
issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
Each time we call `wl_surface_damage()` we want to do full surface
damage. Like Mesa, just use `G_MAXINT32` to ensure we always do
full damage, reducing the need to track the right dimensions.
`window->video_rectangle` is now unused, but we keep it around for
now as we may need it again in the future.
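The damage call then effectively becomes the following (the surface variable
is illustrative):

    /* Damage everything; the compositor clips to the surface size anyway. */
    wl_surface_damage (surface, 0, 0, G_MAXINT32, G_MAXINT32);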
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
From the spec:
> This request is used to describe the regions where the pending
> buffer is different from the current surface contents
We currently also call `wl_surface_damage()` on surfaces without
new or still compositor-held buffers, e.g. when resizing the window.
In that case we call it on `area_surface_wrapper`, even though it
gets resized via `wp_viewport_set_destination()`, in which case
the compositor is in charge of repainting the area on screen.
Doing so is currently not forbidden by the spec, however it might
be in the future, see
https://gitlab.freedesktop.org/wayland/wayland/-/issues/267
Thus let's stay close to the spec and only call `wl_surface_damage()`
when we just attached a buffer.
Right now this prevents runtime assertions in Mutter.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
`gst_wl_window_set_opaque` does not get called on window resizes,
potentially leaving opaque regions too small.
According to the spec opaque regions can be bigger than the surface
size - parts that fall outside of the surface will get ignored.
Thus we can simply use `G_MAXINT32` and be sure that the whole
surface will always be covered.
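A sketch of the resulting opaque-region setup using the plain Wayland client
API (variable names are illustrative):

    struct wl_region *region;

    region = wl_compositor_create_region (compositor);
    /* Anything outside the surface is ignored, so G_MAXINT32 always covers it. */
    wl_region_add (region, 0, 0, G_MAXINT32, G_MAXINT32);
    wl_surface_set_opaque_region (surface, region);
    wl_region_destroy (region);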
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
If the VANC track does contain packets, but we skip over all packets, just
treat it the same as if there hadn't been any packets at all and send a
GAP event instead of erroring out with "Failed to handle essence element".
We would error out because, when we reach the end of the loop without having
found a closed caption packet, the flow return variable is still FLOW_ERROR,
which is what it was initialised to.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1518>
Though profiles[0] is initialized to GST_H265_PROFILE_INVALID in
gst_h265_profile_tier_level_get_profile(), the profile detection may
change its content later. So the returned profiles[0] may not be an
invalid profile even when len is 0.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1517>
The previous code was mistakenly trying to compute a cc_type out
of the first byte in the byte triplet, whereas it is to be interpreted
as:
> Bit b7 of the LINE value is the field number (0 for field 2; 1 for field 1).
> Bits b6 and b5 are 0. Bits b4-b0 form a 5-bit unsigned integer which
> represents the offset
The same mistake was made when creating padding packets.
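A small sketch of how the LINE byte is meant to be decoded per the quoted
spec (variable names are illustrative):

    guint8 line_byte = data[0];                /* first byte of the triplet */
    gboolean is_field1 = (line_byte >> 7) & 1; /* b7: 1 = field 1, 0 = field 2 */
    guint offset = line_byte & 0x1f;           /* b4-b0: 5-bit unsigned offset */
    /* b6 and b5 are 0; there is no cc_type in this byte */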
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1496>
When the image is opaque but the output ProRes format has an alpha
component (4 component, 32 bits per pixel), Apple requires that we
signal that it should be ignored by setting the depth to 24 bits per
pixel. Not doing so causes the encoded files to fail validation.
So we set that in the caps and qtmux sets the depth value in the
container, which will be read by demuxers so that decoders can skip
those bytes entirely. qtdemux does this, but vtdec does not use this
information at present.
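A hedged sketch of how the caps signal this (the caps variable is
illustrative; the field name matches the description above):

    /* 4-component ProRes with opaque content: signal depth=24 so muxers can
     * store it and decoders can skip the alpha bytes. */
    gst_caps_set_simple (caps, "depth", G_TYPE_INT, 24, NULL);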
The sister change was made in qtmux and qtdemux in:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1061
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1489>
Old "application/*" are now as per RFC8081 deprecated in favor of
new "font/*" mime types. Some new encoders are already using the
updated mime types. We need to also add them to the support list
in order for assrender to correctly identify them as fonts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1481>
If there are no jitterbuffer stats we should not attempt to store them in the
global stats structure.
Also add a g_return_if_fail in _gst_structure_take_structure() about this,
because it is a programmer error to pass an invalid pointer address there.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1479>
Instead of a sequence of if statements, declare a table mapping profile
idc to profiles and traverse it.
Also, first add the profile from the parsed profile idc, and only later add
the profiles from the compatibility flags into the profile array.
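A minimal sketch of the table-driven mapping (only a few entries shown; the
append helper is hypothetical):

    static const struct
    {
      guint8 idc;
      GstH265Profile profile;
    } profile_map[] = {
      {1, GST_H265_PROFILE_MAIN},
      {2, GST_H265_PROFILE_MAIN_10},
      {3, GST_H265_PROFILE_MAIN_STILL_PICTURE},
      /* ... */
    };

    guint i;

    for (i = 0; i < G_N_ELEMENTS (profile_map); i++) {
      if (profile_map[i].idc == ptl->profile_idc) {
        append_profile (profiles, &len, profile_map[i].profile);
        break;
      }
    }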
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
It's possible for an HEVC stream to have multiple profiles, given the
compatibility bits. Instead of returning a single profile, the internal
gst_h265_profile_tier_level_get_profiles() returns an array with all
its possible profiles.
Profiles are appended to the array only if the generated profile
is not invalid.
gst_h265_profile_tier_level_get_profile() is rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), returning the first
profile found in the array.
And gst_h265_get_profile_from_sps() is also rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), but traversing the array and
verifying that the proposed profile is actually valid per Annex A.3.x of
the specification.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
* Add fec / red encoders as direct children of webrtcbin, instead
of providing them to rtpbin through the request-fec-encoder signal.
That is because they need to be placed before the rtpfunnel, which
is placed upstream of rtpbin.
* Update configuration of red decoders to set a list of RED payloads
on them, instead of setting the pt property.
That is because there may be one RED pt per media in the same session.
* Connect to request-fec-decoder-full instead of request-fec-decoder,
in order to instantiate FEC decoders according to the payload type
of the stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
We are querying supported swapchain colorspace via
CheckColorSpaceSupport() but it doesn't seem to be reliable.
Use only tested full-range RGB formats which are:
- sRGB
- BT709 primaries with linear RGB
- BT2020 primaries with PQ gamma
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1433>
When using playbin3, it seems that the alpha decode is always the first to
push caps and run an allocation query. As the format changes from the sink
and alpha pads were not synchronized, the allocation query could end up
being run before the caps are pushed. That may lead to a failing query,
which makes the decoder think there is no GstVideoMeta downstream and
most likely CPU-copy the frame.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
This adds the alignment field to the template caps. Without this field
set, the auto-plugger will see fixed caps and will use
gst_caps_is_subset() against the caps produced by the parser. This is a
challenge for all cases where a parser can do conversion. This is fixed
by adding the alignment field, which makes the auto-pluggers do an
intersection of the caps, as they now get unfixed caps after intersection.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
Due to a copy-paste bug, the bitdepth was never set, and that was leading
to requesting a sizeimage of 0. Previously that worked, since the driver
would in that case pick a size for us. But now that we bumped the minimum
to 4KB, the driver happily allocates 4KB of bitstream, which leads to
decoding errors.
As MPEG2 has a fixed bitdepth of 8, use a define instead of the run-time
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1415>
The V4L2 uAPI uses pic_num for both PicNum and ShortTermPicNum. It also
does the same for both FrameNum and LongTermFrameIdx. This change does
not change the Fluster score, but fixes a visual corruption noticed
with some third-party streams.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1387>
The current code does not set up the copied memory correctly when it is
popped from the surface cache pool.
1. We forget to ref the allocator, which causes the allocator to be freed
unexpectedly, and we get a crash later because of the memory violation.
2. We forget to increase ref_mems_count, which causes a surface leak because
the surface cannot be pushed back to the cache pool again.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1373>
Set a minimum sizeimage such that there is enough space for any overhead
introduced by the codec.
Notably, this fixes a vp9 issue in which a small image would not have a
bitstream buffer large enough to accommodate it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1012>
Rework gstvp9{decoder|statefulparser} to optionally parse compressed headers.
The information in these headers might be needed for accelerators
downstream, so optionally parse them if downstream requests it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1012>
When an extmap is defined twice for the same ID, firefox complains and
errors out (chrome is smart enough to accept strict duplicates).
To work around this, we deduplicate extmap attributes, and also error
out when a different extmap is defined for the same ID.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1383>
There is a lot of info in the mpeg2 sequence (also including the
display_ext and scalable_ext extensions). We need to notify the subclass
about changes, but not all changes should trigger a drain(), which
may change the output picture order. For example, a matrix change in the
sequence header does not change the decoder context and so does not need
to trigger a drain().
Fixes: #899
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1375>
This is a requirement for GstPlayer when using the default overlay interface
provided by the pipeline. The GstPlayerWrappedVideoRenderer requires a valid
pipeline, but that's available only after the GstPlay thread has successfully
started.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1345>
ShowWindow() could be blocked while doing gst_d3d11_window_win32_unprepare
when an external window handle is provided to d3d11videosink in a
multi-threaded environment.
The condition under which the issue happens is: the UI thread is waiting for
a background thread that changes the d3d11videosink state to NULL, and the
background thread tries to send a window message to the queue.
The queue is already occupied by the UI thread, so the background
thread will be blocked.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1366>
Make sure the EGLImage we're rendering to the GL memory stays alive long
enough, until the GL memory has been destroyed.
This change fixes tearing and black-flash artefacts that were happening
because the EGLImage was sometimes destroyed before the sink actually
rendered the associated texture.
Fixes#889
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1354>
The average_period should always represent the time between two
events. The specification defines the event time as the time
between audio samples, video frame sync, video line sync, etc.
In case of one timestamp per PDU the timestamp_interval identifies
the amount of events between the timestamp of one PDU and the
timestamp of the next PDU.
As described in IEEE 1722-2016 chapter
"10.4.12 timestamp_interval field" timestamp_interval shall be
nonzero.
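So, with one timestamp per PDU, the average period can be derived roughly as
follows (names are illustrative):

    /* Time between events, derived from two consecutive PDU timestamps;
     * timestamp_interval must be nonzero per the spec. */
    GstClockTime average_period =
        (next_pdu_timestamp - pdu_timestamp) / timestamp_interval;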
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1076>
Upstream caps might for example be
application/x-rtp,media=audio,encoding-name={OPUS, X-GST-OPUS-DRAFT-SPITTKA-00, multiopus}
and while that is not fixed caps it is enough to match it with a media.
Only caps structures that have the correct structure name and that have
the media and encoding-name field are preserved, but if both are present
then these caps are used as "codec preferences".
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1291>
Some decoding APIs support delayed output for performance reasons.
One example would be to request decoding for multiple frames and
then query for the oldest frame in the output queue.
This also increases throughput for transcoding and improves seek
performance when supported by the underlying backend.
Introduce support in the mpeg2 base class, so that backends that
support render delays can actually implement it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
Downstream might need the start code offset when decoding.
Previously this computation would be scattered in multiple sites. This
is error prone, so move it to the base class. Subclasses can access
slice->sc_offset directly without computing the address themselves
knowing that the size will also take the start code into account.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1013>
The GstV4l2CodecAllocator dispose function clears `self->decoder`, but
the finalize function then tries to use it if the allocator has not been
detached yet.
Fix this by detaching in the dispose function before we clear
`self->decoder`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1220>
Standard interlace handling:
* If we have interlace-mode=interleaved and the field order, we just
set it when creating the session
* If we have interlace-mode=(interleaved|mixed) and no field order, we
set the field order on the first buffer
The encoder session does not support changing the FieldDetail after it
has started encoding frames, so we cannot support mixed streams
correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1214>
Delay the decoders' downstream negotiation until just before an output frame
needs to be allocated.
This is required, at least for the H.264 and H.265 decoders, since
codec_data might trigger a new sequence before finishing upstream
negotiation, and the sink pad caps need to be set before setting the source
pad caps, particularly to forward HDR fields. The other decoders are
changed too in order to keep the same structure among them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>
Using the GstBaseDec hack to access the parent_object of each element in
the element itself is a bit fragile. It would be better to keep its
own parent object as the usual global variable, which would make it more
resistant to code changes.
The GstBaseDec macro to access the parent object is now internal to the
base decoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1257>