They can't be used in any useful way. The type of every GstMemory is
always GST_TYPE_MEMORY and the subtyping relationship has to be
implemented on top of that via the associated allocator and mem_type
string.
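A minimal sketch of how such a check looks in practice (the "ExampleMemory" type string is purely illustrative):

  #include <gst/gst.h>

  /* Every GstMemory has GType GST_TYPE_MEMORY; "subtype" checks therefore
   * go through the mem_type string (or the associated allocator). */
  static gboolean
  is_example_memory (GstMemory * mem)
  {
    return gst_memory_is_type (mem, "ExampleMemory");
  }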
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1764>
Scenario:
- Source 1 requesting and waiting on a clock id
- Source 2 requesting and waiting on a clock id
- Test attempting to crank both sources in the same GstHarness
gst_test_clock_crank() originally dropped its locks between retrieving
the next clock id and advancing to that clock id. This meant that both
sources would race each other attempting to complete their clock waits.
Sometimes the operations were performed in the correct order, other
times they were not and a FALSE return value was produced.
This would trip an assertion in gst_harness_push_from_src(), which
expects all clock cranks to succeed.
Fix this by not dropping the relevant locks after retrieving the next
clock id, so that each clock wait is fully dealt with before the next
one is processed.
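A minimal sketch of the test expectation, assuming both sources have already registered their clock waits:

  #include <gst/check/gsttestclock.h>

  /* With the locks held across retrieve-and-advance, every crank now fully
   * processes exactly one pending wait and must succeed, regardless of the
   * order in which the two sources registered their waits. */
  static void
  crank_both_sources (GstTestClock * test_clock)
  {
    g_assert (gst_test_clock_crank (test_clock));
    g_assert (gst_test_clock_crank (test_clock));
  }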
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1299>
This is a minimal unit test showing that the stride extrapolation works
with all the pixel formats we support. It verifies that the extrapolated
stride matches the stride set into GstVideoInfo for a 320x240 frame for
every supported pixel format. Tiled formats are skipped, since their
stride is stored as two packed 16-bit integers, and palette planes are
also skipped.
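A rough sketch of the per-format check, assuming the gst_video_format_info_extrapolate_stride() name and signature introduced by this series:

  #include <gst/video/video.h>

  /* For a 320x240 GstVideoInfo, the stride extrapolated from plane 0 should
   * match the stride GstVideoInfo computed for every other plane. */
  static void
  check_extrapolation (GstVideoFormat fmt)
  {
    GstVideoInfo info;
    const GstVideoFormatInfo *finfo;
    guint p;

    gst_video_info_set_format (&info, fmt, 320, 240);
    finfo = info.finfo;

    /* Tiled strides are two packed 16-bit values, skip them here */
    if (GST_VIDEO_FORMAT_INFO_IS_TILED (finfo))
      return;

    for (p = 0; p < GST_VIDEO_INFO_N_PLANES (&info); p++) {
      gint expected = GST_VIDEO_INFO_PLANE_STRIDE (&info, p);
      gint extrapolated = gst_video_format_info_extrapolate_stride (finfo, p,
          GST_VIDEO_INFO_PLANE_STRIDE (&info, 0));

      g_assert_cmpint (extrapolated, ==, expected);
    }
  }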
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
As this element is single threaded, we only need to stop the objects to
allow changing the format again. This fixes an assertion, notably on
shutdown and in some other situations where the format may be set twice
without actually activating the element.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Many of the legacy APIs, specifically in the Linux kernel, expose a
single stride per picture. In this context, it is common to extrapolate
the other strides from the selected pixel format. Such a function had
been copy-pasted from the video4linux2 plugin into the wayland, kms and
v4l2codecs plugins.
This patch implements a generalized form of that function and makes it
available to everyone through the video library.
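As a rough illustration of the intended use case (the function name and signature are assumed from this series), a component that only knows a single bytesperline could fill the remaining strides like this:

  #include <gst/video/video.h>

  /* Extrapolate every plane's stride from the single stride reported by a
   * legacy API such as V4L2's bytesperline. */
  static void
  fill_strides_from_bytesperline (GstVideoInfo * info, gint bytesperline)
  {
    guint p;

    for (p = 0; p < GST_VIDEO_INFO_N_PLANES (info); p++)
      info->stride[p] =
          gst_video_format_info_extrapolate_stride (info->finfo, p,
          bytesperline);
  }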
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Unlike other simple tiled formats, the Mediatek HW uses a different tile
size per plane: the tile size is scaled according to the subsampling. The
name 16L32S is used to represent linearly laid out tiles of 16x32 bytes
in the Y plane and 16x16 bytes in the UV plane. In order to make this
specificity discoverable, a new SUBTILES flag has been added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
If the video4linux device supports norms but has no norm set, norm is
returned as an uninitialized variable after the ioctl call, leading to
gst_v4l2_tuner_get_norm_by_std_id() returning a random norm from the
supported norms. Catch this case and instead return NULL to indicate
that no norm is set up.
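A rough sketch of the guard, assuming the norm is read with VIDIOC_G_STD and that a zero std_id means nothing is configured:

  #include <glib.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Zero-initialize std_id so a driver that supports norms but has none set
   * cannot hand back garbage; callers map FALSE to a NULL norm. */
  static gboolean
  get_current_norm (gint fd, v4l2_std_id * norm)
  {
    v4l2_std_id std_id = 0;

    if (ioctl (fd, VIDIOC_G_STD, &std_id) < 0)
      return FALSE;

    if (std_id == 0)
      return FALSE;

    *norm = std_id;
    return TRUE;
  }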
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1625>
We have the d3d11screencapturesrc element in the d3d11 plugin, which is
clearly better than this element in terms of performance and design, so
there is no need to confuse people with two separate elements.
Pick the better implementation and remove the unnecessary one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1750>
... instead of round(). Depending on the framerate, 30000/1001 for
example, the calculated position may not be exactly representable as a
uint64. The result of round() can then be slightly larger (by 1 ns) than
the buffer timestamp, which causes an unnecessary frame delay.
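A small sketch of the arithmetic, with an illustrative helper name, showing why floor() is the safe choice here:

  #include <math.h>
  #include <gst/gst.h>

  /* For 30000/1001 fps the ideal frame time is not an integer number of
   * nanoseconds; round() may land 1 ns after the buffer timestamp, while
   * floor() never exceeds it. */
  static GstClockTime
  frame_position (guint64 frame, gint fps_n, gint fps_d)
  {
    gdouble pos = (gdouble) frame * fps_d * GST_SECOND / fps_n;

    return (GstClockTime) floor (pos);
  }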
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1747>
It was assumed that the kernel parameters would match the bitstream
values, but the author went with another set of values instead.
Surprisingly, this makes no difference to the resulting Fluster score.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1748>
If a serialized event arrives behind a buffer, it should not be sent
before it. This fixes the pending event handling so that only early
pending events, the ones that arrived or were generated while the adapter
was empty, get sent before pushing the buffer. All other events are now
pushed after it.
This issue led the latency tracer to think our audio encoder did not have
any latency. This was tested with opusenc in a live pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1266>
Each stream may have its own segment timeline
(e.g., a different segment.start or segment.base)
depending on the edit-list and composition-to-decode atoms.
Make sure to check whether the time position of a stream is actually
far behind that of the current target stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1352>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
Often, users will need to scale inputs (e.g.
with vaapipostproc) before they are submitted
to vaapioverlay. However, this results in
multiple VPP passes/operations in the pipeline,
which creates unnecessary processing overhead.
This change allows inputs to be submitted
to vaapioverlay at their original scale, with
per-sinkpad scale dimensions specified, so they
can be scaled and blended/composited in a single
VPP pass/operation to avoid that overhead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Don't set VAAPI vpp blend flags if alpha == 1.0,
i.e. fully opaque. This can avoid extra processing
overhead on some drivers that apply blending
unconditionally when flags are present, even if the
end result is the same without blend flags (i.e. all
opaque alpha channels).
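A rough sketch of the idea against the VA-API VPP types (the helper itself is illustrative):

  #include <va/va.h>
  #include <va/va_vpp.h>

  /* Only attach a blend state when the alpha actually differs from fully
   * opaque, so drivers do not run a blend pass for a no-op. */
  static const VABlendState *
  maybe_blend_state (VABlendState * state, float alpha)
  {
    if (alpha >= 1.0f)
      return NULL;

    state->flags = VA_BLEND_GLOBAL_ALPHA;
    state->global_alpha = alpha;
    return state;
  }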
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Note that the AYUV and AYUV64 formats will be used to expand format
support. In particular, some packed YUV formats (e.g., Y410, YUY2) are
common DXGI formats used for hardware decoders/encoders on Windows, but
they cannot be used as a render target. We need to handle them
differently, without pixel shader help, using a compute shader for
example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1699>
The libjpeg-turbo internal state might not be correctly initialized for
the first frame in a stream. Pull the frame stride from the GStreamer
frame metadata instead, which is correct even for the first frame and
makes this code consistent with the surrounding lines.
Fixes: e6d83d8f96 ("jpegdec: Support libjpeg-turbo colorspace conversion")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
It is imperative that the libjpeg-turbo state is properly initialized
before jpeg_start_decompress() is called. Make sure cinfo.out_color_space
and cinfo.raw_data_out are set to their final values matching their peer
caps before calling jpeg_start_decompress().
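A condensed sketch of the required ordering (the concrete colorspace values chosen here are illustrative, not the decoder's exact negotiation logic):

  #include <stdio.h>
  #include <jpeglib.h>

  /* Decide the output colorspace and raw-data mode from the negotiated caps
   * first, and only then start decompression. */
  static void
  start_decode (struct jpeg_decompress_struct * cinfo, int direct_rgb)
  {
    if (direct_rgb) {
      cinfo->out_color_space = JCS_EXT_RGBA;  /* libjpeg-turbo extended space */
      cinfo->raw_data_out = FALSE;
    } else {
      cinfo->out_color_space = JCS_YCbCr;
      cinfo->raw_data_out = TRUE;             /* planar I420-style output */
    }

    jpeg_start_decompress (cinfo);
  }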
Fixes: e6d83d8f96 ("jpegdec: Support libjpeg-turbo colorspace conversion")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
Pull the peer caps checking code out into gst_jpeg_turbo_parse_ext_fmt_convert().
This code is used by the libjpeg-turbo extras to determine whether the peer is
capable of handling buffers into which libjpeg-turbo can decode data directly.
This kind of check must be performed before jpeg_start_decompress() is called,
in gst_jpeg_dec_prepare_decode() as well as in gst_jpeg_dec_negotiate(), hence
the common code.
This commit modifies the code a little to make it easier to call from both
call sites without much duplication, hence the extra `if (*clrspc)` test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
This reverts commit 2aa2477208.
The commit is completely wrong: libjpeg-turbo is perfectly capable
of decoding I420 (YUV) to RGB. The test case provided alongside the
aforementioned commit passes without this revert because it decodes an
image in the JCS_YCrCb color space, so the new `if (clrspc == JCS_RGB)`
condition is false for that image and the libjpeg-turbo decoding
does not get used. The real bug is hidden by that commit.
The real problem is the call order of gst_jpeg_dec_prepare_decode()
and gst_jpeg_dec_negotiate(). gst_jpeg_dec_prepare_decode() calls
jpeg_start_decompress(), which sets up the internal state of libjpeg;
however, neither cinfo.out_color_space nor cinfo.raw_data_out is set
correctly yet. Those two are set up in gst_jpeg_dec_negotiate(), which
is called a bit later. Therefore, the real fix is to set up
cinfo.out_color_space and cinfo.raw_data_out before calling
jpeg_start_decompress(). That is, however, a separate patch.
Fixes: 2aa2477208 ("jpegdec: only allow conversions from RGB")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
Remove all the d3d11 and dxgi header-version-dependent ifdefs
and bump the minimum requirement to d3d11_4.h and dxgi1_6.h.
We already fail to support old Visual Studio versions (old Windows SDKs,
actually) such as Visual Studio 2015. Note that our MinGW toolchain
satisfies the requirement.
From a runtime point of view, this change should be fine, since we are
checking the OS version with IUnknown::QueryInterface() everywhere in
order to check API availability.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1684>
We make all MSDK encoders declare the "memory:VAMemory" feature. Then
a pipeline such as:
gst-launch-1.0 -vf filesrc location=xxx.h264 ! h264parse ! \
vah264dec ! msdkh265enc ! fakesink
will choose VA memory caps between the VA decoder and MSDK encoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK encoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, because other VA plugins need to query the VA display.
By the way, handling of the "gst.msdk.Context" query is also missing;
the other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
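A rough sketch of answering the context query in a query vfunc; the structure field name is illustrative, the context type string is the "gst.msdk.Context" mentioned above:

  #include <gst/gst.h>

  static gboolean
  handle_context_query (GstQuery * query, gpointer msdk_context)
  {
    const gchar *type = NULL;
    GstContext *context;

    if (GST_QUERY_TYPE (query) != GST_QUERY_CONTEXT)
      return FALSE;

    gst_query_parse_context_type (query, &type);
    if (g_strcmp0 (type, "gst.msdk.Context") != 0)
      return FALSE;

    /* Hand out our MSDK context so neighbours don't have to wait for the
     * bin's context message. */
    context = gst_context_new ("gst.msdk.Context", TRUE);
    gst_structure_set (gst_context_writable_structure (context),
        "msdk-context", G_TYPE_POINTER, msdk_context, NULL);
    gst_query_set_context (query, context);
    gst_context_unref (context);

    return TRUE;
  }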
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK VPP's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, because other VA plugins need to query the VA display.
By the way, handling of the "gst.msdk.Context" query is also missing;
the other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK decoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, because other VA plugins need to query the VA display.
By the way, handling of the "gst.msdk.Context" query is also missing;
the other MSDK elements must depend on the bin's context message
(sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
We can now use the gst VA lib's display to create our MSDK context,
and use its helper functions to simplify our code. The improved logic
is:
1. Every MSDK element should use gst_msdk_context_find() to find an MSDK
context from a neighbour. If a valid one is found, reuse it.
2. Otherwise, use gst_msdk_ensure_new_context(). It will first query
neighbours for a GstVaDisplay; if one is found (e.g. some VA element is
connected), use gst_msdk_context_from_external_display() to create an
MSDK context.
3. Otherwise, create the MSDK context from scratch, which creates both
the display and the MSDK context.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The VA display object from the VA lib is a commonly defined object which
contains all the display state. It is easier to use and, more importantly,
we can share it with the other VA plugins and keep all the VA-related
plugins working on the same GPU device.
We also delete the now-useless gst_msdk_context_get_fd() API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The current way of deciding on a new temporal unit is based on the
temporal delimiter (TD) OBU: we only start a new temporal unit when
a TD comes.
But some streams do not have TDs at all, which makes the output "TU"
alignment fail to work. We now add a check based on the relationship
between the different layers, which can successfully detect the TU edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Some streams may have problematic OBUs at the beginning, which causes
the parser to fail to detect the alignment and return an error. For
example, there may be verbose OBUs before a valid sequence, which should
be discarded until we meet a valid sequence. We should let the parser
continue in such cases rather than just return an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Our D3D11/DXVA codec implementation has been verified
during the 1.18 and 1.20 development cycles and also via the Fluster
test framework. As with nvdec and vtdec,
we can prefer hardware over software in most cases.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1672>
For artificial input (in unit tests), all six bytes of
constraint_indicator_flags in hevc_caps_get_mime_codec() can be
zero. Add a guard against an out-of-bounds error that occurred in that
case. Change variables to signed int so comparison with -1 works.
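A small sketch of the guard; the helper name is illustrative:

  /* Trim trailing zero bytes of the six constraint_indicator_flags bytes.
   * A signed index lets the loop terminate cleanly at -1 when all bytes are
   * zero, as happens with artificial test input. */
  static int
  last_nonzero_constraint_byte (const unsigned char flags[6])
  {
    int i;

    for (i = 5; i >= 0; i--) {
      if (flags[i] != 0)
        break;
    }

    return i;  /* -1 means no constraint bytes need to be emitted */
  }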
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1677>
Duplicating a picture that was already a dup was leading to a crash. Rename
the custom picture flag to HOLDS_BUFFER to make its meaning clear. Then ref
and store the picture as userdata, so it can be obtained when duplicating.
Finally, mark the duplicated picture as HOLDS_BUFFER to avoid thinking it
holds a request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1681>
This patch adds a new parameter, hdr-tone-mapping (the same as in
vaapipostproc), exposed if the HDR capabilities are available in the
driver; it is disabled by default.
If hdr-tone-mapping is enabled, frames are processed from HDR to SDR
according to the HDR fields in the sink caps, and those HDR fields are
removed from the source pad caps too.
hdr-tone-mapping is not enabled if a color conversion is also
requested, since that combination fails to process in the iHD driver,
so far.
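For illustration, enabling the new property from application code could look like this (the element factory name "vapostproc" is assumed):

  #include <gst/gst.h>

  static GstElement *
  make_tonemapping_postproc (void)
  {
    GstElement *vpp = gst_element_factory_make ("vapostproc", NULL);

    /* hdr-tone-mapping is disabled by default */
    if (vpp != NULL)
      g_object_set (vpp, "hdr-tone-mapping", TRUE, NULL);

    return vpp;
  }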
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1258>
1. Use the api_version variable rather than a static string.
2. Remove pkgconfig generation since currently the library
is not installed, only used internally.
3. Rely on dependency "required" to abort compilation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
In commit e699aaeb we moved the linking of libgudev to the plugin rather
than the library, because it's only used in the plugin. But the dependency
check was still done in the library.
This patch removes the dependency check in the library and updates the
dependency check in the plugin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
Constantly updating the ts_offset results in audible glitches
when streaming audio using ntp-sync=true. This can be mitigated by
requiring a minimum offset before updating ts_offset. Add a
parameter which can be used to set min_ts_offset in ntp-sync mode.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1409>
A new implementation of Intel Quick Sync Video plugin.
This plugin supports both Windows and Linux but optimization for
VA/DMABuf is not implemented yet.
This new plugin has some notable differences compared with the existing
MSDK plugin.
* The encoder exposes formats which can be supported natively,
without internal conversion. This makes the encoder
control/negotiation flow much simpler and cleaner than
that of the MSDK plugin.
* This plugin includes a QSV-specific library loading helper,
called the dispatcher, with the QSV SDK headers as a part of this plugin.
So there will be no more SDK-version-dependent #ifdefs in the code
and also no more build-time MSDK/oneVPL SDK
dependency.
* Memory allocator interop between GStreamer and QSV is re-designed
and decoupled. Instead of implementing a QSV-specific allocator/bufferpool,
this plugin makes use of the generic GStreamer memory
allocator/bufferpool infrastructure (e.g., GstD3D11Allocator and
GstD3D11BufferPool). Specifically, the GstQsvAllocator object helps
interop between the GstMemory and mfxFrameAllocator memory abstraction
layers.
Note that because of this design decision, VA/DMABuf support is not part
of this initial commit. We can add the optimization for Linux
later, once the GstVA library exposes an allocator/bufferpool
implementation as an API like GstD3D11 does.
* The initial encoder implementation supports interop with the GstD3D11
infrastructure, including zero-copy encoding with an upstream D3D11 element.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1408>
There's no need to do this, and it can make seeking far less accurate.
For a specific use case: I am working with a long (45-minute) MPEG-1 layer 3 file, which has a constant bit rate but no seeking tables. Trying to seek the pipeline immediately after pausing it, without the ACCURATE flag, to a location 41 minutes in, yields a location that is potentially over ten seconds ahead of where it should be. This patch improves that drastically.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/374>
During dispose the pool will still have a reference count of 1 and all
API on it can still be safely called.
Subclasses will already have freed their own data by the time finalize
is called, but they would nonetheless be called into again via the pool
deactivation.
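A minimal sketch of the idea, deactivating from dispose() rather than finalize() (the GObject type boilerplate is elided; my_pool_parent_class would come from G_DEFINE_TYPE):

  #include <gst/gst.h>

  /* Deactivate from dispose(), while the pool still holds its last reference
   * and subclass state is still intact, instead of from finalize(). */
  static void
  my_pool_dispose (GObject * object)
  {
    GstBufferPool *pool = GST_BUFFER_POOL (object);

    gst_buffer_pool_set_active (pool, FALSE);

    G_OBJECT_CLASS (my_pool_parent_class)->dispose (object);
  }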
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1645>