While this is slightly more expensive (~48% slower per random number), it
does not cause any measurable difference when running through a complete
audio conversion pipeline.
On the other hand, its random numbers are of much higher quality, and on
spectrograms of a 32-bit to 24-bit conversion the difference is clearly
visible.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1729>
The instant-rate value in the TrickMode enum is a
flag, but the other values are not. Move instant-rate
to the end of the enum and give it a value large enough
for it to be used without modifying the trick-mode
setting.
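For illustration, a minimal sketch of that pattern with hypothetical names
(not the actual enum): the flag value sits well above the enumerated modes,
so it can be OR'ed with any of them without modifying the trick-mode setting.
```
/* Hypothetical sketch: the instant-rate flag is placed well above the
 * enumerated trick modes, so combining it never clobbers the mode value. */
typedef enum {
  TRICK_MODE_NONE = 0,
  TRICK_MODE_DEFAULT,
  TRICK_MODE_KEY_UNITS,
  TRICK_MODE_LAST,

  /* a flag, not a mode */
  TRICK_MODE_INSTANT_RATE = (1 << 3)
} TrickMode;

/* e.g. request an instant-rate change while keeping key-units trick mode: */
/* int mode = TRICK_MODE_KEY_UNITS | TRICK_MODE_INSTANT_RATE; */
```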
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1788>
Update the x264enc long-name to be more than just the element name. The
description was also updated to be longer than the long-name and
similar to the plugin description.
Finally, while at it, H264 was replaced by H.264, and x264 is now
referred to as a single plugin (not plugins).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1771>
They can't be used in any useful way. The type of every GstMemory is
always GST_TYPE_MEMORY and the subtyping relationship has to be
implemented on top of that via the associated allocator and mem_type
string.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1764>
Scenario:
- Source 1 requesting and waiting on a clock id
- Source 2 requesting and waiting on a clock id
- Test attempting to crank both sources in the same GstHarness
gst_test_clock_crank() originally dropped locks between retrieving the
next clock id and advancing to it. This meant that both sources would
race each other attempting to complete their clock waits. Sometimes the
operations would be performed in the correct order, other times they
would not, and a FALSE return value would be produced.
This would lead to an assertion in gst_harness_push_from_src(), which
expects all clock cranks to succeed.
Fix by ensuring that the produced clock wait is dealt with before
processing the next one, by not dropping the relevant locks after
retrieving the next clock id.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1299>
This is a minimal unit test showing that the stride extrapolation works
with all the pixel formats we support. It verifies that the extrapolation
matches the stride we set into GstVideoInfo at 320x240 for every
supported pixel format. Tiled formats are skipped, since their stride is
stored as two 16-bit integers, and we also skip over palette planes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
As this element is single-threaded, we only need to stop the objects to
allow changing the format again. This fixes an assertion, notably on
shutdown and in some other situations where the format may be set twice
without actually activating the element.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Many of the legacy APIs, specifically in the Linux kernel, have a
single stride for the pictures. In this context, it is common
to extrapolate the other strides based on the selected pixel
format. Such a function has been copy-pasted from the video4linux2
plugin into the wayland, kms and v4l2codecs plugins.
This patch implements a generalized form of that function and
makes it available to everyone through the video library.
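For illustration, here is a minimal sketch of how such per-plane stride
extrapolation can be done with the existing GstVideoFormatInfo helpers;
this is an assumption-based sketch, not the new library API itself:
```
#include <gst/video/video.h>

/* Sketch: derive a plane's stride from the plane-0 stride by applying the
 * format's horizontal subsampling, as legacy single-stride APIs expect. */
static gint
extrapolate_plane_stride (const GstVideoFormatInfo * finfo, guint plane,
    gint plane0_stride)
{
  gint comp[GST_VIDEO_MAX_COMPONENTS];

  /* find the first component stored in this plane, then scale the stride
   * by that component's horizontal subsampling factor */
  gst_video_format_info_component (finfo, plane, comp);
  return GST_VIDEO_FORMAT_INFO_SCALE_WIDTH (finfo, comp[0], plane0_stride);
}
```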
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Unlike other simple tiled formats, the Mediatek HW uses a different tile
size per plane. The tile size is scaled according to the subsampling.
Effectively, the name 16L32S represents linearly laid-out tiles of
16x32 bytes in the Y plane and 16x16 in the UV plane. In order to make
this specificity discoverable, a new SUBTILES flag has been added.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
If the video4linux device supports norms but has no norm set, norm is
returned as an uninitialized variable after the ioctl call, leading to
gst_v4l2_tuner_get_norm_by_std_id() returning a random norm from the
supported norms. Catch this case and instead return NULL to indicate
that no norm is set up.
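A minimal sketch of that guard, with assumed helper and return types taken
from the surrounding element (lookup_norm_by_std_id is hypothetical):
```
#include <glib.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: always initialise the norm and bail out explicitly when the
 * device reports no norm, instead of reading back an uninitialized value. */
static GstTunerNorm *
get_current_norm (gint video_fd)
{
  v4l2_std_id std_id = V4L2_STD_UNKNOWN;

  if (ioctl (video_fd, VIDIOC_G_STD, &std_id) < 0 || std_id == V4L2_STD_UNKNOWN)
    return NULL;                /* no norm configured: report "none" */

  return lookup_norm_by_std_id (std_id);      /* hypothetical helper */
}
```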
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1625>
We have the d3d11screencapturesrc element in the d3d11 plugin,
which is clearly better than this element in terms of performance
and design, so we don't need to confuse people with two separate elements.
Let's pick the better implementation and remove the unnecessary one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1750>
... instead of round(). Depending on the framerate, the calculated position
may not be exactly representable as a uint64, 30000/1001 for example.
The result of round() can then be slightly larger (by 1 ns) than the
buffer timestamp, and that would cause an unnecessary frame delay.
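A minimal sketch of the idea, with assumed variable names;
gst_util_uint64_scale() truncates, so the computed position can never land
past the buffer timestamp:
```
#include <gst/gst.h>

/* Sketch: position of frame N at fps_n/fps_d (e.g. 30000/1001). The scale
 * helper floors the result, unlike round(), so it never exceeds the
 * corresponding buffer timestamp. */
static GstClockTime
frame_position (guint64 frame_count, gint fps_n, gint fps_d)
{
  return gst_util_uint64_scale (frame_count, (guint64) fps_d * GST_SECOND,
      fps_n);
}
```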
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1747>
It was assumed that the kernel parameters would match the bitstream
values, but instead the author went with another set of values.
Surprisingly, this makes no difference to the resulting Fluster score.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1748>
If a serialized event arrives behind a buffer, it should not be sent before
it. This fixes the pending event handling so that only early pending events,
the ones that arrived or were generated while the adapter was empty, get sent
before pushing the buffer. All other events are now pushed after it.
This issue led the latency tracer to think our audio encoder did not have any
latency. This was tested with opusenc in a live pipeline.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1266>
Each stream may have its own segment timeline
(e.g., different segment.start or segment.base)
depending on the edit-list and composition-to-decode atoms.
Make sure the time position of a stream is actually
far behind that of the current target stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1352>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
Often, users will need to scale inputs (e.g.
with vaapipostproc) before they are submitted
to the vaapioverlay. However, this results in
multiple VPP passes/operations in the pipeline
which creates unnecessary processing overhead.
This change allows for inputs to be submitted
at original scale to vaapioverlay with per-sinkpad
scale dimensions specified so they can be scaled
and blended/composited in a single VPP pass/operation
to avoid the unnecessary processing overhead.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Don't set VAAPI vpp blend flags if alpha == 1.0,
i.e. fully opaque. This can avoid extra processing
overhead on some drivers that apply blending
unconditionally when flags are present, even if the
end result is the same without blend flags (i.e. all
opaque alpha channels).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Note that the AYUV and AYUV64 formats will be used to expand format
support: some packed YUV formats (e.g., Y410, YUY2) are common DXGI
formats used for hardware decoders/encoders on Windows, but those
formats cannot be used as a render target. We need to handle them
differently, without pixel shader help, using a compute shader for
example.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1699>
The libjpeg-turbo internal state might not be correctly initialized for
the first frame in a stream. Pull the frame stride from the GStreamer frame
metadata instead, which is correct even for the first frame and which
makes this code consistent with the surrounding lines.
Fixes: e6d83d8f96 ("jpegdec: Support libjpeg-turbo colorspace conversion")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
It is imperative that the libjpeg-turbo state is properly initialized
before jpeg_start_decompress() is called. Make sure cinfo.out_color_space
and cinfo.raw_data_out are set to their final values matching their peer
caps before calling jpeg_start_decompress().
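For illustration, a simplified sketch of the required ordering (not the
element's actual negotiation code; the chosen colorspace is just an example
and assumes libjpeg-turbo's colorspace extensions):
```
#include <stdio.h>
#include <jpeglib.h>

/* Sketch: the output colorspace fields must be set before
 * jpeg_start_decompress() so libjpeg-turbo initialises its state for the
 * negotiated output format. */
static boolean
start_decode_rgb (j_decompress_ptr cinfo)
{
  cinfo->out_color_space = JCS_EXT_BGRA;  /* direct RGB output, for example */
  cinfo->raw_data_out = FALSE;            /* not raw planar YUV in this case */

  return jpeg_start_decompress (cinfo);
}
```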
Fixes: e6d83d8f96 ("jpegdec: Support libjpeg-turbo colorspace conversion")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
Pull out peer caps checking code into gst_jpeg_turbo_parse_ext_fmt_convert().
This code is used by the libjpeg-turbo extras to determine whether the peer is capable
of handling buffers into which libjpeg-turbo can directly decode data. This
kind of check must be performed before jpeg_start_decompress() is called in
gst_jpeg_dec_prepare_decode() as well as in gst_jpeg_dec_negotiate(), hence
the common code.
This commit does modify the code a little to make it easier to call from both
call sites without much duplication, hence the extra `if (*clrspc)` test.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
This reverts commit 2aa2477208.
The commit is completely wrong: libjpeg-turbo is perfectly capable
of decoding I420 (YUV) to RGB. The test case provided alongside the
aforementioned commit passes without this revert because it decodes an
image in the JCS_YCrCb color space, so the new `if (clrspc == JCS_RGB)`
condition is false for that image, and the libjpeg-turbo decoding
does not get used. The real bug is hidden by that commit.
The real problem is in the call order of gst_jpeg_dec_prepare_decode()
and gst_jpeg_dec_negotiate(). The gst_jpeg_dec_prepare_decode() calls
jpeg_start_decompress() which sets up internal state of the libjpeg,
however, neither cinfo.out_color_space nor cinfo.raw_data_out are
set correctly yet. Those two are set up in gst_jpeg_dec_negotiate()
which is called a bit later. Therefore, the real fix is to set up
cinfo.out_color_space and cinfo.raw_data_out before calling
jpeg_start_decompress(). That is, however, a separate patch.
Fixes: 2aa2477208 ("jpegdec: only allow conversions from RGB")
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1687>
Remove all the d3d11 and dxgi header-version-dependent ifdefs
and bump the minimum requirement to d3d11_4.h and dxgi1_6.h.
We already fail to support old Visual Studio (Windows SDK, actually)
versions such as Visual Studio 2015. Note that our MinGW toolchain
satisfies the requirement.
From a runtime point of view, this change should be fine since
we are checking the OS version with IUnknown::QueryInterface()
everywhere in order to check API availability.
We make all MSDK encoders declare the "memory:VAMemory" feature. Then
a pipeline such as:
gst-launch-1.0 -vf filesrc location=xxx.h264 ! h264parse ! \
vah264dec ! msdkh265enc ! fakesink
will choose VA memory caps between the VA decoder and the MSDK encoder.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK encoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, since other VA plugins need to query the VA display.
By the way, handling of the current "gst.msdk.Context" query is also
missing; the other MSDK elements must depend on the bin's context
message (sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK VPP's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, since other VA plugins need to query the VA display.
By the way, handling of the current "gst.msdk.Context" query is also
missing; the other MSDK elements must depend on the bin's context
message (sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The MSDK decoder's query function is not set, so it just forwards
all queries to its base class. We now need to answer the context
query correctly, since other VA plugins need to query the VA display.
By the way, handling of the current "gst.msdk.Context" query is also
missing; the other MSDK elements must depend on the bin's context
message (sent in context_propagate()) to set their MsdkContext correctly.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
We can now use the GstVA library's display to create our MSDK context,
and use its helper functions to simplify our code. The improved logic
is as follows:
1. Every MSDK element should use gst_msdk_context_find() to find an MSDK
context from a neighbour. If one is found, reuse it.
2. Otherwise, use gst_msdk_ensure_new_context(). It will first query
neighbours for a GstVaDisplay; if one is found (e.g. some VA element is
connected), use gst_msdk_context_from_external_display() to create an
MSDK context.
3. Finally, create the MSDK context from scratch. This creates both the
display and the MSDK context.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The VA display object from the VA library is a commonly defined object
which contains all the display state. It is easier to use and, more
importantly, we can share it with the other VA plugins and keep all the
VA-related plugins working on the same GPU device.
We also delete the now-useless gst_msdk_context_get_fd() API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1087>
The current way of deciding on a new temporal unit is based on the
temporal delimiter (TD) OBU: we only start a new temporal unit when
a TD comes.
But some streams do not have a TD at all, which makes the output "TU"
alignment fail to work. We now add a check based on the relationship
between the different layers, which can successfully detect the TU edge.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Some streams may have problematic OBUs at the beginning, which cause
the parser to fail to detect the alignment and return an error. For
example, there may be extraneous OBUs before a valid sequence, which
should be discarded until we meet a valid sequence. We should let the
parser continue in such cases rather than just returning an error.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1634>
Our D3D11/DXVA codecs implementation has been verified
during the 1.18 and 1.20 development cycles and also via the Fluster
test framework. Similar to the case of nvdec and vtdec,
we can prefer hardware over software in most cases.
For artificial input (in unit tests), all six bytes of
constraint_indicator_flags in hevc_caps_get_mime_codec() can be
zero. Add a guard against an out-of-bounds error that occurred in that
case. Change variables to signed int so comparison with -1 works.
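For illustration, a sketch of such a guard (simplified, with assumed names);
with a signed index, the all-zero case cleanly resolves to -1 instead of
wrapping and reading out of bounds:
```
#include <glib.h>

/* Sketch: find the last non-zero constraint byte. A signed index lets the
 * "every byte is zero" case end at -1 rather than underflowing. */
static gint
last_nonzero_constraint_byte (const guint8 flags[6])
{
  gint i;                       /* signed on purpose */

  for (i = 5; i >= 0; i--) {
    if (flags[i] != 0)
      return i;
  }
  return -1;                    /* artificial input: all six bytes are zero */
}
```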
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1677>
Duplicating a picture that was already a duplicate was leading to a crash.
Rename the custom picture flag to HOLDS_BUFFER to make its meaning clear.
Then ref and store the picture as userdata, so it can be obtained when
duplicating. Finally, mark the duplicated picture as HOLDS_BUFFER to avoid
thinking it holds a request.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1681>
This patch adds a new parameter: hdr-tone-mapping (same as
vaapipostproc), available if the HDR capabilities are present in the
driver, and disabled by default.
If hdr-tone-mapping is enabled, the HDR fields in the sink caps are
used to process frames from HDR to SDR, and those HDR fields are
removed from the source pad caps too.
hdr-tone-mapping is not enabled if a color conversion is also
requested, since that combination fails in the iHD driver, so far.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1258>
1. Use api_version variable rather than static string.
2. Remove pkgconfig generation since currently the library
is not installed, only used internally.
3. Rely on dependency "required" to abort compilation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
In commit e699aaeb we moved linking of libgudev to the plugin rather
than the library, because it's only used in the plugin. But the
dependency check is still done in the library.
This patch removes the dependency check in the library and updates the
dependency check in the plugin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1650>
Constantly updating the ts_offset results in audible glitches
when streaming audio using ntp-sync=true. By requiring a minimum
offset before updating ts_offset this can be mitigated. Added a
parameter which can be used to set min_ts_offset in ntp-sync mode.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1409>
A new implementation of Intel Quick Sync Video plugin.
This plugin supports both Windows and Linux but optimization for
VA/DMABuf is not implemented yet.
This new plugin has some notable differences compared with the existing
MSDK plugin.
* Encoder will expose formats which can be natively supported
without internal conversion. This will make encoder
control/negotiation flow much simpler and cleaner than
that of MSDK plugin.
* This plugin includes QSV specific library loading helper,
called dispatcher, with QSV SDK headers as a part of this plugin.
So, there will be no more SDK-version-dependent #ifdefs in the code
and also there will be no more build-time MSDK/oneVPL SDK
dependency.
* Memory allocator interop between GStreamer and QSV is re-designed
and decoupled. Instead of implementing QSV specific allocator/bufferpool,
this plugin will make use of generic GStreamer memory
allocator/bufferpool (e.g., GstD3D11Allocator and GstD3D11BufferPool).
Specifically, GstQsvAllocator object will help interop between
GstMemory and mfxFrameAllocator memory abstraction layers.
Note that because of the design decision, VA/DMABuf support is not made
as a part of this initial commit. We can add the optimization for Linux
later once GstVA library exposes allocator/bufferpool implementation as
an API like GstD3D11.
* Initial encoder implementation supports interop with GstD3D11
infrastructure, including zero-copy encoding with upstream D3D11 element.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1408>
There's no need to do this, and it can make seeking far less accurate.
For a specific use case: I am working with a long (45-minute) MPEG-1 layer 3 file, which has a constant bit rate but no seeking tables. Trying to seek the pipeline immediately after pausing it, without the ACCURATE flag, to a location 41 minutes in, yields a location that is potentially over ten seconds ahead of where it should be. This patch improves that drastically.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/374>
During dispose the pool will still have a reference count of 1 and all
API on it can still be safely called.
Subclasses will have already freed their own data before finalize is
called but would nonetheless be called into again via the pool
deactivation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1645>
It's almost pointless and makes little sense, as a subclass might
want to modify the refcount of the object, for example. And all
subclasses are already casting them to the non-const version as well.
In a general sense, we need to avoid passing refcounted objects
with a const qualifier.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1238>
... in order to make older g-i (~1.60) happy, which doesn't like
freeform descriptions in the value_name field. That in turn
makes hotdoc happy instead of erroring out when we bump
the symbol index version.
We usually only (ab)use the name field for description strings
for private plugin enums, not for public API visible to bindings.
This lets glib-mkenums generate the _get_type() function for the
enum again, which in turn will generate the expected value names
to match the enums.
We might be able to add this back later once we can upgrade the
g-i version requirement (and the documentation job image).
This reverts most of commit b0aab48cdcf0a454d14aeb4d907209d8ee3f1add
There's a race condition in gsttagdemux.c between typefinding and the
end-of-stream event. If TYPE_FIND_MAX_SIZE is exceeded,
demux->priv->collect is set to NULL and an error is returned. However,
the end-of-stream event causes one last attempt at typefinding to occur.
This leads to gst_tag_demux_trim_buffer() being called with the NULL
demux->priv->collect buffer which it attempts to dereference, resulting
in a segfault.
The malicious MP3 can be created by:
printf "\x49\x44\x33\x04\x00\x00\x00\x00\x00\x00%s", \
"$(dd if=/dev/urandom bs=1K count=200)" > malicious.mp3
This creates a valid ID3 header which gets us as far as typefinding. The
crash can then be reproduced with the following pipeline:
gst-launch-1.0 -e filesrc location=malicious.mp3 ! queue ! decodebin ! audioconvert ! vorbisenc ! oggmux ! filesink location=malicious.ogg
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/967
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1620>
The kCVPixelFormatType_64RGBALE enum is only available on macOS Big
Sur (11.3) and newer. We also cannot use that while configuring the
encoder or decoder on older macOS.
Define the symbol unconditionally, but only use it when we're running
on Big Sur with __builtin_available().
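A sketch of the pattern (the local constant name and fallback are
hypothetical, and the FourCC value is an assumption to be checked against
the CoreVideo headers):
```
#include <CoreVideo/CoreVideo.h>

/* Define our own constant so the code builds against older SDKs too. */
#define GST_CVPIXELFORMAT_64RGBALE 'l64r'   /* assumed value of kCVPixelFormatType_64RGBALE */

static OSType
preferred_rgba64_format (void)
{
  if (__builtin_available (macOS 11.3, *))
    return GST_CVPIXELFORMAT_64RGBALE;      /* safe: running on 11.3 or newer */

  return kCVPixelFormatType_64ARGB;         /* hypothetical fallback for older macOS */
}
```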
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1613>
Hotdoc should be able to extract and parse comments out of these. Just
need to be careful to only add the glob in directories that actually
contain *.m (objc) and *.mm (objcpp) files.
Also fix some doc comments and remove redundant ones.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1614>
Platform-wise it is not possible, as far as I know, to have Wayland
without the kernel's DRM. And though it's possible to configure
gstreamer-vaapi without DRM but with Wayland support, with the enhanced
handling of dmabuf in vaapisink for Wayland, vaapisink will always
fail. Given both issues, a configuration with no DRM but with Wayland
makes things more complex, and a simpler approach is to refuse that
configuration.
This patch disables Wayland support if there isn't DRM support. Also,
it disables the display test for Wayland, relying only on DRM and
X11.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1606>
The formats map is instantiated at the end of the display
instantiation. The problem is that the Wayland display looks for a
format in a callback, before the map is populated.
If the user compiles gstreamer-vaapi with DRM support, the map is
populated with a DRM display at GStreamer plugin registration. But if
not, or if a VA driver is not available, the plugin will try with a
Wayland driver, which causes the NULL dereference.
Nevertheless, in the case of no DRM support, it is not a problem if
the Wayland display doesn't get a reply from the format conversion.
So the solution is the trivial one: check if the format map is already
populated before dereferencing it.
Fixes: #977
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1606>
This is listed as a public interface implemented by osxaudio, so we
need to mark it as a plugin API so that it's listed in the
documentation correctly.
This is an ancient symbol, so add it to the symbol index too.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1601>
These symbols from macOS plugins osxaudio, osxvideo, and applemedia
have been present for a very long time but were never documented.
This allows us to document these, and also add Since: markers for new
features (symbols) that were added in 1.20.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1601>
This moves the ABI check to registration, so we don't expose
decoders with the wrong ABI or that are just broken somehow. It
also makes a few enhancements:
- Handle missing, but required, controls
- Print the control's macro name instead of its id
This should fix RK3399 support with a currently released minor
regression in the Hantro driver that causes errors.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1599>
... in addition to all the other issues that were ignored for them
already.
At least the reverse playback test regularly times out, waiting for a
download to finish that has already finished successfully.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1593>
Starting with libsoup3, there is no attempt to handle thread safety
inside the library, and it was never considered fully safe before
either. Therefore, move all session handling into its own thread.
The libsoup thread has its own context and main loop. When some
request is made or a response needs to be read, an idle source
is created to issue that; the gstreamer thread issuing that waits
for that to be complete. There is a per-src condition variable to
deal with that.
Since the thread/loop needs to be longer-lived than the soup
session itself, a wrapper object is provided to contain them. The
soup session only has a single reference, owned by the wrapper
object.
It is no longer possible to force an external session, since this
does not seem to be used anywhere within gstreamer and would be
tricky to implement; this is because one would have to provide not
just a session, but also the complete thread arrangement made in
the same way as the system currently does internally, in order to
be safe.
Messages are still built gstreamer-side. It is safe to do so until
the message is sent on the session. Headers are also processed on
the gstreamer side, which should likewise be safe.
All requests as well as reads on the libsoup thread are issued
asynchronously. That allows libsoup to schedule things with as
little blocking as possible, and means that concurrent access
to the session is possible, when sharing the session.
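A rough sketch, with hypothetical names, of the dispatch pattern described
above: work is pushed onto the libsoup thread's context via an idle source
while the GStreamer thread waits on a per-src condition variable (in the
real element, completion would typically be signalled from the asynchronous
libsoup callback rather than from the idle callback itself):
```
#include <glib.h>

typedef struct {
  GMutex lock;
  GCond cond;
  gboolean done;
  /* request/response state would live here */
} SoupJob;

/* Runs on the libsoup thread (its own GMainContext/GMainLoop). */
static gboolean
do_request_cb (gpointer user_data)
{
  SoupJob *job = user_data;

  /* ... issue the request on the session here (asynchronously) ... */

  g_mutex_lock (&job->lock);
  job->done = TRUE;
  g_cond_signal (&job->cond);
  g_mutex_unlock (&job->lock);

  return G_SOURCE_REMOVE;
}

/* Called from the GStreamer streaming thread. */
static void
run_on_soup_thread (GMainContext * soup_context, SoupJob * job)
{
  g_main_context_invoke (soup_context, do_request_cb, job);

  g_mutex_lock (&job->lock);
  while (!job->done)
    g_cond_wait (&job->cond, &job->lock);
  g_mutex_unlock (&job->lock);
}
```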
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/947
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1555>
Intel drivers expose some colorbalance maximum values that are much
bigger than their minimum values, relative to their middle (default)
values. This means, in practice, that the real middle point between
the maximum and minimum values implies a major change in the color
balance, which is not expected by the GStreamer color balance logic.
This patch makes the reported maximum value symmetrical to the minimum
value around the middle (default) one.
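A minimal sketch of the arithmetic (assumed names): for example min=0,
default=50, max=300 gets clamped to max=100.
```
#include <glib.h>

/* Sketch: clamp the reported maximum so it is symmetrical to the minimum
 * around the default (middle) value. */
static gfloat
symmetric_maximum (gfloat min, gfloat def, gfloat max)
{
  return MIN (max, def + (def - min));
}
```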
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1580>
It was getting pulled in automatically via cairo, but the version
there is too old for us. We need the latest to fix Windows ARM64 NEON
support:
> ERROR: No specified compiler can handle file subprojects\libpng-1.6.37\arm/filter_neon.S
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1570>
As of now, we warn if the decoder has no supported src pixel format, but
that warning is emitted before the type (hence the debug category) is
initialized. Fix this by moving the debug category init out of the type
initialization, into the register functions.
This will fix an assertion that occurs in the register function and allow
the relevant log to be seen by users.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1588>
Those will cause us to renegotiate at the next aggregate cycle,
and while at that point we may decide to reconfigure upstream
branches (in practice we don't as this is inherently racy,
and that's the reason why mixer subclasses perform conversion
internally), we certainly don't want to just forward the event
willy-nilly to all our sinkpads.
An actual issue this is fixing is when caps downstream of a
compositor are changed at every samples-selected signal emission,
for the purpose of interpolating the output geometry, and the
compositor has a non-zero latency, the reconfigure events were
forwarded to basesrc, which triggered an allocation query, which
in turn caused aggregator to have to drain (thus not being able
to queue <latency> frames), leading to disastrous effects
(choppy output as the compositor couldn't consume frames fast enough;
the higher the latency, the choppier the output).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1464>
The merge request workflow can be confusing to people unfamiliar with
it, so add screenshots.
Also add a new section on how to revise merge requests, since a lot of
people tend to open new merge requests to make any changes.
Eliminate the separate "How to Prepare a Merge Request for Submission"
section -- merge it with the main section.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1367>
This is usually necessary to allow gst-indent to treat it as
a statement, but we do not run gst-indent on headers, and we do not
have extra semicolons in other places where this macro is used in
headers. Fixes warnings when using the header:
```
In file included from gstreamer/subprojects/gst-plugins-base/gst-libs/gst/video/video.h:185,
from XYZ:9001:
gstreamer/subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.h:206:78: warning: ISO C does not allow extra ‘;’ outside of a function [-Wpedantic]
206 | G_DEFINE_AUTOPTR_CLEANUP_FUNC(GstVideoAggregatorConvertPad, gst_object_unref);
| ^
gstreamer/subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.h:214:181: warning: ISO C does not allow extra ‘;’ outside of a function [-Wpedantic]
214 | G_DECLARE_DERIVABLE_TYPE (GstVideoAggregatorParallelConvertPad, gst_video_aggregator_parallel_convert_pad, GST, VIDEO_AGGREGATOR_PARALLEL_CONVERT_PAD, GstVideoAggregatorConvertPad);
| ^
```
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1572>
When fixating color, there might be "other caps" with color spaces not
supported by the caps features exposed in the vapostproc's source pad
caps template (perhaps it's a bug somewhere else in GStreamer).
This solution checks if the proposed format is supported by the filter
for the caps feature associated with the proposed format.
The check is done with the new filter's function
gst_va_filter_has_video_format().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1559>
In the need-data appsrc callback, a buffer is pulled from the
appsink. This buffer is then copied so that metadata is writable.
The copy is pushed to the appsrc but it doesn't take ownership
of the buffer so we need to manually unref it. The original buffer
is finally unreffed when the sample is freed.
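For illustration, a simplified sketch of that need-data handling (names
assumed): the "push-buffer" action signal takes its own reference, so the
copy must be unreffed here, and the original buffer is released together
with the sample.
```
#include <gst/gst.h>

static void
on_need_data (GstElement * appsrc, guint length, gpointer user_data)
{
  GstElement *appsink = user_data;
  GstSample *sample = NULL;
  GstBuffer *buffer, *copy;
  GstFlowReturn ret;

  g_signal_emit_by_name (appsink, "pull-sample", &sample);
  if (sample == NULL)
    return;

  buffer = gst_sample_get_buffer (sample);
  copy = gst_buffer_copy (buffer);      /* copy so metadata is writable */

  g_signal_emit_by_name (appsrc, "push-buffer", copy, &ret);
  gst_buffer_unref (copy);              /* push-buffer did not take ownership */

  gst_sample_unref (sample);            /* drops the original buffer */
}
```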
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1548>
This is due to an unsafe usage of the pad task: we didn't ensure proper
ownership of the task. That race involved the task being released too early,
and was detected, luckily, by the GLib mutex implementation, which
reported the mutex being disposed while still locked.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1478>
On Android (especially) and for static builds in general it is safer to link
against libsoup and have the dynamic custom loading disabled. For those cases we
can safely assume the application will use either libsoup2 or libsoup3 and not
both.
Fixes #939
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1536>
The earlier size of 2 MB was set back in 2009; it doesn't
seem unreasonable to raise it to 8 MB these days. The use
case at hand is matroskademux handling a file containing both a video
stream with a very low amount of compression but no decoding latency,
and an H.265 stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1538>
It is an extremely common mistake on Windows to have incorrect PATH
values when loading a plugin, and the error from g_module_error()
(which just calls FormatMessageW()) is very confusing in this case:
The specified module could not be found.
https://docs.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-#ERROR_MOD_NOT_FOUND
It implies the plugin itself could not be found. The actual issue is
that a DLL dependency could not be found. We need to detect this case
and print a more useful error message.
We should still print the error fetched from FormatMessage() so that
people are able to google for it.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1540>
GstAudioRingBufferSpec::segsize has been configured using the
device period, but GstWasapi2RingBuffer was referencing the
buffer size returned by IAudioClient::GetBufferSize(),
which is most likely larger than the device period.
Fix this by keeping them in sync.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1533>
If the `area_surface` got unmapped when changing to the `READY` or
`NULL` state, we currently don't remap it when playback resumes and
`wp_viewporter` is supported. Without `wp_viewporter` we do remap
it, but rather unintentionally and also when not wanted.
On Weston this has not been a big problem as it so far wrongly maps
subsurfaces of unmapped surfaces anyway - i.e. only the black
background was missing on resume. On other compositors and future
Weston this prevents the `video_surface` from getting remapped.
Shuffle things around to ensure `area_surface` is mapped in the
right situations and do some minor cleanup.
See also https://gitlab.freedesktop.org/wayland/weston/-/issues/426
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1483>
Latest ffmpeg has removed avcodec_get_context_defaults(), and its
documentation says a new AVCodecContext should be allocated for this
purpose. The pointer returned by avcodec_find_decoder() is now
const-qualified so we also need to adjust for it. And, AVCOL_RANGE_MPEG
is now rejected with strict_std_compliance > FF_COMPLIANCE_UNOFFICIAL.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1531>
The latter, doing damage in surface coordinates instead of buffer
coordinates, has been deprecated. The reason for that is that it
is more prone to bugs, both on the client and the compositor side,
especially when paired with buffer scale, `wp_viewporter` or
buffer transforms.
Unfortunately, on Weston this risks running into
https://gitlab.freedesktop.org/wayland/weston/-/issues/446
(which causes trouble for several other projects as well). However,
that bug only affects cases where we run in sync mode, i.e. only
during resizes. In practice I haven't been able to observe the
issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
Each time we call `wl_surface_damage()` we want to do full surface
damage. Like Mesa, just use `G_MAXINT32` to ensure we always do
full damage, reducing the need to track the right dimensions.
`window->video_rectangle` is now unused, but we keep it around for
now as we may need it again in the future.
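A minimal sketch of the call (assumed surface variable):
```
#include <glib.h>
#include <wayland-client.h>

/* Sketch: always report full-surface damage; the compositor clips it to
 * the actual surface size, so no dimension tracking is needed. */
static void
damage_whole_surface (struct wl_surface *surface)
{
  wl_surface_damage (surface, 0, 0, G_MAXINT32, G_MAXINT32);
}
```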
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
From the spec:
> This request is used to describe the regions where the pending
> buffer is different from the current surface contents
We currently also call `wl_surface_damage()` on surfaces without
new or still compositor-held buffers, e.g. when resizing the window.
In that case we call it on `area_surface_wrapper`, even though it
gets resized via `wp_viewport_set_destination()`, in which case
the compositor is in charge of repainting the area on screen.
Doing so is currently not forbidden by the spec, however it might
be in the future, see
https://gitlab.freedesktop.org/wayland/wayland/-/issues/267
Thus let's stay close to the spec and only call `wl_surface_damage()`
when we just attached a buffer.
Right now this prevents runtime assertions in Mutter.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
`gst_wl_window_set_opaque` does not get called on window resizes,
potentially leaving opaque regions too small.
According to the spec opaque regions can be bigger than the surface
size - parts that fall outside of the surface will get ignored.
Thus we can simply use `G_MAXINT32` and be sure that the whole
surface will always be covered.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
If the VANC track does contain packets, but we skip over all packets, just
treat it the same as if there hadn't been any packets at all and send a
GAP event instead of erroring out with "Failed to handle essence element".
We would error out because, when we reach the end of the loop without having
found a closed caption packet, the flow return variable is still FLOW_ERROR,
which is what it has been initialised to.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1518>
Though profiles[0] is initialised to GST_H265_PROFILE_INVALID in
gst_h265_profile_tier_level_get_profile(), the profile detection may
change its content later. So the returned profiles[0] may not be an
invalid profile even when len is 0.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1517>
The previous code was mistakenly trying to compute a cc_type out
of the first byte in the byte triplet, whereas it is to be interpreted
as:
> Bit b7 of the LINE value is the field number (0 for field 2; 1 for field 1).
> Bits b6 and b5 are 0. Bits b4-b0 form a 5-bit unsigned integer which
> represents the offset
The same mistake was made when creating padding packets.
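For illustration, a sketch of decoding that byte per the quoted layout (not
the parser's exact code):
```
#include <glib.h>

/* Sketch: the first byte of the triplet carries a field flag and a 5-bit
 * line offset; bits b6 and b5 are zero. It is not a cc_type. */
static void
parse_line_byte (guint8 b, guint * field, guint * line_offset)
{
  *field = (b >> 7) & 0x01;     /* 1 = field 1, 0 = field 2 */
  *line_offset = b & 0x1f;      /* bits b4-b0: 5-bit unsigned offset */
}
```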
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1496>
Sometimes we can't output anything because we don't have enough
incoming frames. In that case, the resampler was trying to call
do_quantize() and do_resample() in a loop forever because there would
never be samples to output (so chain->samples would always be NULL).
Fix this by not calling chain->make_func() in a loop -- seems
completely unnecessary since calling it over and over won't change
anything if the make_func() can't output samples.
Also add some checks for the input and / or output being NULL when
doing conversion or quantization. This will happen when we have
nothing to output.
We can't bail early, because we need resampler->samples_avail to be
updated in gst_audio_resampler_resample(), so we must call that and
no-op everything along the way.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1461>
When the image is opaque but the output ProRes format has an alpha
component (4 component, 32 bits per pixel), Apple requires that we
signal that it should be ignored by setting the depth to 24 bits per
pixel. Not doing so causes the encoded files to fail validation.
So we set that in the caps and qtmux sets the depth value in the
container, which will be read by demuxers so that decoders can skip
those bytes entirely. qtdemux does this, but vtdec does not use this
information at present.
The sister change was made in qtmux and qtdemux in:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1061
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1489>
Old "application/*" are now as per RFC8081 deprecated in favor of
new "font/*" mime types. Some new encoders are already using the
updated mime types. We need to also add them to the support list
in order for assrender to correctly identify them as fonts.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1481>
If there are no jitterbuffer stats, we should not attempt to store them in
the global stats structure.
Also add a g_return_if_fail in _gst_structure_take_structure() about this
because it is a programmer error to pass an invalid pointer address there.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1479>
Add gst_dep to gst_rtsp_server_deps, in the context of buildroot, this
will avoid the following build failure, because the correct girdir
location will be retrieved from gstreamer-1.0.pc:
/home/giuliobenetti/autobuild/run/instance-3/output-1/host/riscv32-buildroot-linux-gnu/sysroot/usr/bin/g-ir-compiler gst/rtsp-server/GstRtspServer-1.0.gir --output gst/rtsp-server/GstRtspServer-1.0.typelib --includedir=/usr/share/gir-1.0
Could not find GIR file 'Gst-1.0.gir'; check XDG_DATA_DIRS or use --includedir
error parsing file gst/rtsp-server/GstRtspServer-1.0.gir: Failed to parse included gir Gst-1.0
If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the .mk file should help.
Typically like this: PKG_MAKE_ENV += GIR_EXTRA_LIBS_PATH="$(@D)/.libs"
Fixes:
- http://autobuild.buildroot.org/results/04af6b22cfa0cffb6a3109a3b32b27137ad2e0b0
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1460>
Prior to this patch, we considered that a stream was blocking
whenever a pad probe was triggered for either the RTP pad or
the RTCP pad.
This led to situations where we subsequently unblocked and expected
to find a segment on the RTP pad, which was racy.
Instead, we now only consider that the stream is blocking when
the pad probe for the RTP pad has triggered with a blockable object
(buffer, buffer list, gap event).
The RTCP pad is simply blocked without affecting the state of the
stream otherwise.
Fixes #929
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1452>
Instead of a sequence of if statements, declare a table mapping profile
idc to profiles and traverse it.
Also, first add the profile from the parsed profile idc, and later add,
into the profile array, the profile from the compatibility flags.
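An illustrative sketch of the table-driven mapping (only a few entries
shown, not the full table from the patch):
```
#include <gst/codecparsers/gsth265parser.h>

static const struct
{
  guint8 profile_idc;
  GstH265Profile profile;
} profile_map[] = {
  {GST_H265_PROFILE_IDC_MAIN, GST_H265_PROFILE_MAIN},
  {GST_H265_PROFILE_IDC_MAIN_10, GST_H265_PROFILE_MAIN_10},
  {GST_H265_PROFILE_IDC_MAIN_STILL_PICTURE, GST_H265_PROFILE_MAIN_STILL_PICTURE},
};

/* Sketch: look up the profile for a parsed profile_idc. */
static GstH265Profile
profile_from_idc (guint8 profile_idc)
{
  guint i;

  for (i = 0; i < G_N_ELEMENTS (profile_map); i++) {
    if (profile_map[i].profile_idc == profile_idc)
      return profile_map[i].profile;
  }
  return GST_H265_PROFILE_INVALID;
}
```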
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
It's possible for an HEVC stream to have multiple profiles given the
compatibility bits. Instead of returning a single profile, the internal
gst_h265_profile_tier_level_get_profiles() returns an array with all
its possible profiles.
Profiles are appended to the array only if the generated profile
is not invalid.
gst_h265_profile_tier_level_get_profile() is rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), returning the first
profile found in the array.
And gst_h265_get_profile_from_sps() is also rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), but traverses the array
verifying whether the proposed profile is actually valid per Annex
A.3.x of the specification.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
BT.2020 color primaries are designed to cover a much wider range of
CIE chromaticity than BT.709, and they are used for both SDR and HDR
content. So the incorrect assumption (i.e., treating BT.709 as BT.2020)
is risky, and the resulting image color tends to be visually very wrong.
Unless there's an obvious clue, don't consider the color space of a
high-resolution video stream to be BT.2020.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1445>
* Add fec / red encoders as direct children of webrtcbin, instead
of providing them to rtpbin through the request-fec-encoder signal.
That is because they need to be placed before the rtpfunnel, which
is placed upstream of rtpbin.
* Update configuration of red decoders to set a list of RED payloads
on them, instead of setting the pt property.
That is because there may be one RED pt per media in the same session.
* Connect to request-fec-decoder-full instead of request-fec-decoder,
in order to instantiate FEC decoders according to the payload type
of the stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
When multiple streams are bundled together, there may be more
than one red payload type to handle.
In addition, as the red decoder works by filling in gaps in
the seqnums, there needs to be one rtp_history queue per sequence
domain.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
In redenc, when input buffers have a header for the TWCC extension,
we now add one to our wrapper buffers.
In ulpfecenc we add one in that case to our protection buffers.
This makes TWCC functional when UlpRed is used in webrtcbin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1414>
In order to allow "level-asymmetry-allowed" we now handle a new
"profile" field, which has the same semantics as the "profile" field in
an H.264 stream, so that we can force the payloaded stream to have the
right format when using `gst_sdp_media_get_caps_from_media` to set a
caps filter after the payloader. This allows simple negotiation in
standard SDP-based RTP negotiation (like WebRTC) for that particular
case, closely respecting the specs.
The ["level-asymmetry-allowed"] field states that the peer wants the
profile specified in the "profile-level-id" fields but doesn't care
about the level. To express this in GStreamer caps terms, we add a
"profile" field to the caps, which reuses the usual "profile" semantics
for H.264 streams, and remove the "profile-level-id" and
"level-asymmetry-allowed" fields.
["level-asymmetry-allowed"]: https://www.iana.org/assignments/media-types/video/H264
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1410>
We are querying supported swapchain colorspaces via
CheckColorSpaceSupport(), but it doesn't seem to be reliable.
Use only tested full-range RGB formats which are:
- sRGB
- BT709 primaries with linear RGB
- BT2020 primaries with PQ gamma
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1433>
When using playbin3, it seems that the alpha decode is always first to
push caps and run an allocation query. As the format changes from the sink
and alpha pads were not synchronized, the allocation query could end up
being run before the caps are pushed. That may lead to a failing query,
which makes the decoder think there is no GstVideoMeta downstream and
most likely CPU-copy the frame.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
This adds the alignment field to the template caps. Without this field
set, the auto-plugger will see fixed caps and will use
gst_caps_is_subset() against the caps produced by the parser. This is a
challenge for all cases where a parser can do conversion. This is fixed
by adding the alignment field, which makes the auto-pluggers do an
intersection of the caps, as they now get unfixed caps to intersect.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>
XNextEvent() blocks indefinitely in absence of X11 events, which can
prevent the pipeline from stopping.
This can cause problems when ximagesrc is used in "remote desktop"
scenarios and the GStreamer application itself, through which the user
is viewing and controlling the machine, is the only source of input
events.
Replace the call with non-blocking XCheckTypedEvent().
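A minimal sketch of the non-blocking variant (assumed names; not the
element's exact loop):
```
#include <X11/Xlib.h>

/* Sketch: drain pending events of one type without blocking, so the
 * streaming thread can notice a stop request even when no X11 events
 * arrive. */
static void
handle_pending_events (Display * display, int event_type)
{
  XEvent ev;

  while (XCheckTypedEvent (display, event_type, &ev)) {
    /* ... process the event ... */
  }
}
```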
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1438>
Since `gst_caps_replace()` and `gst_pad_set_caps()` both ref the caps and
neither of them takes ownership of the caps, the caps must be unreffed in
`gst_multi_file_src_set_property()`.
To test the leak (on Unix): `echo coucou > /tmp/file.txt && GST_TRACERS=leaks GST_DEBUG="GST_TRACER:7" gst-launch-1.0 multifilesrc location=/tmp/file.txt caps='txt' ! fakesink`
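A simplified sketch of the fixed set_property path (struct layout and pad
accessors assumed, not the element's exact code):
```
#include <gst/gst.h>
#include <gst/base/gstbasesrc.h>

/* Sketch: both gst_caps_replace() and gst_pad_set_caps() take their own
 * reference, so the reference we obtained from the GValue must be dropped. */
static void
set_caps_property (GstBaseSrc * src, GstCaps ** stored_caps,
    const GValue * value)
{
  GstCaps *new_caps = gst_caps_copy (gst_value_get_caps (value));

  gst_caps_replace (stored_caps, new_caps);
  gst_pad_set_caps (GST_BASE_SRC_PAD (src), new_caps);
  gst_caps_unref (new_caps);    /* fixes the leak */
}
```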
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1436>
When it is detected that the remote time has been reset, which may occur if
the remote device providing the clock server has been power-cycled, the clock
is no longer synced. Setting the clock state will trigger a signal to the
client informing it of the sync loss, making it possible to take appropriate
action.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/975>
Due to a copy-paste bug, the bitdepth was never set, and that was leading
to requesting a sizeimage of 0. Previously that worked, since the driver
would in that case pick a size for us. But now that we bumped the minimum
to 4KB, the driver happily allocates 4KB of bitstream, which leads to
decoding errors.
As MPEG2 has a fixed bitdepth of 8, use a define instead of the run-time
variable.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1415>
vaapidecode is used in vaapidecodebin and it exposes all the
theoretically supported caps, but that slows down autoplugging. With this
change, autoplugging negotiates faster, giving decodebin more opportunity
to select another decoder.