Scenario:
- Source 1 requesting and waiting on a clock id
- Source 2 requesting and waiting on a clock id
- Test attempting to crank both sources in the same GstHarness
gst_test_clock_crank() originally dropped its locks between retrieving
the next clock id and advancing to it. As a result, the two sources
would race each other attempting to complete their clock waits.
Sometimes the operations would be performed in the correct order, other
times they would not, and a FALSE return value would be produced.
This would trip an assertion in gst_harness_push_from_src(), which
expects all clock cranks to succeed.
Fix this by not dropping the relevant locks after retrieving the next
clock id, so that each clock wait is dealt with before the next one is
processed.
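A sketch of the affected pattern using the harness API (simplified: the
real test drives two waiting sources through the same harness):

    GstHarness *h = gst_harness_new ("identity");

    /* attach a live source whose pushes are driven by test-clock waits */
    gst_harness_add_src_parse (h, "audiotestsrc is-live=1", TRUE);

    /* gst_harness_push_from_src() cranks the test clock once and pushes
     * the resulting buffer; it asserts that the crank succeeded, which
     * is exactly the assertion the race used to trip */
    gst_harness_push_from_src (h);

    gst_harness_teardown (h);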
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1299>
This is a minimal unit test showing that the stride extrapolation works
with all the pixel formats we support. It verifies that the extrapolated
stride matches the stride set into GstVideoInfo at 320x240 for every
supported pixel format. Tiled formats are skipped, since their stride is
encoded as two 16-bit integers, and we also skip over palette planes.
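The core of the check could look like this sketch, assuming the new
helper is named gst_video_format_info_extrapolate_stride() (see the
library commit below); the real test loops over all supported formats:

    GstVideoInfo info;
    guint plane;

    gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 320, 240);

    for (plane = 0; plane < GST_VIDEO_INFO_N_PLANES (&info); plane++) {
      /* extrapolate every plane stride from the first plane's stride
       * and compare against what GstVideoInfo computed */
      gint stride = gst_video_format_info_extrapolate_stride (info.finfo,
          plane, GST_VIDEO_INFO_PLANE_STRIDE (&info, 0));
      g_assert_cmpint (stride, ==, GST_VIDEO_INFO_PLANE_STRIDE (&info, plane));
    }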
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
As this element is single threaded, we only need to stop the objects to
allow changing the format again. This fixes an assertion, notably on
shutdown and in some other situations where the format may be set twice
without the element actually being activated.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Many of the legacy APIs, specifically in the Linux kernel, expose a
single stride per picture. In this context, it is common to extrapolate
the other strides from the selected pixel format. Such a function had
been copy-pasted from the video4linux2 plugin into the wayland, kms and
v4l2codecs plugins. This patch implements a generalized form of that
function and makes it available to everyone through the video library.
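A sketch of how a plugin can use it, assuming the helper is exposed as
gst_video_format_info_extrapolate_stride():

    /* The driver only reported the stride of the first (Y) plane;
     * extrapolate the stride of the chroma plane from it. */
    const GstVideoFormatInfo *finfo =
        gst_video_format_get_info (GST_VIDEO_FORMAT_NV12);
    gint y_stride = 384;  /* single stride reported by the legacy API */
    gint uv_stride =
        gst_video_format_info_extrapolate_stride (finfo, 1, y_stride);
    /* for NV12 the interleaved UV plane keeps the Y stride: 384 */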
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
Unlike other simple tiled formats, the Mediatek HW uses a different
tile size per plane. The tile size is scaled according to the
subsampling. Effectively, the name 16L32S represents linearly laid out
tiles of 16x32 bytes in the Y plane and 16x16 bytes in the UV plane. In
order to make this specificity discoverable, a new SUBTILES flag has
been added.
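A sketch of how code can detect this, assuming the flag and format are
exposed as GST_VIDEO_FORMAT_FLAG_SUBTILES and GST_VIDEO_FORMAT_NV12_16L32S:

    const GstVideoFormatInfo *finfo =
        gst_video_format_get_info (GST_VIDEO_FORMAT_NV12_16L32S);

    if (GST_VIDEO_FORMAT_INFO_FLAGS (finfo) & GST_VIDEO_FORMAT_FLAG_SUBTILES) {
      /* the tile height of subsampled planes is scaled down by the
       * subsampling factor: 16x32 bytes for Y, 16x16 bytes for UV */
    }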
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1567>
If the video4linux device supports norms but has no norm set, norm is
returned as an uninitialized variable after the ioctl call, leading to
gst_v4l2_tuner_get_norm_by_std_id() returning a random norm from the
supported norms. Catch this case and instead return NULL to indicate
that no norm is set up.
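The defensive pattern, sketched outside of the actual tuner code (fd and
v4l2object are illustrative):

    v4l2_std_id std_id = 0;  /* initialize: the driver may leave it untouched */

    if (ioctl (fd, VIDIOC_G_STD, &std_id) < 0)
      return NULL;
    if (std_id == 0)
      return NULL;  /* norms are supported, but none is currently set */

    return gst_v4l2_tuner_get_norm_by_std_id (v4l2object, std_id);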
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1625>
We have the d3d11screencapturesrc element in the d3d11 plugin, which is
clearly better than this element in terms of performance and design, so
we don't need to confuse people with two separate elements. Let's pick
the better implementation and remove the unnecessary one.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1750>
... instead of round(). Depending on the framerate, 30000/1001 for
example, the calculated position may not be exactly representable as a
uint64. The result of round() can then be slightly larger (by 1ns) than
the buffer timestamp, and that will cause an unnecessary frame delay.
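A worked example at 30000/1001 fps (the numbers only illustrate the
rounding direction):

    /* frame 1 at 30000/1001 fps:
     *   exact position = 1001 * GST_SECOND / 30000 = 33366666.666... ns
     *
     * the buffer PTS is computed with truncating integer math
     * (gst_util_uint64_scale), giving 33366666 ns
     *
     *   round (33366666.666...) = 33366667 ns  -> 1ns past the PTS
     *   floor (33366666.666...) = 33366666 ns  -> matches the PTS
     */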
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1747>
It was assumed that the kernel parameters would match the bitstream
values, but the author instead went with another set of values.
Surprisingly, this makes no difference to the resulting fluster score.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1748>
If a serialized event arrives behind a buffer, it should not be sent
before it. This fixes the pending event handling so that only early
pending events, the ones that arrived or were generated while the
adapter was empty, get sent before pushing the buffer. All other events
are now pushed after it.
This issue led the latency tracer to think our audio encoder did not
have any latency. This was tested with opusenc in a live pipeline.
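The intended ordering, sketched outside of the actual GstAudioEncoder
code (the dequeue helpers are hypothetical):

    GstEvent *event;
    GstFlowReturn ret;

    /* events that arrived while the adapter was empty logically precede
     * the buffered data, so flush them first */
    while ((event = dequeue_early_pending_event (enc)))  /* hypothetical */
      gst_pad_push_event (GST_AUDIO_ENCODER_SRC_PAD (enc), event);

    ret = gst_pad_push (GST_AUDIO_ENCODER_SRC_PAD (enc), buffer);

    /* everything that arrived after data was queued follows the buffer */
    while ((event = dequeue_pending_event (enc)))  /* hypothetical */
      gst_pad_push_event (GST_AUDIO_ENCODER_SRC_PAD (enc), event);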
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1266>
Each stream may have its own segment timeline
(e.g., different segment.start or segment.base)
depending on the edit-list and composition-to-decode atoms.
Check whether the time position of a stream is actually
far behind that of the current target stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1352>
If present, add '-lsocket' and '-lnsl' to network_deps.
ext/curl/meson.build: add network_deps to dependencies
gst/festival/meson.build: same
sys/shm/meson.build: same
Fixes linking issues on Illumos distros.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1525>
Often, users will need to scale inputs (e.g.
with vaapipostproc) before they are submitted
to vaapioverlay. However, this results in
multiple VPP passes/operations in the pipeline,
which creates unnecessary processing overhead.
This change allows inputs to be submitted to
vaapioverlay at their original scale, with
per-sinkpad scale dimensions specified, so they
can be scaled and blended/composited in a single
VPP pass/operation, avoiding the unnecessary
processing overhead.
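A usage sketch, assuming the new per-sinkpad scale dimensions are
exposed as width/height pad properties next to the existing xpos/ypos
(the property names are an assumption):

    /* both inputs are scaled and composited in a single VPP pass */
    GstElement *pipeline = gst_parse_launch (
        "videotestsrc ! vaapioverlay name=overlay "
        "  sink_0::xpos=0 sink_0::ypos=0 "
        "  sink_0::width=640 sink_0::height=360 "
        "  sink_1::xpos=640 sink_1::ypos=0 "
        "  sink_1::width=320 sink_1::height=180 "
        "  ! vaapisink "
        "videotestsrc pattern=ball ! overlay.", NULL);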
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Don't set VAAPI vpp blend flags if alpha == 1.0,
i.e. fully opaque. This can avoid extra processing
overhead on some drivers that apply blending
unconditionally when flags are present, even if the
end result is the same without blend flags (i.e. all
opaque alpha channels).
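The logic, sketched with libva's VPP blend state (simplified from the
actual code):

    VABlendState blend = { 0, };
    VAProcPipelineParameterBuffer param = { 0, };
    gdouble alpha = 1.0;  /* per-sinkpad alpha property value */

    if (alpha < 1.0) {
      /* only request global-alpha blending when it actually changes the
       * result; fully opaque input gets no blend flags at all */
      blend.flags = VA_BLEND_GLOBAL_ALPHA;
      blend.global_alpha = alpha;
      param.blend_state = &blend;
    }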
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1380>
Note that the AYUV and AYUV64 formats will be used to expand format
support: some packed YUV formats (e.g., Y410, YUY2) are common DXGI
formats used for hardware decoders/encoders on Windows, but they cannot
be used as a render target. We need to handle them differently, without
pixel shader help, using a compute shader for example.