The fake video decoder ignores the input bitstream except to enforce
caps restrictions. It reads the video width, height and framerate from
the caps, then just pushes video frames without doing any decoding.
The fake video decoder just draws a snake moving from left to right in
the middle of the frame. This is a lightweight drawing operation that
still gives an idea of how smooth the rendering is.
The fake video decoder inherits from GstVideoDecoder.
It is useful for measuring how smooth the whole rendering pipeline
would be if you had the most efficient video decoder. It is also
useful for bisecting issues, for example when a specific video decoder
is suspected.
It handles mpeg2, mpeg4, h263, h264, theora, vp8, wmv3, msmpeg,
flash-video, vp6, vp9, wmv1, wmv2 and divx, but more can be added if
needed.
For now it can only output RGBA, RGBx, BGRA, BGRx.
Its rank is 0 (none) but I added a property to change it so
that it can be selected by decodebin.
gst-launch-1.0 fakevideodec rank=512 \
playbin uri=http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4

http://bugzilla.gnome.org/show_bug.cgi?id=723778

Closes #679
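For applications, the same rank override can also be done
programmatically through the registry; a minimal sketch using the
regular GStreamer API (not part of this commit):

```c
#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstPluginFeature *feature;

  gst_init (&argc, &argv);

  /* Look up the decoder's factory and raise its rank above the real
   * decoders (GST_RANK_PRIMARY is 256) so decodebin prefers it */
  feature = gst_registry_lookup_feature (gst_registry_get (),
      "fakevideodec");
  if (feature != NULL) {
    gst_plugin_feature_set_rank (feature, 512);
    gst_object_unref (feature);
  }

  /* ... now run a playbin/decodebin pipeline as usual ... */
  return 0;
}
```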
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5636>
From a multimedia perspective GLES >= 2 has the big advantage of
supporting external textures (`OES_EGL_image_external` /
`OES_EGL_image_external_essl3`), allowing various YUV formats to be
imported directly by drivers.
By now it appears unlikely that the extension will ever be ported to
GL, with Vulkan becoming more popular, leaving GL without an "official"
way to import YUV formats.
Furthermore, for Gst internal purposes it's likely that GLES2 works
equally well if not better on most drivers these days, especially on
embedded devices.
Thus switch the default for EGL context creation to GLES2. This won't
affect apps that create their own context, but it does affect
`gst-launch-1.0` etc., which are often used for testing, so people no
longer have to pass `GST_GL_API=gles2`.
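As a sketch of what "apps that create their own context" looks like,
an application can still force a specific API through the regular
GstGL calls (error handling trimmed):

```c
#include <gst/gl/gl.h>

static GstGLContext *
create_gles2_context (void)
{
  GstGLDisplay *display = gst_gl_display_new ();
  GstGLContext *context;
  GError *error = NULL;

  /* Only allow OpenGL ES 2.x contexts on this display */
  gst_gl_display_filter_gl_api (display, GST_GL_API_GLES2);

  context = gst_gl_context_new (display);
  if (!gst_gl_context_create (context, NULL, &error)) {
    g_warning ("failed to create GL context: %s", error->message);
    g_clear_error (&error);
    g_clear_object (&context);
  }

  gst_object_unref (display);
  return context;
}
```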
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5509>
By sealing for future writes, we broke Wayland SHM support. It seems
that the Wayland library maps the SHM buffer in read/write mode. This
shows up as no display and an error message like this:
wl_shm@7: error 2: failed mmap fd 43: Operation not permitted
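For context, a minimal sketch of the underlying mechanism (plain Linux
memfd API, not the actual allocator code): sealing a memfd with
F_SEAL_WRITE forbids any further writable mapping, which is exactly
what libwayland's read/write mmap() trips over, while sealing only
shrink/grow keeps it working.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

static int
make_shm_fd (size_t size)
{
  int fd = memfd_create ("gst-shm", MFD_CLOEXEC | MFD_ALLOW_SEALING);

  if (fd < 0)
    return -1;

  if (ftruncate (fd, size) < 0) {
    close (fd);
    return -1;
  }

  /* Adding F_SEAL_WRITE here would make wl_shm's PROT_WRITE mmap()
   * fail with EPERM, as in the error message above */
  fcntl (fd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW);

  return fd;
}
```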
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5684>
Take into account the case where user filters have been set before
the source gets updated.
Note that the further linking of the filters, if present, happens below
in the `gst_camera_bin_check_and_replace_filter()` calls.
The audio filter is still affected by the same issue but is left out
for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5527>
If this property is enabled, the jitterbuffer will do the normal PTS
calculations according to the configured mode instead of making use of
the RFC7273 media clock.
The timestamp calculated from the RFC7273 media clock will only be
stored in the reference timestamp meta if addition of that meta is
enabled.
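Downstream consumers can then read the media-clock timestamp back from
the meta; a minimal sketch (the "timestamp/x-ntp" reference caps are
an assumption for illustration):

```c
#include <gst/gst.h>

static GstClockTime
get_media_clock_time (GstBuffer * buffer)
{
  /* The reference caps identify which clock the meta's timestamp is
   * based on; "timestamp/x-ntp" is assumed here for illustration */
  GstCaps *ref = gst_caps_from_string ("timestamp/x-ntp");
  GstReferenceTimestampMeta *meta =
      gst_buffer_get_reference_timestamp_meta (buffer, ref);
  GstClockTime ts = meta != NULL ? meta->timestamp : GST_CLOCK_TIME_NONE;

  gst_caps_unref (ref);
  return ts;
}
```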
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
When this property is used, it is assumed that the system clock is
synced close enough to the media clock used by an RFC7273 stream.
As long as both clocks are at most a few seconds apart, this will give
the correct results and avoid having to create an actual network clock
that has to sync first.
If the system clock is actually synchronized to the media clock then
everything will behave exactly the same, otherwise the reference
timestamp meta will be correct but the buffer timestamps will be off by
the difference between the two clocks.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
Do more checks for clock equality than just comparing pointers. The
same NTP/PTP clock might be used as the pipeline clock but via a new
instance, so instead also check which clock they are synced to.
Also handle setting/resetting of the media clock and pipeline clock
correctly by resetting the media clock's state accordingly.
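The idea, as a rough sketch (the real code checks more than this;
GstPtpClock's `domain` property is just one example of "what they are
synced to"):

```c
#include <gst/gst.h>
#include <gst/net/net.h>

static gboolean
clocks_equivalent (GstClock * a, GstClock * b)
{
  if (a == b)
    return TRUE;

  /* Two distinct GstPtpClock instances still track the same remote
   * clock if they are synced to the same PTP domain */
  if (GST_IS_PTP_CLOCK (a) && GST_IS_PTP_CLOCK (b)) {
    guint domain_a, domain_b;

    g_object_get (a, "domain", &domain_a, NULL);
    g_object_get (b, "domain", &domain_b, NULL);

    return domain_a == domain_b;
  }

  return FALSE;
}
```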
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5512>
This allows configuring the TTL that is used for multicast packets
sent out on the sockets; it defaults to 1 as before. The default might
change at some point.
In some networks multiple hops are needed to reach the PTP clock, and
this allows configuring GStreamer in a way that works in such
networks.
At a later time, per-domain or per-interface TTL configurations might be
added when needed.
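The underlying socket-level mechanism, for illustration (a GIO call;
not the new GStreamer configuration API itself):

```c
#include <gio/gio.h>

static void
configure_ptp_socket (GSocket * socket, guint ttl)
{
  /* TTL 1 (the previous hard-coded behaviour) keeps multicast packets
   * on the local network segment; a larger value lets them cross that
   * many routers on the way to the PTP clock */
  g_socket_set_multicast_ttl (socket, ttl);
}
```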
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5649>
Even if IDXGIOutput6 reports that the current display colorspace is
HDR, the texture captured via
IDXGIOutputDuplication::AcquireNextFrame() is a frame already
converted by the OS, unless we use IDXGIOutput5::DuplicateOutput1()
with the DXGI_FORMAT_R16G16B16A16_FLOAT format so that the captured
frame is in the scRGB color space. The application should then perform
a tonemap operation based on the reported display white level, color
primaries, etc. Since we don't have any tonemapping implementation,
ignore the colorimetry reported by IDXGIOutput6.
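For reference, a rough sketch of the DuplicateOutput1() call in
question (C with COBJMACROS; error handling omitted):

```c
#define COBJMACROS
#include <dxgi1_5.h>
#include <d3d11.h>

static HRESULT
duplicate_output_scrgb (IDXGIOutput5 * output, ID3D11Device * device,
    IDXGIOutputDuplication ** dup)
{
  /* Asking for FLOAT16 keeps the duplicated frames in scRGB instead
   * of letting the OS tonemap them down to an SDR format */
  static const DXGI_FORMAT formats[] = { DXGI_FORMAT_R16G16B16A16_FLOAT };

  return IDXGIOutput5_DuplicateOutput1 (output, (IUnknown *) device,
      0 /* flags */, 1, formats, dup);
}
```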
Fixes: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3128
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5671>
This commit makes sure that pads are still valid for linking after
they have been temporarily unlocked during the linking process.
Not doing this opens up a race condition where pads can potentially
be linked twice.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5670>
This is how it was documented and how it worked before the port to GstPlay.
Without this, applications expecting signals to be emitted directly
without anything running the main context will simply not receive any
signals.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5672>
The D2D runtime seems to execute the pending GPU command list when the
DXGI ID2D1RenderTarget is released, and this will invoke D3D11
immediate context APIs. We should therefore protect all rendering
operations and DXGI resources with a lock.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5659>
Because we treat raw audio chunks/samples as keyframes, they were interfering
with seek time adjustment.
This became apparent when the accompanying video stream was I-frame
only, for example ProRes.
Since raw audio streams can be seeked freely, it's fine to just ignore them here,
giving priority to the real keyframes in the video stream.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4946>
The code seems to validate that the media-level fingerprint matches
the fingerprint of the previous media or of the whole session. There
is no such requirement in any RFC I found. The session-level one
is just meant to act as a fallback when there is no media-level
fingerprint.
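For illustration, in an SDP like the following (values made up), the
audio section uses its own media-level fingerprint, while the video
section, having none, falls back to the session-level one:

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=-
t=0 0
a=fingerprint:sha-256 11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=fingerprint:sha-256 F0:E1:D2:C3:B4:A5:96:87:78:69:5A:4B:3C:2D:1E:0F:F0:E1:D2:C3:B4:A5:96:87:78:69:5A:4B:3C:2D:1E:0F
m=video 9 UDP/TLS/RTP/SAVPF 96
```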
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1118>
This adds roughly 3 GB to the CI image, but assures us that we will
have most of the debug symbols installed, minus some big ones we don't
need and filter out.
The size increase is not that impactful as it's a one-time cost, since
the images get cached in the runners afterwards.
This will improve the stack traces we get from the CI and from
validate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5629>