Each time we call `wl_surface_damage()` we want to do full surface
damage. Like Mesa, just use `G_MAXINT32` to ensure we always do
full damage, reducing the need to track the right dimensions.
`window->video_rectangle` is now unused, but we keep it around for
now as we may need it again in the future.
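As a rough sketch (the `surface` handle is an assumed name here), the damage call then simply becomes:

```c
/* Full-surface damage without tracking the actual dimensions,
 * matching what Mesa does. */
wl_surface_damage (surface, 0, 0, G_MAXINT32, G_MAXINT32);
```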
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
From the spec:
> This request is used to describe the regions where the pending
> buffer is different from the current surface contents
We currently also call `wl_surface_damage()` on surfaces without
new or still compositor-held buffers, e.g. when resizing the window.
In that case we call it on `area_surface_wrapper`, even though it
gets resized via `wp_viewport_set_destination()`, in which case
the compositor is in charge of repainting the area on screen.
Doing so is currently not forbidden by the spec; however, it might be
in the future, see
https://gitlab.freedesktop.org/wayland/wayland/-/issues/267
Thus let's stay close to the spec and only call `wl_surface_damage()`
when we just attached a buffer.
Right now this prevents runtime assertions in Mutter.
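A sketch of the resulting sequence, with assumed handle names; damage is only posted together with a freshly attached buffer:

```c
wl_surface_attach (surface, wl_buffer, 0, 0);
/* We just attached a buffer, so damaging is clearly within the spec. */
wl_surface_damage (surface, 0, 0, G_MAXINT32, G_MAXINT32);
/* A pure resize via wp_viewport_set_destination() now commits without
 * any damage call. */
wl_surface_commit (surface);
```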
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
`gst_wl_window_set_opaque` does not get called on window resizes,
potentially leaving opaque regions too small.
According to the spec, opaque regions can be bigger than the surface
size - parts that fall outside of the surface will get ignored.
Thus we can simply use `G_MAXINT32` and be sure that the whole
surface will always be covered.
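For illustration, assuming `compositor` and `surface` handles, the opaque region setup can then be as simple as:

```c
struct wl_region *region = wl_compositor_create_region (compositor);
/* Bigger than the surface is fine; the excess is ignored. */
wl_region_add (region, 0, 0, G_MAXINT32, G_MAXINT32);
wl_surface_set_opaque_region (surface, region);
wl_region_destroy (region);
```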
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1446>
If the VANC track does contain packets, but we skip over all packets, just
treat it the same as if there hadn't been any packets at all and send a
GAP event instead of erroring out with "Failed to handle essence element".
We would error out because, when we reach the end of the loop without
having found a closed caption packet, the flow return variable is still
FLOW_ERROR, which is what it was initialised to.
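Roughly, the fix boils down to the following sketch (helper and variable names are hypothetical):

```c
/* The loop skipped every packet: behave like an empty track and send
 * a GAP event instead of returning the initial FLOW_ERROR. */
if (ret == GST_FLOW_ERROR && !found_cc_packet)
  ret = send_gap_event (demux, track);
```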
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1518>
Although profiles[0] is initialised to GST_H265_PROFILE_INVALID in
gst_h265_profile_tier_level_get_profile(), the profile detection may
change its content later. So the returned profiles[0] may not be an
invalid profile even when len is 0.
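The guard can be sketched like this, using the names from the description above:

```c
/* Only trust profiles[0] when at least one profile was detected. */
if (len == 0)
  return GST_H265_PROFILE_INVALID;
return profiles[0];
```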
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1517>
The previous code was mistakenly trying to compute a cc_type out
of the first byte in the byte triplet, whereas it is to be interpreted
as:
> Bit b7 of the LINE value is the field number (0 for field 2; 1 for field 1).
> Bits b6 and b5 are 0. Bits b4-b0 form a 5-bit unsigned integer which
> represents the offset
The same mistake was made when creating padding packets.
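Decoded per the quoted layout (the `triplet` buffer is an assumed name), the first byte carries no cc_type at all:

```c
guint8 line = triplet[0];
gboolean is_field1 = (line & 0x80) != 0;  /* b7: 1 = field 1, 0 = field 2 */
guint line_offset = line & 0x1f;          /* b4-b0: 5-bit unsigned offset */
```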
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1496>
When the image is opaque but the output ProRes format has an alpha
component (4 component, 32 bits per pixel), Apple requires that we
signal that it should be ignored by setting the depth to 24 bits per
pixel. Not doing so causes the encoded files to fail validation.
So we set that in the caps and qtmux sets the depth value in the
container, which will be read by demuxers so that decoders can skip
those bytes entirely. qtdemux does this, but vtdec does not use this
information at present.
The sister change was made in qtmux and qtdemux in:
https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1061
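Conceptually the caps change amounts to something like the following sketch (the `caps` variable is assumed):

```c
/* Opaque image, 4-component ProRes: tell downstream that the alpha
 * bytes are to be ignored. */
gst_caps_set_simple (caps, "depth", G_TYPE_INT, 24, NULL);
```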
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1489>
Old "application/*" are now as per RFC8081 deprecated in favor of
new "font/*" mime types. Some new encoders are already using the
updated mime types. We need to also add them to the support list
in order for assrender to correctly identify them as fonts.
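As a hedged sketch, identification could simply accept both families of mime types (the exact list handled by assrender is an assumption here):

```c
static gboolean
is_font_mimetype (const gchar * mimetype)
{
  /* Legacy "application/*" font types plus the RFC 8081 "font/*" ones. */
  return g_str_has_prefix (mimetype, "font/") ||
      g_str_has_prefix (mimetype, "application/x-font") ||
      g_str_has_prefix (mimetype, "application/x-truetype-font");
}
```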
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1481>
If there are no jitterbuffer stats, we should not attempt to store them
in the global stats structure.
Also add a g_return_if_fail in _gst_structure_take_structure() about
this, because it is a programmer error to pass an invalid pointer
address there.
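A rough sketch of both parts (structure, field and parameter names are assumed):

```c
/* Only merge the jitterbuffer stats when they actually exist. */
if (jb_stats != NULL)
  _gst_structure_take_structure (global_stats, "jitterbuffer-stats",
      &jb_stats);

/* And in the helper itself, treat an invalid pointer as a programmer
 * error. */
g_return_if_fail (value != NULL);
```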
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1479>
Instead of a sequence of if statements, declare a table that maps
profile idc to profiles and traverse it.
Also, first add the profile from the parsed profile idc, and later add
the profile from the compatibility flags into the profile array.
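A sketch of the table-driven mapping (only a few entries shown; `append_profile()`, `profiles` and `len` are hypothetical here):

```c
static const struct {
  GstH265Profile profile;
  GstH265ProfileIDC idc;
} profile_map[] = {
  {GST_H265_PROFILE_MAIN, GST_H265_PROFILE_IDC_MAIN},
  {GST_H265_PROFILE_MAIN_10, GST_H265_PROFILE_IDC_MAIN_10},
  {GST_H265_PROFILE_MAIN_STILL_PICTURE, GST_H265_PROFILE_IDC_MAIN_STILL_PICTURE},
};

guint i;

/* First the profile signalled by profile_idc... */
for (i = 0; i < G_N_ELEMENTS (profile_map); i++) {
  if (ptl->profile_idc == profile_map[i].idc)
    append_profile (profiles, &len, profile_map[i].profile);
}

/* ...then the ones implied by the compatibility flags. */
for (i = 0; i < G_N_ELEMENTS (profile_map); i++) {
  if (ptl->profile_compatibility_flag[profile_map[i].idc])
    append_profile (profiles, &len, profile_map[i].profile);
}
```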
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
It's possible for a HEVC stream to have multiple profiles, given the
compatibility bits. Instead of returning a single profile, the internal
gst_h265_profile_tier_level_get_profiles() returns an array with all
its possible profiles.
Profiles are appended to the array only if the generated profile
is not invalid.
gst_h265_profile_tier_level_get_profile() is rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), returning the first
profile found in the array.
And gst_h265_get_profile_from_sps() is also rewritten in terms of
gst_h265_profile_tier_level_get_profiles(), but traversing the array
and verifying whether the proposed profile is actually valid per
Annex A.3.x of the specification.
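For the last point, the traversal can be sketched as follows; the constraint check is a stand-in for the actual Annex A.3.x conditions:

```c
/* Return the first candidate that the stream's constraint flags
 * actually allow. */
for (i = 0; i < len; i++) {
  if (profile_matches_annex_a (sps, profiles[i]))   /* hypothetical check */
    return profiles[i];
}
return GST_H265_PROFILE_INVALID;
```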
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1440>
* Add fec / red encoders as direct children of webrtcbin, instead
of providing them to rtpbin through the request-fec-encoder signal.
That is because they need to be placed before the rtpfunnel, which
is placed upstream of rtpbin.
* Update configuration of red decoders to set a list of RED payloads
on them, instead of setting the pt property.
That is because there may be one RED pt per media in the same session.
* Connect to request-fec-decoder-full instead of request-fec-decoder,
in order to instantiate FEC decoders according to the payload type
of the stream, as sketched below.
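As an illustrative sketch of that last point (the callback name is hypothetical):

```c
/* The -full variant also passes the payload type, so the matching FEC
 * decoder can be instantiated per stream. */
g_signal_connect (rtpbin, "request-fec-decoder-full",
    G_CALLBACK (on_request_fec_decoder), webrtc);
```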
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1429>
We are querying the supported swapchain colorspaces via
CheckColorSpaceSupport(), but it doesn't seem to be reliable.
Use only tested full-range RGB formats, which are:
- sRGB
- BT709 primaries with linear RGB
- BT2020 primaries with PQ gamma
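For reference, those three colorspaces expressed as DXGI enums (this mapping is an assumption, not the exact code):

```c
static const DXGI_COLOR_SPACE_TYPE supported_colorspaces[] = {
  DXGI_COLOR_SPACE_RGB_FULL_G22_NONE_P709,    /* sRGB */
  DXGI_COLOR_SPACE_RGB_FULL_G10_NONE_P709,    /* BT709 primaries, linear RGB */
  DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020, /* BT2020 primaries, PQ gamma */
};
```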
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1433>
When using playbin3, it seems that the alpha decode is always first to
push caps and run an allocation query. As the format changes from sink
and alpha were not synchronized, the allocation query could end up
being run before the caps are pushed. That may lead to a failing query,
which makes the decoder think there is no GstVideoMeta downstream and
most likely copy the frame with the CPU.
This patch implements a format cookie to track and synchronize the
format changes on both pads, fixing the racy performance issue.
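A hypothetical sketch of the cookie idea (struct and field names are placeholders, not the actual implementation):

```c
/* Each pad records the shared cookie when its caps arrive; the
 * allocation query is only handled once both pads carry the latest
 * cookie, i.e. both formats are in sync. */
static gboolean
formats_in_sync (GstAlphaCombine * self)
{
  return self->sink_format_cookie == self->alpha_format_cookie;
}
```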
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1439>