When the user passes the actual interlace-mode when calling
gst_video_decoder_set_interlaced_output_state(), it should not be
overridden by the input interlace-mode.
Needed to fix #825, as we want to keep interlace-mode=interleaved from
parsers and have the OMX decoder produce interlace-mode=alternate.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/852>
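As a rough illustration, a decoder subclass might negotiate an explicit alternate output like this (a minimal sketch; the format, dimensions and surrounding boilerplate are only illustrative):

```c
#include <gst/video/gstvideodecoder.h>

static gboolean
negotiate_alternate_output (GstVideoDecoder * decoder,
    GstVideoCodecState * input_state)
{
  GstVideoCodecState *output_state;

  /* Explicitly request interlace-mode=alternate on the output caps; with
   * this change the base class keeps it instead of overriding it with the
   * interlace-mode found in the input caps. */
  output_state = gst_video_decoder_set_interlaced_output_state (decoder,
      GST_VIDEO_FORMAT_NV12, GST_VIDEO_INTERLACE_MODE_ALTERNATE,
      1920, 540, input_state);
  gst_video_codec_state_unref (output_state);

  return gst_video_decoder_negotiate (decoder);
}
```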
Rename remaining `gst_video_color_transfer_{encode,decode}` functions on
the `GstVideoTransferFunction` enumeration to
`gst_video_transfer_function_{encode,decode}`, allowing
gobject-introspection to turn these into associated functions and place
them under the respective `<enumeration>` block in the GIR XML files.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/805>
gst_pad_get_current_caps() may be wrong when there is a renegotiation in
progress for the pad and we have not yet received or selected the buffer
with the different caps.
Fix by storing the caps in a similar way to the existing
buffer/video-info selection machinery.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/813>
This allows subclasses that notice missing reference frames to request a
new sync point so that seamless decoding can resume. While doing so, the
subclass can also signal whether it wants a) all following input frames
until the sync point to be discarded or b) all output frames until the
sync point to be marked as corrupt.
Sending of force-keyunit events for this can be throttled by the
application via the "min-force-keyunit-interval" property.
This replaces custom behaviour for the same purpose in various decoders,
for example openh264dec.
Based on patches by Haakon Sporsheim <haakon@pexip.com> and
Stian Selnes <stian@pexip.com>.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/730>
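A minimal sketch of how a subclass might use this; the function and flag names below are my reading of the API added by this series and should be treated as assumptions:

```c
static void
handle_missing_reference (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  /* Names assumed from this series: ask upstream for a new sync point and
   * have the base class discard all following input frames until it
   * arrives.  Alternatively, GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT
   * keeps decoding but marks the output frames as corrupted instead. */
  gst_video_decoder_request_sync_point (decoder, frame,
      GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT);
}
```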
If the first frame(s) at the very beginning or after a flush are not
sync points, then the base class would discard them before passing them
to the subclass.
This also fixes the previously broken distance_from_sync handling: it
was never reset at sync points.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/730>
This can be used by applications to configure decoders so that corrupted
frames are directly discarded instead of being forwarded inside the
pipeline. It is a replacement for the "output-corrupt" property of the
ffmpeg decoders.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/730>
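A hedged example of how an application could enable this, assuming the property is exposed as "discard-corrupted-frames" on GstVideoDecoder subclasses:

```c
/* "avdec_h264" is just an example decoder; the property name is the one
 * assumed to be added to GstVideoDecoder by this series. */
GstElement *dec = gst_element_factory_make ("avdec_h264", NULL);

g_object_set (dec, "discard-corrupted-frames", TRUE, NULL);
```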
This can be used by subclasses to mark output frames as known to be
corrupted, for example if reference frames were missing. ffmpeg's
decoders can signal this.
In addition this flag is propagated downstream if the input frame had it
set.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/730>
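Downstream code can then react to the flag on output buffers, for example from a pad probe; a minimal sketch, assuming the flag surfaces downstream as GST_BUFFER_FLAG_CORRUPTED:

```c
static GstPadProbeReturn
corrupted_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);

  /* Assumed to be set by the decoder when it knows the frame is corrupted,
   * e.g. because reference frames were missing. */
  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_CORRUPTED))
    GST_WARNING_OBJECT (pad, "received a buffer flagged as corrupted");

  return GST_PAD_PROBE_OK;
}
```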
This is not actually required (anymore?). Source pad caps can be
negotiated at any time regardless of any configured (or existing) sink
pads and videoaggregator comes up with some fixated caps based on the
downstream caps.
Subclasses can override this behaviour as needed by overriding
update_src_caps().
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/793>
`gst_gl_memory_read_pixels` reads pixels from `GLMemory` into the
pointer, effectively writing to it. This is the opposite of
`gst_gl_memory_texsubimage`, which reads texture data from `read_pointer`
into `GLMemory`.
Both cases are clarified by changing `read_pointer` to `write_pointer`,
and explaining what `gst_gl_memory_texsubimage` does in addition to
referring back to `gst_gl_memory_read_pixels`.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/806>
These then don't require going through the generic code path via AYUV64
first but can be converted directly.
This speeds up processing of
videotestsrc ! v210 ! videoconvert ! other_format ! fakesink
by a factor of 1.55 for I420/YV12 and 1.40 for the other destination
formats and reduces memory pressure considerably.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/775>
The type is called GstVideoTransferFunction so the function names should
match, otherwise gobject-introspection keeps the functions as global
functions instead of methods on the type.
The same mistake was also made in lots of other APIs over the years, but
here we can at least fix it for 1.18 still.
Thanks to Marijn Suijten for noticing.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/807>
For audio we copy metas that have no tags at all, or that only have the
"audio" and/or "audio-channels" tag. Audio codecs don't change the
audio aspect of the stream and in almost all cases don't change the
number of channels. They might however change the sample rate (e.g.
Opus). Subclasses that change the number of channels will have to
override ::transform_meta() accordingly.
For video we copy metas that have no tags at all, or that only have the
"video" and/or "video-size" and/or "video-orientation" tag. Video codecs
don't change the "video" aspect of the stream and in almost all cases
don't change the resolution or orientation. Subclasses that rescale or
change the orientation will have to override ::transform_meta()
accordingly.
See https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/576#note_610581
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/801>
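For instance, a hypothetical video decoder subclass that changes the resolution could refuse size-dependent metas along these lines. This is only a sketch: it assumes the usual G_DEFINE_TYPE parent_class boilerplate and the transform_meta vfunc signature of GstVideoDecoder.

```c
static gboolean
my_dec_transform_meta (GstVideoDecoder * decoder, GstVideoCodecFrame * frame,
    GstMeta * meta)
{
  /* vfunc signature assumed; "my_dec" is a hypothetical subclass. */
  const gchar *const *tags = gst_meta_api_type_get_tags (meta->info->api);

  /* This decoder rescales the video, so metas that depend on the frame
   * size must not be copied to the output frame. */
  if (tags && g_strv_contains (tags, GST_META_TAG_VIDEO_SIZE_STR))
    return FALSE;

  /* Fall back to the default behaviour for everything else. */
  if (GST_VIDEO_DECODER_CLASS (parent_class)->transform_meta)
    return GST_VIDEO_DECODER_CLASS (parent_class)->transform_meta (decoder,
        frame, meta);

  return TRUE;
}
```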
Call gst_aggregator_selected_samples() after filling the queues
(but before preparing frames).
Implement GstAggregator.peek_next_sample.
Add an example that demonstrates usage of the new API in combination
with the existing buffer-consumed signal.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/728>
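A sketch of the application side, assuming the signal is exposed as "samples-selected" and the pad-level peek helper is gst_aggregator_peek_next_sample() (both names are my reading of the new API):

```c
static void
samples_selected_cb (GstAggregator * agg, GstSegment * segment,
    GstClockTime pts, GstClockTime dts, GstClockTime duration,
    GstStructure * info, gpointer user_data)
{
  /* Signal name, callback signature and peek helper assumed from the new API. */
  GstAggregatorPad *pad = GST_AGGREGATOR_PAD (user_data);
  GstSample *sample;

  /* Peek at the sample this pad contributes to the output buffer that is
   * about to be produced. */
  sample = gst_aggregator_peek_next_sample (agg, pad);
  if (sample != NULL) {
    /* inspect the sample, e.g. gst_sample_get_buffer (sample) */
    gst_sample_unref (sample);
  }
}

/* during setup:
 *   g_signal_connect (compositor, "samples-selected",
 *       G_CALLBACK (samples_selected_cb), sinkpad);
 */
```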
This adds linear 32x32 NV12-based tiles. This format is notably used by
the Allwinner VCU and exposed in V4L2 as the "SUNXI Tiled" format. In this
patch we generalize the plane info calculation so we can share this part
with the 4L4 variant.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/754>
Since we are using VideoMeta, the converter (similarly to the video_frame_copy
utility) should have no issue dealing with frames that are slightly larger.
This situation occurs as some elements will use padded width/height for
allocation, which results in the VideoMeta width/height being larger than the
display width/height found in the negotiated caps.
Fixes #790
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/747>
For example, BT709, BT601, and BT2020_10 all have theoretically
different transfer functions, but the same function in practice. In
these cases, we should use the fast path for negotiating. Also,
BT2020_12 is essentially the same as the other three, just with one more
decimal point, so it gives the same result for fewer bits. This is now
also aliased to the former three.
Also make videoconvert do passthrough if the caps have equivalent
transfer functions but are otherwise matching.
As of the previous commit, we write the correct transfer function for
BT601, instead of the (functionally identical but different ISO code)
transfer function for BT709. Files created using GStreamer prior to that
commit write the wrong transfer function for BT601 and are, strictly
speaking, 2:4:5:4 instead. However, this commit takes care of
negotiation, so that conversions from/to the same transfer function are
done using the fast path.
Fixes #783
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/724>
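A sketch of the equivalence check this enables, assuming the helper ends up exposed as gst_video_transfer_function_is_equivalent() after the transfer-function renames mentioned elsewhere in this log:

```c
/* Helper name assumed.  BT709 at 8 bits and BT2020_10 at 10 bits use the
 * same transfer curve in practice, so a conversion between them can skip
 * the slow gamma path. */
if (gst_video_transfer_function_is_equivalent (GST_VIDEO_TRANSFER_BT709, 8,
        GST_VIDEO_TRANSFER_BT2020_10, 10)) {
  /* treat the transfer functions as matching / allow passthrough */
}
```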
If the cropping or scaling input or output rects put us completely
outside the input/output frame respectively, we can't draw anything
except black safely. Check for those conditions and don't set up a
configuration that attempts to access out of bounds memory outside
the input/output framebuffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/696>
If the frames passed in to gst_video_converter_frame()
have a different layout than was configured for, the
conversion code might go out of bounds and crash.
Do a sanity check on each frame passed in, and in the
absence of a return value in the API, just
refuse the conversion in invalid cases and leave the
destination frame untouched so it's obvious to
users that it was broken.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/696>
Previously we only handled one event at a time, which could lead to the
following two suboptimal situations:
- frame 0 at 20ms, frame 1 at 40ms and two force-keyunit events at 10ms
and 15ms. We would create a new keyframe for both of the frames.
- 100 force-keyunit events with running-time NONE would cause all
following 100 frames to be made into a keyframe.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/684>
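For context, an application typically triggers these events with the force-key-unit helpers; a minimal sketch sending an "immediate" request straight to an encoder element (the `encoder` variable is assumed):

```c
GstPad *srcpad = gst_element_get_static_pad (encoder, "src");
GstEvent *event;

/* running-time NONE means "as soon as possible"; all_headers = TRUE, count = 0. */
event = gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE,
    TRUE, 0);
gst_pad_send_event (srcpad, event);
gst_object_unref (srcpad);
```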
This is also not optimal, but it at least simplifies the code a bit and
avoids g_list_length() and g_list_append() in a few places.
For 2.0 there are some more candidates to change but unfortunately
they're currently part of the API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-base/-/merge_requests/683>
When receiving an instant-rate-change event, store the updated
seek flags and replace the flags in any input segments with them
to allow instant switching between trick modes and normal playback.
This commit modifies the GstVideoMasteringDisplayInfo and GstVideoContentLightLevel
structs so that each value more closely matches the hdr_metadata_infoframe
struct of the Linux DRM header and the DXGI_HDR_METADATA_HDR10 struct on Windows.
Each value is no longer a fraction but a normalized value as per the CTA 861.G spec.
The unit of each value is also consistent with the H.264 and H.265
specifications, the hdr_metadata_infoframe struct on Linux and the
DXGI_HDR_METADATA_HDR10 struct on Windows.
[38/1301] Generating GstVideo-1.0.gir with a custom command.
../subprojects/gst-plugins-base/gst-libs/gst/video/gstvideoaggregator.c:231: Error: GstVideo: identifier not found on the first line:
*
^
This can be used to have compositor display either the background
or a stream on a lower zorder after a live input stream freezes
for a certain amount of time, for example because of network
issues.
This patch introduces a new API to send and parse mouse scroll events. Mouse
event coordinates are sent relative to the display space of the related output
area. This is usually the size in pixels of the window associated with the
element implementing the GstNavigation interface.
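A sketch of the sending side, assuming the new helper is gst_navigation_send_mouse_scroll_event() taking window-space coordinates plus per-axis deltas (the `videosink` variable is illustrative):

```c
/* Helper name assumed.  Forward a vertical scroll of one step at window
 * coordinates (200, 150) to an element implementing GstNavigation. */
gst_navigation_send_mouse_scroll_event (GST_NAVIGATION (videosink),
    200.0, 150.0, 0.0, 1.0);
```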
The GST_VIDEO_BUFFER_FLAG_TOP_FIELD flag is a superset of
GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD as they are defined using other
flags. As a result we can't use GST_BUFFER_FLAG_IS_SET() to check for
those flags.
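The check has to compare the full bit pattern instead; a minimal sketch, with `buf` being the GstBuffer in question:

```c
/* GST_BUFFER_FLAG_IS_SET() only tests whether any of the bits are set, so
 * it cannot tell TOP_FIELD (TFF | ONEFIELD) apart from BOTTOM_FIELD
 * (ONEFIELD).  Compare against the full mask instead: */
gboolean is_top_field =
    (GST_BUFFER_FLAGS (buf) & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) ==
    GST_VIDEO_BUFFER_FLAG_TOP_FIELD;
gboolean is_bottom_field =
    (GST_BUFFER_FLAGS (buf) & GST_VIDEO_BUFFER_FLAG_TOP_FIELD) ==
    GST_VIDEO_BUFFER_FLAG_BOTTOM_FIELD;
```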
This simply implies not trying to "prepare" those buffers,
as mapping an empty buffer to a video frame does not make
much sense.
This also adds a simple test in compositor that performs
some trivial checking of the handling of gap events; the test
was not failing before, but an error was logged, which is
no longer the case.
Fixes #717
If there's no known value in the best caps then the functions to convert
them to strings will return NULL. Having the fields not in the caps is
not a problem; having them with a NULL value, however, will cause
negotiation failures.
In prepare_frame, the target info (conversion_info) being unchanged is
not enough to decide against updating the converter, as the vpad info
may have changed as well.
Fixes #714
This marker is optional; its name refers to the RTP marker bit. This
marker can be used to reduce latency in various use cases. With the split
between finish_frame() and finish_subframe() we will now be able to
identify the last subframe with no latency.
In order to detail the use of GST_BUFFER_FLAG_MARKER in a video
use case, the GST_VIDEO_BUFFER_FLAG_MARKER flag has been introduced
with proper documentation clarifying the marker's role.
Introduce a new API so encoders can split the encoding into subframes.
This can be useful to reduce the overall latency as we no longer need to
wait for the full frame to be encoded to start decoding or sending it.
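A rough sketch of the subclass side, assuming gst_video_encoder_finish_subframe() pushes the current output_buffer as one slice of the frame; the helper below and its `is_last` logic are purely illustrative:

```c
static GstFlowReturn
push_encoded_slice (GstVideoEncoder * encoder, GstVideoCodecFrame * frame,
    GstBuffer * slice, gboolean is_last)
{
  /* Illustrative helper; the exact finish_subframe() semantics are assumed. */
  frame->output_buffer = slice;

  /* Intermediate slices go out immediately as subframes... */
  if (!is_last)
    return gst_video_encoder_finish_subframe (encoder, frame);

  /* ...and the last one completes the frame and carries the marker flag. */
  GST_BUFFER_FLAG_SET (slice, GST_VIDEO_BUFFER_FLAG_MARKER);
  return gst_video_encoder_finish_frame (encoder, frame);
}
```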
The matrices were in the wrong order. Instead of the conversion matrix being
  XYZ_TO_RGB_output * RGB_TO_XYZ_input * input_RGB
it was
  RGB_TO_XYZ_input * XYZ_TO_RGB_output * input_RGB
I'm going to use this new API in gst-omx so an encoder can request
v4l2src to produce buffers matching the encoder stride and slice heights,
preventing copies of incoming buffers.
Especially for interlaced input make sure to
a) never mix both fields
b) never read lines after the end of the input frame
c) allocate enough space in the temporary lines to not write outside
the allocated memory area
This fixes various memory corruptions and rescaling artefacts.
At the moment, we only posted QoS messages when frame_drop() was
called, but not in finish_frame() when QoS triggered a late push.
This should fix applications that try to account for dropped
frames. We also emit a warning on drops so it's clearer what is
happening.
By adding this field, buffer producers can now explicitly set the exact
geometry of planes, allowing users to easily know the padded size and
height of each plane.
GstVideoMeta is always heap allocated by GStreamer itself so we can
safely extend it.
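A sketch of the producer side, assuming the new field is set through gst_video_meta_set_alignment(); `buf`, `width`, `height` and the padded dimensions are illustrative variables:

```c
GstVideoMeta *meta;
GstVideoAlignment align;

meta = gst_buffer_add_video_meta (buf, GST_VIDEO_FRAME_FLAG_NONE,
    GST_VIDEO_FORMAT_NV12, width, height);

/* Setter name assumed.  Record the padding that was actually used for the
 * allocation so that consumers can compute the exact plane geometry. */
gst_video_alignment_reset (&align);
align.padding_right = padded_width - width;
align.padding_bottom = padded_height - height;
gst_video_meta_set_alignment (meta, align);
```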
When using gst_video_info_align() the user had no easy way to retrieve
the padded size and height of each plane.
This can easily be implemented in fill_planes() as it's already called
in align() with the padded height.
Ideally we'd add a plane_size field to GstVideoInfo but the remaining
padding is too small so that would be an ABI break.
Fix #618
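A sketch of the resulting API as I understand it, with the plane sizes returned through an extra argument (the gst_video_info_align_full() name is assumed):

```c
GstVideoInfo info;
GstVideoAlignment align;
gsize plane_size[GST_VIDEO_MAX_PLANES];

gst_video_info_set_format (&info, GST_VIDEO_FORMAT_NV12, 320, 240);
gst_video_alignment_reset (&align);
align.padding_bottom = 16;

/* Name assumed: like gst_video_info_align(), but additionally reports the
 * padded size of each plane so callers don't have to recompute it. */
if (gst_video_info_align_full (&info, &align, plane_size)) {
  /* plane_size[0] (Y) and plane_size[1] (UV) now include the padding. */
}
```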