Commit bc09d8cc changed gstmpdparser to put the entire
<ContentProtection> element in the "value" field, so that DRMs
other than PlayReady could make use of the data inside this
element.
However, the data in the "value" field does not include any
XML namespace declarations that are used within the element. This
causes problems for a namespace-aware XML parser that wants to
make use of this data.
This commit modifies the way the XML is converted to a string
so that XML namespaces are preserved in the output.
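A minimal libxml2 sketch of the idea (an illustrative helper, not the
parser's actual code): copying the node into a temporary document makes
libxml2 re-declare on the copy any namespaces inherited from ancestor
elements, so the serialized string is self-contained.

  #include <glib.h>
  #include <libxml/tree.h>

  static gchar *
  dump_node_with_namespaces (xmlNodePtr node)
  {
    xmlDocPtr doc = xmlNewDoc ((const xmlChar *) "1.0");
    /* recursive copy; namespace declarations inherited from ancestors
     * are added to the copied element as needed */
    xmlNodePtr copy = xmlDocCopyNode (node, doc, 1);
    xmlDocSetRootElement (doc, copy);

    xmlBufferPtr buf = xmlBufferCreate ();
    xmlNodeDump (buf, doc, copy, 0, 0);
    gchar *result = g_strdup ((const gchar *) xmlBufferContent (buf));

    xmlBufferFree (buf);
    xmlFreeDoc (doc);
    return result;
  }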
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2487>
Some closedcaption elements like sccenc expect input buffers
to have timecode metas. The original use case is to serialize
closed captions extracted from a video stream; in that case
ccextractor copies the video timecode metas to the closed
caption buffers, but no such mechanism exists when creating
a CC stream ex nihilo.
Remedy that by having timecodestamper accept closedcaption
input caps, as long as they have a framerate.
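For example, a pipeline along these lines (an untested sketch; the
elements in front of timecodestamper are just one way to create a CC
stream from scratch) can now produce an SCC file without any video
stream to borrow timecodes from:
gst-launch-1.0 filesrc location=subs.srt ! subparse ! tttocea608 ! \
    closedcaption/x-cea-608,format=raw,framerate=30000/1001 ! \
    timecodestamper ! sccenc ! filesink location=out.scc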
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2490>
It seems "GST_VAAPI_PLUGIN_BASE_SRC_PAD_CAN_DMABUF (decode)" will
return FALSE even if the platform supports the DMA buffer memory type,
and the media-driver returns GST_VAAPI_BUFFER_MEMORY_TYPE_DMA_BUF2
on Gen12 platforms (such as TGL).
Without this patch, a command such as:
gst-launch-1.0 videotestsrc num-buffers=100 ! video/x-raw, format=I420 ! \
x264enc ! h264parse ! vaapih264dec ! video/x-raw\(memory:DMABuf\) ! fakesink
will fail with not-negotiated.
Signed-off-by: Zhang Yuankun <yuankunx.zhang@intel.com>
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/437>
GstWebRTCSCTPTransport is now made into an abstract base class
that only contains property specifications matching the
RTCSctpTransport interface of the W3C WebRTC specification, see
https://w3c.github.io/webrtc-pc/#rtcsctptransport-interface. This
class is put into the WebRTC library to expose it to applications and
to allow for generation of bindings for non-dynamic languages using
GObject introspection.
The actual implementation is moved to the subclass WebRTCSCTPTransport
located in the WebRTC plugin.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2214>
Being able to access the SCTP Transport object from the application
means the application can access the associated DTLS Transport object
and its ICE Transport object. This means we can observe the ICE state
also for a data-channel-only session. The collated
ice-connection-state on webrtcbin only includes the ICE Transport
objects that reside on the RTP transceivers (which is exactly how it
is specified in
https://w3c.github.io/webrtc-pc/#rtciceconnectionstate-enum).
For the consent freshness functionality (RFC 7675) to work, the ICE
state must be accessible, and consequently the SCTP transport must be
accessible, in order to enable consent freshness checking for a
data-channel-only session.
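A minimal sketch of the resulting access path from application code
(assuming the "sctp-transport", "transport" and "state" properties as
exposed by webrtcbin and the WebRTC library; NULL checks omitted):

  GstWebRTCSCTPTransport *sctp = NULL;
  GstWebRTCDTLSTransport *dtls = NULL;
  GstWebRTCICETransport *ice = NULL;
  GstWebRTCICEConnectionState ice_state;

  /* webrtcbin exposes the SCTP transport once it exists */
  g_object_get (webrtcbin, "sctp-transport", &sctp, NULL);
  /* the SCTP transport references its DTLS transport ... */
  g_object_get (sctp, "transport", &dtls, NULL);
  /* ... which references the ICE transport and its connection state */
  g_object_get (dtls, "transport", &ice, NULL);
  g_object_get (ice, "state", &ice_state, NULL);

  gst_object_unref (ice);
  gst_object_unref (dtls);
  gst_object_unref (sctp);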
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2214>
This prevents cluttering up the rtpsession and keeps things localized.
Also write TWCC seqnums for *all* streams in the session if configured by
caps.
A while back WebRTC was not doing TWCC for audio, basically breaking the
whole idea of a "transport-wide sequence number" applying to all bundled
streams. However, they have since fixed this, and now it no longer
makes sense to single out certain payload types for
use with TWCC; instead we just include them all.
This also makes using RTX, RED, FEC etc. much simpler, as TWCC will apply
to them all as they enter the rtpsession.
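"Configured by caps" here means an extmap entry mapping an extension ID
to the TWCC URI, for example (the ID 5 is arbitrary):

  application/x-rtp, media=video, payload=96, encoding-name=VP8, clock-rate=90000, \
    extmap-5=(string)"http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01"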
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/927>
Just like what we do in the VA plugin, GST_MAP_VAAPI can directly
peek the surface of VA buffers. The old flag 0 only peeks the
surface proxy, which may not be convenient for users who do not
want to include our headers.
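A rough usage sketch (assuming, mirroring GST_MAP_VA in the VA plugin,
that the mapped data points at the VASurfaceID):

  GstMapInfo info;

  if (gst_buffer_map (buffer, &info, GST_MAP_READ | GST_MAP_VAAPI)) {
    /* assumption: info.data points at the VASurfaceID */
    VASurfaceID surface = *(VASurfaceID *) info.data;

    /* ... hand the surface to libva-based code ... */

    gst_buffer_unmap (buffer, &info);
  }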
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer-vaapi/-/merge_requests/435>
This encoder advertises alignment=au as output format, which means
each output frame should contain a full decodable access unit.
The video encoder base class is not aware of our output alignment
and will output spurious buffers with just the SPS/PPS inside when
we call gst_video_encoder_set_headers(), which is broken because
each buffer is supposed to contain a full decodable access unit
in our case.
Just don't tell the base class about our headers; they will be
sent at the beginning of each IDR frame anyway.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2178>
When the probe returns GST_PAD_PROBE_REMOVE and gets called concurrently
from the streaming thread while we're in the callback here, the hook has
already been destroyed by the time we've reacquired the object lock.
Consequently, cleanup_hook gets passed an invalid pointer.
Keep another reference to the hook alive to avoid this situation.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/873>
e.g. when exporting an image that is opaque yet has an alpha channel.
Apple ProRes certification requires that, when a ProRes-writing
application *knows* that the entire frame is opaque, the application
writes only RGB without alpha even when the clip is RGBA. For that,
this tiny change allows the app to override pixel depth when writing ProRes.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1061>
When a buffer is pushed downstream, we should try not to hold the
buffer mapped with write access. Doing so would often lead to
an unnecessary memcpy later.
For instance, gst_buffer_make_writable() in
gst_video_decoder_finish_frame() will cause a memcpy because of
_memory_get_exclusive_reference().
We know that we can perform a two-step remap when using system
memory, as this will not cause the location of the memory to
change.
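A minimal sketch of the two-step remap (illustrative, not the element's
actual code): drop the writable mapping and re-map read-only before the
buffer is pushed.

  /* step 1: done writing, give up write access */
  gst_buffer_unmap (buffer, &map);

  /* step 2: re-map read-only; with system memory the data stays at the
   * same address, so this is cheap and avoids the later copy triggered
   * by an outstanding write mapping */
  if (!gst_buffer_map (buffer, &map, GST_MAP_READ))
    return GST_FLOW_ERROR;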
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/812>
When outputting fragmented mp4, with a seekable downstream, we rewrite
the moov to maybe add a duration to the mvex. If we start by not
writing the initial moov->mvex->mehd duration and then overwrite it with
a moov containing a mehd atom, the moovs will have different sizes and
the rewrite could overwrite subsequent data and result in an unplayable
file.
E.g. the initial moov would have a size of 842 bytes and the final moov
a size of 862 bytes.
Fix by always writing out the mehd duration in the moov when
fragmenting.
Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/issues/898
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-good/-/merge_requests/1060>
Users can get the required buffer size from the buffer pool config.
Since the d3d11 implementation is a candidate for a public library in the
future, we need to hide as much as possible from the headers.
Note that the total size of d3d11 texture memory allocated by the GPU is
not a controllable factor; it depends on hardware-specific
alignment/padding requirements. So the GstD3D11 implementation updates
the actual buffer size by allocating a D3D11 texture, since there is no
way to get the CPU-accessible memory size without allocating a real
D3D11 texture.
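A minimal sketch of reading the size back from a configured pool (this
assumes the pool has already updated its config, e.g. after activation):

  GstStructure *config;
  GstCaps *caps = NULL;
  guint size = 0, min_buffers = 0, max_buffers = 0;

  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_get_params (config, &caps, &size, &min_buffers,
      &max_buffers);
  /* 'size' now reflects the buffer size the pool actually provides */
  gst_structure_free (config);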
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2482>