As waiting for the load to finish is specific to the WebView, it should be
done from our WPEView, not from the WPEContextThread. This fixes issues seen
when multiple wpesrc elements are created in sequence; without this patch the
first view might receive erroneous buffer notifications.
Fixes #1386
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1568>
Currently only an output duration can be specified for buffer splitting.
Allow specifying an output buffer size in bytes as well.
Consider a use case with the pipeline below:
appsrc ! rtpL16pay ! capsfilter ! rtpbin ! udpsink
Maintaining the MTU for RTP transfer is desirable, but when the buffers
pushed into appsrc do not adhere to it, an audiobuffersplit element
placed between appsrc and rtpL16pay, with an output buffer size chosen
according to the MTU, can help mitigate this.
While rtpL16pay already has an MTU setting, an incoming buffer whose size
is close to the MTU gets split unevenly: e.g. with an MTU of 1280, a
buffer of 1276 bytes would be split into two buffers, one of 1268 and
another of 8 bytes, considering the RTP header size of 12 bytes. Putting
audiobuffersplit between appsrc and rtpL16pay takes care of this.
While buffer duration could still be used, being able to specify the
size in bytes is more convenient here.
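A sketch of how this could be wired up, assuming the new property ends up
being named "output-buffer-size" (name hypothetical here):

    GstElement *split = gst_element_factory_make ("audiobuffersplit", NULL);

    /* 1268 bytes of payload plus rtpL16pay's 12-byte RTP header stays
     * within a 1280-byte MTU */
    g_object_set (split, "output-buffer-size", 1268, NULL);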
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1578>
libnice now supports the concept of end-of-candidates, so use the API
for it. Without it, webrtcbin would never declare the connection as
failed.
This requires bumping the libnice dependency to 0.1.16.
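On the application side this means the remote end-of-candidates should be
forwarded too; a sketch, assuming webrtcbin interprets a NULL candidate
string on its "add-ice-candidate" signal as end-of-candidates (behaviour
assumed here, not confirmed):

    /* once the remote peer signals it has no more ICE candidates */
    g_signal_emit_by_name (webrtcbin, "add-ice-candidate",
        mline_index, NULL);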
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1139>
The width/height from the video meta can be the padded width/height, but
when sourcing from a padded buffer we only want to use the valid pixels.
This rectangle comes from the crop meta when present, otherwise it is
deduced from the caps. The width and height from the caps are saved in
the parent class; use these instead of the GstVideoInfo when setting the
src rectangle, as sketched below.
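A sketch of the intended selection logic (self->width/self->height stand
in for the parent-class fields, names hypothetical):

    GstVideoCropMeta *crop = gst_buffer_get_video_crop_meta (buffer);

    if (crop) {
      /* the crop meta describes exactly the valid pixels */
      src_rect.x = crop->x;
      src_rect.y = crop->y;
      src_rect.w = crop->width;
      src_rect.h = crop->height;
    } else {
      /* fall back to the caps width/height saved in the parent class,
       * not GstVideoInfo, which may include padding */
      src_rect.x = 0;
      src_rect.y = 0;
      src_rect.w = self->width;
      src_rect.h = self->height;
    }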
This fixes an issue with 1080p video showing repeated or green content
in the 8 padded lines at the bottom (seen with v4l2codecs).
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1580>
Note that the newly added formats (YUY2, UYVY, and VYUY) are not
supported render target view formats, so they can only be used as input
to d3d11convert or d3d11videosink. Note also that YUY2 is a very common
format for hardware encoders/decoders on Windows.
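For reference, render-target support for a DXGI format can be probed at
runtime; a minimal sketch using the COM C macros (COBJMACROS defined,
device is an existing ID3D11Device *):

    UINT support = 0;

    /* YUY2 can typically be sampled from, but not rendered into */
    if (SUCCEEDED (ID3D11Device_CheckFormatSupport (device,
            DXGI_FORMAT_YUY2, &support)) &&
        (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET)) {
      /* format could be used as a conversion output */
    }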
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1581>
gst_caps_new_simple was given the wrong types for rate and channels,
which may lead to a crash.
As 64-bit values for rate, depth, format and channels do not make much
sense, and since everything else in GStreamer expects G_TYPE_INT for
channels and rate, we should stick to that.
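For illustration, a minimal sketch of the corrected call (field values
made up):

    GstCaps *caps;

    /* rate and channels must be plain G_TYPE_INT: declaring a 64-bit
     * type here makes the varargs parser read the wrong amount of
     * data and can crash */
    caps = gst_caps_new_simple ("audio/x-raw",
        "format", G_TYPE_STRING, "S16LE",
        "rate", G_TYPE_INT, 44100,
        "channels", G_TYPE_INT, 2, NULL);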
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1576>
This (so far) Linux- and FreeBSD-only API lets users create file
descriptors purely in memory, without any backing file on the filesystem
and without the race condition that can ensue when unlink()ing it.
It also allows seals to be placed on the file, guaranteeing to every
other process that we won't be allowed to shrink the contents, which
could otherwise cause a SIGBUS when they try reading it.
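A minimal sketch of the pattern (Linux >= 3.17 with glibc >= 2.27
assumed; the file name is arbitrary):

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int
    create_anonymous_file (off_t size)
    {
      /* anonymous in-memory file: no filesystem entry, nothing to unlink() */
      int fd = memfd_create ("gst-shm-area", MFD_CLOEXEC | MFD_ALLOW_SEALING);

      if (fd < 0)
        return -1;
      if (ftruncate (fd, size) < 0) {
        close (fd);
        return -1;
      }

      /* promise readers the contents can never shrink under them */
      fcntl (fd, F_ADD_SEALS, F_SEAL_SHRINK);
      return fd;
    }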
This patch is best viewed with the -w option of git log -p.
It is an almost exact copy of Wayland commit
6908c8c85a2e33e5654f64a55cd4f847bf385cae, see
https://gitlab.freedesktop.org/wayland/wayland/merge_requests/4
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1577>
For the Dolby AC4 audio experience, parse PMT/APD information from the
transport stream layer for all available presentations.
Refer to ETSI EN 300 468 V1.16.1 (2019-05):
1. 6.4.1 Audio preselection descriptor
2. Table M.1: Mapping of codec specific values to the audio preselection descriptor
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1555>
Add a vp9parse element to parse various stream information such as
resolution, profile, and so on. If upstream does not provide resolution
and/or profile, this is useful in decodebin pipelines for autoplugging a
suitable decoder element depending on the template caps of each decoder
element.
In addition, the vp9parse element supports unpacking a superframe into
single frames for decoders. A vp9 superframe is a frame which consists
of multiple frames (a superframe with a single frame is also allowed)
followed by a superframe index block. Each unpacked frame is then
treated as a normal frame by the decoder. The decision to unpack is
driven by the downstream element's "alignment" caps field, which can be
"super-frame" or "frame". If downstream specifies the "alignment" as
"frame", the vp9parse element will split an incoming superframe into
single frames and discard the superframe index (located at the end of
the superframe).
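For example, a decoder that only accepts single frames would advertise
something like this in its sink pad template, prompting vp9parse to
unpack superframes (sketch):

    #include <gst/gst.h>

    /* frame-aligned decoder sink template: vp9parse will split
     * superframes and drop the superframe index for such peers */
    static GstStaticPadTemplate sink_template =
        GST_STATIC_PAD_TEMPLATE ("sink", GST_PAD_SINK, GST_PAD_ALWAYS,
            GST_STATIC_CAPS ("video/x-vp9, alignment = (string) frame"));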
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1041>
The default BYTE DURATION basesrc query handler returns `-1` and TRUE.
In order to properly handle cases where upstream HTTP servers didn't
return a valid Content-Length, we also need to check whether the
reported duration is valid when calculating bitrates.
Avoids returning completely bogus bitrates with gogol's video streaming
services.
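The check amounts to treating a successful query with a non-positive
size as unusable; roughly (sketch, variable names made up):

    gint64 size = -1;

    /* the default basesrc handler answers TRUE with size == -1, so a
     * successful query alone does not mean the size is usable */
    if (gst_pad_peer_query_duration (sinkpad, GST_FORMAT_BYTES, &size) &&
        size > 0) {
      /* safe to derive a bitrate from the stream size */
    }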
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1544>
Add custom IMFMediaBuffer and IMF2DBuffer implementations in order to
keep track of the lifecycle of Media Foundation memory objects.
With this new implementation, we can pass the raw memory of upstream
buffers to Media Foundation without copying.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/1518>