There's no reason to release the GstMemory manually here at all.
If we do release it, the corresponding GstBuffer will be discarded by
the GstBufferPool base class because its size has changed to zero.
The actual cause of the heavy CPU usage with a fixed-size pool
(i.e., a decoder output buffer pool) when GstMemory is removed from
the GstBuffer is that the GstBufferPool base class busy-waits in
acquire_buffer() for some reason. That still needs to be investigated,
but discarding and re-allocating every GstBuffer is not ideal anyway.
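For illustration, a minimal sketch of the pattern this change avoids
(generic code, not the element's actual implementation):

  /* Acquire a buffer from a fixed-size pool and hand it back.  Removing the
   * memory before the unref shrinks the buffer to zero bytes, so the pool's
   * default release path discards it and has to allocate a new one. */
  GstBuffer *buf = NULL;

  if (gst_buffer_pool_acquire_buffer (pool, &buf, NULL) == GST_FLOW_OK) {
    /* ... use the buffer ... */

    /* Don't do this: it changes the buffer size and defeats pooling. */
    /* gst_buffer_remove_all_memory (buf); */

    gst_buffer_unref (buf);     /* buffer returns to the pool intact */
  }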
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4935>
Depending on the exact output format, 0x00 may be a better default
padding value than 0x80. 0x00 is the recommended padding value when
used in CDP (and cc_data), but not when used in s334-1a. See
CTA-708-E 4.3.5 and SMPTE 334-1-2007 5.3.2.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4578>
On macOS with Homebrew, the openssl library is not properly detected
with pkg-config, so disable the test compilation if openssl is not
properly detected along with libcrypto.
libcrypto is detected, but it resolves to the system one, which leads
to the error:
  your binary is not an allowed client of /usr/lib/libcrypto.dylib for
  architecture x86_64
See more details from Apple:
https://developer.apple.com/forums/thread/124782
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4481>
Running tests with Vulkan Validation enabled shows an error on vkimage tests:
Code 0 : Validation Error: [ VUID-VkImageViewCreateInfo-image-04441 ]
Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_COMMAND_BUFFER; Object 1: handle
= 0x50000000005, type = VK_OBJECT_TYPE_IMAGE;
| MessageID = 0xb75da543
| Invalid usage flag for VkImage 0x50000000005[] used by vkCreateImageView(). In
this case, VkImage should have VK_IMAGE_USAGE_SAMPLED_BIT |
VK_IMAGE_USAGE_STORAGE_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT |
VK_IMAGE_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT |
VK_IMAGE_USAGE_TRANSIENT_ATTACHMENT_BIT | VK_IMAGE_USAGE_INPUT_ATTACHMENT_BIT |
VK_IMAGE_USAGE_FRAGMENT_SHADING_RATE_ATTACHMENT_BIT_KHR |
VK_IMAGE_USAGE_FRAGMENT_DENSITY_MAP_BIT_EXT |
VK_IMAGE_USAGE_VIDEO_DECODE_DST_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_DECODE_DPB_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_ENCODE_SRC_BIT_KHR |
VK_IMAGE_USAGE_VIDEO_ENCODE_DPB_BIT_KHR | VK_IMAGE_USAGE_SAMPLE_WEIGHT_BIT_QCOM
| VK_IMAGE_USAGE_SAMPLE_BLOCK_MATCH_BIT_QCOM set during creation.
The Vulkan spec states: image must have been created with a usage value
containing at least one of the usages defined in the valid image usage list for
image views
This patch adds VK_IMAGE_USAGE_SAMPLED_BIT to the usage bits in the test.
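A minimal sketch of the corresponding usage flags at image-creation time
(plain Vulkan, not the test's exact code):

  /* Images that will back a VkImageView need at least one view-compatible
   * usage bit; adding VK_IMAGE_USAGE_SAMPLED_BIT satisfies
   * VUID-VkImageViewCreateInfo-image-04441. */
  VkImageCreateInfo info = { 0, };

  info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
  info.usage = VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT;
  /* ... format, extent, tiling, etc. unchanged ... */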
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4296>
An H.265 NAL unit always has a 2-byte header. Unlike the H.264 parser, this
parser will simply return NO_NAL if some of these bytes are missing.
This is then properly special-cased by parsers and decoders. Add a test to
ensure we don't break this in the future.
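A rough sketch of the behaviour described above (the parser instance is
assumed context, not part of the test as written):

  /* Byte-stream data that ends right after the first of the two NAL header
   * bytes: 00 00 01 40.  The parser should report NO_NAL rather than a
   * truncated NAL unit. */
  const guint8 data[] = { 0x00, 0x00, 0x01, 0x40 };
  GstH265NalUnit nalu;
  GstH265ParserResult res =
      gst_h265_parser_identify_nalu (parser, data, 0, sizeof (data), &nalu);

  g_assert (res == GST_H265_PARSER_NO_NAL);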
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3234>
The appropriate return value for an incomplete NAL header is
GST_H264_PARSER_NO_NAL_END. This tells the parser element to
gather more data. Previously, it would assume the NAL was corrupted
and would drop the data, potentially causing stream corruption.
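A rough sketch of how a caller reacts to that return value (hypothetical
caller, byte-stream input assumed):

  GstH264NalUnit nalu;
  GstH264ParserResult res =
      gst_h264_parser_identify_nalu (parser, data, 0, size, &nalu);

  if (res == GST_H264_PARSER_NO_NAL_END) {
    /* The NAL header is incomplete: keep the bytes and wait for more
     * data instead of dropping them as corrupted. */
    return;
  }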
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3234>
As specified in EIA/CEA-608-B section 8.4:
When closed captioning is used on line 21, field 2, it shall conform
to all of the applicable specifications and recommended practices as
defined for field 1 services with the following differences:
a) The non-printing character of the miscellaneous control-character pairs
that fall in the range of 14h, 20h to 14h, 2Fh in field 1, shall be replaced
with 15h, 20h to 15h, 2Fh when used in field 2.
b) The non-printing character of the miscellaneous control-character pairs
that fall in the range of 1Ch, 20h to 1Ch, 2Fh in field 1, shall be replaced
with 1Dh, 20h to 1Dh, 2Fh when used in field 2.
This means simply switching the "field" field in the caps isn't enough for
converting raw 608 from one field to the other; some control codes also
need to be amended.
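A minimal sketch of that remapping for a control-code pair, following the
ranges quoted above (illustrative only, parity bit ignored):

  /* Field 1 -> field 2: 14h,20h-2Fh becomes 15h,20h-2Fh and
   * 1Ch,20h-2Fh becomes 1Dh,20h-2Fh (and the reverse when converting back). */
  if ((byte0 == 0x14 || byte0 == 0x1c) && byte1 >= 0x20 && byte1 <= 0x2f)
    byte0 += 1;   /* 0x14 -> 0x15, 0x1c -> 0x1d */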
+ Adds simple test
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4126>
The previous implementation was a bit primitive, assuming the subclass
had registered a template name starting with sink_. Instead, make the
effort of parsing the actual template name, and use that to generate
the final pad name.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4032>
Raw 608 caps can now contain a "field" field. On the input side it
signifies that the input raw 608 is attached to either field 0 or 1;
on the output side it allows selecting whether to extract the raw 608
data for field 0 or 1 from field-aware formats.
In addition, ccconverter can now also be used to "convert" 608 field 0
to 608 field 1 (and conversely); this is passthrough, as the change
only needs to happen in the caps.
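For example, something along these lines could request field 1 on the
output (a sketch only; it assumes the new "field" is an integer field on
the closedcaption/x-cea-608 caps):

  GstCaps *caps = gst_caps_new_simple ("closedcaption/x-cea-608",
      "format", G_TYPE_STRING, "raw",
      "field", G_TYPE_INT, 1, NULL);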
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/4031>
Systems like musl libc don't support ISO 6937 in iconv. This ensures
that the MPEG-TS plugin can cope with that. There is existing support
in the plugin for other methods, so it seems to have been the original
intent anyway.
Fixes: #1314
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3245>
If an input is malformed (it only produces cea608 field 1 cc_data), then
in passthrough we would effectively drop every second cea608 on output,
as we would not store any unused cea608 data.
Fix this by having all code paths go through the framerate conversion
code, which will store and retrieve any relevant data across buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3211>
In fact, all the H.264 bit writer functions have byte-aligned output except
the one for the slice header. So change the API from a size in bits to a
size in bytes, which is easier to use. For the slice header, add an extra
"trail_bits_num" parameter to return the number of unaligned trailing bits.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3193>
According to the W3C
specification (https://w3c.github.io/webrtc-pc/#datachannel-send) we
should return an InvalidStateError exception when trying to send while the
channel is not open. In the world of C/GLib/GStreamer we don't have
exceptions but have to rely on gboolean/GError instead. Introducing
these calls for a change in the function signatures of the action signals
used to send data on the data channel. Changing the signatures of the
existing "send-string" and "send-data" signals would be an immediate
breaking change, so instead we deprecate them. Furthermore, there is no
way to express GError** as an argument to an action signal in a way
that fits language bindings (pointer-to-pointer simply does not work),
so we have to use regular functions instead.
Therefore we introduce gst_webrtc_data_channel_send_data_full() and
gst_webrtc_data_channel_send_string_full() while deprecating the old
functions and corresponding signals.
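A short sketch of the new error-reporting path (a sketch only; it assumes
the _full variants return a gboolean and take a GError **):

  GError *error = NULL;

  if (!gst_webrtc_data_channel_send_string_full (channel, "hello", &error)) {
    /* e.g. the channel is not in the "open" state */
    g_warning ("send failed: %s", error->message);
    g_clear_error (&error);
  }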
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1958>
Append silence to the final frame when the stream doesn't align on the
20 millisecond frame size.
The AMR-WB codec imposes a fixed 20 millisecond frame size. In its current
form, the `voamrwbenc` plugin deals with this limitation by discarding any
audio at the end of the stream that falls short of 20 milliseconds. This
patch keeps that audio data and appends silence to the end to preserve
frame-size alignment.
The patch also adds tests to check for the updated behavior. I noticed that
tests weren't being built, so I changed the build to allow for building the
tests when the `tests` and `voamrwbenc` options are set.
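A generic sketch of the padding step (not the plugin's actual code; the
leftover-sample variables are hypothetical). AMR-WB runs at 16 kHz, so a
20 millisecond frame is 320 samples:

  /* Copy the leftover samples into a full 20 ms frame (320 samples,
   * 16-bit mono) and leave the remainder zeroed, i.e. silence. */
  gint16 frame[320] = { 0, };

  memcpy (frame, tail_samples, n_tail_samples * sizeof (gint16));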
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/3027>
This avoids getting into a bunch of corner cases. We'd have to insert
a "rejected" line from the start as a placeholder to get around this,
but the rest of the code would just become more complicated, so just
disallow it for now.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2439>
There might be a sequence of events and buffers such as:
- stream-start/caps/segment events are received
- flush events are received
- then buffers arrive with a new segment event
In the above case, the stream-start and caps events might not reach the
peer proxysrc if it is not ready to receive them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1552>
This new signal allows data-channel consumers to configure signal handlers on a
newly created data-channel, before any data or state change has been notified.
The webrtcbin unit tests were refactored to make use of this new signal.
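A hypothetical consumer might look roughly like this (it assumes the signal
is webrtcbin's "prepare-data-channel" with a (channel, is_local) callback;
treat the names as assumptions):

  static void
  on_prepare_data_channel (GstElement * webrtcbin,
      GstWebRTCDataChannel * channel, gboolean is_local, gpointer udata)
  {
    /* Attach handlers before any data or state change can be notified. */
    g_signal_connect (channel, "on-message-string",
        G_CALLBACK (on_message_string), udata);
  }

  g_signal_connect (webrtcbin, "prepare-data-channel",
      G_CALLBACK (on_prepare_data_channel), NULL);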
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2427>
When bundling, if multiple media sections use the same media payload type,
each of the fec/rtx/red additions would add a distinct payload type. This
could very easily overflow the available payload type space.
Instead, track the relationship between the media payload value and
the relevant fec/rtx/red payload values and reuse them whenever
necessary, even when bundling.
e.g.
...
a=group:BUNDLE video0 video1
m=video 9 UDP/SAVPF 96 97
a=mid:video0
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
m=video 9 UDP/SAVPF 96 97
a=mid:video1
a=rtpmap:96 VP8/90000
a=rtpmap:97 rtx/90000
a=fmtp:97 apt=96
...
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2474>
Now it uses the JPEG parser in libgstcodecparsers, while the whole
code is simplified by relying more on the base parser class for tag
handling.
The element now signals chroma-format, and the default framerate is
0/1, which is used for still images.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1473>
Add the gst_h265_parser_identify_and_split_nalu_hevc() method to
handle the case where a packetized stream contains start-code prefixes.
This new method behaves similarly to the existing
gst_h265_parser_identify_nalu_hevc(), but it will scan for start-code
prefixes to split the given data into NAL units.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2394>
They are part of gst_dep already and we have to make sure to always have
gst_dep. The order in dependencies matters, because it is also the order
in which Meson will set -I args. We want gstreamer's config.h to take
precedence over glib's private config.h when it's a subproject.
While at it, remove the useless fallback args for the gmodule/gio
dependencies; only GStreamer core needs them.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2031>
If things progress fast enough, some state changes may not be seen by
the waiting code.
Fix by:
1. keeping a list of all the state changes
2. having the waiting code check each entry; if the relevant state is
   found, all states up to and including that one are removed.
This ensures that any waits will see all the state sets.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
Input (sink pads) is the already-ssrc-muxed stream with the relevant rtp
sdes header extensions already applied:
- mid
- stream-id
- repaired-stream-id
Output (src pads) is separated into individual pads per SSRC, as
that's what rtpbin gives us.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
When creating a transceiver while creating an answer, the media kind of the
transceiver was never set correctly initially. This would lead to a
GST_WARNING being produced about changing a transceiver's media kind.
Fix this by retrieving the GstSDPMedia kind from the offer instead, as the
answer GstSDPMedia has not been set yet because the answer caps have not
been chosen yet.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1664>
This reverts commit 3cad3455377d5a22faa138d9df840257059776c8.
That commit broke the association between an audio and
a video track in the standard case.
In practice, to support carrying separate MediaStreams, we will need a
way to map which MediaStreamTrack belongs to which MediaStream, but that
will require some thinking about the API.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2023>
From https://datatracker.ietf.org/doc/html/draft-ietf-mmusic-msid-16:
> Multiple media descriptions with the same value for msid-id and
> msid-appdata are not permitted.
Our previous implementation of simply using the CNAME as the msid
identifier and the name of the transceiver as the msid appdata was
misguided and incorrect, and created issues when bundling multiple
video streams together: the ontrack event was emitted with the same
streams for the two bundled media sections, at least in Firefox.
Instead, use the transceiver name as the identifier, and expose
a msid-appdata property on transceivers to allow for further
customization by the application. When the property is not set,
msid-appdata can be left empty as it is specified as optional.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/2003>
Make it possible to configure the element to obtain the timestamps from
reference timestamp meta instead of using the ntp-offset property,
or estimating its own offset. Currently the only time format supported
is "timestamp/x-unix", i.e. UTC time expressed in the unix time epoch.
In addition, the custom event GstNtpOffset has been renamed to
GstOnvifTimestamp, to reflect that it is not necessarily used to convey
the ntp-offset. As a consequence, we had to modify a couple of files in
the rtsp-server as well.
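A minimal sketch of feeding such timestamps from upstream via reference
timestamp meta (illustrative; the buffer and UTC value are assumed context):

  /* Attach a "timestamp/x-unix" reference timestamp to a buffer so the
   * element can derive its timestamps from it. */
  GstCaps *ref = gst_caps_new_empty_simple ("timestamp/x-unix");

  gst_buffer_add_reference_timestamp_meta (buffer, ref,
      utc_time_ns, GST_CLOCK_TIME_NONE);
  gst_caps_unref (ref);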
Fixes #984
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1683>
We need to always add the RTX/RED/ULPFEC elements as rtpbin will only
call us once to request aux/fec senders/receivers.
We also need to regenerate the media section of the SDP instead of
blindly copying from the previous offer.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1762>
Note that the AYUV and AYUV64 formats will be used to expand format
support; in particular, some packed YUV formats (e.g., Y410, YUY2)
are common DXGI formats used for hardware decoders/encoders on Windows,
but those formats cannot be used as a render target. We need to handle
them differently, without the help of a pixel shader, for example by
using a compute shader.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1699>
When an extmap is defined twice for the same ID, firefox complains and
errors out (chrome is smart enough to accept strict duplicates).
To work around this, we deduplicate extmap attributes, and also error
out when a different extmap is defined for the same ID.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1383>
When schedule is true (as is the case by default), we insert padding
when no caption data is present in the schedule queue, and previously
we weren't checking whether the caption pad had gone EOS, leading to
infinite scheduling of padding after EOS on the caption pad.
Rectify that by adding a "drain" parameter to dequeue_caption().
In addition, update the captions_and_eos test to push valid cc_data
in: without this, cccombiner was attaching padding buffers it had
generated itself, and with this patch it would now stop attaching
said padding to the second buffer. By pushing valid, non-padding
cc_data we ensure a caption buffer is indeed attached to the first
and second video buffers.
Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/1252>