rtpulpfeccommon.c:432:27: error: format ‘%lx’ expects argument of type
‘long unsigned int’, but argument 10 has type ‘guint64 {aka long long unsigned int}’
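The usual portable fix, assumed here, is to print 64-bit values with the
GLib format macros instead of a hard-coded length modifier, e.g.:

  /* hypothetical illustration, not the exact code from rtpulpfeccommon.c */
  guint64 mask = G_GUINT64_CONSTANT (0x0123456789abcdef);
  GST_DEBUG ("fec mask 0x%" G_GINT64_MODIFIER "x", mask);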
https://bugzilla.gnome.org/show_bug.cgi?id=793732
The ulpfecenc "mux-seq" and "ssrc" properties were initially added
because the element did more than implement ULPFEC. As it was
decided that FLEXFEC would be implemented in a separate element,
both properties are now unneeded and confusing.
Change the default for the ulpfecenc multi-packet property,
as it is expected that most users of this element will be protecting video
streams.
Change the default of the rtpredenc allow-no-red-blocks
property, as this should also be its default mode of operation.
https://bugzilla.gnome.org/show_bug.cgi?id=793843
It is expected that when connecting to a stream that has
already started, the caps will only arrive at the interval
specified on rtpgstpay; we shouldn't warn, as this is
a normal mode of operation.
https://bugzilla.gnome.org/show_bug.cgi?id=793798
We expose a set of new elements:
* ULPFEC encoder / decoder
* A storage element, which should be placed before jitterbuffers,
and is used to store packets in order to attempt reconstruction
after the jitterbuffer has sent PacketLost events
* RED encoder / decoder (RFC 2198); these are necessary to
use FEC in WebRTC, as browsers will propose and expect ULPFEC
packets to be wrapped in RED packets (see the sender-side sketch below)
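A minimal sender-side sketch of how these elements chain together (payload
types, percentage and sink details are made up for illustration):

  gst-launch-1.0 videotestsrc ! vp8enc ! rtpvp8pay ! \
      rtpulpfecenc percentage=25 pt=100 ! \
      rtpredenc pt=99 allow-no-red-blocks=true ! \
      udpsink host=127.0.0.1 port=5000

On the receive side the topology described above applies: rtpreddec first,
then the storage element placed before the jitterbuffer so that rtpulpfecdec
can attempt reconstruction when PacketLost events arrive.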
With contributions from:
Mathieu Duponchelle <mathieu@centricular.com>
Sebastian Dröge <sebastian@centricular.com>
https://bugzilla.gnome.org/show_bug.cgi?id=792696
All received configurations are parsed and added to a list, which led
to unbounded memory usage. As the configuration is resent every
second, this quickly led to large memory usage.
Add a check to only add the config if it is not already present in
the list. This fix only handles the typical case of a well-behaved
stream; a malicious server could still send many useless
configurations to drive up the client's memory usage.
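A hedged sketch of the duplicate check (list handling and names are
illustrative, not the exact code):

  static gboolean
  config_already_stored (GList * configs, GstBuffer * config)
  {
    GList *l;
    GstMapInfo map;
    gboolean found = FALSE;

    gst_buffer_map (config, &map, GST_MAP_READ);
    for (l = configs; l != NULL && !found; l = l->next) {
      GstBuffer *stored = l->data;

      if (gst_buffer_get_size (stored) == map.size &&
          gst_buffer_memcmp (stored, 0, map.data, map.size) == 0)
        found = TRUE;
    }
    gst_buffer_unmap (config, &map);

    return found;
  }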
Only for byte-stream or hev1. For hvc1 the SPS/PPS are in the
caps as codec_data field and in this case they shouldn't be in
the stream data as well. The output caps should be updated with
the new codec_data if needed, for hvc1.
We keep the boolean byte_stream around since it's nicer for
readability and most of the code just cares about byte_stream
or not. This is useful for future-proofing the code for when
we add support for hev1 output as well.
This would happen if input is byte-stream with four-byte
sync markers instead of three-byte ones. The code that
scans for sync markers will place the start of the NALU
on the third-last byte of the NALU sync marker, which
means that any additional zeros may be counted as belonging
to the previous NALU instead of being part of the next sync
marker. Fix that so we don't send VPS/SPS/PPS with trailing
zeros in this case.
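Illustration of the failure mode (the scanner anchors on the last three
bytes of the start code, so the extra zero of a four-byte marker used to
be attributed to the preceding NALU):

  ... end of NALU | 00 00 01 | next NALU ...       three-byte marker: fine
  ... end of NALU  00 | 00 00 01 | next NALU ...   four-byte marker: the
                                                   leading 00 was counted as
                                                   a trailing byte of the
                                                   previous NALU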
See https://bugzilla.gnome.org/show_bug.cgi?id=732758
There is no difference between pushing out a buffer directly
with gst_rtp_base_depayload_push() and returning it from the
process function. The base class will just call _depayload_push()
on the returned buffer as well.
So instead of marshalling buffers through three layers and back,
just push them from one place in handle_nal() and always return
NULL from the process vfunc. This simplifies the code a little.
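A minimal sketch of the resulting flow (variable names are illustrative):

  /* in handle_nal(), push finished buffers out directly */
  gst_rtp_base_depayload_push (GST_RTP_BASE_DEPAYLOAD (depay), outbuf);

  /* ... and the process vfunc now always ends with */
  return NULL;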
Also rename _push_fragmentation_unit() to _finish_fragmentation_unit()
for clarity. Push sounds like it means being pushed out, whereas
it might just be pushed into an adapter.
This change has the side-effect that multiple NALs in a single STAP
(such as SPS/PPS) may no longer be pushed out as a single buffer if
we output NALs in byte-stream format (i.e. not aggregate AUs), but
that shouldn't really make any difference to anyone.
This would happen if input is byte-stream with four-byte
sync markers instead of three-byte ones. The code that
scans for sync markers will place the start of the NALU
on the third-last byte of the NALU sync marker, which
means that any additional zeros may be counted as belonging
to the previous NALU instead of being part of the next sync
marker. Fix that so we don't send SPS/PPS with trailing
zeros in this case.
https://bugzilla.gnome.org/show_bug.cgi?id=732758
The G722 payloader only accepts G722 audio with channels=1, so it must
specify encoding-params=1 in its src caps, otherwise it causes issues
with farstream, which thinks it supports 2-channel G722 and, when
confronted with a remote that has G722/8000/2, negotiates it
and then errors out with not-negotiated when the caps don't intersect
at runtime.
https://bugzilla.gnome.org/show_bug.cgi?id=789878
This then just counts samples and calculates the output timestamps based
on that and the very first observed timestamp. The timestamps on the
buffers continue to be used to detect discontinuities that are too
big, at which point the counter is reset.
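A hedged sketch of the calculation (names are illustrative):

  if (!GST_CLOCK_TIME_IS_VALID (first_ts)) {
    first_ts = GST_BUFFER_PTS (inbuf);
    sample_count = 0;
  }
  /* derive the output timestamp from counted samples, not RTP timestamps */
  GST_BUFFER_PTS (outbuf) = first_ts +
      gst_util_uint64_scale (sample_count, GST_SECOND, sample_rate);
  sample_count += n_samples;    /* samples contained in this buffer */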
When receiving data via Bluetooth, many devices put completely wrong
values into the RTP timestamp field. For example iOS seems to put a
timestamp in milliseconds in there, instead of something based on the
current sample offset (RTP clock-rate == sample rate).
https://bugzilla.gnome.org/show_bug.cgi?id=787297
Do not allocate payload size outbuf if appending payload buffer.
Commit 137672ff18 attached the payload
to the output buffer but forgot to remove the payload allocation. That
effectively doubled the payload size and added zeroed or random bytes.
Makes the following pipeline work again:
gst-launch-1.0 -v audiotestsrc wave=2 ! gsmenc ! rtpgsmpay ! rtpgsmdepay ! gsmdec ! autoaudiosink
https://bugzilla.gnome.org/show_bug.cgi?id=784616
There is no difference between pushing out a buffer directly
with gst_rtp_base_depayload_push() and returning it from the
process function. The base class will just call _depayload_push()
on the returned buffer as well.
So instead of marshalling buffers through three layers and back,
just push them from one place in handle_nal() and always return
NULL from the process vfunc. This simplifies the code a little.
Also rename _push_fragmentation_unit() to _finish_fragmentation_unit()
for clarity. Push sounds like it means being pushed out, whereas
it might just be pushed into an adapter.
This change has the side-effect that multiple NALs in a single STAP
(such as SPS/PPS) may no longer be pushed out as a single buffer if
we output NALs in byte-stream format (i.e. not aggregate AUs), but
that shouldn't really make any difference to anyone.
Use the ::process_rtp_packet() vfunc to avoid mapping the
RTP buffer twice.
gst_rtp_buffer_get_payload_buffer() returns a new sub-buffer
which will always be writable, so no need to make it writable.
Every g_quark_from_static_string() is a hash table lookup serialised
on the global quark lock in GLib. Let's just look up the two quarks
we need once and cache them locally for future use. While we're at it,
add new utility functions for the two most commonly used tags
(audio + video). Make first argument a gpointer so we don't have to
cast and make the code ugly. These are used for logging purposes
only anyway.
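A sketch of the caching pattern (tag strings shown literally; the real code
may use the corresponding GST_META_TAG_* defines):

  static GQuark audio_tag_quark = 0;
  static GQuark video_tag_quark = 0;

  /* looked up once instead of on every buffer */
  audio_tag_quark = g_quark_from_static_string ("audio");
  video_tag_quark = g_quark_from_static_string ("video");

  /* later: */
  if (gst_meta_api_type_has_tag (info->api, audio_tag_quark))
    GST_DEBUG ("audio meta");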
This prevents storing an infinite amount of e.g. comment headers if they
come without a new initialization header in front of them. There can
only be one header of each type.
If we also replace all headers when receiving any possibly following
comments header, we would throw away the config header before being able
to make use of it.
The payloader needs to reset and update the vorbis config data which is
pushed on the network if it receives new headers, or at least, it may
have to do so.
Without this, the stream configuration could change without the
payloader sending the new configuration to the other side.
Insert VPS/SPS/PPS before the first NAL unit containing an I-frame in an
access unit only. If an access unit consists of several such NAL units
(tiles) VPS/SPS/PPS should only be inserted before the first of them so
that parameters are only updated between frames.
Do not insert VPS/SPS/PPS before P-frames when config-interval is -1.
https://bugzilla.gnome.org/show_bug.cgi?id=775817
Some buggy payloaders, e.g. rtph263pay, may use mode B for packets
that start with a picture (or GOB) start code although it's not
allowed. Let's be nice and not drop these packets/frames.
https://bugzilla.gnome.org/show_bug.cgi?id=773516
Bump the bitstream parsing to TRACE log level so it doesn't flood the
output when trying to read the more useful DEBUG and LOG messages.
Also use GST_DEBUG_OBJECT instead of GST_DEBUG in various places
https://bugzilla.gnome.org/show_bug.cgi?id=773514
Although commits 6a16be7, 64f9d08 and 0c7e3a8 fixed some issues, they
introduced others. This patch fixes the leak of one macroblock for every
B fragment.
Macroblock structures must not be freed immediately after finding the
boundaries as they are stored and used later. However the initial dummy
structure (used for finding the first boundary) must be freed.
CID #1212156
https://bugzilla.gnome.org/show_bug.cgi?id=773512
We were just picking the timestamp of the last buffer pushed into our
adapter before we had enough data to push out.
This fixes things to figure out how large each frame is and what
duration it covers, so we can set both the timestamp and duration
correctly.
Also adds some DISCONT handling.
This may cause a few packets to be processed by the parser, but it's
better than never pushing out buffers from a slightly broken stream
where no marker bits are set.
Under certain conditions gst_rtp_buffer_get_payload() returns a copy of
the payload. In this case the payload modifications will not affect the
rtp buffer. So instead of modifying the payload buffer directly we
should modify the buffer that actually gets pushed on the adapter.
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Always intersect with the filter caps in the getcaps function
to make sure we return a subset of what was requested.
Other payloaders also have this problem and need fixing
in future commits.
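The usual getcaps pattern, sketched under the assumption that caps holds
the result computed so far:

  if (filter) {
    GstCaps *tmp;

    /* only ever return a subset of what was requested */
    tmp = gst_caps_intersect_full (filter, caps, GST_CAPS_INTERSECT_FIRST);
    gst_caps_unref (caps);
    caps = tmp;
  }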
When parsing NAL unit type in codec_data, check the 6 bits of
NAL_unit_type only and do not require the array_completeness bit to be
0, since the default and mandatory value of array_completeness is 1 for
hvc1.
https://bugzilla.gnome.org/show_bug.cgi?id=768653
Handle sprop-vps, sprop-sps and sprop-pps in caps instead of
sprop-parameter-sets.
rtph265pay works with byte-stream and hvc1 formats but not hev1 yet. It
handles profile-id, tier-flag and level-id in caps query.
https://bugzilla.gnome.org/show_bug.cgi?id=753760
This is supposed to be either in the codec_data (avc stream format) or inside
the stream, and we extract it from there. It should not be set from a
property as it's stream specific.
https://bugzilla.gnome.org/show_bug.cgi?id=767789
The GST_BUFFER_OFFSET of output buffers returned to GstRtpBasePayload
should reflect the number of "samples" in the unit of the RTP clock in this
buffer. If this is not true, then it shouldn't be set.
https://bugzilla.gnome.org/show_bug.cgi?id=761943
1. According to the RFC, the T bit is only set when either the RTP packet
contains only the J2K main header, or the packet contains tile parts from
multiple tiles. This is now managed correctly in the code. The second
scenario cannot happen with our payloader, since tile headers are always
placed in their own RTP packet, and so a packet cannot contain tile parts
from multiple tiles.
However, I have added code to track whether multiple tile parts are included
in a single RTP packet, in case we want to put header and data in the same
packet in the future.
2. The old code would set the tile id to zero for all J2K packets. It is now
set correctly to the appropriate tile id.
https://bugzilla.gnome.org/show_bug.cgi?id=745187
After clearing the adapter due to a DISCONT, as might happen when some packet(s)
have been lost, the depayloader was pushing data into the adapter (which had no
header due to the clear), creating a headerless frame out of it, and sending it
downstream. The downstream decoder would then usually ignore it; unless there
were lots of DISCONTs from the jitterbuffer in which case the decoder would reach
its max_errors limit and throw an element error. Now we just discard that data.
It is probably not worth trying to salvage this data because non-progressive
jpeg does not degrade gracefully and makes the video unwatchable even with
low packet loss such as 3-5%.
payload_buffer hasn't been assigned a value before the jumps to
switch_failed or packet_short. So the value must be NULL. No need
to unmap and unref.
CID #1316476
Free memory of current macroblock once it isn't needed so it isn't leaked
by the call of the gst_rtp_h263_pay_B_mbfinder function.
if (!(mac = gst_rtp_h263_pay_B_mbfinder (context, gob, mac, mb))) {
CID 1212156
RFC 2435 mentions in section 4.1 that U/V use table number 1, but this seems
just like an example. Some encoders are not following that and there seems to
be no reason to reject their streams.
https://bugzilla.gnome.org/show_bug.cgi?id=761345
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
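The safe pattern this switches to, sketched:

  GstCaps *caps = gst_pad_get_current_caps (pad);

  if (caps == NULL) {
    /* caps went away in the meantime, bail out gracefully */
    return FALSE;
  }
  /* ... use caps ... */
  gst_caps_unref (caps);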
https://bugzilla.gnome.org/show_bug.cgi?id=759539
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
VP9_PAY_PICTURE_ID_7BITS and VP9_PAY_PICTURE_ID_15BITS are mutually
exclusive options of the picture-id-mode. We can break after the
first case.
1 or 2 bytes need to be added to the header length depending on the
PictureID size.
https://tools.ietf.org/html/draft-uberti-payload-vp9-00#section-4.2
CID 1353479
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without tags or
with only the audio tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
Both Firefox and Chrome use OPUS as the encoding name in their SDP.
Adding this now de-facto standard name removes the need for special
cases in SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
This will help elements that cannot deal with multistream,
such as the RTP payloader.
The caps now do not include a "streams" field anymore, but
a "multistream" boolean, since we have no real use for knowing
the exact amount of streams.
https://bugzilla.gnome.org/show_bug.cgi?id=665078
Quick and dirty implementation of an RTP payloader and depayloader
for VP9. In particular it assumes no spatial or temporal layering,
non-flexible mode, and some other bits and pieces.
https://bugzilla.gnome.org/show_bug.cgi?id=754773
Looks like it just uses the NAL enums and nothing else from
the codecparsers, and that's the only reason it had to be
moved from -good to -bad when it was originally added. We
can probably keep those NAL enums up to date enough, so let's
remove the codecparser dependency so it can be moved back into
-good.
https://bugzilla.gnome.org/show_bug.cgi?id=761606
It's not enough to have timeout or event based VPS/SPS/PPS information
sent in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending VPS/SPS/PPS isn't enough.
It might also be desirable in general to make sure the VPS/SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
VPS/SPS/PPS is not signaled out of band.
This commit adds the possibility to send VPS/SPS/PPS with every key frame.
This mode can be enabled by setting "config-interval" property to -1. In
this case the payloader will add VPS, SPS and PPS before every key (IDR)
frame.
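For example (sketch; the element handle is hypothetical):

  /* insert VPS/SPS/PPS in front of every IDR frame */
  g_object_set (h265payloader, "config-interval", -1, NULL);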
https://bugzilla.gnome.org/show_bug.cgi?id=757892
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
h264parse and gstrtph264depay do the same, let's keep the behaviour
consistent. As we now include the codec_data inside the stream, this causes
less caps renegotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=753228
rtph264depay does the same and this fixes decoding of some streams with 32
SPS (or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255),
but the field in the codec_data for the number of SPS or PPS is only 5
(or 8) bit. As such, 32 SPS (or 256 PPS) are interpreted as 0 everywhere.
This looks like a mistake in the part of the spec about the codec_data.
This causes an assertion and would lead to getting a NULL instead
of a buffer. Without proper checking this would easily lead to a
segfault.
Related to rtph264depay: https://bugzilla.gnome.org/show_bug.cgi?id=737199
type is truncated to 0-31 with "& 0x1f", but right after that it checks if
the value is equal to GST_H265_NAL_VPS, GST_H265_NAL_SPS, or
GST_H265_NAL_PPS (which are 32, 33, and 34 respectively). Obviously, this can
never be True when the value is at most 31 after the truncation.
The intention of the code was to truncate to 0-63.
After further investigation the previous commit is wrong. The code intended to
check if the type is 39 or in the ranges 41-44 and 48-55, just like
gsth265parse.c does. Type 40 would not be complete.
nal_type is the index for a GstH265NalUnitType enum. There are two types of dead
code here:
First, after checking if nal_type is >= 39 there are two OR conditionals that
check if the value is in ranges higher than that number. If nal_type >= 39
takes the True branch, those other conditions aren't checked, and if it takes
the False branch and they are checked, they will always be False as well. They
are redundant.
Second, the enum has a range of 0 to 40, so checks for ranges higher than 41
can never be True.
Remove these redundant checks.
CID 1249684
For APP/JPG markers the size follows the marker and we have to skip it. This is
not really a problem unless the marker contains e.g. a preview JPEG or
something else that we might interpret as another marker.
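Sketch of the skip (JPEG marker segments carry a big-endian 16-bit length
that includes the two length bytes themselves; offset handling is
illustrative):

  /* data + offset points just past the two marker bytes */
  guint16 seg_size = GST_READ_UINT16_BE (data + offset);
  offset += seg_size;   /* skip the whole segment, length field included */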
In file included from gstrtpL16depay.h:27:0,
from gstrtp.c:73:
gstrtpchannels.h:154:33: error: 'channel_orders' defined but not used [-Werror=unused-const-variable]
static const GstRTPChannelOrder channel_orders[] =
It's not enough to have timeout or event based SPS/PPS information sent
in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending SPS/PPS is not sufficient.
It might also be desirable in general to make sure the SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
SPS/PPS is not signaled out of band.
This patch adds the possibility to send SPS/PPS with every key frame. This
mode can be enabled by setting "config-interval" property to -1. In this
case the payloader will add SPS and PPS before every key (IDR) frame.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
This way we can use -1 as special value, which is nicer than MAXUINT.
This is backwards compatible even with the GValue API, as shown by
a unit test.
https://bugzilla.gnome.org/show_bug.cgi?id=757892
The current code checks for a NUL terminator and a ';' terminator,
for backward compat, chained in a way that causes all events to be rejected.
The proper condition is to reject the events when the terminator isn't
in the ['\0', ';'] set.
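In other words (sketch; term is the byte right after the event string):

  /* accept only the old NUL terminator or ';', reject everything else */
  if (term != '\0' && term != ';')
    goto invalid_event;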
https://bugzilla.gnome.org/show_bug.cgi?id=758151
* use g_list_free_full(), don't iterate elements manually when freeing
* call gst_rtp_*_pay_clear_packet(), don't duplicate its code
* use gst_buffer_unref() to clarify that it is buffers being released,
instead of referring directly to gst_mini_object_unref()
Fixes https://bugzilla.gnome.org/show_bug.cgi?id=755277
Use constantDuration to calculate the timestamp of non-first AU in the
RTP packet.
If constantDuration is not present in the MIME parameters, its value
must be calculated based on the timing information from two consecutive
RTP packets with AU-Index equal to 0.
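A sketch of the resulting computation for the i-th AU in a packet (names are
illustrative):

  /* AUs within one RTP packet are spaced by constantDuration RTP clock ticks */
  GST_BUFFER_PTS (au_buf) = first_au_pts +
      gst_util_uint64_scale (i * constant_duration, GST_SECOND, clock_rate);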
https://bugzilla.gnome.org/show_bug.cgi?id=747881
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
GstRTSPMedia uses this classification to detect the real payloader
inside a dynpay bin and asserts if it doesn't find it; it is
therefore required.
https://bugzilla.gnome.org/show_bug.cgi?id=753325
Initialize the PT to the default value of the codec and check if
it is still the default before declaring the pt to be dynamic or
not when setting the caps.
Also use the PT constants from the rtp lib when possible
https://bugzilla.gnome.org/show_bug.cgi?id=747965
We need a proper caps event from upstream with the full RTP caps as we can't
create caps ourselves from thin air. Fixes usage of rtpstreamdepay after e.g.
a filesrc or any other element that supports pull mode.
https://bugzilla.gnome.org/show_bug.cgi?id=753066
h264parse does the same, let's keep the behaviour consistent. As we now
include the codec_data inside the stream too here, this causes less caps
renegotiation.
The spec says:
When a picture parameter set NAL unit with a particular value of
pic_parameter_set_id is received, its content replaces the content of the
previous picture parameter set NAL unit, in decoding order, with the same
value of pic_parameter_set_id (when a previous picture parameter set NAL unit
with the same value of pic_parameter_set_id was present in the bitstream).
h264parse does the same and this fixes decoding of some streams with 32 SPS
(or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255), but
the field in the codec_data for the number of SPS or PPS is only 5 (or 8) bit.
As such, 32 SPS (or 256 PPS) are interpreted as 0 everywhere.
This looks like a mistake in the part of the spec about the codec_data.
Need to check that the number of bytes we want to copy from the adapter
actually is available and handle the error case gracefully. This error
may happen if malformed packets are received and we don't have a
complete frame.
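The check amounts to something like (sketch; error handling is illustrative):

  if (gst_adapter_available (adapter) < frame_size) {
    GST_WARNING_OBJECT (depay, "malformed packet, not enough data buffered");
    gst_adapter_clear (adapter);
    return NULL;
  }
  outbuf = gst_adapter_take_buffer (adapter, frame_size);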
https://bugzilla.gnome.org/show_bug.cgi?id=752663
For more optimised RTP packet handling: means we don't
need to map the input buffer again but can just re-use
the mapping the base class has already done.
https://bugzilla.gnome.org/show_bug.cgi?id=750235
For more optimised RTP packet handling: means we don't
need to map the input buffer again but can just re-use
the map the base class has already done.
https://bugzilla.gnome.org/show_bug.cgi?id=750235
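The vfunc in question lives on the depayloader base class; a minimal sketch
of a subclass hooking it up (the gst_foo_depay names are hypothetical):

  static GstBuffer *
  gst_foo_depay_process_rtp_packet (GstRTPBaseDepayload * depayload,
      GstRTPBuffer * rtp)
  {
    /* the base class already mapped the RTP buffer, no extra map needed */
    guint len = gst_rtp_buffer_get_payload_len (rtp);

    GST_LOG_OBJECT (depayload, "%u payload bytes", len);
    return gst_rtp_buffer_get_payload_buffer (rtp);
  }

  /* in class_init: */
  gstrtpbasedepayload_class->process_rtp_packet =
      gst_foo_depay_process_rtp_packet;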