A new signal named on-bundled-ssrc is provided and can be
used by the application to redirect a stream to a different
GstRtpSession or to keep the RTX stream grouped within the
GstRtpSession of the same media type.
https://bugzilla.gnome.org/show_bug.cgi?id=772740
aacparse resizes the input buffer while converting an ADTS stream to RAW,
but the buffer's write permission is not checked before the resize.
This triggers a gst_buffer_is_writable assertion and sometimes leads to AV sync issues.
It is corrected by making the buffer writable using gst_buffer_make_writable.
https://bugzilla.gnome.org/show_bug.cgi?id=774129
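A minimal sketch of the idea (illustrative variable names, not the exact aacparse code):
make the buffer writable before resizing it.
  outbuf = gst_buffer_make_writable (outbuf);   /* may copy if other refs exist */
  gst_buffer_resize (outbuf, header_size, frame_size - header_size);   /* strip the ADTS header */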
A TIME segment implies that stream/running time is being handled by upstream,
so we shouldn't override it without any clue.
This patch fixes seeking in DASH streaming.
https://bugzilla.gnome.org/show_bug.cgi?id=774196
The accumulator is filled by intersecting with all the pad caps, as such
it must be initialized to ANY (like it is before the iteration is
started) and not to EMPTY.
Fixes the CAPS query always returning EMPTY caps when resyncing happened
during the query, e.g. because pads were added/removed.
The g_object_unref (saddr) before receiving message seems to be redundant as it
is done just before jumping to retry
Though not directly related, part of
https://bugzilla.gnome.org/show_bug.cgi?id=772841
Control messages are used only in multicast mode - to detect if the destination
address is not ours and possibly drop the packet. However in non-multicast
modes the messages are still allocated and freed even if not used. Therefore
request control messages from g_socket_receive_message() only in multicast
mode.
https://bugzilla.gnome.org/show_bug.cgi?id=772841
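A hedged sketch of the idea (variable names abridged, not the exact udpsrc code):
only ask g_socket_receive_message() for control messages when in multicast mode.
  GSocketControlMessage **msgs = NULL;
  gint n_msgs = 0;
  g_socket_receive_message (socket, &saddr, vectors, n_vectors,
      multicast ? &msgs : NULL, multicast ? &n_msgs : NULL,
      &flags, cancellable, &error);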
Do not use last buffer TS + buffer duration because buffer duration
might be inaccurate, especially for frame rates like 30fps where a
rounding error is observed.
https://bugzilla.gnome.org/show_bug.cgi?id=773785
When doing rtx, the jitterbuffer will always add an rtx-timer for the next
sequence number.
In the case of the packet corresponding to that sequence number arriving,
that same timer will be reused, and simply moved on to wait for the
following sequence number etc.
Once an rtx-timer expires (after all retries), it will be rescheduled as
a lost-timer instead for the same sequence number.
Now, if this particular sequence-number arrives (after the timer has
become a lost-timer), the reuse mechanism *should* now set a new
rtx-timer for the next sequence number, but the bug is that it does
not change the timer-type, and hence schedules a lost-timer for that
following sequence number, with the result that you will have a very
early lost-event for a packet that might still arrive, and you will
never be able to send any rtx for this packet.
Found by Erlend Graff - erlend@pexip.com
https://bugzilla.gnome.org/show_bug.cgi?id=773891
The lost-event was using a different time-domain (dts) than the outgoing
buffers (pts). Given certain network-conditions these two would become
sufficiently different and the lost-event contained timestamp/duration
that was really wrong. As an example GstAudioDecoder could produce
a stream that jumps back and forth in time after receiving a lost-event.
The previous behavior calculated the pts (based on the rtptime) inside the
rtp_jitter_buffer_insert function, but now this functionality has been
refactored into a new function rtp_jitter_buffer_calculate_pts that is
called much earlier in the _chain function to make pts available to
various calculations that wrongly used dts previously
(like the lost-event).
There are however two calculations where using dts is the right thing to
do: calculating the receive-jitter and the rtx-round-trip-time, where the
arrival time of the buffer from the network is the right metric
(and is what dts in fact is today).
The patch also adds two tests regarding B-frames or the
“rtptime-going-backwards”-scenario, as there were some concerns that this
patch might break this behavior (which the tests show it does not).
The new timeout is always going to be (timeout + delay), however, the
old behavior compared the current timeout to just (timeout), basically
being (delay) off.
This would happen if rtx-delay == rtx-retry-timeout, with the result that
a second rtx attempt for any buffers would be scheduled immediately instead
of after rtx-delay ms.
Simply calculate (new_timeout = timeout + delay) and then use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=773905
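In code the idea is simply the following (illustrative names; reschedule_timer
stands in for the internal helper):
  GstClockTime new_timeout = timeout + delay;
  if (timer->timeout != new_timeout)
    reschedule_timer (jitterbuffer, timer, new_timeout);   /* compare/schedule with delay included */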
Commit 83e718 added a pad template to splitmux request
pads, which means that GstElement now releases the pads on
dispose, but after having removed all elements in the bin
and unlinked them. Make sure we can handle cleanup in that case
without throwing assertions.
https://bugzilla.gnome.org/show_bug.cgi?id=773784
qtdemux.c: In function ‘qtdemux_parse_tree’:
qtdemux.c:10139:16: error: ‘color_table_id’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
if (color_table_id != 0) {
^
qtdemux.c:10121:19: note: ‘color_table_id’ was declared here
guint16 color_table_id;
^~~~~~~~~~~~~~
The ProRes guidelines suggest an interleave of 0.5s is common, but
specifies that for ProRes at most 2MB (for SD) and 4MB (for HD) should
be used per chunk.
It might also make sense to use similar numbers in general.
https://bugzilla.gnome.org/show_bug.cgi?id=773217
Previously we were switching from one chunk to another on every single
buffer. This wastes some space in the headers and, depending on the
software, might result in more reads (e.g. if the software reads
multiple samples in one go when they're in the same chunk).
The ProRes guidelines suggest an interleave of 0.5s is common, but
specifies that for ProRes at most 2MB (for SD) and 4MB (for HD) should
be used per chunk. This will be handled in a follow-up commit.
https://bugzilla.gnome.org/show_bug.cgi?id=773217
It's required for ProRes to work with other software.
It is also in the MP4 standard, but inventing values here seems a bit
tricky for the general case and it does not really give any extra
information.
https://bugzilla.gnome.org/show_bug.cgi?id=769048
Some buggy payloaders, e.g. rtph263pay, may use mode B for packets
that start with a picture (or GOB) start code although it's not
allowed. Let's be nice and not drop these packets/frames.
https://bugzilla.gnome.org/show_bug.cgi?id=773516
Bump the bitstream parsing to TRACE log level so it doesn't flood the
output when trying to read the more useful DEBUG and LOG messages.
Also use GST_DEBUG_OBJECT instead of GST_DEBUG in various places
https://bugzilla.gnome.org/show_bug.cgi?id=773514
Although commits 6a16be7, 64f9d08 and 0c7e3a8 fixed some issues, they
introduced others. This patch fixes the leak of one macroblock for every
B fragment.
Macroblock structures must not be freed immediately after finding the
boundaries as they are stored and used later. However the initial dummy
structure (used for finding the first boundary) must be freed.
CID #1212156
https://bugzilla.gnome.org/show_bug.cgi?id=773512
Instead of sending EOS when a source sends BYE, we have to wait for
all the sources to be gone, which means they already sent BYE and
were removed from the session. We now handle the EOS in the RTCP
loop, checking the number of sources in the session.
https://bugzilla.gnome.org/show_bug.cgi?id=773218
Improve RFC2326 - chapter C.3 compatibility:
In case just a single stream is specified in the SDP and the control attribute
is missing, do not drop the stream but rather assume "a=control:*".
https://bugzilla.gnome.org/show_bug.cgi?id=770568
Use the number of milliframes per second for integral and drop-frame
framerates, as suggested by the QT file format specification and other
places. We already did that for integral framerates before, but not for
drop-frame framerates. This now keeps precision better.
For all other framerates, check if it's close to a well-known framerate
and use that instead.
https://bugzilla.gnome.org/show_bug.cgi?id=769041
We consider there's a significant difference when it's larger than one second,
or larger than half the duration of the last processed fragment in case the
latter is larger.
https://bugzilla.gnome.org/show_bug.cgi?id=754230
Modify the caps string to allow width and height greater than 4096.
There is no need to restrict it since the matroska format allows the
width and height values to be up to eight bytes long.
https://bugzilla.gnome.org/show_bug.cgi?id=773582
This solves a hanging mainloop in following scenario:
* connect to source
* network/server drops
* pipeline set to NULL (and connection to flushing as part)
* pipeline set to PAUSED/PLAYING (connection to non-flushing, but not recorded)
* [connecting still not possible]
* pipeline set to NULL => mainloop hangs (since no actual flushing is done)
The pacing of the overall muxing is controlled
by the video GOPs arriving, so we can only handle
1 video stream, and the request pad is named accordingly.
Ignore a request for a 2nd video pad if there's already
an active one.
In file included from ../subprojects/gst-plugins-good/gst/monoscope/gstmonoscope.c:42:0:
../subprojects/gst-plugins-base/gst-libs/gst/audio/audio.h:26:39: fatal error: gst/audio/audio-enumtypes.h: No such file or directory
#include <gst/audio/audio-enumtypes.h>
^
compilation terminated.
https://ci.gstreamer.net/job/GStreamer-master-meson/271/console
Found via the Jenkins CI:
FAILED: subprojects/gst-plugins-good/gst/multifile/gstmultifile@sha/gstsplitmuxsink.c.o
[...]
In file included from ../subprojects/gst-plugins-good/gst/multifile/gstsplitmuxsink.h:24:0,
from ../subprojects/gst-plugins-good/gst/multifile/gstsplitmuxsink.c:59:
../subprojects/gst-plugins-base/gst-libs/gst/pbutils/pbutils.h:30:43: fatal error: gst/pbutils/pbutils-enumtypes.h: No such file or directory
#include <gst/pbutils/pbutils-enumtypes.h>
^
compilation terminated.
https://ci.gstreamer.net/job/GStreamer-master-meson/263/console
If the seek stop point (or start, during reverse play)
was within the segment we just finished, go EOS immediately
instead of proceeding through all other parts and sending
0 length seeks to them.
https://bugzilla.gnome.org/show_bug.cgi?id=772138
When one part moves ahead of the others - due to excessive
downstream queueing, or really small input files - then
we can end up activating parts more than once. That can lead to
effects like shutting down pad tasks prematurely.
https://bugzilla.gnome.org/show_bug.cgi?id=772138
This reverts commit f1ceaab02f.
This broke atomic file writes in "buffer" mode. It did make
sure that any streamheaders are prepended to each file in
buffer mode as well, but that's not really needed in practice,
whereas atomic file writes are, so let's restore the status
quo ante for now since this was primarily a code cleanup anyway,
and if anyone needs the streamheaders in buffer mode too they
can make a patch to implement that differently. Re-implementing
the atomic writes in the element also seems way too much work.
https://bugzilla.gnome.org/show_bug.cgi?id=766990
We were just picking the timestamp of the last buffer pushed into our
adapter before we had enough data to push out.
This fixes things to figure out how large each frame is and what
duration it covers, so we can set both the timestamp and duration
correctly.
Also adds some DISCONT handling.
The basic idea is this:
1. For *larger* rtx-rtt, weigh a new measurement as before
2. For *smaller* rtx-rtt, be a bit more conservative and weigh a bit less
3. For very large measurements, consider them "outliers"
and count them a lot less
The idea being that reducing the rtx-rtt is much more harmful than
increasing it, since we don't want to underestimate the rtt of the
network, and when using this number to estimate the latency you need for
your jitterbuffer, you would rather want it to be a bit larger than a bit
smaller (and risk losing rtx-packets). The "outlier-detector" is there
to prevent a single skewed measurement from affecting the outcome too much.
On wireless networks, these are surprisingly common.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
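A rough sketch of such a weighting scheme (the factors below are made up for
illustration and are not the ones used by rtpjitterbuffer):
  static GstClockTime
  update_rtx_rtt (GstClockTime avg, GstClockTime rtt)
  {
    if (avg == 0)
      return rtt;
    if (rtt > 2 * avg)               /* outlier: count it a lot less */
      return (15 * avg + rtt) / 16;
    if (rtt > avg)                   /* larger: weigh as before */
      return (7 * avg + rtt) / 8;
    return (15 * avg + rtt) / 16;    /* smaller: be more conservative */
  }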
Assuming equidistant packet spacing when that's not true leads to more
loss than necessary in the case of reordering and jitter. Typically this
is true for video where one frame often consists of multiple packets
with the same rtp timestamp. In this case it's better to assume that the
missing packets have the same timestamp as the last received packet, so
that the scheduled lost timer does not time out too early causing the
packets to be considered lost even though they may arrive in time.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
There is no need to schedule another EXPECTED timer if we're already
past the retry period. Under normal operation this won't happen, but if
there are more timers than the jitterbuffer is able to process in
real-time, scheduling more timers will just make the situation worse.
Instead, consider this packet as lost and move on. This scenario can
occur with high loss rate, low rtt and high configured latency.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
This patch fixes an issue with the estimated gap duration when there is
a gap immediately after a lost timer has been processed. Previously
there was a discrepancy between the gap in seqnum and the gap in dts which
would cause a wrongly calculated duration. The issue would only be seen with
retransmission enabled, since when it's disabled lost timers are only
created when a packet is received and the actual gap length and last dts
are known.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
Stats should also be collected for unsuccessful packets.
rtx-rtt is very important for determining the necessary configured
latency on the jitterbuffer. It's especially important to be able to
increase the latency when retransmitted packets arrive too late and are
considered lost. This patch includes these late packets in the
calculation of the various rtx stats, making them more correct and
useful.
Also in the case where the original packet arrives after a NACK is sent,
the received RTX packet should update the stats since it provides useful
information about RTT.
The RTT is updated if and only if all requested retransmissions are
received. That way the RTT is guaranteed to make sense. If not, we don't
know which request the packet is a response to and the RTT may be bogus.
A consequence of this patch is that the RTT is not updated for a request
when one of the RTX packets for that seqnum is lost, but the measured RTT
will then be more accurate.
The implementation stores the RTX information from the timed-out timers
and uses it when the retransmitted packet arrives. These timers are
stored separately from the "normal" timers in order not to impact
performance (see attached performance test).
https://bugzilla.gnome.org/show_bug.cgi?id=769768
When disabled we can save some iterations over timers.
There is probably an argument for rtx-delay-reorder to exist, but
for normal operations, handling jitter (reordering) is something a
jitterbuffer should do, and this variable feels like functionality that
is not "in-sync" with what the jitterbuffer is trying to achieve.
Example: You have 50ms jitter on your network, and are receiving
audio packets with 10ms durations. An audio packet should not be
considered late until its rtx-timeout has expired (and hence a rtx-event
is sent), but with rtx-delay-reorder, events will be sent pretty much
all the time due to the jitter on the network.
Point being: The jitterbuffer should adapt its size to the measured network
jitter, and then rtx-delay-reorder needs to adapt as well, or simply
get out of the way and let the other (better) rtx-mechanisms do their job.
Also change find_timer to only use seqnum as an argument, since there
will only ever be one timer per seqnum at any given time. In the
one case where the type matters, the caller simply checks the type.
https://bugzilla.gnome.org/show_bug.cgi?id=769768
And actually calculate the field duration instead of a frame duration so
that we can properly timestamp output frames in fields=all mode.
This is probably still broken for reverse playback in telecine mode.
This may cause a few packets to be processed by the parser, but it's
better than never pushing out buffers from a slightly broken stream
where no marker bits are set.
To be able to cap the number of allowed streams for one session.
This is useful for preventing DoS attacks, where a sender can change
SSRC for every buffer, effectively bringing rtpbin to a halt.
https://bugzilla.gnome.org/show_bug.cgi?id=770292
Under certain conditions gst_rtp_buffer_get_payload() returns a copy of
the payload. In this case the payload modifications will not affect the
rtp buffer. So instead of modifying the payload buffer directly we
should modify the buffer that actually gets pushed on the adapter.
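A hedged sketch of the safer pattern (element-specific details omitted):
take the payload as a buffer, make that writable, modify it, and push the
same buffer to the adapter.
  GstBuffer *payload = gst_rtp_buffer_get_payload_buffer (&rtp);
  GstMapInfo map;
  payload = gst_buffer_make_writable (payload);
  gst_buffer_map (payload, &map, GST_MAP_WRITE);
  /* ... patch the payload bytes in map.data ... */
  gst_buffer_unmap (payload, &map);
  gst_adapter_push (adapter, payload);   /* adapter takes ownership */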
It now implements this interface with its video-direction
property. Values are changed to GstVideoOrientationMethod but they have
the same values as the originals.
https://bugzilla.gnome.org/show_bug.cgi?id=768687
On 32-bit x86: gstsplitmuxsink.c:966:31: warning: format ‘%u’ expects
argument of type ‘unsigned int’, but argument 9 has type
‘guint64 {aka long long unsigned int}’
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on-par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Some servers add properties like charset, e.g.
application/sdp; charset=utf8
Ideally we should also parse the charset and do conversion of all messages,
but that's for a later time.
This reverts commit fa008f271a.
async-handling in GstBin causes the pipeline to spin at 100%
CPU as the top-level pipeline tries to change that state
to PLAYING constantly. This is a workaround for a core
problem, essentially, but an improvement in this case for now.
After dropping the splitmux lock, re-check the state,
don't just fall through and sleep unconditionally,
as we may have already missed the wakeup.
https://bugzilla.gnome.org/show_bug.cgi?id=769514
The current 'l' pointer will be NULL when the loop
is interrupted with a 'break' statement. Need to have
it advance to the next list item before interrupting.
And don't just reset everything. This makes sure that we can continue to
handle data in the following scenario:
moov: discont
moof: discont
mdat: continuous
Previously this would fail because the offset would be the accumulated offset
from moov and moof at the mdat position, while the buffer offset might be
something completely different.
Use signed clock times for running time everywhere
so that we handle negative running times without
going haywire, similar to what queue and multiqueue
do these days.
Always intersect with the filter caps in the getcaps function
to make sure we return a subset of what was requested.
Other payloaders also have this problem and need fixing
in future commits.
When parsing the NAL unit type in codec_data, check the 6 bits of
NAL_unit_type only and do not require the array_completeness bit to be
0, since the default and mandatory value of array_completeness is 1 for
hvc1.
https://bugzilla.gnome.org/show_bug.cgi?id=768653
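For reference, the check boils down to masking out the 6-bit field
(data points at an hvcC NAL array entry; illustrative only):
  guint8 nal_type = data[0] & 0x3f;   /* NAL_unit_type only; ignore the array_completeness bit */
  /* 32/33/34 are the VPS/SPS/PPS array types */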
We should add all pads, no matter if they are linked or active or not at this
point. Skipping some that are not will cause different behaviour than with
other muxers.
This can only happen if a) upstream somehow gets around the CAPS event failing
or b) there never being any CAPS event.
The following code assumes that all pads have a codec-id.
https://bugzilla.gnome.org/show_bug.cgi?id=768509
Handle sprop-vps, sprop-sps and sprop-pps in caps instead of
sprop-parameter-sets.
rtph265pay works with byte-stream and hvc1 formats but not hev1 yet. It
handles profile-id, tier-flag and level-id in caps query.
https://bugzilla.gnome.org/show_bug.cgi?id=753760
The FLV header cannot be trusted to indicate video or
audio presence, as the comments already mention. Don't
delay pushing tags waiting for streams that might never
appear.
Tags are now pushed immediately after they change:
- After parsing an onMetaData script object
- After negotiating caps on a pad
https://bugzilla.gnome.org/show_bug.cgi?id=768440
As seen in the parent switch for object_type_id, the 4 possible values are
0x40, 0x66, 0x67 and 0x68. Fixing the nested switch to match these values.
Looks like it was a typo making them decimal instead of hexadecimal.
CID 1363328
Without it, raw AAC can't be handled and we have some information available in
the decoder that most likely allows us to decode the stream in one way or
another. This is the same code already used by matroskademux for the same
reasons, and ffmpeg/vlc play such files just fine too by guesswork.
This is to handle cases where upstream handles the fragmented streaming in TIME
segments and sends us data with gaps within fragments. This would happen when dealing
with trick-modes.
When upstream (push-based, TIME SEGMENT) wishes to send discontinuous samples,
it must obey the following rules:
* The buffer containing the [moof] must have a valid GST_BUFFER_OFFSET
* The buffers containing the first sample after a gap:
* MUST start at the beginning of a sample,
* MUST have the DISCONT flag set,
* MUST have a valid GST_BUFFER_OFFSET relative to the beginning of the fragment.
https://bugzilla.gnome.org/show_bug.cgi?id=767354
If we consider the RTSP state, what can happen is that it is PLAYING but the
element already asynchronously tried to PAUSE and it just did not happen yet.
We would then override this setting to PAUSED (while the element actually is
in PAUSED) and set the RTSP state to PLAYING again. This would then cause us
to produce packets while the sinks are all PAUSED, piling up thousands of
packets in the rtpjitterbuffer and other elements and finally failing.
This is supposed to be either in the codec_data (avc stream format) or inside
the stream, and we extract it from there. It should not be set from a
property as it's stream specific.
https://bugzilla.gnome.org/show_bug.cgi?id=767789
The Session Description Protocol doesn't allow specifying a cipher for the
SRTCP, so it will use the SRTP one. In the "srtpenc" element the cipher
"aes-128-icm" is the default for SRTP and SRTCP, but if we want to have
an SRTCP with the "aes-256-icm" cipher then we also need to set the SRTP
cipher to "aes-256-icm", otherwise "aes-128-icm" will be used instead.
https://bugzilla.gnome.org/show_bug.cgi?id=767799
With non-time segments, it now assumes that the arrival time of packets
is not relevant and that only the RTP timestamps matter, and it produces
an output segment starting at running time 0.
https://bugzilla.gnome.org/show_bug.cgi?id=766438
No variables were added/removed. This was just a good excuse to:
* Comment what most variables are used for (and when)
* Order them in such a way as to show first the common variables used
in all cases, followed by those only used in push-mode
We shouldn't go through segment activation as we will only have a limited
understanding of what the whole stream timeline looks like from the moof. We
only know about the current fragment, while upstream knows about the whole
stream.
This fixes seeking in DASH streams, both for seeks after the current moof and
for seeks into the current moof. The former would fail because the moof ends
and we can't activate any segment, the latter would cause a segment that stops
at the moof end, and no further fragments would be played because we end up
being EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=767071
Some endpoints (like the Tandberg E20) can send a BYE packet containing our
internal SSRC. In this case we would detect an SSRC collision and get rid
of the source at some point. But because we are still sending packets
with that SSRC the source will be recreated immediately.
This brand new internal source will have some variables incorrectly set
in its state. For example the `seqnum-base` and `clock-rate` values will be
-1.
The fix is not to act on a BYE RTCP packet if it contains an internal or
unknown SSRC.
https://bugzilla.gnome.org/show_bug.cgi?id=762219
matroskademux would take the GST_OBJECT_LOCK in
- gst_matroska_demux_push_codec_data_all()
- gst_matroska_demux_query()
Some parse elements, such as FLAC, check upstream seekability, and
there are use cases where matroska-demux is linked to a parse element
(e.g., FLAC format) without intermediate elements (e.g., a queue).
In this case, matroska-demux never returns from _push_codec_data_all()
because the parser can return only after it receives the response to
the upstream query, but that's not going to happen because it's
deadlocked.
Elements must not hold the object lock whilst pushing out events
or data.
https://bugzilla.gnome.org/show_bug.cgi?id=766645
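The general shape of the fix looks like this (sketch; demux, pad and event are
placeholders): gather what is needed under the lock, release it, then push.
  GST_OBJECT_LOCK (demux);
  /* ... collect the data/caps we need under the lock ... */
  GST_OBJECT_UNLOCK (demux);
  gst_pad_push_event (pad, event);   /* never push while holding the object lock */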
The GST_BUFFER_OFFSET of output buffers returned to GstRtpBasePayload
should reflect the number of "samples" in the unit of the RTP clock in this
buffer. If this is not true, then it shouldn't be set.
https://bugzilla.gnome.org/show_bug.cgi?id=761943
segment_duration and media_time should be parsed based on the version
of the elst box. The specification defines that an elst box with version 1
has uint64 and int64 values for segment_duration and media_time,
respectively.
https://bugzilla.gnome.org/show_bug.cgi?id=766301
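Schematically, the version-dependent parsing looks like this (the read_u64/read_u32
helpers are placeholders for the actual byte readers, not qtdemux code):
  if (version == 1) {
    segment_duration = read_u64 (data);           /* uint64 */
    media_time = (gint64) read_u64 (data + 8);    /* int64  */
  } else {
    segment_duration = read_u32 (data);           /* uint32 */
    media_time = (gint32) read_u32 (data + 4);    /* int32  */
  }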
Set the async-handling property on GstBin to let it manage
async-handling instead of the local handling from the previous
commit. Works because of #174a5e in core
When switching fragments, hide the async-start/async-done
messages from the parent bin, as otherwise we sometimes (very rarely)
hang in PAUSED instead of returning / continuing to PLAYING
state.
1. According to the RFC, the T bit is only set when either the RTP packet
only contains the J2K main header, or the packet contains tile parts from
multiple tiles. This is now being managed correctly in the code. The second
scenario cannot happen with our payloader, since tile headers are always
placed in their own RTP packet, and so a packet cannot contain tile parts
from multiple tiles.
However, I have added code to track whether multiple tile parts are included
in a single RTP packet, in case in the future we want to put header and data
in the same packet.
2. The old code would set the tile id to zero for all J2K packets. This is
now set correctly to the appropriate tile id.
https://bugzilla.gnome.org/show_bug.cgi?id=745187
Properly handle edts segments when seeking in push-based operation.
We only support an edts with a single segment that has media at the end,
preceded by any number of gap segments.
This also allows the qt segment rate to be respected after seeks.
https://bugzilla.gnome.org/show_bug.cgi?id=765669
When a packet arrives that has already been considered lost as part of a
large gap the "lost timer" for this will be cancelled. If the remaining
packets of this large gap never arrives, there will be missing entries
in the queue and the loop function will keep waiting for these packets
to arrive and never push another packet, effectively stalling the
pipeline.
The proposed fix considers parts of a large gap definitely lost (since
they are calculated from latency) and ignores the late arrivals.
In practice the issue is rare since large gaps are scheduled immediately,
and for the stall to happen the late arrival needs to be processed
before this times out.
https://bugzilla.gnome.org/show_bug.cgi?id=765933
The access to the session hash table must happen while the session lock is
taken, otherwise another thread might modify the hash table while we're
creating the stats.
https://bugzilla.gnome.org/show_bug.cgi?id=766025
This signal allows a user to directly return a sorted list of
files to be joined, so that they don't have to follow the
filename pattern that the "location" property expects.
https://bugzilla.gnome.org/show_bug.cgi?id=753625
The wav spec says that 'fmt' (and 'bext' if present) must come before 'data'.
There is no requirement for 'fmt' to be first. We already had a list of chunks
to skip, but it is easier to just skip any chunk while seeking for 'fmt'.
This fixes reading files generated by ProTools.
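Conceptually the scan now looks like this (placeholder helpers, not the actual
wavparse code): skip any chunk until 'fmt ' is found.
  while (read_chunk_header (&fourcc, &size)) {
    if (fourcc == GST_MAKE_FOURCC ('f', 'm', 't', ' '))
      break;                           /* found it, whatever came before */
    skip_bytes (size + (size & 1));    /* RIFF chunks are padded to even sizes */
  }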
Via the MPEG-4 Part 3 spec we can support the other layers too.
Also correct the samples per frame calculation for MP3 if it's MPEG-2 or
MPEG-2.5.
https://bugzilla.gnome.org/show_bug.cgi?id=765725
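For reference, the samples-per-frame values this implies (sketch, illustrative
variable names):
  guint spf;
  switch (layer) {
    case 1:  spf = 384;  break;                       /* Layer I */
    case 2:  spf = 1152; break;                       /* Layer II */
    default: spf = (mpeg_version == 1) ? 1152 : 576;  /* Layer III: halved for MPEG-2/2.5 */
  }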
We only changed them for UDP so far, which caused the wrong seqnum-base and
other information to be passed to rtpjitterbuffer/etc when seeking. This
usually wasn't that much of a problem as the code there is robust enough, but
every now and then it causes us to drop up to 32756 packets before we
continue doing anything meaningful.
https://bugzilla.gnome.org/show_bug.cgi?id=765689
set_fields() should only be called in the beginning, otherwise we will never
remember the maximum audio chunk size and write a wrong block align... which
then causes wrong timestamps and other problems.
3ea338ce27 changed avimux to do that, but it
never actually kept track of the max audio chunk for MP3 and MP2. These
only know hdr.scale after parsing the frames instead of at setcaps time.
timescale/1 is an unreliable value for the framerate. Since downstream
elements usually use the framerate generated by qtdemux, omit it
until the framerate can be reliably calculated.
https://bugzilla.gnome.org/show_bug.cgi?id=764733
When playing a stream that has been protected by DASH CENC, playback
will fail if a seek is performed. Qtdemux produces the error "stream
is protected using cenc, but no cenc protection system information
has been found" and playback stops.
The problem is that gst_qtdemux_reset() gets called as part of the
FLUSH during a seek. This function frees the protection_system_ids
array. When gst_qtdemux_configure_protected_caps() is called after the
seek has completed, the protection_system_ids array is empty and
qtdemux is unable to create the correct output caps for the protected
stream.
This commit changes it to only free the protection_system_ids on
hard resets.
https://bugzilla.gnome.org/show_bug.cgi?id=761787
This allows disabling of sender address retrieval, which might
be useful in certain scenarios, like when the socket is connected,
or the sender address is not of interest (e.g. when receiving an
MPEG-TS stream). Disabling sender address retrieval in those
cases can have minor performance advantages.
https://bugzilla.gnome.org/show_bug.cgi?id=563323
The server can send multiple crypto sessions, one for each SSRC with its
own rollover counter. We parse this information and pass it to the SRTP
decoder via the "request-key" signal.
https://bugzilla.gnome.org/show_bug.cgi?id=730540
Otherwise we will use fields from the old caps with everything set up for the
new caps, causing crashes and worse.
Also don't do anything if the same caps are set twice.
qtdemux->streams is an array, so comparing it to NULL will never evaluate
to true. Instead we want to check the number of streams to make sure the
stream is available.
https://bugzilla.gnome.org/show_bug.cgi?id=753614
CID 1358389
The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail-head
while considering wraparounds. Not the other way around.
Other code is already doing this in the correct order.
https://bugzilla.gnome.org/show_bug.cgi?id=764889
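With 16-bit wraparound handled by unsigned arithmetic, the fill level is simply
(illustrative variable names):
  guint16 head_seq, tail_seq;                        /* oldest and newest seqnum in the queue */
  guint16 fill = (guint16) (tail_seq - head_seq);    /* tail - head, wraps correctly */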
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the amount of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the one selected would be handled ... to re-iterate the list again.
All of this is being done while the jbuf lock is taken, which in some use-cases
would result in holding that lock for 10s... blocking any buffers from
being accepted in input... which would then arrive late ... which would
create plenty of lost timers ... which would cause the same issue.
In order to avoid that situation, handle the lost timers immediately when
iterating the list of pending timers. This reduces the complexity from
quadratic to linear.
https://bugzilla.gnome.org/show_bug.cgi?id=762988
After clearing the adapter due to a DISCONT, as might happen when some packet(s)
have been lost, the depayloader was pushing data into the adapter (which had no
header due to the clear), creating a headerless frame out of it, and sending it
downstream. The downstream decoder would then usually ignore it; unless there
were lots of DISCONTs from the jitterbuffer in which case the decoder would reach
its max_errors limit and throw an element error. Now we just discard that data.
It is probably not worth trying to salvage this data because non-progressive
jpeg does not degrade gracefully and makes the video unwatchable even with
low packet loss such as 3-5%.
The PIFF data is stored in a custom UUID box which is parsed and the
crypto_info of the element is updated accordingly. This allows
downstream decryptors to process and decrypt the protected content.
https://bugzilla.gnome.org/show_bug.cgi?id=753614
payload_buffer hasn't been assigned a value before the jumps to
switch_failed or packet_short. So the value must be NULL. No need
to unmap and unref.
CID #1316476
Free the memory of the current macroblock once it isn't needed so it isn't
leaked by the call to the gst_rtp_h263_pay_B_mbfinder function.
if (!(mac = gst_rtp_h263_pay_B_mbfinder (context, gob, mac, mb))) {
CID 1212156
Make sure that all data is drained out when the reference pad
goes EOS. Fixes a problem where data that arrives on other
pads after the reference pad finishes can stall forever and
never pass EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=763711
Deadlock occurs when splitting files if one stream received no buffer during
the first GOP of the next file. That can happen in that scenario for example:
1) The first GOP of video is collected, it has a duration of 10s.
max_in_running_time is set to 10s.
2) Other streams catchup and we receive the first subtitle buffer at ts=0 and
has a duration of 1min.
3) We receive the 2nd subtitle buffer with a ts=1min. in_running_time is set to
1min. That buffer is blocked in handle_mq_input() because
max_in_running_time is still 10s.
4) Since all in_running_time are now > 10s, max_out_running_time is now set to
10s. That first GOP gets recorded into the file. The muxer pops buffers out
of the mq, when it tries to pop a 2nd subtitle buffer it blocks because the
GstDataQueue is empty.
5) A 2nd GOP of video is collected and has a duration of 10s as well.
max_in_running_time is now 20s. Since subtitle's in_running_time is already
1min, that GOP is already complete.
6) But let's say we overran the max file size, we thus set state to
SPLITMUX_STATE_ENDING_FILE now. As soon as a buffer with ts > 10s (end of
previous GOP) arrives in handle_mq_output(), EOS event is sent downstream
instead. But since the subtitle queue is empty, that's never going to
happen. Pipeline is now deadlocked.
To fix this situation we have to:
- Send a dummy event through the queue to wakeup output thread.
- Update out_running_time to at least max_out_running_time so it sends EOS.
- Respect time order, so we set out_running_time=max_in_running_time because
that's bigger than the previous buffer and smaller than the next.
https://bugzilla.gnome.org/show_bug.cgi?id=763711
RFC 2435 mentions in section 4.1 that U/V use table number 1, but this seems
just like an example. Some encoders are not following that and there seems to
be no reason to reject their streams.
https://bugzilla.gnome.org/show_bug.cgi?id=761345
It's not like we could accept any other caps here. The caps are decided by the
upstream caps event.
Also keep the filter order intact when filtering the results against the
filter caps.
https://bugzilla.gnome.org/show_bug.cgi?id=763326
If we can't find the index of the sample in the src_convert function,
we have to unref the qtdemux before returning. So, modify it to instead
pass qtdemux as a parameter into the src_convert function.
https://bugzilla.gnome.org/show_bug.cgi?id=763973
Currently, the get_duration function always returns TRUE even when the
duration can't be set correctly. So, we need to add an else condition for
the failure case. Also, GST_CLOCK_TIME_NONE is already set in this
function, so the related code in its callers has been adjusted accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=763968
In other words, gst_pad_get_current_caps should never return NULL
in a pad-added callback from the demuxer.
Added tests for the two special cases with AAC and H.264 where this
would happen every time.
https://bugzilla.gnome.org/show_bug.cgi?id=763780
Changing the input caps and not using them anymore afterwards is useless, and
it breaks negotiation in pipelines like:
gst-launch-1.0 videotestsrc ! "video/x-raw,framerate=25/1,interlace-mode=interleaved" !
deinterlace fields=all ! "video/x-raw,framerate=50/1,interlace-mode=progressive" !
fakesink
This reverts commit 4065fcb80a.
flacparse should not push tags by itself, the base class is going to do that
while properly merging in upstream tags. It just didn't because of a bug in
the base class, which was hidden by this commit.
https://bugzilla.gnome.org/show_bug.cgi?id=763553
When upstream is running in bytes in push-mode, qtdemux will
convert seeks from time to bytes and send it upstream. Upstream
element will perform a byte seek and send a byte segment to qtdemux
that will convert it to time and push it downstream.
There is, however, the pending_segment variable that stores a new
segment event to be pushed before the next data. When handling seeks
as mentioned above this variable was being ignored and, if it contained
some segment event, it would override the one resulting from the seek.
This would restore a previous segment and would cause the seek segment
to be discarded downstream.
This patch fixes this issue by unrefing any pending segment as the
seek from upstream should contain the latest one that should be
used, as requested by the application.
https://bugzilla.gnome.org/show_bug.cgi?id=763165
On Windows the socket will be bound to ANY instead of the multicast group,
as binding to a multicast group does not work. Which would mean that we
override src->addr to become ANY and won't automatically join a multicast
group anymore on Windows.
On Linux we would automatically join a multicast group, keep it consistent.
https://bugzilla.gnome.org/show_bug.cgi?id=763093
Making the event itself writable is not enough, it won't make
the actual taglist in the event writable as well. Instead, just
make a copy of the taglist and then create a new tag event from
that if required, replacing the old one. Before, we would
inadvertently modify taglists that upstream elements might still
be holding on to. Add a unit test for this as well.
https://bugzilla.gnome.org/show_bug.cgi?id=762793
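A hedged sketch of the pattern (the tag added below is just an example):
copy the taglist, modify the copy, and build a fresh tag event from it.
  GstTagList *tags, *copy;
  gst_event_parse_tag (event, &tags);
  copy = gst_tag_list_copy (tags);
  gst_tag_list_add (copy, GST_TAG_MERGE_REPLACE, GST_TAG_CONTAINER_FORMAT, "Example", NULL);
  gst_event_unref (event);
  event = gst_event_new_tag (copy);   /* takes ownership of the copy */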
In some cases the stream configuration can fail, for instance if the
stream is protected and no decryptor was found. For those situations
the demuxer shouldn't emit any data on the corresponding source pad of
the stream and bail out.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
When the cenc metadata is stored outside of the moof box and the
stream is exposed it is possible that the cenc metadata hasn't been
processed yet while the first buffer is being pushed. When this
happens the buffer can't possibly be decrypted downstream so don't
push it.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
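The safer pattern, sketched: take one reference and handle NULL rather than
checking first and fetching afterwards.
  GstCaps *caps = gst_pad_get_current_caps (pad);
  if (caps == NULL)
    return FALSE;          /* caps may have gone away between check and use */
  /* ... use caps ... */
  gst_caps_unref (caps);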
If we have an error during the fwrite call, the file stays open and thus the
next incoming buffer will trigger an assert when trying to open a new
file.
This happens if we do not restart the element (the file is closed at stop)
and if the application handles the returned GST_FLOW_ERROR to keep the bin alive.
https://bugzilla.gnome.org/show_bug.cgi?id=762434
VP9_PAY_PICTURE_ID_7BITS and VP9_PAY_PICTURE_ID_15BITS are mutually
exclusive options of the picture-id-mode. We can break after the
first case.
1 or 2 bytes need to be added to the header length depending on the
PictureID size.
https://tools.ietf.org/html/draft-uberti-payload-vp9-00#section-4.2
CID 1353479
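Schematically the header length becomes (names taken from the text above;
purely illustrative):
  guint hdr_len = 1;                                 /* mandatory payload descriptor byte */
  if (picture_id_mode == VP9_PAY_PICTURE_ID_7BITS)
    hdr_len += 1;                                    /* 7-bit PictureID */
  else if (picture_id_mode == VP9_PAY_PICTURE_ID_15BITS)
    hdr_len += 2;                                    /* 15-bit PictureID */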
Commit 7873bede31
(https://bugzilla.gnome.org/show_bug.cgi?id=760774) changed the
behaviour of qtdemux to call gst_qtdemux_configure_stream() for
every new moof.
When playing a protected stream, gst_qtdemux_configure_stream()
calls gst_qtdemux_configure_protected_caps(). The
gst_qtdemux_configure_protected_caps() function takes the original
media format, puts this in a field called "original-media-type"
and then changes the caps to "application/x-cenc".
The gst_qtdemux_configure_protected_caps() did not handle the case
of being called multiple times, causing it to incorrectly set the
caps. The second call was causing the caps to be set to:
application/x-cenc, original-media-type="application/x-cenc"
This commit makes gst_qtdemux_configure_protected_caps() check that
the caps have already been transformed, so that it only gets
changed once.
https://bugzilla.gnome.org/show_bug.cgi?id=761769
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without tags or
with only the audio tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774