When switching fragments, hide the async-start/async-done
messages from the parent bin, as otherwise we sometimes (very rarely)
hang in PAUSED instead of returning / continuing to PLAYING
state.
1. According to the RFC, the T bit is only set when either the RTP packet only contains the J2K main header, or the packet contains tile parts from multiple tiles. This is now handled correctly in the code. The second scenario cannot happen with our payloader, since tile headers are always placed in their own RTP packet, and so a packet cannot contain tile parts from multiple tiles.
However, I have added code to track whether multiple tile parts are included in a single RTP packet, in case we want to put the header and data in the same packet in the future.
2. The old code would set the tile id to zero for all J2K packets. It is now set correctly to the appropriate tile id.
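A minimal sketch of the rule (variable names are illustrative, not the actual payloader code):
    gboolean multiple_tiles = (tiles_in_packet > 1);
    gboolean set_T_bit = main_header_only || multiple_tiles;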
https://bugzilla.gnome.org/show_bug.cgi?id=745187
Properly handle edts segments for push-based operation seeking.
We only support an edts with a single segment that has media at the end,
preceded by any number of gap segments.
This also allows the qt segment rate to be respected after seeks.
https://bugzilla.gnome.org/show_bug.cgi?id=765669
When a packet arrives that has already been considered lost as part of a
large gap the "lost timer" for this will be cancelled. If the remaining
packets of this large gap never arrive, there will be missing entries
in the queue and the loop function will keep waiting for these packets
to arrive and never push another packet, effectively stalling the
pipeline.
The proposed fix considers parts of a large gap definitely lost (since
they are calculated from the latency) and ignores the late arrivals.
In practice the issue is rare since large gaps are scheduled immediately,
and for the stall to happen the late arrival needs to be processed
before this times out.
https://bugzilla.gnome.org/show_bug.cgi?id=765933
The access to the session hash table must happen while the session lock is
taken, otherwise another thread might modify the hash table while we're
creating the stats.
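A minimal sketch of the pattern (lock and field names are illustrative, not the exact rtpsession code):
    g_mutex_lock (&sess->lock);
    g_hash_table_foreach (sess->sources, (GHFunc) add_source_stats, stats);
    g_mutex_unlock (&sess->lock);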
https://bugzilla.gnome.org/show_bug.cgi?id=766025
This signal allows a user to directly return a sorted list of
files to be joined, so that they don't have to follow the
filename pattern that the "location" property expects.
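A hypothetical usage sketch (callback and file names are made up; the actual signal is the one added by this commit):
    static gchar **
    on_get_files (GstElement * splitmuxsrc, gpointer user_data)
    {
      /* return an already-sorted, NULL-terminated list of files to join */
      gchar **files = g_new0 (gchar *, 3);
      files[0] = g_strdup ("/recordings/part-000.mp4");
      files[1] = g_strdup ("/recordings/part-001.mp4");
      return files;
    }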
https://bugzilla.gnome.org/show_bug.cgi?id=753625
The wav spec says that 'fmt' (and 'bext' if present) must come before 'data'.
There is no requirement for 'fmt' to be first. We already had a list of chunks
to skip, but it is easier to just skip any chunk while seeking for 'fmt'.
This fixes reading files generated by ProTools.
Via the MPEG-4 Part 3 spec we can support the other layers too.
Also correct the samples per frame calculation for MP3 if it's MPEG-2 or
MPEG-2.5.
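An illustrative helper (not the actual parser code) with the corrected per-layer values:
    static guint
    mpeg_audio_samples_per_frame (gboolean is_mpeg1, gint layer)
    {
      if (layer == 1)
        return 384;                 /* Layer I */
      if (layer == 2)
        return 1152;                /* Layer II */
      /* Layer III: 1152 for MPEG-1, 576 for MPEG-2 and MPEG-2.5 */
      return is_mpeg1 ? 1152 : 576;
    }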
https://bugzilla.gnome.org/show_bug.cgi?id=765725
We only changed them for UDP so far, which caused the wrong seqnum-base and
other information to be passed to rtpjitterbuffer/etc when seeking. This
usually wasn't that much of a problem as the code there is robust enough, but
every now and then it causes us to drop up to 32756 packets before we
continue doing anything meaningful.
https://bugzilla.gnome.org/show_bug.cgi?id=765689
set_fields() should only be called in the beginning, otherwise we will never
remember the maximum audio chunk size and write a wrong block align... which
then causes wrong timestamps and other problems.
3ea338ce27 changed avimux to do that, but it
never actually kept track of the max audio chunk for MP3 and MP2. These
only know hdr.scale after parsing the frames instead of at setcaps
time.
timescale/1 is an unreliable value for the framerate. Since downstream
elements usually use the framerate generated by qtdemux, omit it
until the framerate can be reliably calculated.
https://bugzilla.gnome.org/show_bug.cgi?id=764733
When playing a stream that has been protected by DASH CENC, playback
will fail if a seek is performed. Qtdemux produces the error "stream
is protected using cenc, but no cenc protection system information
has been found" and playback stops.
The problem is that gst_qtdemux_reset() gets called as part of the
FLUSH during a seek. This function frees the protection_system_ids
array. When gst_qtdemux_configure_protected_caps() is called after the
seek has completed, the protection_system_ids array is empty and
qtdemux is unable to create the correct output caps for the protected
stream.
This commit changes it to only free the protection_system_ids on
hard resets.
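A sketch of the change (assuming protection_system_ids is stored as a GPtrArray and the reset function takes a "hard" flag, as described above):
    static void
    gst_qtdemux_reset (GstQTDemux * qtdemux, gboolean hard)
    {
      if (hard && qtdemux->protection_system_ids) {
        /* only drop the collected protection system ids on a hard reset,
         * not on the flush caused by a seek */
        g_ptr_array_free (qtdemux->protection_system_ids, TRUE);
        qtdemux->protection_system_ids = NULL;
      }
      /* ... rest of the reset ... */
    }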
https://bugzilla.gnome.org/show_bug.cgi?id=761787
This allows disabling of sender address retrieval, which might
be useful in certain scenarios, like when the socket is connected,
or the sender address is not of interest (e.g. when receiving an
MPEG-TS stream). Disabling sender address retrieval in those
cases can have minor performance advantages.
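A usage sketch, assuming the property added here is a boolean named "retrieve-sender-address" on udpsrc:
    g_object_set (udpsrc, "retrieve-sender-address", FALSE, NULL);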
https://bugzilla.gnome.org/show_bug.cgi?id=563323
The server can send multiple crypto sessions, one for each SSRC with its
own rollover counter. We parse this information and pass it to the SRTP
decoder via the "request-key" signal.
https://bugzilla.gnome.org/show_bug.cgi?id=730540
Otherwise we will use fields from the old caps with everything set up for the
new caps, causing crashes and worse.
Also don't do anything if the same caps are set twice.
qtdemux->streams is an array, so it will never evaluate to TRUE when compared
to NULL. Instead we want to check the number of streams to make sure the
stream is available.
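An illustrative check (field name assumed for the sketch):
    if (qtdemux->n_streams == 0)
      return;   /* no streams available yet */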
https://bugzilla.gnome.org/show_bug.cgi?id=753614
CID 1358389
The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail-head
while considering wraparounds. Not the other way around.
Other code is already doing this in the correct order.
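An illustrative calculation with 16-bit sequence numbers, where the unsigned subtraction handles the wraparound:
    guint16 head = 65533, tail = 4;
    guint16 level = (guint16) (tail - head);    /* 7, i.e. tail - head mod 2^16 */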
https://bugzilla.gnome.org/show_bug.cgi?id=764889
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the amount of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the one selected would be handled ... to re-iterate the list again.
All of this is being done while the jbuf lock is taken, which in some use-cases
would result in holding that lock for 10s... blocking any buffers from
being accepted in input... which would then arrive late ... which would
create plenty of lost timers ... which would cause the same issue.
In order to avoid that situation, handle the lost timers immediately when
iterating the list of pending timers. This reduces the complexity from
quadratic to linear.
https://bugzilla.gnome.org/show_bug.cgi?id=762988
After clearing the adapter due to a DISCONT, as might happen when some packet(s)
have been lost, the depayloader was pushing data into the adapter (which had no
header due to the clear), creating a headerless frame out of it, and sending it
downstream. The downstream decoder would then usually ignore it; unless there
were lots of DISCONTs from the jitterbuffer in which case the decoder would reach
its max_errors limit and throw an element error. Now we just discard that data.
It is probably not worth trying to salvage this data because non-progressive
jpeg does not degrade gracefully and makes the video unwatchable even with
low packet loss such as 3-5%.
The PIFF data is stored in a custom UUID box which is parsed and the
crypto_info of the element is updated accordingly. This allows
downstream decryptors to process and decrypt the protected content.
https://bugzilla.gnome.org/show_bug.cgi?id=753614
payload_buffer hasn't been assigned a value before the jumps to
switch_failed or packet_short. So the value must be NULL. No need
to unmap and unref.
CID #1316476
Free memory of current macroblock once it isn't needed so it isn't leaked
by the call of the gst_rtp_h263_pay_B_mbfinder function.
if (!(mac = gst_rtp_h263_pay_B_mbfinder (context, gob, mac, mb))) {
CID 1212156
Make sure that all data is drained out when the reference pad
goes EOS. Fixes a problem where data that arrives on other
pads after the reference pad finishes can stall forever and
never pass EOS.
https://bugzilla.gnome.org/show_bug.cgi?id=763711
Deadlock occurs when splitting files if one stream received no buffer during
the first GOP of the next file. That can happen, for example, in this scenario:
1) The first GOP of video is collected, it has a duration of 10s.
max_in_running_time is set to 10s.
2) Other streams catch up and we receive the first subtitle buffer at ts=0
with a duration of 1min.
3) We receive the 2nd subtitle buffer with a ts=1min. in_running_time is set to
1min. That buffer is blocked in handle_mq_input() because
max_in_running_time is still 10s.
4) Since all in_running_time are now > 10s, max_out_running_time is now set to
10s. That first GOP gets recorded into the file. The muxer pops buffers out
of the mq; when it tries to pop a 2nd subtitle buffer it blocks because the
GstDataQueue is empty.
5) A 2nd GOP of video is collected and has a duration of 10s as well.
max_in_running_time is now 20s. Since subtitle's in_running_time is already
1min, that GOP is already complete.
6) But let's say we overran the max file size, we thus set state to
SPLITMUX_STATE_ENDING_FILE now. As soon as a buffer with ts > 10s (end of
previous GOP) arrives in handle_mq_output(), EOS event is sent downstream
instead. But since the subtitle queue is empty, that's never going to
happen. Pipeline is now deadlocked.
To fix this situation we have to:
- Send a dummy event through the queue to wake up the output thread.
- Update out_running_time to at least max_out_running_time so it sends EOS.
- Respect time order, so we set out_running_time=max_in_running_time because
that's bigger than the previous buffer and smaller than the next.
https://bugzilla.gnome.org/show_bug.cgi?id=763711
RFC 2435 mentions in section 4.1 that U/V use table number 1, but this seems
just like an example. Some encoders are not following that and there seems to
be no reason to reject their streams.
https://bugzilla.gnome.org/show_bug.cgi?id=761345
It's not like we could accept any other caps here. The caps are decided by the
upstream caps event.
Also keep the filter order intact when filtering the results against the
filter caps.
https://bugzilla.gnome.org/show_bug.cgi?id=763326
If we fail to find the index of the sample in the src_convert function,
we have to unref the qtdemux before returning.
Instead, modify it to pass qtdemux as a parameter into the
src_convert function.
https://bugzilla.gnome.org/show_bug.cgi?id=763973
Currently, the get_duration function always returns TRUE even when it
cannot set the duration correctly, so add an else branch for the failure
case. Also, the duration is already set to GST_CLOCK_TIME_NONE in this
function, so adjust the related code in the calling functions
accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=763968
In other words, gst_pad_get_current_caps should never return NULL
in a pad-added callback from the demuxer.
Added tests for the two special cases with AAC and H.264 where this
would happen every time.
https://bugzilla.gnome.org/show_bug.cgi?id=763780
Changing the input caps and not using them anymore afterwards is useless, and
it breaks negotiation in pipelines like:
gst-launch-1.0 videotestsrc ! "video/x-raw,framerate=25/1,interlace-mode=interleaved" !
deinterlace fields=all ! "video/x-raw,framerate=50/1,interlace-mode=progressive" !
fakesink
This reverts commit 4065fcb80a.
flacparse should not push tags by itself, the base class is going to do that
while properly merging in upstream tags. It just didn't because of a bug in
the base class, which was hidden by this commit.
https://bugzilla.gnome.org/show_bug.cgi?id=763553
When upstream is running in bytes in push-mode, qtdemux will
convert seeks from time to bytes and send them upstream. The upstream
element will then perform a byte seek and send a byte segment to qtdemux
that will convert it to time and push it downstream.
There is, however, the pending_segment variable that stores a new
segment event to be pushed before the next data. When handling seeks
as mentioned above this variable was being ignored and, if it contained
some segment event, it would override the one resulting from the seek.
This would restore a previous segment and would cause the seek segment
to be discarded downstream.
This patch fixes this issue by unrefing any pending segment as the
seek from upstream should contain the latest one that should be
used, as requested by the application.
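A sketch of the fix (assuming the pending segment is stored as a GstEvent in qtdemux->pending_segment, as described above):
    if (qtdemux->pending_segment) {
      /* the seek-generated segment from upstream supersedes it */
      gst_event_unref (qtdemux->pending_segment);
      qtdemux->pending_segment = NULL;
    }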
https://bugzilla.gnome.org/show_bug.cgi?id=763165
On Windows the socket will be bound to ANY instead of the multicast group,
as binding to a multicast group does not work. This would mean that we
override src->addr to become ANY and won't automatically join a multicast
group anymore on Windows.
On Linux we would automatically join a multicast group, so keep it consistent.
https://bugzilla.gnome.org/show_bug.cgi?id=763093
Making the event itself writable is not enough, it won't make
the actual taglist in the event writable as well. Instead, just
make a copy of the taglist and then create a new tag event from
that if required, replacing the old one. Before we would
inadvertently modify taglists upstream elements might still
be holding on to. Add unit test for this as well.
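A minimal sketch of the approach (the tag value is illustrative):
    GstTagList *tags, *copy;
    gst_event_parse_tag (event, &tags);
    copy = gst_tag_list_copy (tags);
    gst_tag_list_add (copy, GST_TAG_MERGE_REPLACE,
        GST_TAG_TITLE, "new title", NULL);
    gst_event_unref (event);
    event = gst_event_new_tag (copy);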
https://bugzilla.gnome.org/show_bug.cgi?id=762793
In some cases the stream configuration can fail, for instance if the
stream is protected and no decryptor was found. For those situations
the demuxer shouldn't emit any data on the corresponding source pad of
the stream and bail out.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
When the cenc metadata is stored outside of the moof box and the
stream is exposed it is possible that the cenc metadata hasn't been
processed yet while the first buffer is being pushed. When this
happens the buffer can't possibly be decrypted downstream so don't
push it.
https://bugzilla.gnome.org/show_bug.cgi?id=762516
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
Remove calls to gst_pad_has_current_caps() which then go on to call
gst_pad_get_current_caps() as the caps can go to NULL in between. Instead just
use gst_pad_get_current_caps() and check for NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=759539
If we get an error during the fwrite call, the file stays open and thus the
next incoming buffer will trigger an assert when trying to open a new
file.
This happens if we do not restart the element (the file is closed at stop)
and the application handles the returned GST_FLOW_ERROR to keep the bin alive.
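A sketch of the fix (field and label names are illustrative):
    if (fwrite (map.data, map.size, 1, sink->file) != 1) {
      /* close on write error so the next buffer can open a new file */
      fclose (sink->file);
      sink->file = NULL;
      goto write_error;
    }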
https://bugzilla.gnome.org/show_bug.cgi?id=762434
VP9_PAY_PICTURE_ID_7BITS and VP9_PAY_PICTURE_ID_15BITS are mutually
exclusive options of the picture-id-mode. We can break after the
first case.
1 or 2 bytes need to be added to the header length depending on the
PictureID size.
https://tools.ietf.org/html/draft-uberti-payload-vp9-00#section-4.2
CID 1353479
Commit 7873bede31
(https://bugzilla.gnome.org/show_bug.cgi?id=760774) changed the
behaviour of qtdemux to call gst_qtdemux_configure_stream() for
every new moof.
When playing a protected stream, gst_qtdemux_configure_stream()
calls gst_qtdemux_configure_protected_caps(). The
gst_qtdemux_configure_protected_caps() function takes the original
media format, puts this in a field called "original-media-type"
and then changes the caps to "application/x-cenc".
The gst_qtdemux_configure_protected_caps() did not handle the case
of being called multiple times, causing it to incorrectly set the
caps. The second call was causing the caps to be set to:
application/x-cenc, original-media-type"application/x-cenc"
This commit makes gst_qtdemux_configure_protected_caps() check that
the caps have already been transformed, so that it only gets
changed once.
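A sketch of the added guard (the way the caps are accessed is illustrative):
    GstStructure *s = gst_caps_get_structure (stream->caps, 0);
    if (gst_structure_has_name (s, "application/x-cenc"))
      return TRUE;    /* already configured for a previous moof */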
https://bugzilla.gnome.org/show_bug.cgi?id=761769
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without tags or
with only the audio tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
Both Firefox and Chrome use OPUS as the encoding name in their SDP.
Adding this now de-facto standard name removes the need for a special
case in the SDP parsing code.
https://bugzilla.gnome.org/show_bug.cgi?id=737810
This will help elements that cannot deal with multistream,
such as the RTP payloader.
The caps now do not include a "streams" field anymore, but
a "multistream" boolean, since we have no real use for knowing
the exact amount of streams.
https://bugzilla.gnome.org/show_bug.cgi?id=665078
Send GAP events for non-subtitle streams too if they lag too much
behind, but use a higher threshold than for subtitles.
This helps with fixing prerolling with a file where one of the
audio streams only has data starting from 19s onwards. It's not
a complete fix yet, it also requires changes elsewhere, such as
in baseparse, to make sure caps are propagated.
https://bugzilla.gnome.org/show_bug.cgi?id=614460
https://bugzilla.gnome.org/show_bug.cgi?id=753899
Quick and dirty implementation of an RTP payloader and depayloader
for VP9. In particular it assumes no spatial or temporal layering,
non-flexible mode, and some other bits and pieces.
https://bugzilla.gnome.org/show_bug.cgi?id=754773
Looks like it just uses the NAL enums and nothing else from
the codecparsers, and that's the only reason it had to be
moved from -good to -bad when it was originally added. We
can probably keep those NAL enums up to date enough, so let's
remove the codecparser dependency so it can be moved back into
-good.
https://bugzilla.gnome.org/show_bug.cgi?id=761606
It's not enough to have timeout or event based VPS/SPS/PPS information
sent in RTP packets. There are some scenarios when key frames may appear
more frequently than once a second, in which case the minimum timeout
for "config-interval" of 1 second for sending VPS/SPS/PPS isn't enough.
It might also be desirable in general to make sure the VPS/SPS/PPS is
available with every keyframe (packet loss aside), so receivers can
actually pick up decoding immediately from the first keyframe if
VPS/SPS/PPS is not signaled out of band.
This commit adds the possibility to send VPS/SPS/PPS with every key frame.
This mode can be enabled by setting "config-interval" property to -1. In
this case the payloader will add VPS, SPS and PPS before every key (IDR)
frame.
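A usage sketch (assuming an rtph265pay instance, with the property described above):
    /* insert VPS/SPS/PPS in-band before every IDR frame */
    g_object_set (rtph265pay, "config-interval", -1, NULL);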
https://bugzilla.gnome.org/show_bug.cgi?id=757892
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
A race condition in the state change function may cause buffers to be
unreffed while they are still used by the streaming thread in
gst_rtp_h265_pay_send_vps_sps_pps() resulting in a crash. Chain up to the
parent class first in the state change function to make sure streaming
has stopped and only then free those buffers.
https://bugzilla.gnome.org/show_bug.cgi?id=741381
The payloader didn't copy anything so far, the depayloader copied every
possible meta. Let's make it consistent and just copy all metas without
tags or with only the video tag.
https://bugzilla.gnome.org/show_bug.cgi?id=751774
h264parse and gstrtph264depay do the same, let's keep the behaviour
consistent. As we now include the codec_data inside the stream, this causes
less caps renegotiation.
https://bugzilla.gnome.org/show_bug.cgi?id=753228
rtph264depay does the same and this fixes decoding of some streams with 32
SPS (or 256 PPS). It is allowed to have SPS ID 0 to 31 (or PPS ID 0 to 255),
but the field in the codec_data for the number of SPS or PPS is only 5
(or 8) bit. As such, 32 SPS (or 256 PPS) are interpreted as 0 everywhere.
This looks like a mistake in the part of the spec about the codec_data.
This causes an assertion and would lead to getting a NULL instead
of a buffer. Without proper checking this would easily lead to a
segfault.
Related to rtph264depay: https://bugzilla.gnome.org/show_bug.cgi?id=737199
type is truncated to 0-31 with "& 0x1f", but right after that it is checked
against GST_H265_NAL_VPS, GST_H265_NAL_SPS, and GST_H265_NAL_PPS (which are
32, 33, and 34 respectively). Obviously, this can never be TRUE if the value
is at most 31 after the truncation.
The intention of the code was to truncate to 0-63.
After further investigation the previous commit is wrong. The code intended to
check if the type is 39 or the ranges 41-44 and 48-55. Just like gsth265parse.c
does. Type 40 would not be complete.
nal_type is the index for a GstH265NalUnitType enum. There are two types of dead
code here:
First, after checking if nal_type is >= 39 there are two OR conditionals that
check if the value is in ranges higher than that number, so if nal_type >= 39
falls in the True branch those other conditions aren't checked and if it falls
in the False branch and they are checked, they will always also be False. They
are redundant.
Second, the enum has a range of 0 to 40. So the checks for ranges higher than 41
should never be True.
Remove these redundant checks.
CID 1249684
The running average was wrong and the resulting scaling factor was only held
in place using the CLAMP. In addition, we now converge quickly to volume
changes.
Finally, with this change we can change the resolution defines and everything
adjusts.
Commit bd27a1f30b added a few error handling
memory management checks. These check srccaps to see if it needs to be
unreferenced before returning, in the case of invalid_caps this goto jump
always happens before srccaps is set, so it will always be NULL in this
error label.
CID #1352035
For APP/JPG markers a size field follows and we have to skip it. This is
not really a problem unless the marker contains e.g. a preview JPEG or
something else that we might interpret as another marker.
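An illustrative sketch of the skip (APPn/JPGn markers carry a 16-bit big-endian length that includes the two length bytes themselves; buffer access is simplified here):
    guint16 size = GST_READ_UINT16_BE (data + offset);
    offset += size;   /* skip the marker payload entirely */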
qtdemux calculates the framerate using the duration and the number of samples.
In the case of the fragmented mp4 format, however, the number of samples can
only be figured out after parsing every moof box. Because qtdemux does not
parse every moof in the QTDEMUX_STATE_HEADER state, this causes an incorrect
framerate calculation.
This patch triggers gst_qtdemux_configure_stream() for every new moof.
Then the framerate is calculated using the duration and n_samples of the moof.
https://bugzilla.gnome.org/show_bug.cgi?id=760774
Based on document ISO/IEC 14496-12, an edit list box can have a segment
duration of zero. That does not imply that media_start equals media_stop;
it just indicates which sample should be presented first. This patch derives
the segment duration using media_time and the duration of the file, and sets
the derived duration as the segment duration.
https://bugzilla.gnome.org/show_bug.cgi?id=760781
In the case of push mode, qtdemux exposes streams after it received the moov box.
We cannot guarantee that a moov box has sample data, such as the sample duration
and the number of samples in the stbl box, for the fragmented format case.
So, if a moov has no sample data, streams will not be exposed until the first
moof is received.
https://bugzilla.gnome.org/show_bug.cgi?id=760779
If the following conditions are met:
1) upstream and downstream caps are compatible
2) upstream is interlaced
3) downstream doesn't support progressive mode
then deinterlace will just do passthrough instead of failing to link.
This is done with the following scenario in mind:
videotestsrc ! "video/x-raw,interlace-mode=interleaved" ! deinterlace
name=dein_src ! tee name=t ! queue ! deinterlace name=dein_file ! filesink t. !
queue ! deinterlace name=dein_desktop ! autovideosink
In this case, dein_src will do the deinterlacing. However,
videotestsrc ! "video/x-raw,interlace-mode=interleaved" ! deinterlace
name=dein_src ! tee name=t ! queue ! deinterlace name=dein_file ! filesink t. !
queue ! deinterlace name=dein_desktop ! autovideosink t. ! queue !
"video/x-raw,interlace-mode=interleaved" ! fakesink
In this case, caps auto-negotiation will make dein_file and dein_desktop do
the deinterlacing, while dein_src will be passthrough.
https://bugzilla.gnome.org/show_bug.cgi?id=760995
In this mode we will passthrough all progressive caps, but interlaced caps
must be caps for which we actually support deinterlacing.
This is the only difference between auto and auto-strict, auto would
passthrough all unsupported interlaced caps.
https://bugzilla.gnome.org/show_bug.cgi?id=720388
Previously the result of the CAPS query and ACCEPT_CAPS depended on what kind
of caps were last set, and e.g. if we last had interlaced caps or not. That's
just broken.
Also previously the handling of non-sysmem caps features was rather random
and unusable.
Now the behaviour is the following, depending on the mode property:
1) mode=disabled
Completely do passthrough of everything
2) mode=interlaced
Only accept formats we can actually deinterlace, and accept interlaced
and progressive content and always run the deinterlacer and output
progressive content
3) mode=auto (i.e. playbin)
Accept all progressive formats as passthrough, accept all formats that we
can deinterlace ourselves (which we do then), but also accept everything
else for which we then just passthrough. In auto mode, deinterlacing is best
effort: If we can, we deinterlace, if we can't we just output interlaced
content.
https://bugzilla.gnome.org/show_bug.cgi?id=720388
https://bugzilla.gnome.org/show_bug.cgi?id=760553