Use the length of the payload for estimating the receiver bitrate so that it
matches the calculations done on the sender side. Together with the number of
packets one can scale the bitrate with the header overhead of the lower
transport.
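
For illustration, a minimal sketch of that calculation (the function, the
variable names and the 28-byte IP/UDP overhead are assumptions, not the actual
rtpmanager code):

#include <gst/gst.h>

/* Assumed 28 bytes of lower-transport overhead per packet (IPv4 20 + UDP 8) */
#define LOWER_TRANSPORT_OVERHEAD 28

static guint64
estimate_bitrate (guint64 payload_bytes, guint packets, GstClockTime elapsed)
{
  guint64 bytes;

  if (elapsed == 0)
    return 0;

  /* payload bytes match what the sender counts; the header overhead is
   * added proportionally to the number of packets */
  bytes = payload_bytes + (guint64) packets * LOWER_TRANSPORT_OVERHEAD;

  /* bits per second */
  return gst_util_uint64_scale (bytes * 8, GST_SECOND, elapsed);
}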
Don't reuse the same variable we need for stats for the bitrate estimation,
because the estimation code updates it.
Refactor the bitrate estimation code so that both sender and receivers use the
same code path.
Changing it to the newest timestamp that was ever pushed would increase the
segment start in 500ms jumps, which could land just after the next sparse
stream buffer. E.g. with video at 1.0s and a sparse stream at 0.5s, the
sparse stream would jump to 1.0s. A new sparse stream buffer could then
appear with a timestamp of 0.9s and it would be dropped for no good reason,
purely out of bad luck.
Multiple flvparse/flvdemux instances should be able to operate without
trampling over each other by accidentally re-using the same (static)
variables. (Spotted by Mark Nauwelaerts)
For calculating the durations of each sample, we are supposed to add each
duration modulo 1<<32, so make the elapsed time counter a uint32.
Fixes #610280
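
A tiny sketch of that wrap-around accumulation (names are illustrative):

#include <glib.h>

/* Unsigned 32-bit addition already happens modulo 1<<32, so a guint32
 * accumulator gives exactly the wrap-around the format expects. */
static guint32 elapsed_time;

static void
accumulate_sample_duration (guint32 duration)
{
  elapsed_time += duration;     /* wraps at 1<<32 by itself */
}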
Make the handling of the mime type within the "boundary" a bit less naive.
The MIME standard allows parameters to follow the "type" / "subtype"
clause, separated from the mime type by ';'.
Modifies the multipartdemuxer's header parsing so it doesn't assume that
the whole line after "content-type:" is the mime type, and thus makes it a bit
more resilient against finding absurd mime types when parameters are added.
Fixes #604711
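
A minimal sketch of the less naive parsing (the helper name is made up; the
real multipartdemux code may differ):

#include <string.h>
#include <glib.h>

/* Take only the "type/subtype" part of a Content-Type value and drop any
 * ";parameter=value" that may follow it. */
static gchar *
extract_mime_type (const gchar * header_value)
{
  const gchar *semicolon = strchr (header_value, ';');
  gchar *mime;

  if (semicolon != NULL)
    mime = g_strndup (header_value, semicolon - header_value);
  else
    mime = g_strdup (header_value);

  /* strip surrounding whitespace in place */
  return g_strstrip (mime);
}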
ALAC codec-data apparently comes in (at least) two flavours (mov, mp4),
so use atom based parsing to retrieve required data, rather than
aiming for a specific offset.
See also #580731.
Remove some code where we pass ntpnstime around; we can do most things just
fine with the running_time.
Rename a variable in the ArrivalStats struct so that it's clear that this is the
current system time.
Don't calculate the NTP time based on the running_time of the pipeline but from
the systemclock. This allows us to generate more accurate NTP timestamps in case
the systemclock is synchronized with NTP or similar.
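
Roughly, and only as an illustration (the helper below is not the actual
rtpbin code), an NTP timestamp can be derived from the wall clock like this,
using the well-known 2208988800-second offset between the 1900 and 1970
epochs:

#include <glib.h>
#include <gst/gst.h>

static guint64
get_ntp_ns_now (void)
{
  /* g_get_real_time() returns microseconds since the Unix epoch (1970) */
  guint64 unix_ns = (guint64) g_get_real_time () * 1000;

  /* NTP counts from 1900; shift by the epoch difference */
  return unix_ns + G_GUINT64_CONSTANT (2208988800) * GST_SECOND;
}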
Use the _add_associationv variant of GstIndex since we know how many
associations we're adding. Trims up to 50% from index generation time.
Note: it would be great if the index could be generated on the fly or
on request, as opposed to being fully created at startup.
If we detect backward timestamps on the server and we don't have an input
timestamp (such as when using RTSP over TCP), don't try to resync; instead,
do nothing and assume the timestamp was ok; it will correct itself when time
goes forward.
There is no need to set the latency in the jitterbuffer in _init, we will set
that later when going to PAUSED.
Set the jitterbuffer active and not buffering when starting.
When deactivating jitterbuffers when the buffering starts, keep the current
percent of the jitterbuffer and also set the jitterbuffer in the buffering state
so that we know when it's filled again.
Add property to get the buffering percentage of the jitterbuffer.
When we are in buffer mode, adjust the buffering low/high thresholds based on
the total configured latency. If we don't and there is a huge queue or element
with a big latency downstream we might drain the complete queue immediately and
start buffering again.
Return the next timestamp in the jitterbuffer.
Use the min-timestamp of the jitterbuffers to calculate an offset so that the
next timestamp is pushed with a timestamp equal to running_time.
Start producing timestamps from 0 in the buffering case too.
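
The offset calculation boils down to something like this sketch (names are
assumptions); the result is later added to every outgoing timestamp:

#include <gst/gst.h>

/* Adding this offset to outgoing timestamps makes the buffer with the
 * smallest timestamp across the jitterbuffers come out at running_time,
 * so output effectively starts from 0 in the buffering case too. */
static GstClockTimeDiff
calculate_ts_offset (GstClockTime min_jbuf_timestamp, GstClockTime running_time)
{
  return GST_CLOCK_DIFF (min_jbuf_timestamp, running_time);
}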
Keep track of the time we spend pausing the jitterbuffers when they were
buffering and distribute this elapsed time to the jitterbuffers.
Also keep the latency in nanosecond precision.
Pass the current running time to the jitterbuffer when pausing or resuming so
that it can calculate the right offsets.
Small cleanups and comments.
Set the default rtspsrc latency to 2 seconds.
Add signal to pause the jitterbuffer. This will be emitted from gstrtpbin when
one of the jitterbuffers is buffering.
Make rtpbin collect the buffering messages and post a new buffering message with
the min value.
Remove the stats callback from jitterbuffer but pass a percent integer to
functions that affect the buffering state of the jitterbuffer. This allows us
then to post buffering messages from outside of the jitterbuffer lock.
Add callback for buffering stats.
Configure the latency in the jitterbuffer instead of passing it with _insert.
Calculate buffering levels when pushing and popping
Post buffering messages.
The audio packets in AVI are generally muxed ~0.5s before the
corresponding video packet. This change causes downstream to only
receive packets with roughly corresponding timestamps.
Make sure we reset the demuxer correctly wrt parsing the index.
Don't leak pending seek events.
Rename some methods to reflect what they do and to avoid confusion with similar
method names.
Try to make the seeking threadsafe by protecting the setup code with a lock.
Make sure we post errors when a seek fails.
When we have not parsed any indexes yet, we don't know the length of the streams
and we must take the length given in the avih as a fallback.
Avoid some typechecking.
When the pad with the highest framerate goes EOS, instead of not timestamping
output buffers, interpolate timestamps and durations from the last seen ones.
Fixes #608026
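
As a rough sketch of that interpolation (variable names are made up, and
GST_BUFFER_TIMESTAMP is the 0.10-era accessor):

#include <gst/gst.h>

static void
extrapolate_timestamp (GstClockTime * last_ts, GstClockTime last_duration,
    GstBuffer * outbuf)
{
  if (GST_CLOCK_TIME_IS_VALID (*last_ts) &&
      GST_CLOCK_TIME_IS_VALID (last_duration)) {
    *last_ts += last_duration;
    GST_BUFFER_TIMESTAMP (outbuf) = *last_ts;
    GST_BUFFER_DURATION (outbuf) = last_duration;
  }
}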
As a side effect, avoid sending newsegment updates with start times
that go back and forth, which leads to bogus downstream running_time.
Also fixes seeking in bug #606744.
.. by initializing streams starting at 0, as that is basically where we
'seek to' at the start, rather than assuming streams start elsewhere.
Also enables newsegment update events for subtitle streams.
When we receive an UNEXPECTED from a stream, move to the next stream and only go
EOS when all streams are EOS. When selecting a stream to push, ignore streams
that went EOS.
Fixes #607949
Added the 'min-index-interval' property to matroskamux,
which determines how much time (in nanoseconds) must pass
between keyframes stored in the index.
Fixes #583985.
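
Usage sketch (the 2-second value is just an example):

#include <gst/gst.h>

static void
configure_muxer (GstElement * matroskamux)
{
  /* store at most one index (cue) entry every 2 seconds */
  g_object_set (matroskamux, "min-index-interval",
      (guint64) (2 * GST_SECOND), NULL);
}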
When we are in push mode, we can encounter RIFF and idx tags in the data chunk
when we are dealing with ODML files. In these cases, simply skip the chunks and
continue streaming instead of going EOS.
The profile-level-id represents restrictions on what can be sent, it does not
describe the stream. So it should be reflected in the sink caps of the
payloader, not the src caps.
https://bugzilla.gnome.org/show_bug.cgi?id=607353
Some traks might contain a redirect rtsp uri inside a
hndl atom (which is a dref atom entry). This commit makes qtdemux
post a message when it finds one of these traks and there are
no other traks.
Fixes #597497
Some misinterpreted data could result in posting redirect messages
with empty redirect strings. It is better not to post them.
An example is the file on bug #597497
This avoids the "bad register name `%dil'" compilation errors on 32 bit,
where because of 'r' gcc puts the value in a general purpose register and then
tries to access the lower part as %dil/%sil, which do not exist on 32 bit.
'q' requests a-d registers.
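
Purely as an illustration of the constraint difference (this is not the
actual code):

/* With "r" gcc may pick %edi/%esi for the input and "%b1" would then expand
 * to %dil/%sil, which do not exist on 32-bit x86.  "q" restricts the operand
 * to %eax..%edx, whose byte registers %al..%dl always exist. */
static unsigned char
lowest_byte (unsigned int value)
{
  unsigned char result;

  __asm__ ("movb %b1, %0"
      : "=q" (result)
      : "q" (value));

  return result;
}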
This drops frames that are too late anyway, before blending and all of
that starts, but QoS events are not forwarded upstream. In the future
the QoS events should be transformed somehow and forwarded upstream.
Greatly simplify the IDIT chunk handling by using sscanf
instead of 'manually' parsing. Also replaces strncasecmp and
is_alpha/is_digit with glib versions.
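
The technique, sketched (the exact IDIT date layout handled by avidemux may
differ):

#include <stdio.h>
#include <glib.h>

static gboolean
parse_idit_date (const gchar * data, gint * y, gint * m, gint * d)
{
  /* e.g. a "2006:03:06 ..." style creation date */
  return sscanf (data, "%d:%d:%d", y, m, d) == 3;
}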
Currently this only works if the kernel size doesn't change; in the future
it will be possible to change the kernel size too, without draining
the complete history and without losing anything.
Partially based on a patch by
Thiago Santos <thiago.sousa.santos@collabora.co.uk>
Make the QtDemuxSample struct smaller by keeping the duration and the pts_offset
as their 32 bit values.
Make some macros to calculate PTS, DTS and duration of a sample.
Deref the sample index less often by keeping a ref to the sample we're dealing
with.
Move sample position checks into qtdemux_parse_samples where we can protect it
with a lock.
Refactor and make a qtdemux_ensure_index function.
Rename qtdemux_do_push_seek to qtdemux_seek_offset in order to avoid confusion
with gst_qtdemux_do_push_seek.
When nsamples_out is larger than nsamples_in, using unsigned
ints leads to an overflow and the resulting value is wrong and
way too large for allocating a buffer. Use signed integers
and return immediately when that happens.
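
Sketch of the failure mode and the fix (names are illustrative):

#include <glib.h>

static gint64
samples_to_process (gint64 nsamples_in, gint64 nsamples_out)
{
  gint64 diff = nsamples_in - nsamples_out;

  /* with unsigned arithmetic a negative difference would wrap around to a
   * huge value and end up as a bogus, gigantic buffer allocation */
  if (diff < 0)
    return -1;                  /* bail out immediately instead */

  return diff;
}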
Use more efficient formula that uses less multiplies.
Reduce the amount of scalar code, use MMX to calculate the desired
alpha value.
Unroll and handle 2 pixels in one iteration for improved pairing.
Convert the alpha value to 0->255 when setting and to 0->256 when using as
a scaling factor. This makes sure we can reach the full opacity value of 0xff in
all cases.
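
A small illustration of why the scaling factor uses 0->256 (not part of the
actual element):

#include <glib.h>

static guint8
scale_component (guint8 value, gdouble alpha)
{
  guint factor = (guint) (alpha * 256.0);       /* 1.0 -> 256, not 255 */

  /* with 255 the best case would be (0xff * 255) >> 8 == 0xfe,
   * with 256 it is  (0xff * 256) >> 8 == 0xff, i.e. full opacity */
  return (value * factor) >> 8;
}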
Fix some comments, clarify some FIXMEs
Remove the on-ntp-stop signal check now that the jitterbuffer is in
-good and we know that it supports this signal.
Use GstRTPBaseAudioPayload as the base class. This saves a lot of code and fixes
a bunch of problems that were already solved in the base class.
Fixes #853367
Don't make copies in the getter and setter for SDES in the RTPSource. This
avoids a couple of copies of the SDES structure when generating RTCP
packets.
Add a new spspps-interval property to instruct the payloader to insert
SPS and PPS at periodic intervals in the stream.
Rework the SPS/PPS handling so that bytestream and AVC sample code both use the
same code paths to handle sprop-parameter-sets. This also allows the AVC
code to insert SPS/PPS like the bytestream code does.
Fixes #604913
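
Usage sketch for the new property (the 10-second interval is just an example):

#include <gst/gst.h>

static void
configure_payloader (GstElement * rtph264pay)
{
  /* re-insert SPS/PPS into the stream every 10 seconds */
  g_object_set (rtph264pay, "spspps-interval", 10, NULL);
}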
For some reason latest gcc/binutils accept movzxb here while
movzbl would be correct and is the only thing accepted by older
gcc/binutils.
Fixes bug #604679.
This provides another 7% speedup for the time domain convolution and 1.5%
speedup for the FFT convolution on Mono input.
This optimization assumes that the compiler simplifies calculations
and conditions on constant numbers and unrolls loops with a constant
number of repeats.
This will always use time-domain convolution, which lowers the latency.
With FFT convolution it's always a multiple of the kernel length,
with time domain convolution it's only the pre-latency of the filter kernel.
This provides a great speedup, especially the relationship between kernel
length and processing size is now logarithmic instead of linear. Below a
kernel size of 32 it's a bit slower, afterwards it's much faster:
17 0.788000 -> 0.950000
33 1.208000 -> 1.146000
65 2.166000 -> 1.146000
...
4097 107.444000 -> 1.508000
For sizes smaller than 32 the normal time-domain convolution is chosen,
for larger sizes the FFT convolution is automatically used.
Fixes bug #594381.
Remove some redundant calculations, move comparisons out of
inner loops, etc.
This makes the convolution about 3 (!) times faster but
processing time is of course still proportional to the
filter size.
Quicktime uses ISO 639-2 for language codes, but GST_TAG_LANGUAGE
is supposed to hold an ISO 639-1 code, so convert as needed using
the new API from -base.
See #602126.
Matroska uses three-letter ISO 639-2B codes, but GST_TAG_LANGUAGE is
supposed to contain two-letter ISO 639-1 codes, so use new language
code mapping functions in -base to convert between those two as
needed.
Fixes #505823.
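
A sketch of that conversion, assuming the gst_tag_get_language_code_iso_639_1()
helper from the -base tag library and falling back to the original code when
there is no mapping:

#include <gst/tag/tag.h>

static const gchar *
to_iso_639_1 (const gchar * lang_code_639_2)
{
  const gchar *two_letter =
      gst_tag_get_language_code_iso_639_1 (lang_code_639_2);

  return two_letter ? two_letter : lang_code_639_2;
}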
When an RTSP extension returns NULL or an empty transport string, just ignore it
and try to get the next possible transport. Fixes playback of RealMedia streams.