https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on par with Linux)
* Seriously fast configure and build times on embedded systems
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
The current 'l' pointer will be NULL when the loop
is interrupted with a 'break' statement. Need to have
it advance to the next list item before interrupting.
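A minimal sketch of the pattern against GLib (invented list contents, not
the actual element code):

    #include <glib.h>

    int
    main (void)
    {
      GList *list = NULL, *l;
      gint i;

      for (i = 0; i < 4; i++)
        list = g_list_append (list, GINT_TO_POINTER (i));

      for (l = list; l != NULL; l = l->next) {
        if (GPOINTER_TO_INT (l->data) == 2) {
          /* advance to the next item before interrupting, so the
           * code after the loop sees the next entry instead of the
           * current one */
          l = l->next;
          break;
        }
      }

      /* 'l' now points at the item after the match (or NULL if the
       * match was the last item) */
      g_list_free (list);
      return 0;
    }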
With non-time segments, it now assumes that the arrival time of packets
is not relevant and that only the RTP timestamp matters, and it produces
an output segment starting at running time 0.
https://bugzilla.gnome.org/show_bug.cgi?id=766438
Some endpoints (like the Tandberg E20) can send a BYE packet containing our
internal SSRC. In this case we would detect an SSRC collision and get rid
of the source at some point. But because we are still sending packets
with that SSRC, the source will be recreated immediately.
This brand new internal source will then have some variables incorrectly
set in its state. For example, the 'seqnum-base' and 'clock-rate' values
will be -1.
The fix is to not act on BYE RTCP if it contains an internal or unknown
SSRC.
https://bugzilla.gnome.org/show_bug.cgi?id=762219
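A rough sketch of the check, with invented types standing in for the
rtpsession internals:

    #include <glib.h>

    typedef struct { gboolean internal; } Source;
    typedef struct { GHashTable *sources; } Session;   /* ssrc -> Source */

    static gboolean
    should_act_on_bye (Session * sess, guint32 ssrc)
    {
      Source *src = g_hash_table_lookup (sess->sources,
          GUINT_TO_POINTER (ssrc));

      /* unknown SSRC, or one of our own: a confused endpoint echoing
       * our SSRC back; ignore the BYE instead of removing the source */
      if (src == NULL || src->internal)
        return FALSE;

      return TRUE;
    }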
When a packet arrives that has already been considered lost as part of a
large gap, the "lost timer" for this will be cancelled. If the remaining
packets of this large gap never arrive, there will be missing entries
in the queue, and the loop function will keep waiting for these packets
to arrive and never push another packet, effectively stalling the
pipeline.
The proposed fix considers parts of a large gap definitely lost (since
they are calculated from the latency) and ignores the late arrivals.
In practice the issue is rare since large gaps are scheduled immediately,
and for the stall to happen the late arrival needs to be processed
before this times out.
https://bugzilla.gnome.org/show_bug.cgi?id=765933
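A sketch of ignoring such late arrivals with a signed 16-bit seqnum diff
(invented names; 'rtp', 'expected' and 'buffer' are assumed context, not
the actual jitterbuffer code):

    guint16 seqnum = gst_rtp_buffer_get_seq (&rtp);

    /* the signed 16-bit difference handles wraparound; negative means
     * 'seqnum' is older than the next packet we still wait for, i.e.
     * it was already pushed or declared lost */
    if ((gint16) (seqnum - expected) < 0) {
      /* late arrival inside a gap already considered lost: drop it
       * instead of cancelling the lost timer */
      gst_buffer_unref (buffer);
      return GST_FLOW_OK;
    }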
The access to the session hash table must happen while the session lock is
taken, otherwise another thread might modify the hash table while we're
creating the stats.
https://bugzilla.gnome.org/show_bug.cgi?id=766025
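A self-contained sketch of the rule with toy types (the element uses its
own session lock):

    #include <glib.h>

    typedef struct {
      GMutex lock;
      GHashTable *sources;          /* ssrc -> per-source state */
    } Session;

    static void
    create_stats (Session * sess, GHFunc collect, gpointer data)
    {
      /* other threads add and remove sources concurrently, so the
       * whole walk must happen under the session lock */
      g_mutex_lock (&sess->lock);
      g_hash_table_foreach (sess->sources, collect, data);
      g_mutex_unlock (&sess->lock);
    }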
The head of the queue is the oldest packet (as in lowest seqnum), the tail is
the newest packet. To calculate the fill level, we should calculate tail-head
while considering wraparounds. Not the other way around.
Other code is already doing this in the correct order.
https://bugzilla.gnome.org/show_bug.cgi?id=764889
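For example, with 16-bit seqnums, unsigned subtraction in that order
absorbs the wraparound (toy function):

    #include <glib.h>

    /* head = oldest (lowest) seqnum, tail = newest */
    static guint16
    fill_level (guint16 head, guint16 tail)
    {
      /* e.g. head = 65534, tail = 2: (guint16) (2 - 65534) = 4 */
      return (guint16) (tail - head);
    }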
When downstream blocks, "lost" timers are created to notify the
outgoing thread that packets are lost.
The problem is that for high packet-rate streams, we might end up with
a big list of lost timeouts (had a use-case with ~1000...).
The problem isn't so much the amount of lost timeouts to handle, but
rather the way they were handled. All timers would first be iterated,
then the selected one would be handled, and then the whole list would be
iterated again.
All of this was done while the jbuf lock was taken, which in some use-cases
would result in holding that lock for 10s... blocking any buffers from
being accepted on input... which would then arrive late... which would
create plenty of lost timers... which would cause the same issue.
In order to avoid that situation, handle the lost timers immediately when
iterating the list of pending timers. This reduces the complexity from
quadratic to linear.
https://bugzilla.gnome.org/show_bug.cgi?id=762988
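Roughly, with invented names (Timer, handle_lost_timer; the real element
keeps its own timer list), the change is from "scan, handle one, rescan"
to handling lost timers inline in a single pass:

    GList *l, *next;

    for (l = timers; l != NULL; l = next) {
      Timer *t = l->data;

      next = l->next;
      if (t->type == TIMER_LOST && t->timeout <= now) {
        /* handle it right here instead of restarting the scan:
         * one pass over the list instead of one pass per timer */
        handle_lost_timer (t);
        timers = g_list_delete_link (timers, l);
      }
    }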
We would queue 5 consecutive packets before considering a reset and a proper
discont here. Instead of expecting the next output packet to have the current
seqnum (i.e. the fifth), expect it to have the first seqnum. Otherwise we're
going to drop all queued-up packets.
generate_rtcp can produce empty packets when reduced size RTCP is turned on.
Skip them since it doesn't make sense to push them and they cause errors with
elements that expect RTCP packets to contain data (like srtpenc).
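Roughly (sketch; 'rtcp' and 'srcpad' are assumed element state, inside a
function returning GstFlowReturn):

    /* 'rtcp' came out of generate_rtcp(); with reduced-size RTCP
     * enabled it can be empty */
    if (gst_buffer_get_size (rtcp) == 0) {
      /* pushing an empty packet would trip elements like srtpenc
       * that expect RTCP packets to contain data */
      gst_buffer_unref (rtcp);
      return GST_FLOW_OK;
    }
    return gst_pad_push (srcpad, rtcp);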
No need to use G_GINT64_FORMAT for potentially negative values of
GstClockTimeDiff. Since 1.6 these can be handled with GST_STIME_ARGS.
Plus it creates more readable values in the logs.
https://bugzilla.gnome.org/show_bug.cgi?id=757480
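For instance:

    #include <gst/gst.h>

    int
    main (int argc, char **argv)
    {
      GstClockTimeDiff diff = -42 * GST_MSECOND;

      gst_init (&argc, &argv);

      /* prints "-0:00:00.042000000" instead of a raw negative
       * nanosecond count */
      g_print ("delta: %" GST_STIME_FORMAT "\n", GST_STIME_ARGS (diff));
      return 0;
    }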
Add statistics from each rtp source to the rtp session property.
'source-stats' is a GValueArray where each element is a GstStructure of
stats for one rtp source.
The availability of new stats is signaled via g_object_notify.
https://bugzilla.gnome.org/show_bug.cgi?id=752669
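Reading them back might look like this (sketch; assumes a 'session'
object exposing a "stats" GstStructure with the 'source-stats' field
described above):

    GstStructure *stats;
    const GValue *sources;
    GValueArray *arr;
    guint i;

    g_object_get (session, "stats", &stats, NULL);
    sources = gst_structure_get_value (stats, "source-stats");
    arr = g_value_get_boxed (sources);

    for (i = 0; i < arr->n_values; i++) {
      GstStructure *s = g_value_get_boxed (g_value_array_get_nth (arr, i));
      gchar *str = gst_structure_to_string (s);

      g_print ("source stats: %s\n", str);
      g_free (str);
    }
    gst_structure_free (stats);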
By not doing this, the muxer is effectively not an rtpmux but rather a
funnel, since it should be a single stream that exits the muxer.
If not specified, take the first ssrc seen on a sinkpad, allowing upstream
to decide the ssrc in "passthrough" mode with only one sinkpad.
Also, let a downstream ssrc overrule the internally configured one.
We hence have the following order for determining the ssrc used by
rtpmux (sketched in code below):
0. Suggestion from a GstRTPCollision event
1. Downstream caps
2. The 'ssrc' property
3. (First) upstream caps containing an ssrc
4. Randomly generated
https://bugzilla.gnome.org/show_bug.cgi?id=752694
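Illustrative only, with an invented state struct; precedence runs top to
bottom:

    #include <glib.h>

    typedef struct {
      gboolean have_collision;   /* 0. GstRTPCollision suggestion */
      gboolean have_downstream;  /* 1. ssrc from downstream caps */
      gboolean prop_set;         /* 2. the "ssrc" property */
      gboolean have_upstream;    /* 3. ssrc from first upstream caps */
      guint32 collision_ssrc, downstream_ssrc, prop_ssrc, upstream_ssrc;
    } MuxSsrcState;

    static guint32
    choose_ssrc (const MuxSsrcState * s)
    {
      if (s->have_collision)
        return s->collision_ssrc;
      if (s->have_downstream)
        return s->downstream_ssrc;
      if (s->prop_set)
        return s->prop_ssrc;
      if (s->have_upstream)
        return s->upstream_ssrc;
      return g_random_int ();    /* 4. randomly generated */
    }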
Estimating it from the RTP time will give us the PTS, so in cases where
PTS != DTS we would produce a wrong DTS. As the estimated DTS is now based
on the clock, don't store it in the jitterbuffer items, as it would otherwise
be used in the skew calculations and influence the results. We only really
need the DTS for timer calculations.
https://bugzilla.gnome.org/show_bug.cgi?id=749536
The amount of time that is completely expired and not worth waiting for
is the duration of the packets in the gap (gap * duration) minus the
latency (size) of the jitterbuffer (priv->latency_ns). This is the duration
that we make a "multi-lost" packet for.
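For illustration, with invented numbers:

    guint gap = 100;                              /* packets in the gap */
    GstClockTime duration = 20 * GST_MSECOND;     /* per-packet duration */
    GstClockTime latency_ns = 200 * GST_MSECOND;  /* jitterbuffer latency */

    /* 100 * 20ms - 200ms = 1800ms covered by one "multi-lost" item;
     * only the last 200ms of the gap is still worth waiting for */
    GstClockTime multi_lost = gap * duration - latency_ns;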
The "late" concept made some sense in 0.10 as it reflected that a buffer
coming in had not been waited for at all, but had a timestamp that was
outside the jitterbuffer to wait for. With the rewrite of the waiting
(timeout) mechanism in 1.0, this no longer makes any sense, and the
variable no longer reflects anything meaningful (num > 0 is useless,
the duration is what matters)
Fixed up the tests that had been slightly modified in 1.0 to allow faulty
behavior to sneak in, and port some of them to use GstHarness.
https://bugzilla.gnome.org/show_bug.cgi?id=738363
This reverts commit 05bd708fc5.
The reverted patch is wrong and introduces a regression because there
may still be time to receive some of the packets included in the gap
if they are reordered.
If we have a clock, update "now" with the very latest running time we have.
If timers are unscheduled below, we otherwise wouldn't update "now" (it's
only updated when timers expire), and for the very first loop iteration
"now" would otherwise always be 0.
The time is also used for the timeout functions, e.g. to calculate the
times for the next timeouts, and we would otherwise pass too-old times there.
https://bugzilla.gnome.org/show_bug.cgi?id=751636
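A sketch of that refresh, assuming the element's clock and a 'now'
variable as in the loop function:

    GstClock *clock = GST_ELEMENT_CLOCK (jitterbuffer);

    if (clock) {
      /* refresh 'now' to the current running time instead of reusing
       * the (possibly stale) time of the last timer that expired */
      now = gst_clock_get_time (clock)
          - gst_element_get_base_time (GST_ELEMENT (jitterbuffer));
    }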
This reverts commit 0c21cd7177.
If we have multiple immediate timers, we want to first handle the one with the
lowest sequence number... which would be broken now.
Instead of this we should just use a GSequence for the timers, and have them
sorted first by timestamp, and for equal timestamps by sequence number. Then
we would always only have to take the very first timer from the list and never
have to look at any others.
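A minimal sketch of that ordering with a GSequence and a toy Timer type:

    #include <glib.h>

    typedef struct {
      guint64 timeout;   /* when the timer fires */
      guint16 seqnum;    /* tie-breaker for equal timeouts */
    } Timer;

    /* order by timestamp first, then by sequence number, so the
     * begin iter always yields the next timer to handle */
    static gint
    timer_cmp (gconstpointer a, gconstpointer b, gpointer user_data)
    {
      const Timer *ta = a, *tb = b;

      if (ta->timeout != tb->timeout)
        return ta->timeout < tb->timeout ? -1 : 1;
      /* equal timestamps: lowest seqnum first */
      return (gint) ta->seqnum - (gint) tb->seqnum;
    }

    /* usage:
     *   GSequence *timers = g_sequence_new (g_free);
     *   g_sequence_insert_sorted (timers, timer, timer_cmp, NULL);
     * the next timer is then always at
     *   g_sequence_get_begin_iter (timers)
     */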
If update_receiver_stats() fails, we can't really do anything with this buffer
anymore and have to drop it. This happens if there's a big seqnum
discontinuity for example.
https://bugzilla.gnome.org/show_bug.cgi?id=751311
The new property allows selecting the time source that should be used for the
NTP time in RTCP packets. By default it will continue to calculate the NTP
timestamp (1900 epoch) based on the realtime clock. Alternatively it can use
the UNIX timestamp (1970 epoch), the pipeline's running time or the pipeline's
clock time. The latter is especially useful for synchronizing multiple
receivers if all of them share the same clock.
If use-pipeline-clock is set to TRUE, it will override the ntp-time-source
setting and continue to use the running time plus 70 years. This is only kept
for backwards compatibility.
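For example, selecting the pipeline clock (sketch; assumes the property
lives on rtpbin and that the enum value nick is "clock-time"):

    GstElement *rtpbin = gst_element_factory_make ("rtpbin", NULL);

    /* use the pipeline clock for the RTCP NTP timestamps, e.g. when
     * all receivers share the same clock */
    gst_util_set_object_arg (G_OBJECT (rtpbin), "ntp-time-source",
        "clock-time");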
According to RFC 5506, reduced-size packets can be sent; such
packets may not be compound, so we need to add support for
getting the ssrc from other types of packets.
https://bugzilla.gnome.org/show_bug.cgi?id=750327
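A sketch using the public RTCP API (only a few packet types shown;
'buffer' is the incoming RTCP buffer):

    #include <gst/rtp/gstrtcpbuffer.h>

    GstRTCPBuffer rtcp = GST_RTCP_BUFFER_INIT;
    GstRTCPPacket packet;
    guint32 ssrc = 0;

    gst_rtcp_buffer_map (buffer, GST_MAP_READ, &rtcp);
    if (gst_rtcp_buffer_get_first_packet (&rtcp, &packet)) {
      /* a reduced-size packet may start with something other
       * than an SR/RR */
      switch (gst_rtcp_packet_get_type (&packet)) {
        case GST_RTCP_TYPE_RR:
          ssrc = gst_rtcp_packet_rr_get_ssrc (&packet);
          break;
        case GST_RTCP_TYPE_RTPFB:
        case GST_RTCP_TYPE_PSFB:
          ssrc = gst_rtcp_packet_fb_get_sender_ssrc (&packet);
          break;
        default:
          break;
      }
    }
    gst_rtcp_buffer_unmap (&rtcp);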
Otherwise we will have 10s-100s of thread wakeups in feedback profiles, create
RTCP packets, etc. just to suppress them in 99% of the cases (i.e. if no
feedback is actually pending and no regular RTCP has to be sent).
This improves CPU usage and battery life quite a lot.
https://bugzilla.gnome.org/show_bug.cgi?id=746543