The internal clock is only used for slaving against the remote clock, while
the user-facing GstClock can additionally be slaved to another clock if
desired. By default, if no master clock is set, the behaviour is exactly the
same as before. If a master clock is set (which was not allowed before), the
user-facing clock reports the remote clock as its internal time and slaves
that to the master clock.
This also removes the weirdness that the internal time of the netclientclock
was always the system clock time rather than the remote clock time.
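A minimal sketch of the now-allowed setup (address, port and clock names are
made up; gst_net_client_clock_new() and gst_clock_set_master() are existing
GStreamer API):

  #include <gst/gst.h>
  #include <gst/net/gstnetclientclock.h>

  /* Hypothetical sketch: slave against a remote clock and additionally
   * slave the user-facing clock to a master clock. */
  static GstClock *
  create_slaved_net_clock (void)
  {
    GstClock *remote_clock, *master_clock;

    remote_clock = gst_net_client_clock_new ("netclock", "192.168.1.10",
        5637, 0);

    /* Rejected before this change: the user-facing clock now reports the
     * remote time as its internal time and slaves that to the master. */
    master_clock = gst_system_clock_obtain ();
    gst_clock_set_master (remote_clock, master_clock);

    gst_object_unref (master_clock);
    return remote_clock;
  }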
https://bugzilla.gnome.org/show_bug.cgi?id=750574
Allow sub-classes that want to collate incoming buffers, or split them
into multiple output buffers, to do so by separating input buffer
submission from output buffer generation, and by allowing one of the two
phases to loop depending on pull or push mode operation.
https://bugzilla.gnome.org/show_bug.cgi?id=750033
This uses all of the netclientclock code, except for the generation and
parsing of packets. Unfortunately some code duplication was necessary
because GstNetTimePacket is public API and couldn't be extended easily
to support NTPv4 packets without breaking API/ABI.
We extend our calculations to work with local send time, remote receive time,
remote send time and local receive time. For the netclientclock protocol,
remote receive and send time are assumed to be the same value.
For the results, this modified calculation makes absolutely no difference
unless the two remote times are different.
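A sketch of the extended calculation (standard NTP-style formulas; names are
illustrative): t1 = local send, t2 = remote receive, t3 = remote send,
t4 = local receive. With t2 == t3, as the netclientclock protocol assumes,
it reduces to the old two-timestamp math:

  #include <gst/gst.h>

  static void
  compute_observation (GstClockTime t1, GstClockTime t2, GstClockTime t3,
      GstClockTime t4, GstClockTime * local_avg, GstClockTime * remote_avg,
      GstClockTimeDiff * rtt)
  {
    /* Time spent on the network: total round trip minus remote processing */
    *rtt = GST_CLOCK_DIFF (t1, t4) - GST_CLOCK_DIFF (t2, t3);

    /* The two midpoints correspond to each other and form one observation */
    *local_avg = (t1 + t4) / 2;
    *remote_avg = (t2 + t3) / 2;
  }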
This improves accuracy on wifi or similar networks, where the RTT can
spike very high for a single observation every now and then. Without
filtering them away completely, such outliers would still modify the
average RTT, and thus all clock estimations.
They don't necessarily use the same underlying clocks (e.g. on Windows), or
might be configured to a different clock type (monotonic vs. real-time clock).
We need the values a clean system clock returns, as those are the values used
by the internal clocks.
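A sketch of the idea (the clock name is made up): instantiate a private
system clock with its default clock type instead of the shared, possibly
reconfigured one from gst_system_clock_obtain():

  GstClock *internal_clock =
      g_object_new (GST_TYPE_SYSTEM_CLOCK, "name", "internal-sysclock", NULL);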
If the delay measurement is too far away from the median of the window of last
delay measurements, we discard it. This increases accuracy on wifi a lot.
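A sketch of such a filter (window size and rejection bound are made up for
illustration; the caller keeps len <= WINDOW_SIZE):

  #include <stdlib.h>
  #include <string.h>
  #include <gst/gst.h>

  #define WINDOW_SIZE 32

  static int
  compare_rtt (const void *a, const void *b)
  {
    GstClockTime ta = *(const GstClockTime *) a;
    GstClockTime tb = *(const GstClockTime *) b;

    return (ta > tb) - (ta < tb);
  }

  /* Accept a new RTT observation only if it is close enough to the median
   * of the last measurements. */
  static gboolean
  rtt_is_usable (const GstClockTime * window, guint len, GstClockTime rtt)
  {
    GstClockTime sorted[WINDOW_SIZE];

    if (len < 4)
      return TRUE;              /* not enough history to judge yet */

    memcpy (sorted, window, len * sizeof (GstClockTime));
    qsort (sorted, len, sizeof (GstClockTime), compare_rtt);

    /* Made-up bound: discard anything above twice the median */
    return rtt <= 2 * sorted[len / 2];
  }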
https://bugzilla.gnome.org/show_bug.cgi?id=749391
We should do some more measurements with all of these and check how much
sense they make for PTP. Also, enabling them means no longer following
IEEE1588-2008 to the letter.
https://bugzilla.gnome.org/show_bug.cgi?id=749391
GstPtpClock implements a PTP (IEEE1588:2008) ordinary clock in
slave-only mode, which allows a GStreamer pipeline to synchronize
to a PTP network clock in a specific domain.
The PTP subsystem can be initialized with gst_ptp_init(), which then
starts a helper process to do the actual communication via the PTP
ports. This is required as PTP listens on ports < 1024 and thus
requires special privileges. Once this helper process is started, the
main process will synchronize to all PTP domains that are detected on
the selected interfaces.
gst_ptp_clock_new() then allows creating a GstClock that provides the
PTP time from a master clock inside a specific PTP domain. This clock
will only return valid timestamps once the timestamps in the PTP domain
are known. To check this, the GstPtpClock::internal-clock property and
the related notify::internal-clock signal can be used. Once the internal
clock is not NULL, the PTP domain's time is known. Alternatively you can
wait for this with gst_ptp_clock_wait_ready().
To gather statistics about the PTP clock synchronization,
gst_ptp_statistics_callback_add() can be used. This gives the
application the possibility to collect all kinds of statistics
from the clock synchronization.
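A minimal usage sketch, using the generic gst_clock_wait_for_sync() from
GstClock to wait for synchronization (domain and timeout are made up for
illustration):

  #include <gst/gst.h>
  #include <gst/net/gstptpclock.h>

  int
  main (int argc, char **argv)
  {
    GstClock *clock;

    gst_init (&argc, &argv);

    /* NULL interfaces: listen on all detected network interfaces */
    if (!gst_ptp_init (GST_PTP_CLOCK_ID_NONE, NULL))
      g_error ("failed to initialize the PTP subsystem");

    /* Ordinary clock in slave-only mode for PTP domain 0 */
    clock = gst_ptp_clock_new ("ptp-clock", 0);

    /* Block until the domain's time is known (here: at most 10 seconds) */
    if (!gst_clock_wait_for_sync (clock, 10 * GST_SECOND))
      g_error ("PTP clock did not synchronize");

    g_print ("PTP time: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (gst_clock_get_time (clock)));

    gst_object_unref (clock);
    return 0;
  }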
https://bugzilla.gnome.org/show_bug.cgi?id=749391
Just create the cancellable fd once and keep it around instead
of creating/closing it for every single packet. Since we spend
most time waiting for packets, an fd is allocated and in use pretty
much all the time anyway.
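A sketch of the pattern (function and variable names are made up;
g_cancellable_make_pollfd() and g_cancellable_release_fd() are existing
GLib API):

  #include <gio/gio.h>

  static void
  receive_loop (GPollFD socket_pollfd, GCancellable * cancel)
  {
    GPollFD fds[2];

    fds[0] = socket_pollfd;

    /* Create the cancellable fd once, before the loop, not per packet */
    if (!g_cancellable_make_pollfd (cancel, &fds[1]))
      return;

    while (!g_cancellable_is_cancelled (cancel)) {
      /* Blocks until a packet arrives or we are cancelled; the
       * cancellable fd stays allocated the whole time */
      g_poll (fds, 2, -1);

      if (fds[0].revents & G_IO_IN) {
        /* read and process one packet */
      }
    }

    /* Release the fd only once, on shutdown */
    g_cancellable_release_fd (cancel);
  }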
We were segfaulting because g_sequence_search() was returning the end
iterator, which does not point to an element and thus must not be
dereferenced directly.
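A sketch of the pitfall (illustrative code, not the actual fix):

  #include <glib.h>

  static gint
  cmp_values (gconstpointer a, gconstpointer b, gpointer user_data)
  {
    return GPOINTER_TO_INT (a) - GPOINTER_TO_INT (b);
  }

  static gpointer
  lookup_nearest (GSequence * seq, gint value)
  {
    /* g_sequence_search() returns an insertion position, which can be the
     * end iterator; that iterator holds no data and must not be passed to
     * g_sequence_get() */
    GSequenceIter *iter =
        g_sequence_search (seq, GINT_TO_POINTER (value), cmp_values, NULL);

    if (g_sequence_iter_is_end (iter)) {
      iter = g_sequence_iter_prev (iter);
      if (g_sequence_iter_is_end (iter))
        return NULL;            /* sequence is empty */
    }

    return g_sequence_get (iter);
  }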
In the basesink function gst_base_sink_chain_unlocked(), the code below
is used to check whether a buffer is late before doing the prepare call,
to save some effort:
  if (syncable && do_sync)
    late =
        gst_base_sink_is_too_late (basesink, obj, rstart, rstop,
        GST_CLOCK_EARLY, 0, FALSE);
  if (G_UNLIKELY (late))
    goto dropped;
But this code has a problem: it should calculate the jitter based on the
current media clock rather than just passing 0. I found that it drops all
frames when rewinding at slow speed, such as -2X.
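A simplified sketch of the kind of fix this suggests (hypothetical; the real
code also accounts for element state and rate adjustment): derive the jitter
from the element clock instead of passing 0:

  if (syncable && do_sync) {
    GstClockTimeDiff jitter = 0;
    GstClock *clock;

    GST_OBJECT_LOCK (basesink);
    clock = GST_ELEMENT_CLOCK (basesink);
    if (clock != NULL) {
      /* Absolute clock time this buffer is scheduled for (simplified:
       * no rate or latency adjustment) */
      GstClockTime stime = GST_ELEMENT_CAST (basesink)->base_time + rstart;

      gst_object_ref (clock);
      GST_OBJECT_UNLOCK (basesink);

      /* Positive jitter means the buffer is already running late */
      jitter = GST_CLOCK_DIFF (stime, gst_clock_get_time (clock));
      gst_object_unref (clock);
    } else {
      GST_OBJECT_UNLOCK (basesink);
    }

    late =
        gst_base_sink_is_too_late (basesink, obj, rstart, rstop,
        GST_CLOCK_EARLY, jitter, FALSE);
  }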
https://bugzilla.gnome.org/show_bug.cgi?id=749258
Since frame->priv->discont was cleared earlier,
GST_BASE_PARSE_FLAG_LOST_SYNC was never being set.
Take the chance to refactor the frame creation a bit to
organize how the flags are set and reset.
https://bugzilla.gnome.org/show_bug.cgi?id=738237
Otherwise we're going to set a rather arbitrary DTS of segment.start (usually
0) for live sources, which confuses synchronization if the source started
capturing at a later time. And it's especially wrong for raw media, for which
we should not set any DTS at all.
https://bugzilla.gnome.org/show_bug.cgi?id=747731