This improves accuracy on wifi or similar networks, where the RTT can
spike for a single observation every now and then. Without filtering them
out completely, such outliers would still skew the average RTT, and thus
all clock estimations.
They don't necessarily use the same underlying clocks (e.g. on Windows), or
might be configured to a different clock type (monotonic vs. real time clock).
We need the values a clean system clock returns, as those are the values used
by the internal clocks.
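A minimal sketch of that idea (the helper name is made up for illustration;
this is not the actual implementation): create a private GstSystemClock with
an explicit clock type, so the values it returns are independent of how the
globally obtained system clock happens to be configured.

  #include <gst/gst.h>

  /* Hypothetical helper: a private system clock with a known clock type,
   * so its values match what the internal clocks are built on. */
  static GstClock *
  create_clean_system_clock (void)
  {
    return g_object_new (GST_TYPE_SYSTEM_CLOCK,
        "name", "internal-system-clock",
        "clock-type", GST_CLOCK_TYPE_REALTIME, NULL);
  }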
If a delay measurement is too far away from the median of the window of
recent delay measurements, we discard it. This increases accuracy on wifi
a lot.
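A rough sketch of such a median filter (window size, threshold and names
are assumptions, not the actual code); it rejects large upward spikes
instead of averaging them in:

  #include <string.h>
  #include <stdlib.h>
  #include <gst/gst.h>

  #define DELAY_WINDOW 16         /* assumed window size */
  #define MAX_MEDIAN_FACTOR 2     /* assumed outlier threshold */

  static int
  compare_clock_time (const void *a, const void *b)
  {
    GstClockTime t1 = *(const GstClockTime *) a;
    GstClockTime t2 = *(const GstClockTime *) b;

    return (t1 > t2) - (t1 < t2);
  }

  /* Returns TRUE if a new delay measurement is plausible, i.e. not too far
   * above the median of the last DELAY_WINDOW measurements. */
  static gboolean
  delay_measurement_is_plausible (const GstClockTime * window, guint n_values,
      GstClockTime new_delay)
  {
    GstClockTime sorted[DELAY_WINDOW];
    GstClockTime median;

    if (n_values < DELAY_WINDOW)
      return TRUE;              /* not enough history yet, accept */

    memcpy (sorted, window, sizeof (sorted));
    qsort (sorted, DELAY_WINDOW, sizeof (GstClockTime), compare_clock_time);
    median = sorted[DELAY_WINDOW / 2];

    return new_delay <= median * MAX_MEDIAN_FACTOR;
  }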
https://bugzilla.gnome.org/show_bug.cgi?id=749391
We should do some more measurements with all of these and check how much
sense they make for PTP. Also, enabling them means no longer following
IEEE1588-2008 to the letter.
https://bugzilla.gnome.org/show_bug.cgi?id=749391
GstPtpClock implements a PTP (IEEE1588:2008) ordinary clock in
slave-only mode, which allows a GStreamer pipeline to synchronize
to a PTP network clock in a specific domain.
The PTP subsystem can be initialized with gst_ptp_init(), which then
starts a helper process to do the actual communication via the PTP
ports. This is required as PTP listens on ports < 1024 and thus
requires special privileges. Once this helper process is started, the
main process will synchronize to all PTP domains that are detected on
the selected interfaces.
gst_ptp_clock_new() then allows creating a GstClock that provides the
PTP time from a master clock inside a specific PTP domain. This clock
will only return valid timestamps once the timestamps in the PTP domain
are known. To check this, the GstPtpClock::internal-clock property and
the related notify::internal-clock signal can be used. Once the internal clock
is not NULL, the PTP domain's time is known. Alternatively you can wait
for this with gst_ptp_clock_wait_ready().
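For illustration, a minimal usage sketch based on the API described above
(the domain number and the polling loop are arbitrary choices; a real
application would rather use the notify signal or
gst_ptp_clock_wait_ready()):

  #include <gst/gst.h>
  #include <gst/net/gstptpclock.h>

  int
  main (int argc, char **argv)
  {
    GstClock *ptp_clock;
    GstClock *internal = NULL;

    gst_init (&argc, &argv);

    /* Starts the privileged helper process; NULL means all interfaces. */
    if (!gst_ptp_init (GST_PTP_CLOCK_ID_NONE, NULL))
      return 1;

    /* Slave-only ordinary clock for PTP domain 0. */
    ptp_clock = gst_ptp_clock_new ("ptp-clock", 0);

    /* Timestamps are only valid once the domain's time is known, i.e.
     * once the internal-clock property is not NULL. */
    do {
      g_clear_object (&internal);
      g_object_get (ptp_clock, "internal-clock", &internal, NULL);
      if (internal == NULL)
        g_usleep (100000);      /* 100 ms */
    } while (internal == NULL);

    g_print ("PTP time: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (gst_clock_get_time (ptp_clock)));

    g_clear_object (&internal);
    gst_object_unref (ptp_clock);
    gst_ptp_deinit ();
    return 0;
  }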
To gather statistics about the PTP clock synchronization,
gst_ptp_statistics_callback_add() can be used. This gives the
application the possibility to collect all kinds of statistics
from the clock synchronization.
https://bugzilla.gnome.org/show_bug.cgi?id=749391
Just create the cancellable fd once and keep it around instead
of creating/closing it for every single packet. Since we spend
most time waiting for packets, an fd is allocated and in use pretty
much all the time anyway.
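A sketch of the pattern (simplified; the struct and function names are
invented for illustration, not the actual element code):

  #include <gio/gio.h>

  typedef struct
  {
    GCancellable *cancellable;
    GPollFD cancel_fd;
    gboolean made_cancel_fd;
  } Receiver;

  /* Create the cancellable's poll fd once, up front, ... */
  static void
  receiver_start (Receiver * r, GCancellable * cancellable)
  {
    r->cancellable = g_object_ref (cancellable);
    /* Previously this (and the matching release) happened for every
     * single packet. */
    r->made_cancel_fd =
        g_cancellable_make_pollfd (r->cancellable, &r->cancel_fd);
  }

  /* ... and only release it again when shutting down. */
  static void
  receiver_stop (Receiver * r)
  {
    if (r->made_cancel_fd)
      g_cancellable_release_fd (r->cancellable);
    g_clear_object (&r->cancellable);
  }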
We were segfaulting because g_sequence_search() was returning the end
iterator (iter_end), which does not point to any element and thus must not
be dereferenced directly.
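The pattern of the fix, sketched on a made-up lookup helper (not the
actual code):

  #include <glib.h>

  /* Find the last element that sorts before `needle`. g_sequence_search()
   * returns the position where `needle` would be inserted, which can be
   * the end iterator; that iterator points to no element and must not be
   * passed to g_sequence_get() directly. */
  static gpointer
  lookup_before (GSequence * seq, gpointer needle, GCompareDataFunc cmp)
  {
    GSequenceIter *iter = g_sequence_search (seq, needle, cmp, NULL);

    /* Step back from the insertion point; if we are already at the
     * beginning there is no matching element. */
    if (g_sequence_iter_is_begin (iter))
      return NULL;

    iter = g_sequence_iter_prev (iter);
    return g_sequence_get (iter);
  }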
In the basesink function gst_base_sink_chain_unlocked(), the code below is
used to check whether a buffer is late before doing the prepare call, to
save some effort:
  if (syncable && do_sync)
    late =
        gst_base_sink_is_too_late (basesink, obj, rstart, rstop,
        GST_CLOCK_EARLY, 0, FALSE);

  if (G_UNLIKELY (late))
    goto dropped;
But this code has a problem: it should calculate the jitter based on the
current media clock rather than just passing 0. I found that it drops all
frames when rewinding at slow speed, such as -2X.
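One possible shape of such a fix, sketched against the snippet above (an
assumption, not the actual upstream patch): derive the jitter from the
element clock and base time instead of hard-coding 0.

  if (syncable && do_sync) {
    GstClock *clock;
    GstClockTimeDiff jitter = 0;

    GST_OBJECT_LOCK (basesink);
    clock = GST_ELEMENT_CLOCK (basesink);
    if (clock != NULL && GST_STATE (basesink) == GST_STATE_PLAYING) {
      /* How far the current clock time is past the buffer's scheduled
       * render time; passing 0 pretended every buffer was exactly on
       * time, which broke the lateness check for slow rewind. */
      GstClockTime base_time = GST_ELEMENT_CAST (basesink)->base_time;

      jitter = GST_CLOCK_DIFF (base_time + rstart,
          gst_clock_get_time (clock));
    }
    GST_OBJECT_UNLOCK (basesink);

    late = gst_base_sink_is_too_late (basesink, obj, rstart, rstop,
        GST_CLOCK_EARLY, jitter, FALSE);
  }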
https://bugzilla.gnome.org/show_bug.cgi?id=749258
Since frame->priv->discont was cleared earlier,
GST_BASE_PARSE_FLAG_LOST_SYNC was never being set.
Take the chance to refactor the frame creation a bit to better organize
the setting and resetting of the flags.
https://bugzilla.gnome.org/show_bug.cgi?id=738237
Otherwise we're going to set a rather arbitrary DTS of segment.start (usually
0) for live sources, which confuses synchronization if the source started
capturing at a later time. And it's especially wrong for raw media, for which
we should not set any DTS at all.
https://bugzilla.gnome.org/show_bug.cgi?id=747731
It could be triggered by:
gst-launch-1.0 videotestsrc num-buffers=20 ! videocrop bottom=214748364 ! videoconvert ! autovideosink
Spotted while testing:
https://bugzilla.gnome.org/show_bug.cgi?id=743910
The flush-stop event should not restart the task for live sources unless
the element is playing. This was breaking seeks in pause with the rtpsrc.
https://bugzilla.gnome.org/show_bug.cgi?id=635701
Otherwise baseparse will consider empty streams to be an error, while an
empty stream is a valid scenario. With this patch, errors are only emitted
if the parser received data but wasn't able to produce any output from it.
This change only applies to push-mode operation, as in pull mode an empty
file can be considered an error for the one driving the pipeline.
Includes a unit test for it.
https://bugzilla.gnome.org/show_bug.cgi?id=733171
check_run.c: In function 'sig_handler':
check_run.c:127:13: warning: 'child_sig' may be used uninitialized in this function [-Wmaybe-uninitialized]
killpg(group_pid, child_sig);
^
check_run.c:130:31: warning: 'idx' may be used uninitialized in this function [-Wmaybe-uninitialized]
sigaction(sig_nr, &old_action[idx], NULL);
^
Otherwise e.g. ctrl+c in the test runner exits the test runner while the
test itself keeps running in the background, using CPU and memory, and
potentially never exiting (e.g. if the test ran into a deadlock or infinite
loop).
The reason why we have to manually kill the actual tests is that after
forking they will be moved to their own process group, and as such are
not receiving any signals sent to the test runner anymore. This is supposed
to be done to make it easier to kill a test, which it only really does if
the test itself is forking off new processes.
This fix is not complete though. SIGKILL can't be caught at all, and error
signals like SIGSEGV and SIGFPE are currently not caught. The latter would
only happen if there is a bug in the test runner itself, and as such seems
less important.
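The mechanism described above, as a standalone sketch (the helper names
are hypothetical; the real check_run.c code differs):

  #include <signal.h>
  #include <sys/types.h>

  /* Process group of the running test, set after fork()/setpgid(). */
  static pid_t test_group_pid = -1;

  static void
  forward_signal (int sig_nr)
  {
    /* The forked test lives in its own process group and would not see a
     * signal sent to the runner; forward it explicitly. */
    if (test_group_pid > 0)
      killpg (test_group_pid, SIGKILL);

    /* Then let the runner terminate with the expected status. */
    signal (sig_nr, SIG_DFL);
    raise (sig_nr);
  }

  static void
  install_forwarding_handlers (void)
  {
    struct sigaction action;

    sigemptyset (&action.sa_mask);
    action.sa_flags = 0;
    action.sa_handler = forward_signal;

    /* SIGKILL cannot be caught, and SIGSEGV/SIGFPE are left out here,
     * matching the limitation mentioned above. */
    sigaction (SIGINT, &action, NULL);
    sigaction (SIGTERM, &action, NULL);
  }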
Large scale skip is an optimization, and thus it is safer to
stop skipping than to continue. Clear skip on segments and
discontinuities, as these are points where it is possible that
the original idea of "bytes to skip" changes.