They are very confusing for people, and more often than not
also simply inaccurate. Seeing 'last reviewed: 2005' in
your docs is not very confidence-inspiring. Let's just remove
those comments.
Clear the initial floating ref in the init function for
busses and clocks. These objects can be set on multiple
elements, so there's no clear parent-child relationship
here. Ideally we'd just not make them derive from
GInitiallyUnowned at all, but since we want to keep
using GstObject features for debugging, we'll just do
it like this.
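Roughly, the change boils down to doing this in each init function
(a sketch using the standard g_object_ref_sink(); the actual code in
the commit may differ in detail):

#include <gst/gst.h>

static void
gst_bus_init (GstBus * bus)
{
  /* ... regular instance setup ... */

  /* Sink the initial floating reference right away: a bus can be set
   * on multiple elements, so there is no single parent that could
   * take ownership of the floating ref later. The caller of
   * g_object_new() then gets a normal, non-floating reference. */
  g_object_ref_sink (bus);
}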
This should also fix some problems with bindings, which
seem to get confused when they get floating refs from
non-constructor functions (or functions annotated to
have a 'transfer full' return type). This works now:
from gi.repository import GObject, Gst
GObject.threads_init()
Gst.init(None)
pipeline = Gst.Pipeline()
bus = pipeline.get_bus()
pipeline.set_state(Gst.State.NULL)
del pipeline
https://bugzilla.gnome.org/show_bug.cgi?id=679286
https://bugzilla.gnome.org/show_bug.cgi?id=657202
See #651514 for details. It's apparently impossible to write code
that avoids both type-punning warnings with old g_atomic headers and
assertions in the new ones. Thus, macros and a version check.
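Something along these lines (the macro name, the struct and which side
of the version check needs the cast are illustrative assumptions here,
not the exact code from the commit):

#include <glib.h>

typedef struct
{
  volatile gint status;         /* entry status, set atomically */
} Entry;

/* One plausible arrangement (assumption -- see #651514 for the gory
 * details): old GLib headers want an explicit gint cast to silence
 * type-punning warnings, while newer ones assert unless the pointer
 * type matches exactly, so the cast has to go away there. */
#if GLIB_CHECK_VERSION (2, 29, 5)
#define SET_ENTRY_STATUS(e, val) (g_atomic_int_set (&(e)->status, (val)))
#else
#define SET_ENTRY_STATUS(e, val) \
    (g_atomic_int_set ((gint *) &(e)->status, (val)))
#endif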
Adds that warning to configure.ac
Includes a tiny change to the GST_BOILERPLATE_FULL() macro:
The get_type() function is no longer declared before being defined.
https://bugzilla.gnome.org/show_bug.cgi?id=611692
Based on clock implementation by Håvard Graff <havard.graff@tandberg.com>
Try to get the time on Windows using the performance counters. These have a much
higher resolution and accuracy than the regular getcurrenttime(). Be careful to
fall back to the regular getcurrenttime() or POSIX clocks when performance
counters are not available.
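Sketch of the idea, using the documented QueryPerformanceCounter()/
QueryPerformanceFrequency() Win32 calls (the helper name and the
fall-back convention are made up for illustration):

#include <gst/gst.h>

#ifdef G_OS_WIN32
#include <windows.h>

static GstClockTime
get_win32_time (void)
{
  LARGE_INTEGER frequency, counter;

  /* not all hardware exposes a high-resolution counter; signal the
   * caller to fall back to the regular time source in that case */
  if (!QueryPerformanceFrequency (&frequency))
    return GST_CLOCK_TIME_NONE;

  QueryPerformanceCounter (&counter);

  /* scale ticks to nanoseconds without overflowing 64 bits */
  return gst_util_uint64_scale (counter.QuadPart, GST_SECOND,
      frequency.QuadPart);
}
#endif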
Move the checks for using an unscheduled entry from the unsafe GstClock to the
SystemClock object so that we can perform the correct locking.
Fix a leak and potential deadlock when the async thread fails to start.
Sprinkle some G_LIKELY around because we can.
Keep a counter for the number of outstanding wakeups that we produce and only
perform a write to the control socket when the count becomes 1, and a read when
it drops back to 0.
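A minimal sketch of the counting scheme, using the real
gst_poll_write_control()/gst_poll_read_control() calls (the struct and
field names are invented for illustration; the real code does this
while holding the clock lock):

#include <gst/gst.h>

typedef struct
{
  GstPoll *timer;
  gint wakeup_count;            /* outstanding wakeups */
} ClockState;

static void
add_wakeup (ClockState * s)
{
  /* only the 0 -> 1 transition needs to hit the control socket */
  if (s->wakeup_count++ == 0)
    gst_poll_write_control (s->timer);
}

static void
consume_wakeup (ClockState * s)
{
  /* only the 1 -> 0 transition reads the control socket again */
  if (--s->wakeup_count == 0)
    gst_poll_read_control (s->timer);
}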
Don't poll when waiting for the entries to be unblocked and to clear their
wakeup counts; just act on the signal when the wakeup count is 0.
Unscheduled entries will clear their wakeup count themselves.
Keep track of when we wake up the async thread because the list of entries has
changed.
Don't try to detect whether the list changed, because we can't really know that
when a single entry is added multiple times.
Only wake up the async thread when we add an async entry to the head of the list
and the old entry was BUSY.
When an entry being waited on in the async thread is unscheduled, clear the
wakeup queue so we can continue waiting on other entries.
When an entry being waited on in the async thread is unlocked because an earlier
entry was added to the list, set the entry to OK again. This makes sure that
only the entries being waited on have the BUSY flag set, and only those wake up
the timer poll when they are unscheduled.
Add a property to select the clock type, currently REALTIME and MONOTONIC when
POSIX timers are available.
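For example (the "clock-type" property and the GstClockType values are
the public GStreamer API):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstClock *clock;

  gst_init (&argc, &argv);

  clock = gst_system_clock_obtain ();
  /* REALTIME follows wall-clock time; MONOTONIC is immune to NTP
   * adjustments and clock_settime() */
  g_object_set (clock, "clock-type", GST_CLOCK_TYPE_MONOTONIC, NULL);
  gst_object_unref (clock);

  return 0;
}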
Implement the systemclock with GstPoll instead of GCond. This allows us to
schedule timeouts with nanosecond precision on newer kernels and with ppoll
support. It's also resilient to changes of the system clock because of NTP or
similar.
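Sketch of the primitive this buys us, using the real GstPoll timer API
(the timeout value is just an example):

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GstPoll *timer;

  gst_init (&argc, &argv);

  /* a 'timer' GstPoll has a control socket we can use for wakeups */
  timer = gst_poll_new_timer ();

  /* gst_poll_wait() takes a GstClockTime timeout and uses ppoll()
   * where available, so sub-millisecond timeouts actually work */
  gst_poll_wait (timer, 20 * GST_MSECOND);

  /* another thread calling gst_poll_write_control() would end the
   * wait early, which is how unschedule/restart is implemented */
  gst_poll_free (timer);

  return 0;
}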
Original commit message from CVS:
* gst/gstsystemclock.c: (gst_system_clock_id_wait_jitter_unlocked):
Add some more docs to explain why a FIXME was wrongly added.
Original commit message from CVS:
* gst/gstbin.h:
Move priv to the right place.
* gst/gstsystemclock.c:
Add FIXME: and improve log.
* tests/check/Makefile.am:
* tests/examples/manual/Makefile.am:
Work with all types of registries.
Original commit message from CVS:
* gst/gstsystemclock.c: (gst_system_clock_id_wait_jitter_unlocked),
(gst_system_clock_id_wait_jitter),
(gst_system_clock_id_wait_async), (gst_system_clock_id_unschedule):
Fix annoying regression that survived a few releases. When adding an
async entry while blocking on a sync entry, the sync entry will unblock
but still be busy, so it should continue to wait instead of returning
_BUSY to the app.
Add some comments here and there.
* tests/check/gst/gstsystemclock.c: (mixed_thread),
(mixed_async_cb), (GST_START_TEST), (gst_systemclock_suite):
Add testcase for this.
Original commit message from CVS:
* gst/gstsystemclock.c: (gst_system_clock_init),
(gst_system_clock_start_async), (gst_system_clock_id_wait_async):
Defer starting the async system clock thread until the first async
wait is scheduled. Fixes #414986.
Original commit message from CVS:
Patch by: René Stadler <mail at renestadler dot de>
* gst/gstclock.c: (gst_clock_id_wait):
Make periodic ids add the interval to the originally requested time instead
of the possibly updated time, which can be wrong when there are multiple
waiters for the same id (see the sketch below). Fixes #382592.
* gst/gstsystemclock.c: (gst_system_clock_async_thread),
(gst_system_clock_id_wait_jitter_unlocked),
(gst_system_clock_id_wait_jitter):
Fix restart in the async notify thread when an async entry is added to
the front of the list. Fixes #381492.
* tests/check/gst/gstsystemclock.c: (store_callback),
(notify_callback), (GST_START_TEST), (gst_systemclock_suite):
Added test for multiple async waits.
Added test for async wait order.
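Sketch of the periodic-id rule from the gst_clock_id_wait() fix above
('requested' is an illustrative parameter; the GstClockEntry accessors
are public API):

#include <gst/gst.h>

static void
advance_periodic (GstClockEntry * entry, GstClockTime requested)
{
  if (GST_CLOCK_ENTRY_TYPE (entry) == GST_CLOCK_ENTRY_PERIODIC) {
    /* advance from the originally requested time, not from a time an
     * earlier waiter may already have updated, so multiple waiters on
     * the same id all stay on the same schedule */
    GST_CLOCK_ENTRY_TIME (entry) =
        requested + GST_CLOCK_ENTRY_INTERVAL (entry);
  }
}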