gst_check_setup_sink_pad() internally uses gst_check_chain_func() so we
should call gst_check_drop_buffers() when tearing down tests to free the
buffers which have been exchanged through the pipeline.
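As a reference, a minimal setup/teardown sketch along these lines
(the element name and pad template here are only placeholders):

  #include <gst/check/gstcheck.h>

  static GstStaticPadTemplate sinktemplate = GST_STATIC_PAD_TEMPLATE ("sink",
      GST_PAD_SINK, GST_PAD_ALWAYS, GST_STATIC_CAPS_ANY);

  static GstElement *element;

  static void
  setup (void)
  {
    element = gst_check_setup_element ("identity");
    gst_check_setup_sink_pad (element, &sinktemplate);
  }

  static void
  teardown (void)
  {
    /* frees the buffers collected by gst_check_chain_func() */
    gst_check_drop_buffers ();
    gst_check_teardown_sink_pad (element);
    gst_check_teardown_element (element);
  }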
https://bugzilla.gnome.org/show_bug.cgi?id=765903
The test relied on the bus being flushed when setting the bin to the
NULL state, which is not the case; that only happens when setting a
pipeline to the NULL state.
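A minimal sketch of the distinction (the helper name is made up): only
GstPipeline flushes its bus on the way to NULL, so a test built around
a bare bin has to flush the bus itself.

  #include <gst/gst.h>

  static void
  teardown_bin (GstElement * bin, GstBus * bus)
  {
    gst_element_set_state (bin, GST_STATE_NULL);
    /* a plain bin does not flush its bus when going to NULL,
     * only GstPipeline does; flush it explicitly */
    gst_bus_set_flushing (bus, TRUE);
    gst_object_unref (bus);
    gst_object_unref (bin);
  }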
https://bugzilla.gnome.org/show_bug.cgi?id=765720
This ensures the following special case is handled properly:
1. Queue is empty
2. Data is pushed, fill level is below the current high-threshold
3. high-threshold is set to a level that is below the current fill level
Since mq->percent wasn't properly recalculated in step #3, the
multiqueue switched off its buffering state when new data was pushed
in and never posted a 100% buffering message. The application would
have received a <100% buffering message from step #2, but would never
see 100%.
Fix this by recalculating the current fill level percentage during
high-threshold property changes in the same manner as it is done when
use-buffering is modified.
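A sketch of the triggering sequence from the application side, using
"high-percent" as the high-threshold property here (the exact property
name may differ between versions):

  #include <gst/gst.h>

  static void
  lower_threshold (GstElement * mq)
  {
    /* step 1: buffering is enabled, the queue starts out empty */
    g_object_set (mq, "use-buffering", TRUE, "high-percent", 99, NULL);

    /* step 2: data gets pushed in, the fill level stays below 99%,
     * so a <100% buffering message is posted */

    /* step 3: the threshold drops below the current fill level;
     * without recalculating the fill percentage here, no 100%
     * buffering message would ever follow */
    g_object_set (mq, "high-percent", 10, NULL);
  }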
https://bugzilla.gnome.org/show_bug.cgi?id=763757
We need to clear some global state and register a new test
basetransform subclass for each test because we do things
in class_init based on global state.
https://bugzilla.gnome.org/show_bug.cgi?id=623469
The test assumed that if a buffer has the same pointer address as
before it is in fact the same mini object and has been re-used by
the pool. This seems to be mostly true, but not always. The buffer
might be destroyed and when a new buffer is created the allocator
might return the same memory that we just freed.
Instead, attach a qdata with a destroy notify function to the buffer
instances we want to track, to make sure the buffer actually gets
finalized rather than resurrected and put back into the pool.
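A minimal sketch of that tracking approach (the quark name and helper
names are arbitrary):

  #include <gst/gst.h>

  static void
  mark_finalized (gpointer data)
  {
    /* only runs when the buffer is really finalized, not when the
     * allocator happens to return the same address for a new buffer */
    *(gboolean *) data = TRUE;
  }

  static void
  track_buffer (GstBuffer * buf, gboolean * finalized)
  {
    *finalized = FALSE;
    gst_mini_object_set_qdata (GST_MINI_OBJECT_CAST (buf),
        g_quark_from_static_string ("test-track-buffer"), finalized,
        mark_finalized);
  }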
Be notified in the application thread via bus messages about
notify::* and deep-notify::* property changes, instead of
having to deal with them in a non-application thread.
API: gst_element_add_property_notify_watch()
API: gst_element_add_property_deep_notify_watch()
API: gst_element_remove_property_notify_watch()
API: gst_message_new_property_notify()
API: gst_message_parse_property_notify()
API: GST_MESSAGE_PROPERTY_NOTIFY
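A usage sketch of the new API; the "volume" property on some arbitrary
element is just an example:

  #include <gst/gst.h>

  static void
  watch_property (GstElement * element, GstBus * bus)
  {
    gulong watch_id;
    GstMessage *msg;

    watch_id = gst_element_add_property_notify_watch (element, "volume", TRUE);

    /* in the application thread, e.g. from the bus handler */
    msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
        GST_MESSAGE_PROPERTY_NOTIFY);
    if (msg != NULL) {
      GstObject *obj;
      const gchar *name;
      const GValue *val;

      gst_message_parse_property_notify (msg, &obj, &name, &val);
      GST_INFO_OBJECT (obj, "property '%s' changed (value %s included)",
          name, val != NULL ? "is" : "not");
      gst_message_unref (msg);
    }

    gst_element_remove_property_notify_watch (element, watch_id);
  }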
https://bugzilla.gnome.org/show_bug.cgi?id=763142
Checking the current element's state when we're adding pads to
the parent element is checking the wrong thing.
Silences an 'attempting to add an inactive pad to a running element'
warning when adding a ghost pad to a running parent bin of the parent
bin of the element.
https://bugzilla.gnome.org/show_bug.cgi?id=764176
This takes readings in the form of ...
<local_1> <remote_1> <remote_2> <local_2>
... with one observation per line, and then replays it using the
netclientclock code.
The output is the statistics structure emitted by the netclientclock,
which can then be analysed and tuned once we get those readings for
potential edge-cases.
It should be possible to find some inputs with "bad" data and convert
this into a unit test for future tweaks to run against.
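A hypothetical sketch of parsing one such observation line (the actual
tool may read the data differently):

  #include <stdio.h>
  #include <glib.h>

  static gboolean
  parse_observation (const gchar * line, guint64 * local_1,
      guint64 * remote_1, guint64 * remote_2, guint64 * local_2)
  {
    /* one observation per line: <local_1> <remote_1> <remote_2> <local_2> */
    return sscanf (line, "%" G_GUINT64_FORMAT " %" G_GUINT64_FORMAT
        " %" G_GUINT64_FORMAT " %" G_GUINT64_FORMAT,
        local_1, remote_1, remote_2, local_2) == 4;
  }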
The GST_TYPE_PARENT_BUFFER_META_API_TYPE alias define is wrong and
breaks the usage of gst_buffer_get_parent_buffer_meta().
This patch fixes the GType alias and adds another alias to keep API
compatibility, guarded by GST_DISABLE_DEPRECATED.
Also adds a unit test.
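A minimal sketch of the lookup the broken alias affected (helper name
is arbitrary):

  #include <gst/gst.h>

  static void
  check_parent_meta (GstBuffer * buf, GstBuffer * parent)
  {
    GstParentBufferMeta *meta;

    gst_buffer_add_parent_buffer_meta (buf, parent);

    /* expands to gst_buffer_get_meta (buf, GST_PARENT_BUFFER_META_API_TYPE) */
    meta = gst_buffer_get_parent_buffer_meta (buf);
    g_assert (meta != NULL && meta->buffer == parent);
  }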
https://bugzilla.gnome.org/show_bug.cgi?id=763112
Holding a regular ref would cause the GstBus to never reach 0
references, so it wouldn't be destroyed unless the application
explicitly called gst_bus_remove_signal_watch().
Switching to a weak ref allows the GstBus to be destroyed.
The application is still responsible for destroying the
GSource.
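A usage sketch from the application side; the signal and callback here
are just an example:

  #include <gst/gst.h>

  static void
  on_error (GstBus * bus, GstMessage * msg, gpointer user_data)
  {
    GError *err = NULL;

    gst_message_parse_error (msg, &err, NULL);
    g_printerr ("error: %s\n", err->message);
    g_error_free (err);
  }

  static void
  setup_watch (GstBus * bus)
  {
    /* the watch now only holds a weak ref on the bus */
    gst_bus_add_signal_watch (bus);
    g_signal_connect (bus, "message::error", G_CALLBACK (on_error), NULL);
  }

  static void
  teardown_watch (GstBus * bus)
  {
    /* still needed so the watch's GSource goes away */
    gst_bus_remove_signal_watch (bus);
  }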
https://bugzilla.gnome.org/show_bug.cgi?id=762552
Showcases the regression introduced by this commit:
Commit: ab55ad7eaa
Author: Stian Selnes <stian@pexip.com>
Date: Wed Jan 27 13:20:23 2016 +0100
ghostpad: Do nothing in _internal_activate_push_default
Fixes a race where an entry is set to BUSY in
gst_system_clock_id_wait_jitter() and is UNSCHEDULED before
gst_system_clock_id_wait_jitter_unlocked() starts processing it. The
wakeup added by gst_system_clock_id_unschedule() must be cleaned up.
Two stress tests are added: one triggers the specific issue described
above; the second stresses the code path where a wait is rescheduled
because the poll returned early.
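A sketch of the racy pattern the first stress test exercises (helper
names are made up):

  #include <gst/gst.h>

  static gpointer
  unschedule_thread (gpointer data)
  {
    gst_clock_id_unschedule ((GstClockID) data);
    return NULL;
  }

  static void
  stress_once (GstClock * clock)
  {
    GstClockID id;
    GThread *thread;

    id = gst_clock_new_single_shot_id (clock,
        gst_clock_get_time (clock) + 50 * GST_USECOND);
    thread = g_thread_new ("unschedule", unschedule_thread, id);

    /* may return GST_CLOCK_OK, GST_CLOCK_EARLY or GST_CLOCK_UNSCHEDULED
     * depending on which side wins the race */
    gst_clock_id_wait (id, NULL);

    g_thread_join (thread);
    gst_clock_id_unref (id);
  }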
https://bugzilla.gnome.org/show_bug.cgi?id=761586