Not posting DEVICE_ADDED messages while a device provider is being
started makes things awkward for applications, as they have to call
get_devices() after starting the monitor.
This requires redundant code on the application side and, as far as
I understand, could also cause a race condition: if a device is added
between the calls to gst_device_monitor_start() and
gst_device_monitor_get_devices(), the application would "see"
the same device twice.
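For reference, the application-side pattern this requires looks roughly
like this (a sketch with error handling omitted; the filter string is
illustrative):

  GstDeviceMonitor *monitor = gst_device_monitor_new ();
  GList *devices, *l;

  gst_device_monitor_add_filter (monitor, "Audio/Source", NULL);
  gst_device_monitor_start (monitor);

  /* Devices that appear from now on are reported as DEVICE_ADDED
   * messages on the monitor's bus, but devices that were already
   * present have to be fetched explicitly; this is the redundant
   * step (and the race window) described above. */
  devices = gst_device_monitor_get_devices (monitor);
  for (l = devices; l != NULL; l = l->next) {
    gchar *name = gst_device_get_display_name (GST_DEVICE (l->data));
    g_print ("found device: %s\n", name);
    g_free (name);
  }
  g_list_free_full (devices, (GDestroyNotify) gst_object_unref);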
The virtual method named `get_caps` in both `GstBaseSrc` and
`GstBaseSink` has a `filter` parameter which can be `NULL` (the
default implementation in GstBaseSrc already considers the case).
Before this commit, there was no gtk-doc annotation expressing this,
so the corresponding entry in the GIR file was also missing that
information.
This caused bugs in other places, such as inducing the Vala compiler to
generate a wrong assertion on `(filter != NULL)` in every
implementation of the `get_caps` method written in Vala.
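For illustration, this is how a nullable parameter is annotated in
gtk-doc so that it ends up in the GIR file (a generic sketch around a
hypothetical function, not the exact hunk from this commit):

  /**
   * gst_example_get_caps:
   * @src: the element
   * @filter: (nullable): caps to filter the result with, or %NULL
   *
   * Returns: (transfer full): the resulting #GstCaps
   */
  /* gst_example_get_caps is a made-up name; the point is the
   * (nullable) annotation on @filter */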
By passing NULL to `g_signal_new` instead of a marshaller, GLib will
actually internally optimize the signal (if the marshaller is available
in GLib itself) by also setting the valist marshaller. This makes the
signal emission a bit more performant than the regular marshalling,
which still needs to box into `GValue` and call libffi in case of a
generic marshaller.
Note that for custom marshallers, one would use
`g_signal_set_va_marshaller()` with the valist marshaller instead.
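A sketch of the difference (signal and marshaller names are
illustrative):

  /* let GLib pick a built-in marshaller and its valist variant: */
  signals[SIG_EXAMPLE] =
      g_signal_new ("example", G_TYPE_FROM_CLASS (klass),
      G_SIGNAL_RUN_LAST, 0, NULL, NULL,
      NULL /* marshaller */, G_TYPE_NONE, 1, GST_TYPE_BUFFER);

  /* with a custom marshaller, also register its valist variant: */
  signals[SIG_OTHER] =
      g_signal_new ("other", G_TYPE_FROM_CLASS (klass),
      G_SIGNAL_RUN_LAST, 0, NULL, NULL,
      my_marshal_VOID__BUFFER, G_TYPE_NONE, 1, GST_TYPE_BUFFER);
  g_signal_set_va_marshaller (signals[SIG_OTHER],
      G_TYPE_FROM_CLASS (klass), my_marshal_VOID__BUFFERv);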
gst_pad_iterate_internal_links is usually used to find a single internal
link that a pad has, e.g. to find the corresponding pad of a multiqueue.
Added a helper function that returns the single internal link of a pad,
if there is exactly one, or NULL otherwise.
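For reference, this is roughly what callers had to open-code so far
(a sketch that ignores GST_ITERATOR_RESYNC handling for brevity):

  static GstPad *
  get_single_internal_link (GstPad * pad)
  {
    GstIterator *it = gst_pad_iterate_internal_links (pad);
    GValue item = G_VALUE_INIT;
    GstPad *res = NULL;

    if (it != NULL && gst_iterator_next (it, &item) == GST_ITERATOR_OK) {
      res = g_value_dup_object (&item);
      g_value_unset (&item);
      /* a second item means the internal link is not unique */
      if (gst_iterator_next (it, &item) == GST_ITERATOR_OK) {
        g_value_unset (&item);
        gst_object_unref (res);
        res = NULL;
      }
    }
    if (it != NULL)
      gst_iterator_unref (it);
    return res;
  }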
In cases with many long-lived buffers that have qdata only very
briefly, the memory overhead of keeping an array of 16 GstQData
structs for each buffer can be significant. We free the array when
the last qdata is removed, like it was done in 1.14.
Fixes #436
This patch simply adds a null check around a case where a child may have
been unparented concurrently with the deep_add_remove operation. This was
found by accident in the form of an "IS_GST_OBJECT" assertion, but had
no other known side effect in that test.
../libs/gst/check/libcheck/check.c:617:15: error: result of comparison of constant 4294967295 with expression of type 'clockid_t' is always false [-Werror,-Wtautological-constant-out-of-range-compare]
if (clockid == -1) {
~~~~~~~ ^ ~~
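One generic way to silence this kind of warning is to give the -1
sentinel the same type as the other operand (a sketch only; the actual
change in libcheck may differ):

  /* cast so the constant is no longer out of range for clockid_t */
  if (clockid == (clockid_t) -1) {
    /* fall back to a default clock id */
  }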
Silence -Wformat-nonliteral warnings from the internal copy of libcheck
../subprojects/gstreamer/libs/gst/check/libcheck/check.c:379:29: warning: format string is not a string literal [-Wformat-nonliteral]
vsnprintf (buf, BUFSIZ, msg, ap);
^~~
../subprojects/gstreamer/libs/gst/check/libcheck/check_error.c:48:21: warning: format string is not a string literal [-Wformat-nonliteral]
vfprintf (stderr, fmt, args);
^~~
../subprojects/gstreamer/libs/gst/check/libcheck/check_str.c:92:29: warning: format string is not a string literal [-Wformat-nonliteral]
n = vsnprintf (p, size, fmt, ap);
^~~
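Warnings like these are typically silenced either with a compiler flag
for the bundled sources or with diagnostic pragmas around the call
sites; a sketch of the pragma variant (the actual change may use a
flag instead):

  #ifdef __clang__
  #pragma clang diagnostic push
  #pragma clang diagnostic ignored "-Wformat-nonliteral"
  #endif
    vsnprintf (buf, BUFSIZ, msg, ap);
  #ifdef __clang__
  #pragma clang diagnostic pop
  #endif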
`g_object_notify()` actually takes a global lock to look up the
`GParamSpec` that corresponds to the given property name. It's not a
huge performance hit, but it's easily avoidable by using the
`_by_pspec()` variant.
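A sketch of the pattern (property name and enum values are
illustrative):

  enum { PROP_0, PROP_LAST_SAMPLE, N_PROPS };
  static GParamSpec *properties[N_PROPS];

  /* in class_init: keep the GParamSpec around when installing it */
  properties[PROP_LAST_SAMPLE] =
      g_param_spec_boxed ("last-sample", "Last sample",
      "The last sample received", GST_TYPE_SAMPLE,
      G_PARAM_READABLE | G_PARAM_STATIC_STRINGS);
  g_object_class_install_property (gobject_class, PROP_LAST_SAMPLE,
      properties[PROP_LAST_SAMPLE]);

  /* later, instead of g_object_notify (obj, "last-sample"), which has
   * to look up the pspec under a global lock: */
  g_object_notify_by_pspec (G_OBJECT (obj), properties[PROP_LAST_SAMPLE]);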
On Windows, colored debug messages and g_print output written
concurrently produce strings that are hard to read. Instead, use the
gst_print* APIs, which serialize the debug output and these calls.
On Windows, concurrent colored GStreamer debug output and ordinary
stdout/stderr output can result in broken text on the terminal.
Since this is OS-specific behaviour it is hard to avoid completely,
but we can at least protect against it within our own printing
interfaces.
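For example (variables illustrative):

  /* instead of g_print (), which interleaves badly with the colored
   * debug log on Windows terminals: */
  gst_println ("processed frame %u, pts %" GST_TIME_FORMAT,
      frame_count, GST_TIME_ARGS (pts));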
For buffers with multiple memory chunks, gst_buffer_map() has the side
effect of merging the memory chunks into one contiguous
chunk. Since gst_util_dump_mem() used gst_buffer_map() the internals
of the buffer could actually change as a result of printing it.
For the case of a buffer containing several memory chunks,
gst_memory_map() is now used to obtain the memory address and each
memory chunk is dumped separately preceded by a header line. The
behaviour for a buffer containing a single memory chunk is left unchanged.
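The per-chunk approach looks roughly like this (a sketch, not the exact
code):

  guint i, n_mem = gst_buffer_n_memory (buf);

  for (i = 0; i < n_mem; i++) {
    GstMemory *mem = gst_buffer_peek_memory (buf, i);
    GstMapInfo info;

    if (gst_memory_map (mem, &info, GST_MAP_READ)) {
      /* one header line per chunk, then the usual hexdump of its data */
      g_print ("memory %u/%u, size %" G_GSIZE_FORMAT "\n", i + 1, n_mem,
          info.size);
      gst_util_dump_mem (info.data, (guint) info.size);
      gst_memory_unmap (mem, &info);
    }
  }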
Otherwise it can happen that we start waiting for another pad while one
pad already has events, and potentially also a buffer, that could be
handled. That buffer would then not be accessible to the subclass from
GstAggregator::get_next_time(), as the events would be queued in front
of it, which prevents the subclass from calculating the next time based
on the already available buffers.
As a side-effect this also allows removing the duplicated event handling
code in the aggregate function as we'll always report pads as not ready
when there is a serialized event or query at the top of at least one
pad's queue.
Fixes https://gitlab.freedesktop.org/gstreamer/gstreamer/issues/428
The install kwarg on configure_file() was only added in
Meson 0.50 but we're targeting older versions as well,
which caused a warning. The install kwarg is not needed
here as we specify install_dir, so we can just drop it.
Fixes #379
The documentation says that this allows the subclass to signal that it
needs more data before it can decide on caps, so let's actually
implement it that way.
If the element before the sink needs $n buffers to produce one output
buffer, we were reffing $n events and unreffing only one.
Prevent this by using g_object_set_qdata_full() to handle the event
unreffing so we're sure no ref will be lost.
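A sketch of the pattern (the quark and object names are illustrative):

  /* the destroy notify releases the stored ref whenever the qdata is
   * replaced or the object is destroyed, so no event ref can leak */
  g_object_set_qdata_full (G_OBJECT (sink), event_quark,
      gst_event_ref (event), (GDestroyNotify) gst_mini_object_unref);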
This was added in 1.16 and accidentally duplicated the value of
the existing GST_MESSAGE_REDIRECT.
As the only known user of this message is GStreamer core itself,
and it is quite an obscure message, it seems best to just fix up
the enum value even if that technically breaks API.
Fixes #418
gst_ring_buffer_logger_log calls several functions while formatting
the message which may in turn log a message while we already hold
the mutex. Do all formatting first before acquiring the mutex to
avoid this and reduce the time we hold the mutex.
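The shape of the change, as a sketch (names illustrative, not the
actual logger code):

  gchar *output;

  /* do all the formatting up front: anything called here may itself
   * log, which would deadlock if we already held the mutex */
  output = g_strdup_vprintf (format, args);

  g_mutex_lock (&logger->lock);
  /* append `output` to the ring buffer */
  g_mutex_unlock (&logger->lock);

  g_free (output);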
The records are static and so appear as false positives when using those
tracers with the leaks tracer as well.
The leaks tracer was already setting this flag on its record so let's
set it on the other ones as well.
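Presumably this amounts to something like the following, assuming the
flag in question is GST_OBJECT_FLAG_MAY_BE_LEAKED (which is what the
leaks tracer checks):

  /* mark the static tracer record so the leaks tracer ignores it */
  GST_OBJECT_FLAG_SET (record, GST_OBJECT_FLAG_MAY_BE_LEAKED);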
In gst_download_buffer_wait_for_data(), when a seek is made with
perform_seek_to_offset() the `qlock` is released temporarily. Therefore,
the flushing condition can be set during this period and should be
checked.
This was not being checked before, causing occasional deadlocks when
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() was called.
GST_DOWNLOAD_BUFFER_WAIT_ADD_CHECK() assumes that the caller has already
checked that we're not flushing before, since this is done when
acquiring the lock; so if we release it temporarily somewhere, we need
to check for flush again.
Without that check, the function would keep waiting for the condition
variable to be notified before checking for flushing condition again,
and that may very well never happen. This was reproduced during pad
deactivation when running WebKit in gdb.
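Generically, the pattern is (names are illustrative and simplified from
the actual element code):

  if (need_seek)
    perform_seek_to_offset (dl, offset);  /* drops and re-takes qlock */

  /* qlock may have been released above, so re-check the flushing flag
   * before waiting on the condition variable again */
  if (dl->flushing)
    goto flushing;

  /* only now is it safe to (continue to) wait for more data */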
The sync=TRUE implementation changes the latency query of a non-live
upstream into a live one, but it wrongly set the upstream max latency to 0.
As non-live sources won't lose data if we wait longer, this should have
been reported as having no max latency limit (-1).
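In terms of the latency query, the result should look like this (a
sketch; min_latency stands for whatever minimum is reported):

  /* live because of sync=TRUE, but with no upper bound: a non-live
   * upstream can always wait longer without losing data */
  gst_query_set_latency (query, TRUE, min_latency, GST_CLOCK_TIME_NONE);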
This is similar to what demuxers do, and necessary when multiple
sinks get seeked downstream of the aggregator: if we forward
duplicated seeks upstream, elements such as demuxers may drop
the flushing seeks, but return TRUE, aggregator then waits forever
for the flushing events.
Fixes #276
And change it to do nothing at all.
As debug categories don't use reference counting and they can be
retrieved from anywhere at any time by name, it is fundamentally unsafe
to free them at any point in time except for right before the end of the
process.
No code apart from a unit test seems to be currently using the function,
so deprecate it and also change it to do nothing at all.
When passing "sink_%d" twice to aggregator before it would create two
pads called "sink_0", because it failed to parse "%d" as integer and
used 0 instead then.
Instead validate that parsing was actually successful and also don't
even try to parse if the requested pad name contains a '%'.
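A sketch of the validation described above (illustrative, not the
exact code):

  guint serial = 0;

  if (name != NULL && strchr (name, '%') == NULL
      && sscanf (name, "sink_%u", &serial) == 1) {
    /* the caller requested a specific serial, use it */
  } else {
    /* templated or unparsable name: fall back to the next free serial */
  }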
We used to display the latency of each element in random order, which is
not very convenient when comparing latency between different runs.
Sort them by "first activity" (the first latency reported for each
element) so the output is consistent between runs.
This is the same logic used when sorting and displaying element stats.
In the hotdoc inspector, for example, pads are instantiated with
g_object_new(); other code paths that get/set properties already make
that check.
And update doc cache