The processing deadline is the acceptable amount of time to process the media
in a live pipeline before it reaches the sink. This is on top of the algorithmic
latency that is normally reported by the latency query. This should make
pipelines such as "v4l2src ! xvimagesink" not claim that all frames are late
in the QoS events. Ideally, this should replace max_lateness for most applications.
https://bugzilla.gnome.org/show_bug.cgi?id=640610
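As an illustration (a minimal sketch, assuming the gst_base_sink_set_processing_deadline()
API this change introduces), an application or subclass could set the deadline like this:

    #include <gst/base/gstbasesink.h>

    /* Sketch: allow 20ms of processing headroom in the pipeline on top of
     * the algorithmic latency reported by the latency query. */
    static void
    configure_processing_deadline (GstBaseSink * sink)
    {
      gst_base_sink_set_processing_deadline (sink, 20 * GST_MSECOND);
    }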
We need all relevant events of a segment to have consistent seqnum:
* GST_EVENT_SEGMENT
* GST_EVENT_EOS
If we are push-based and create a new segment, use the same seqnum
as the upstream event.
If we are pull-based, use the seqnum of that newly created segment
event everywhere
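For example, a sketch of how an element might propagate the seqnum when it
creates its own segment event (the helper name is illustrative):

    #include <gst/gst.h>

    /* Sketch: reuse the upstream event's seqnum on a newly created
     * SEGMENT event so that SEGMENT and EOS stay correlated downstream. */
    static GstEvent *
    make_segment_event (GstEvent * upstream_event, const GstSegment * segment)
    {
      GstEvent *event = gst_event_new_segment (segment);

      gst_event_set_seqnum (event, gst_event_get_seqnum (upstream_event));
      return event;
    }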
strncpy() is assumed to be for strings, so the compiler assumes that
it will need an extra byte for the string-terminating NULL.
For cases where we know it's actually "binary" data, just copy it
with memcpy.
https://bugzilla.gnome.org/show_bug.cgi?id=795756
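A sketch of the pattern (types and field names are illustrative):

    #include <glib.h>
    #include <string.h>

    typedef struct {
      guint8 id[16];   /* binary blob, not a NUL-terminated string */
    } Record;

    static void
    record_set_id (Record * rec, const guint8 * src)
    {
      /* binary data: use memcpy() instead of strncpy(), which implies
       * string semantics and triggers truncation warnings */
      memcpy (rec->id, src, sizeof (rec->id));
    }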
GstBitWriter provides a bit writer that can write any number of
bits into a memory buffer. It provides functions for writing any
number of bits into 8, 16, 32 and 64 bit variables.
https://bugzilla.gnome.org/show_bug.cgi?id=707543
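A small usage sketch, assuming the GstBitWriter API as described
(gst_bit_writer_new(), gst_bit_writer_put_bits_uintN(), and friends):

    #include <gst/base/gstbitwriter.h>

    static void
    write_bitstream_header (void)
    {
      GstBitWriter *bw = gst_bit_writer_new ();

      gst_bit_writer_put_bits_uint8 (bw, 0x2, 3);       /* 3-bit field  */
      gst_bit_writer_put_bits_uint16 (bw, 0x1ff, 9);    /* 9-bit field  */
      gst_bit_writer_put_bits_uint32 (bw, 123456, 20);  /* 20-bit field */

      g_print ("wrote %u bits\n", gst_bit_writer_get_size (bw));

      gst_bit_writer_free (bw);
    }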
Meson supports building both static and shared libraries in a single
library() call. It has the advantage of reusing the same .o objects,
thus avoiding double compilation.
https://bugzilla.gnome.org/show_bug.cgi?id=794627
And make the drop() functions expect a 0-based index too,
this addresses a longstanding FIXME. This will not break
backward compatibility, because the drop() functions
were previously only meant to be used with the index
returned by find().
https://bugzilla.gnome.org/show_bug.cgi?id=795156
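A sketch of the intended usage, assuming the GstQueueArray find()/drop_element() API:

    #include <gst/base/gstqueuearray.h>

    /* Sketch: find() returns a 0-based index (or -1 when not found), and
     * drop_element() now takes that same 0-based index directly. */
    static void
    drop_matching_element (GstQueueArray * array, GCompareFunc func, gpointer data)
    {
      guint idx = gst_queue_array_find (array, func, data);

      if (idx != (guint) -1)
        gst_queue_array_drop_element (array, idx);
    }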
We need different export decorators for the different libs.
For now no actual change though, just rename before the release,
and add prelude headers to define the new decorator to GST_EXPORT.
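For illustration, the per-library prelude header is expected to look roughly
like this (shown for libgstbase; exact contents may differ):

    /* gst/base/base-prelude.h (sketch) */
    #ifndef __GST_BASE_PRELUDE_H__
    #define __GST_BASE_PRELUDE_H__

    #include <gst/gst.h>

    /* For now the new decorator is just an alias for GST_EXPORT; it can be
     * switched to real per-library visibility macros later. */
    #ifndef GST_BASE_API
    #define GST_BASE_API GST_EXPORT
    #endif

    #endif /* __GST_BASE_PRELUDE_H__ */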
Otherwise it's possible that we won't be able to start again
depending on the implementation. We do start/stop in normal use cases
whenever GST_QUERY_SCHEDULING happens before we are started.
https://bugzilla.gnome.org/show_bug.cgi?id=794149
The flushing state is handled a bit differently, there is no need
to stop flushing in start_complete. This would otherwise result in
unlock_stop being called without unlock_start.
Unlike what the old comment says, there is no need to take the live
lock here, we are still single threaded at this point (app thread
or the state change thread). Also, we will wait for playing state
in create/getrange, no need to do that twice.
https://bugzilla.gnome.org/show_bug.cgi?id=794149
The queue gets filled from the tail, so a query will always be the tail
object, not the head object. Also add a _peek_tail_struct() method to the
GstQueueArray to enable looking at the tail.
With unit test to prevent future regression.
https://bugzilla.gnome.org/show_bug.cgi?id=762875
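A sketch of how the new method can be used (the item struct is hypothetical;
the array is assumed to have been created with gst_queue_array_new_for_struct()):

    #include <gst/base/gstqueuearray.h>

    typedef struct {
      gboolean is_query;   /* hypothetical item layout */
      gpointer object;
    } QueueItem;

    static gboolean
    tail_is_query (GstQueueArray * array)
    {
      /* look at the newest (tail) element without dequeuing it */
      QueueItem *tail = gst_queue_array_peek_tail_struct (array);

      return tail != NULL && tail->is_query;
    }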
Position queries with GST_FORMAT_TIME are supposed to return stream
time.
gst_base_sink_get_position() estimates the current stream time on its
own instead of using gst_segment_to_stream_time(), but the algorithm
used was not taking segment.offset into account, resulting in invalid
values when this field was set to a non-zero value.
https://bugzilla.gnome.org/show_bug.cgi?id=792434
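For reference, the canonical conversion uses the GstSegment helper (a sketch;
the actual basesink code keeps its own estimation, now corrected for
segment.offset):

    #include <gst/gst.h>

    static GstClockTime
    position_to_stream_time (const GstSegment * segment, GstClockTime position)
    {
      /* convert a GST_FORMAT_TIME position into stream time using the segment */
      return gst_segment_to_stream_time (segment, GST_FORMAT_TIME, position);
    }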
As we do that for serialized events as well, and the subclass will
most likely need to access pad->segment to make its decisions,
doing that from the sinkpad's streaming threads was racy.
A subclass may want to decide to go passthrough/in-place by inspecting
the supported meta APIs. This patch duplicates the check for this mode,
so we still don't do an unneeded allocation query while allowing subclasses
to switch the behaviour during their own decide_allocation call.
Notice that such a subclass needs to reset the class to non-passthrough in
set_caps() in order for decide_allocation to be called again. This is
needed because otherwise we'd be doing an allocation query in elements where
it makes no sense (notably capsfilter).
https://bugzilla.gnome.org/show_bug.cgi?id=791453
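A rough sketch of what such a subclass might look like (the element name,
MY_META_API_TYPE and the parent_class chain-up are placeholders, not part of
this change):

    #include <gst/base/gstbasetransform.h>

    static gboolean
    my_filter_decide_allocation (GstBaseTransform * trans, GstQuery * query)
    {
      /* go passthrough if downstream supports the meta we need
       * (MY_META_API_TYPE is a placeholder GType) */
      if (gst_query_find_allocation_meta (query, MY_META_API_TYPE, NULL))
        gst_base_transform_set_passthrough (trans, TRUE);

      /* chain up to the parent implementation (parent_class from G_DEFINE_TYPE) */
      return GST_BASE_TRANSFORM_CLASS (parent_class)->decide_allocation (trans, query);
    }

    static gboolean
    my_filter_set_caps (GstBaseTransform * trans, GstCaps * incaps, GstCaps * outcaps)
    {
      /* reset to non-passthrough so decide_allocation() runs again for new caps */
      gst_base_transform_set_passthrough (trans, FALSE);
      return TRUE;
    }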
Add a gst_base_src_submit_buffer_list() function that allows subclasses
to produce a bufferlist containing multiple buffers in the ::create()
function. The buffers in the buffer list will then also be pushed out
in one go as a GstBufferList. This can reduce push overhead
significantly for sources with packetised inputs (such as udpsrc)
in high-throughput scenarios.
The _submit_buffer_list() approach was chosen because it is fairly
straightforward, backwards-compatible, bindings-friendly (as opposed
to e.g. making the create function return a mini object instead),
and it allows the subclass maximum control: the subclass can decide
dynamically at runtime whether to return a list or a single buffer
(which would be messier if we added a create_list virtual method).
https://bugzilla.gnome.org/show_bug.cgi?id=750241
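A sketch of a ::create() implementation using it (element and helper details
are illustrative):

    #include <gst/base/gstbasesrc.h>

    static GstFlowReturn
    my_src_create (GstBaseSrc * src, guint64 offset, guint size, GstBuffer ** buf)
    {
      GstBufferList *list = gst_buffer_list_new ();

      /* ... append all currently available packets as buffers to the list ... */

      /* hand the whole list to basesrc; it will be pushed downstream as one
       * GstBufferList, and nothing is returned through @buf */
      gst_base_src_submit_buffer_list (src, list);

      return GST_FLOW_OK;
    }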
Convenience function to just grab all pending data
from the harness, e.g. if we just want to check if
it matches what we expect and we don't care about
the chunking or buffer metadata.
Based on patch by: Havard Graff <havard.graff@gmail.com>
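A sketch of what a test could do with it (assuming the
gst_harness_take_all_data_as_buffer() variant):

    #include <gst/check/gstharness.h>

    static void
    check_total_output (GstHarness * h, const guint8 * expected, gsize expected_size)
    {
      /* grab everything pushed so far as one buffer, ignoring chunking */
      GstBuffer *all = gst_harness_take_all_data_as_buffer (h);

      g_assert_cmpuint (gst_buffer_get_size (all), ==, expected_size);
      g_assert_cmpint (gst_buffer_memcmp (all, 0, expected, expected_size), ==, 0);
      gst_buffer_unref (all);
    }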
If we're adding to the tail of the queue, it's because we're converting
a gap event, so don't block there: it means we're calling from the output
thread.
https://bugzilla.gnome.org/show_bug.cgi?id=784911
Add a comment for when the state matters. Use a local var for priv in
update_time_level() to improve readability. Move the our_latency local
var below the query results checks.
We want to skip serialization for FLUSH_STOP events (apparently). We can
simplify the code to add it to the top-level conditions. There was nothing
done in the first code path if the event was FLUSH_STOP.
Just queue it like any other serialized event. This way we don't need to
check if there still are buffers in the queue.
Validated with the tests and gst-launch-1.0 pipelines.
Don't copy the whole event struct. Set the input params when we call the
forwarding helper. Initialize the internal fields and return values in the
helper.
Otherwise check_events() will not remove the GAP event (as the queue
tail is not the event anymore but the GAP buffer), then the GAP buffer
is handled, then the GAP event is handled again, ... forever.
This ensures that they really get processed in order with
buffers. Just waiting for the queue to be empty is sometimes not
enough as the buffers are dropped from the pad before the result is
pushed to the next element, sometimes resulting in surprising
re-ordering.
In the case where an aggregator is created and pads are requested but only
linked later, we end up never updating the upstream latency.
This was because latency queries on pads that are not linked succeed,
so we never issued a new query once a live source had been linked, and the
thread was never started.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
The function needs to be unlocked if any data is received, but it should only
end the first-buffer processing on an actual buffer; synchronized events
don't matter for the first-buffer processing.
https://bugzilla.gnome.org/show_bug.cgi?id=781673
Allowing us to tell GstPad why we are failing an event, which might
be because we are 'flushing' even if the sinkpad is not in flush state
at that point.
Until now we would start the task when the pad is activated. Part of the
activation consists of testing if the pipeline is live or not.
Unfortunately, this is often too soon, as it's likely that the pad gets
activated before it is fully linked in a dynamic pipeline.
Instead, start the task when the first serialized event arrives. This is
a safe moment as we know that the upstream chain is complete and, just
like during pad activation, the pads are locked, hence cannot change.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
This fixes a race where we check if there is a clock, then it gets
removed and we end up calling gst_clock_new_single_shot_id() with a NULL
pointer instead of a valid clock, and also calling gst_object_unref()
with a NULL pointer later.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
Previously, while allocating the pad number for a new pad, aggregator was
maintaining an interesting relationship between the pad count and the pad
number.
If you requested a sink pad called "sink_6", padcount (which is badly named and
actually means number-of-pads-minus-one) would be set to 6. Which means that if
you then requested a sink pad called "sink_0", it would be assigned the name
"sink_6" again, which fails the non-uniqueness test inside gstelement.c.
This can be fixed by instead setting padcount to be 7 in that case, but this
breaks manual management of pad names by the application since it then becomes
impossible to request a pad called "sink_2". Instead, we fix this by always
directly using the requested name as the sink pad name. Uniqueness of the pad
name is tested separately inside gstreamer core. If no name is requested, we use
the next available pad number.
Note that this is important since the sinkpad numbering in aggregator is not
meaningless. Videoaggregator uses it to decide the Z-order of video frames.
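A sketch of what this enables from an application ("mixer" stands for any
aggregator-based element with sink_%u request pads, e.g. compositor):

    #include <gst/gst.h>

    static void
    request_named_pads (GstElement * mixer)
    {
      /* explicitly requested names are now used as-is ... */
      GstPad *p0 = gst_element_get_request_pad (mixer, "sink_0");
      GstPad *p2 = gst_element_get_request_pad (mixer, "sink_2"); /* keeps this exact name */
      /* ... while an unspecified request still picks the next free number */
      GstPad *pn = gst_element_get_request_pad (mixer, "sink_%u");

      gst_object_unref (p0);
      gst_object_unref (p2);
      gst_object_unref (pn);
    }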
This code will never be called, as max >= min in all cases. If the upstream
latency query returned min > max, the function already returned, and all
values that are added afterwards keep max >= min.
Not all aggregator subclasses will have a single pad template called sink_%u
and might do something special depending on what the application requests.
https://bugzilla.gnome.org/show_bug.cgi?id=757018
Otherwise they will receive a QOS event that has earliest_time=0 (because we
can't have negative timestamps), and consider their buffer as too late
https://bugzilla.gnome.org/show_bug.cgi?id=754356
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer for the pad. If the
previous element is not fast enough, it may not get the next buffer even
though it was queued just before. To prevent that race, the easiest
solution is to move the queue inside the GstAggregatorPad itself. It
also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Before, aggregator-based elements always started at running time 0;
now it's possible to select the first input buffer's running time or
to explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
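A usage sketch, assuming the "start-time-selection" and "start-time"
properties added on GstAggregator-based elements:

    #include <gst/gst.h>

    static void
    configure_start_time (GstElement * agg)
    {
      /* start at the running time of the first input buffer ... */
      gst_util_set_object_arg (G_OBJECT (agg), "start-time-selection", "first");

      /* ... or pin an explicit start time */
      gst_util_set_object_arg (G_OBJECT (agg), "start-time-selection", "set");
      g_object_set (agg, "start-time", (guint64) (5 * GST_SECOND), NULL);
    }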
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the last
live upstream, in which case we can stop deadline based aggregation.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
One has to use the src_lock anyway to protect the min/max/live so they
can be notified atomically to the src thread to wake it up on changes,
such as property changes. So no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
It was between another function and its helper function before, which was
confusing when reading the code as it had nothing to do with the other
functions.
This lock is not what is commonly known as a "stream lock" in GStreamer:
it's not recursive and it's taken from the non-serialized FLUSH_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
steal_buffer() + unref seems to be a widespread idiom
(which perhaps indicates that something is not quite
right with the way aggregator pad works currently).
Instead of using the GST_OBJECT_LOCK we should have
a dedicated mutex for the pad, as it is also associated
with the EVENT_MUTEX on which we wait
in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Some members sometimes used atomic access and sometimes were not locked at
all. Instead, consistently use a mutex to protect them, and also document
that.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks to simplify the code; what the lock protects
is exposed, but the lock itself was not.
Also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Whenever a GCond is used, the safest paradigm is to protect
the variable whose change is signalled by the GCond with the same
mutex that the GCond depends on.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
No need to use an iterator for this which creates a temporary
structure every time and also involves taking and releasing the
object lock many times in the course of iterating. Not to mention
all that GList handling in gst_aggregator_iterate_sinkpads().
The minimum latency is the latency we have to wait at least
to guarantee that all upstreams have produced data. The maximum
latency has no meaning like that and shouldn't be used for waiting.
When iterating sink pads to collect some data, we should take the stream lock so
we don't get stale data and possibly deadlock because of that. This fixes
a definitive deadlock in _wait_and_check() that manifests with high max
latencies in a live pipeline, and fixes other possible race conditions.
This simplifies the code and also makes sure that we don't forget to check all
conditions for waiting.
Also fix a potential deadlock caused by not checking if we're actually still
running before starting to wait.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
This removes the uses of GAsyncQueue and replaces it with explicit
GMutex, GCond and wakeup count which is used for the non-live case.
For live pipelines, the aggregator waits on the clock until either
data arrives on all sink pads or the expected output buffer time
arrives plus the timeout/latency, at which time the subclass
produces a buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=741146
Otherwise the caps of the pad might change while the subclass still works with
a buffer of the old caps, assuming that the current pad caps apply to that
buffer. Which then leads to crashes and other nice effects.
https://bugzilla.gnome.org/show_bug.cgi?id=740376
The audiomixer blocksize can't be 0, hence the minimum value is adjusted to 1.
The timeout value of aggregator is defined with a maximum of MAXINT64,
but it cannot exceed G_MAXLONG * GST_SECOND - 1,
hence the maximum value is changed accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738845
Determines the amount of time that a pad will wait for a buffer before
being marked unresponsive.
Network sources may fail to produce buffers for an extended period of time,
currently causing the pipeline to stall possibly indefinitely, waiting for
these buffers to appear.
Subclasses should render unresponsive pads with either silence (audio), the
last (video) frame or what makes the most sense in the given context.
The previous implementation kept accumulating GSources,
slowing down the iteration and leaking memory.
Instead of trying to fix the main context flushing, replace
it with a GAsyncQueue which is simple to flush and has
less overhead.
https://bugzilla.gnome.org/show_bug.cgi?id=736782