This code will never be called, as max >= min in all cases. If the upstream
latency query returned min > max, the function has already returned, and all
values that are added afterwards keep max >= min.
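For illustration, a minimal sketch of the shape of that code path, with
hypothetical variable and field names rather than the actual gstaggregator.c
source:
  gboolean live;
  GstClockTime min, max;

  gst_query_parse_latency (query, &live, &min, &max);
  if (min > max) {
    GST_ELEMENT_WARNING (self, CORE, CLOCK, (NULL),
        ("Impossible latency: min > max"));
    return FALSE;                     /* we already bail out here ... */
  }

  /* ... so adding the same latency to both keeps min <= max afterwards */
  min += self->priv->latency;         /* self->priv->latency: hypothetical */
  if (max != GST_CLOCK_TIME_NONE)
    max += self->priv->latency;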
Not all aggregator subclasses will have a single pad template called sink_%u
and might do something special depending on what the application requests.
https://bugzilla.gnome.org/show_bug.cgi?id=757018
Otherwise they will receive a QOS event that has earliest_time=0 (because we
can't have negative timestamps), and consider their buffer as too late
https://bugzilla.gnome.org/show_bug.cgi?id=754356
In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer for the pad. If the
previous element is not fast enough, it may not get the next buffer in time,
even though that buffer may already be queued just upstream. To prevent that
race, the easiest solution is to move the queue inside the GstAggregatorPad
itself. It also means that there is no need for strange code caused by increasing
the min latency without increasing the max latency proportionally.
This also means queuing the synchronized events and possibly acting
on them on the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
Before, aggregator-based elements always started at running time 0;
now it's possible to select the first input buffer's running time or
explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
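For illustration, assuming an aggregator-based element such as audiomixer,
the new behaviour can be selected roughly like this (a sketch; the enum
values follow the start-time-selection property: 0 = zero, 1 = first,
2 = set):
  /* start at the running time of the first input buffer */
  g_object_set (mixer, "start-time-selection", 1 /* first */, NULL);

  /* or start at an explicitly chosen running time (nanoseconds) */
  g_object_set (mixer,
      "start-time-selection", 2 /* set */,
      "start-time", (guint64) (5 * GST_SECOND), NULL);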
Adding a pad will add a new upstream that might have a bigger minimum latency,
so we might have to wait longer. Or it might be the first live upstream, in
which case we will have to start deadline based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might be the last
live upstream, in which case we can stop deadline based aggregation.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called
with a private lock, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live as
we think that 0 upstream latency is possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
One has to use the src_lock anyway to protect the min/max/live so they
can be notified atomically to the src thread to wake it up on changes,
such as property changes. So no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
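A sketch of the resync-aware iteration this implies (the per-pad
accumulation is hypothetical, not the actual gstaggregator.c source):
  GstIterator *it = gst_element_iterate_sink_pads (GST_ELEMENT (self));
  GValue item = G_VALUE_INIT;
  GstClockTime min = 0, max = GST_CLOCK_TIME_NONE;
  gboolean live = FALSE, done = FALSE;

  while (!done) {
    switch (gst_iterator_next (it, &item)) {
      case GST_ITERATOR_OK:{
        GstPad *pad = g_value_get_object (&item);
        /* query pad's latency and fold the result into min/max/live ... */
        g_value_reset (&item);
        break;
      }
      case GST_ITERATOR_RESYNC:
        /* a pad was added or removed: forget all previous results */
        min = 0;
        max = GST_CLOCK_TIME_NONE;
        live = FALSE;
        gst_iterator_resync (it);
        break;
      case GST_ITERATOR_ERROR:
      case GST_ITERATOR_DONE:
        done = TRUE;
        break;
    }
  }
  g_value_unset (&item);
  gst_iterator_free (it);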
It was between another function and its helper function before, which was
confusing when reading the code as it had nothing to do with the other
functions.
This lock is not what is commonly known as a "stream lock" in GStreamer,
it's not recursive and it's taken from the non-serialized FLUSH_START event.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
steal_buffer() + unref seems to be a wide-spread idiom
(which perhaps indicates that something is not quite
right with the way aggregator pad works currently).
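For reference, the idiom in question looks like this in subclasses (a
sketch, not taken from any particular element):
  /* take the queued buffer from the pad only to discard it */
  GstBuffer *buf = gst_aggregator_pad_steal_buffer (pad);

  if (buf)
    gst_buffer_unref (buf);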
Instead of using the GST_OBJECT_LOCK we should have a dedicated
mutex for the pad, as it is also associated with the EVENT_MUTEX
on which we wait in the _chain function of the pad.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Some members sometimes used atomic access, sometimes were not locked at
all. Instead consistently use a mutex to protect them, also document
that.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Reduce the number of locks and simplify the code; what it protects
is exposed, but the lock itself was not.
Also means adding an _unlocked version of gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
Whenever a GCond is used, the safest paradigm is to protect
the variable whose change is signalled by the GCond with the same
mutex that the GCond depends on.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
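A minimal sketch of that paradigm with GLib primitives (the field names
are hypothetical):
  /* writer: change the protected variable and signal under the same mutex */
  g_mutex_lock (&self->lock);
  self->flushing = TRUE;
  g_cond_broadcast (&self->cond);
  g_mutex_unlock (&self->lock);

  /* waiter: re-check the variable in a loop while holding that same mutex */
  g_mutex_lock (&self->lock);
  while (!self->flushing)
    g_cond_wait (&self->cond, &self->lock);
  g_mutex_unlock (&self->lock);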
No need to use an iterator for this which creates a temporary
structure every time and also involves taking and releasing the
object lock many times in the course of iterating. Not to mention
all that GList handling in gst_aggregator_iterate_sinkpads().
The minimum latency is how long we have to wait, at minimum,
to guarantee that all upstreams have produced data. The maximum
latency has no such meaning and shouldn't be used for waiting.
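For illustration, a deadline-based wait built on the minimum latency could
look like this (hypothetical variables, not the actual aggregator code):
  /* after this wait all upstreams are guaranteed to have produced data */
  GstClockTime base_time = gst_element_get_base_time (GST_ELEMENT (self));
  GstClockTime deadline = base_time + start_running_time + min_latency;
  GstClockID id = gst_clock_new_single_shot_id (clock, deadline);

  gst_clock_id_wait (id, NULL);
  gst_clock_id_unref (id);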
When iterating sink pads to collect some data, we should take the stream lock so
we don't get stale data and possibly deadlock because of that. This fixes
a definitive deadlock in _wait_and_check() that manifests with high max
latencies in a live pipeline, and fixes other possible race conditions.
This simplifies the code and also makes sure that we don't forget to check all
conditions for waiting.
Also fix a potential deadlock caused by not checking if we're actually still
running before starting to wait.
When this is TRUE, we really have to produce output. This happens
in live mixing mode when we have to output something for the current
time, no matter if we have enough input or not.
This removes the uses of GAsyncQueue and replaces it with explicit
GMutex, GCond and wakeup count which is used for the non-live case.
For live pipelines, the aggregator waits on the clock until either
data arrives on all sink pads or the expected output buffer time
arrives plus the timeout/latency, at which time the subclass
produces a buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=741146
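A rough sketch of the wakeup-count idea for the non-live case, with
hypothetical field names:
  /* anything that requires the src task to re-check state bumps the count */
  g_mutex_lock (&self->src_lock);
  self->wakeup_count++;
  g_cond_broadcast (&self->src_cond);
  g_mutex_unlock (&self->src_lock);

  /* src task: the counter makes wakeups that arrive before the wait stick */
  g_mutex_lock (&self->src_lock);
  while (self->wakeup_count == 0)
    g_cond_wait (&self->src_cond, &self->src_lock);
  self->wakeup_count = 0;
  g_mutex_unlock (&self->src_lock);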
Otherwise the caps of the pad might change while the subclass still works with
a buffer of the old caps, assuming that the current pad caps apply to that
buffer. Which then leads to crashes and other nice effects.
https://bugzilla.gnome.org/show_bug.cgi?id=740376
The audiomixer blocksize can't be 0, hence adjust the minimum value to 1.
The timeout value of aggregator is defined with a maximum of MAXINT64,
but it cannot exceed G_MAXLONG * GST_SECOND - 1,
hence change the maximum value accordingly.
https://bugzilla.gnome.org/show_bug.cgi?id=738845
Determines the amount of time that a pad will wait for a buffer before
being marked unresponsive.
Network sources may fail to produce buffers for an extended period of time,
currently causing the pipeline to stall possibly indefinitely, waiting for
these buffers to appear.
Subclasses should render unresponsive pads with either silence (audio), the
last (video) frame or what makes the most sense in the given context.
The previous implementation kept accumulating GSources,
slowing down the iteration and leaking memory.
Instead of trying to fix the main context flushing, replace
it with a GAsyncQueue which is simple to flush and has
less overhead.
https://bugzilla.gnome.org/show_bug.cgi?id=736782
Without a lock that is taken in FLUSH_START we had a rare race where we
end up aggregating a buffer that was before the whole FLUSH_START/STOP
dance. That could lead to very wrong behaviour in subclasses.
Avoid being in an inconsistent state where we do not have the
actually negotiated caps set as srccaps, which could lead to a point
where we try to unref ->srccaps when they have already been set to NULL.
https://bugzilla.gnome.org/show_bug.cgi?id=735042
Along with the required mandatory dependent events.
Some elements need to perform an allocation query inside
::negotiated_caps(). Without the caps event being sent prior,
downstream elements will be unable to answer and will return
an error.
https://bugzilla.gnome.org/show_bug.cgi?id=732662
We would constantly re-post the taglist because
posted_avg_rate only gets set to avg_bitrate if
parse->priv->post_avg_bitrate is true, so if it's
false the posted rate will always differ from the
current average rate and we'd queue an update,
which leads to us spamming downstream and the
application with taglist updates.
Fix this by only queuing an update if the average
rate will actually be posted.
These taglists updates could cause expensive
operations on the application side, e.g. in Totem.
https://bugzilla.gnome.org/show_bug.cgi?id=786561
buffer is not unreferenced if preroll failed
:Detailed Notes:
- Problem : video freeze when switching from pause to 1/2-FF repeatedly
- RootCause : buffer leaks in basesink
- Solution : unref the buffer if preroll failed
:Testing Performed:
How to Test :
pause -> 1/2 FF -> resume -> pause -> 1/2 FF ...
https://bugzilla.gnome.org/show_bug.cgi?id=784932
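A sketch of the shape of the fix (not the exact gstbasesink.c code):
  ret = gst_base_sink_do_preroll (sink, GST_MINI_OBJECT_CAST (buf));
  if (ret != GST_FLOW_OK) {
    gst_buffer_unref (buf);     /* previously leaked when preroll failed */
    goto done;
  }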
Holding this lock on a live source prevents the source from changing
the caps in ::create() without risking a deadlock. This has consequences,
as the LIVE_LOCK was replacing the STREAM_LOCK in many situations. As a
side effect:
- We no longer need to unlock when doing play/pause as the LIVE_LOCK
isn't held. We then let the create() call finish, but will block if
the state has changed meanwhile. This has the benefit that
wait_preroll() calls in subclasses are no longer needed.
- We no longer need to change the state to unlock, simplifying the
set_flushing() interface
- We need different handling for EOS depending on whether we are in push or
pull mode.
This patch also documents the locking of each private class member and
the locking order.
https://bugzilla.gnome.org/show_bug.cgi?id=783301
This is something bindings can't handle and it causes leaks. Instead
move the ref_sink() to the explicit, new() constructors.
This means that abstract classes, and anything that can have subclasses,
will have to do ref_sink() in their new() function now. Specifically
this affects GstClock and GstControlSource.
https://bugzilla.gnome.org/show_bug.cgi?id=743062
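For illustration, a constructor following this rule now looks roughly like
this (hypothetical type):
  GstClock *
  gst_my_clock_new (const gchar * name)
  {
    GstClock *clock = g_object_new (GST_TYPE_MY_CLOCK, "name", name, NULL);

    /* clear the floating reference here instead of in the GObject machinery */
    gst_object_ref_sink (clock);

    return clock;
  }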
An untested pointer segfaulted in webkit while playing video
on imx6 sabrelite. It turned out that the imx plugin didn't
implement the meta transform function.
The following GST_DEBUG trace was visible:
gstbasetransform.c:1779:foreach_metadata:<conv2> copy metadata
GstImxVpuBufferMetaAPI
Thread 26 vqueue:src received signal SIGSEGV, Segmentation fault.
(gdb) bt
0x00000000 in ?? ()
0x73f8d7d8 in foreach_metadata (inbuf=0xc9b020, meta=0x474b2490,
user_data=<optimized out>) at gstbasetransform.c:1781
0x73eb3ea8 in gst_buffer_foreach_meta (buffer=buffer@entry=0xc9b020,
func=0x73f8d705 <foreach_metadata>,
user_data=user_data@entry=0x474b24d4)
at gstbuffer.c:2234
https://bugzilla.gnome.org/show_bug.cgi?id=782050
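For reference, the missing piece on the plugin side is a
GstMetaTransformFunction roughly like this (illustrative GstMyMeta names,
not the imx code); it is passed as the transform_func argument to
gst_meta_register():
  static gboolean
  gst_my_meta_transform (GstBuffer * dest, GstMeta * meta,
      GstBuffer * buffer, GQuark type, gpointer data)
  {
    if (GST_META_TRANSFORM_IS_COPY (type)) {
      GstMyMeta *smeta = (GstMyMeta *) meta;
      GstMyMeta *dmeta = (GstMyMeta *)
          gst_buffer_add_meta (dest, GST_MY_META_INFO, NULL);

      if (!dmeta)
        return FALSE;
      dmeta->some_field = smeta->some_field;    /* copy the payload */
      return TRUE;
    }
    return FALSE;               /* transform type not supported */
  }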
This unbalanced closing parenthesis is leftover from the commit
8b739d91e7. It used to wrap the caps but we don't seem to do that in
the current code.
So, just remove it. No functionality has been changed.
https://bugzilla.gnome.org/show_bug.cgi?id=781484
Use g_object_new() instead which nowadays has a shortcut for the
no-properties check. It still does an extra GType check in the
function guard, but there's a pending patch to remove that
and it's hardly going to be a performance issue in practice,
even less so on a system that's compiled without run-time checks.
Alternative would be to move to the new g_object_new_properties()
with a fallback define for older glib versions, but it makes the
code look more unwieldy and doesn't seem worth it.
Fixes deprecation warnings when building against newer GLib versions.
https://bugzilla.gnome.org/show_bug.cgi?id=780903
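I.e. the change is essentially this (sketch of a hypothetical call site):
  /* before: deprecated in newer GLib */
  element = g_object_newv (object_type, 0, NULL);

  /* after */
  element = g_object_new (object_type, NULL);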
If parsing returns a non-OK flow return in the middle
of processing an input buffer, don't overwrite that
if a later return is OK again - the subclass might
return not-linked in the middle, and then discard
subsequent data without pushing while returning OK.
A later success doesn't invalidate the earlier failure,
but we should continue processing after not-linked, so
as to keep parse state consistent.
https://bugzilla.gnome.org/show_bug.cgi?id=779831
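A sketch of the rule described above (not the exact gstbaseparse.c code):
  GstFlowReturn res = gst_base_parse_push_frame (parse, frame);

  /* only let the first non-OK result stick */
  if (ret == GST_FLOW_OK)
    ret = res;

  /* but keep consuming input after NOT_LINKED so parse state stays
   * consistent; only hard errors stop processing this buffer */
  if (res != GST_FLOW_OK && res != GST_FLOW_NOT_LINKED)
    goto pause;                 /* "pause" label is hypothetical */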
We would add the offset a second time in _scan_for_start_code()
when we found a result, but it's already been added to the data
pointer at the beginning of _masked_scan_uint32_peek(), so the
peeked value would be wrong if the initial offset was >0, and
we would potentially read memory out-of-bounds.
Add unit test for all of this.
https://bugzilla.gnome.org/show_bug.cgi?id=778365
Otherwise when seeking/looping to the start when reaching the end,
the sink waits for the duration of the stream. So the user hears
nothing for the duration of the stream before it actually loops again.
See example attached to the bug for that.
Existing test:
gst-plugins-good/tests/icles/test-segment-seeks foo.flac
Without the patch the user hears a crack/cut at each seek.
https://bugzilla.gnome.org/show_bug.cgi?id=777780
This was totally non-obvious; the big problem is that subclasses must
be able to unblock their streaming thread and continue exactly where they
left off on unpause!
https://bugzilla.gnome.org/show_bug.cgi?id=773912
It might've failed just because of flushing or other things, and we
should retry again on the next possibility if something ever calls in
here again.
https://bugzilla.gnome.org/show_bug.cgi?id=774623
Check the correct segment format value.
parse->segment.format is the format we're outputting in,
not the upstream format. Use parse->priv->upstream_format instead,
and make sure it's set in pull mode.
If the parser is not parsing a raw elementary stream, restrict
the position, duration and conversion query replies to
things we can sensibly answer about - especially don't do
random conversions to/from bytes.
This is cosmetic as 'late' should never be set during preroll (in pause).
Though code may evolve in the future, so this is good for preventing
potential bugs.
https://bugzilla.gnome.org/show_bug.cgi?id=772468
When the first buffer arrives, we end up calling:
->prepare()
->prepare()
->preroll()
->render()
This will likely confuse any element using this method. With this patch,
we ensure the preroll takes place before the prepare() of the first
rendered buffer is called. This will result in:
->prepare()
->preroll()
->prepare()
->render()
https://bugzilla.gnome.org/show_bug.cgi?id=772468
This reverts commit 2e278aeb71.
Some parsers, specifically audio parsers, assume they will get all remaining
data on EOS and just pass it onwards. While the idea here is correct,
we will probably need a property for this on baseparse for parsers to
opt-in.
https://bugzilla.gnome.org/show_bug.cgi?id=773666
Implement handling in basesink to not unconditionally discard
out-of-segment buffers and expose it as a new property on fakesink
(not unconditionally in all basesink based sinks).
The property defaults to FALSE.
https://bugzilla.gnome.org/show_bug.cgi?id=765734
baseparse would pass whatever is left in the adapter to the
subclass when draining, even if it's less than the minimum
frame size required. This is bogus, baseparse should just
discard that data then. The original intention of that code
seems to have been that if we have more data available than
the minimum required we should pass all of the data available
and not just the minimum required, which does make sense, so
we'll continue to do that in the case that more data is available.
Fixes assertions in rawvideoparse on EOS after not-negotiated with
fakesrc sizetype=random ! queue ! rawvideoparse format=rgb ! appsink caps=video/x-raw,format=I420
https://bugzilla.gnome.org/show_bug.cgi?id=773666
The durations of the buffers are (usually) assuming that no frames are being
dropped and are just the durations coming from the stream. However if we do
trickmodes, frames are being dropped regularly especially if only key units
are supposed to be played.
Fixes completely bogus QoS proportion values in the above case.