In the case where you have a source giving the GstAggregator smaller
buffers than it uses, when it reaches a timeout, it will consume the
first buffer, then try to read another buffer from the pad. If the
previous element is not fast enough, the aggregator may not get the
next buffer even though it is queued just moments later. To prevent
that race, the easiest solution is to move the queue inside the
GstAggregatorPad itself. It also means that there is no need for
strange code caused by increasing the min latency without increasing
the max latency proportionally.
This also means queuing the serialized events and possibly acting
on them in the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
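
A minimal sketch of the idea, assuming a hypothetical pad type
(MyAggPad) and chain function; the real GstAggregatorPad code is more
involved:

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;       /* initialized elsewhere */
      GCond  cond;
      GQueue buffers;    /* queued GstBuffer *, g_queue_init()ed */
    } MyAggPad;

    static GstFlowReturn
    my_agg_pad_chain (MyAggPad *pad, GstBuffer *buf)
    {
      g_mutex_lock (&pad->lock);
      /* The buffer lands in the pad's own queue before chain returns,
       * so the src task either finds it queued or it genuinely has
       * not arrived yet; there is no in-flight window to race on. */
      g_queue_push_tail (&pad->buffers, buf);
      g_cond_signal (&pad->cond);
      g_mutex_unlock (&pad->lock);
      return GST_FLOW_OK;
    }
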
We need to sync the pad values before taking the aggregator and pad
locks, otherwise the element will just deadlock if there are any
property changes scheduled via GstController, since applying those
involves taking the aggregator and pad locks.
Also add a test for this.
https://bugzilla.gnome.org/show_bug.cgi?id=749574
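
A minimal sketch of the required ordering, with a hypothetical
my_mixer_aggregate function:

    #include <gst/gst.h>

    static GstFlowReturn
    my_mixer_aggregate (GstElement *mixer, GstClockTime out_ts)
    {
      /* Sync controlled properties first: GstController ends up
       * taking the aggregator and pad locks, so calling this while
       * already holding them would deadlock. */
      gst_object_sync_values (GST_OBJECT (mixer), out_ts);

      GST_OBJECT_LOCK (mixer);
      /* ... mix with the locks held, no controller calls here ... */
      GST_OBJECT_UNLOCK (mixer);

      return GST_FLOW_OK;
    }
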
Also:
- Don't modify the size on an early buffer
The size is the size of the buffer, not of the remaining part.
- Use the input caps when manipulating the input buffer
Also store them in the sink pad
- Reply to the position query in bytes too
- Put GAP flag on output if all inputs are GAP data (see the sketch
after this list)
- Only try to clip buffer if the incoming segment is in time or samples
- Use incoming segment with incoming timestamp
Handle non-time segments and NONE timestamps
- Don't reset the position when pushing out new caps
- Make a number of member variables private
- Correctly handle case where no pad has a buffer
If none of the pads have buffers that can be handled, don't claim to be EOS.
- Ensure proper locking
- Only support time segments
https://bugzilla.gnome.org/show_bug.cgi?id=740236
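
For the GAP point above, a minimal sketch with a hypothetical helper;
the output is only advertised as GAP if every input contributed GAP
data:

    #include <gst/gst.h>

    static void
    my_mixer_maybe_mark_gap (GstBuffer *outbuf, GList *inbufs)
    {
      GList *l;
      gboolean all_gap = (inbufs != NULL);

      for (l = inbufs; l != NULL; l = l->next) {
        if (!GST_BUFFER_FLAG_IS_SET (GST_BUFFER (l->data),
                GST_BUFFER_FLAG_GAP)) {
          all_gap = FALSE;
          break;
        }
      }

      if (all_gap)
        GST_BUFFER_FLAG_SET (outbuf, GST_BUFFER_FLAG_GAP);
    }
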
When the timeout is reached, only ignore pads with no buffers;
iterate over the other pads until all buffers have been read. This is
important in cases where the input buffers are smaller than the
output buffer.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
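
A minimal sketch of that loop, with hypothetical pad state and a
consume callback standing in for the real mixing code:

    #include <gst/gst.h>

    typedef struct {
      GstBuffer *buffer;   /* next queued buffer, NULL on timeout */
    } MyPadState;

    static void
    my_mixer_drain (GList *pads, gboolean (*consume) (MyPadState *))
    {
      gboolean got_data;

      do {
        GList *l;

        got_data = FALSE;
        for (l = pads; l != NULL; l = l->next) {
          MyPadState *p = l->data;

          if (p->buffer == NULL)
            continue;          /* timed out with no data: ignore */
          if (consume (p))     /* copies a chunk into the output */
            got_data = TRUE;
        }
      } while (got_data);      /* loop until no pad has data left */
    }
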
Actually accumulate the sample counter to check the accumulated error
between actual timestamps and expected ones instead of just resetting
the error back to 0 with every new buffer.
Also don't reset discont_time whenever we don't resync. The whole point of
discont_time is to remember when we first detected a discont until we actually
act on it a bit later if the discont stayed around for discont_wait time.
https://bugzilla.gnome.org/show_bug.cgi?id=746032
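
A minimal sketch of the intended discont_time handling, with
hypothetical names; it is only cleared when we actually resync or
when the discont goes away:

    #include <gst/gst.h>

    typedef struct {
      GstClockTime discont_time;  /* when the discont was first seen */
    } MyResyncState;

    static gboolean
    my_check_resync (MyResyncState *s, gboolean discont,
        GstClockTime now, GstClockTime discont_wait)
    {
      if (!discont) {
        s->discont_time = GST_CLOCK_TIME_NONE;   /* discont is gone */
        return FALSE;
      }

      if (!GST_CLOCK_TIME_IS_VALID (s->discont_time)) {
        s->discont_time = now;                   /* first detection */
        return FALSE;
      }

      if (now - s->discont_time >= discont_wait) {
        s->discont_time = GST_CLOCK_TIME_NONE;   /* act on it now */
        return TRUE;                             /* resync */
      }

      return FALSE;   /* still waiting: keep discont_time as-is */
    }
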
This allows us to handle new segment events correctly, either by
dropping buffers or inserting silence, for example if the offset is
changed on a srcpad connected to audiomixer.
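
A minimal sketch of the decision, with hypothetical names; the
buffer's running time in the new segment is compared against the
current output position:

    #include <gst/gst.h>

    static void
    my_mixer_align (const GstSegment *segment, GstBuffer *buf,
        GstClockTime out_pos)
    {
      GstClockTime rt;

      rt = gst_segment_to_running_time (segment, GST_FORMAT_TIME,
          GST_BUFFER_PTS (buf));
      if (!GST_CLOCK_TIME_IS_VALID (rt))
        return;   /* non-time segment or NONE timestamp */

      if (rt < out_pos)
        GST_DEBUG ("drop %" GST_TIME_FORMAT " from the buffer head",
            GST_TIME_ARGS (out_pos - rt));
      else if (rt > out_pos)
        GST_DEBUG ("insert %" GST_TIME_FORMAT " of silence",
            GST_TIME_ARGS (rt - out_pos));
    }
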
This reverts commit d387cf67df.
The analysis was wrong: The first 20ms of latency are introduced by the source
already and put into the latency query, making it only necessary to cover the
additional 20ms of audiomixer inside audiomixer.
Let's assume a source that outputs 20ms buffers, and audiomixer
having a 20ms output buffer duration. However, the timestamps don't
align perfectly: the source buffers are offset by 5ms.
For our ASCII art picture, each letter is 5ms, each pipe is the start of a
20ms buffer. So what happens is the following:
0   20  40  60
OOOOOOOOOOOOOOOO
|   |   |   |
 5   25  45  65
 IIIIIIIIIIIIIIII
 |   |   |   |
This means that the second output buffer (20 to 40ms) only gets its last 5ms
at time 45ms (the timestamp of the next buffer is the time when the buffer
arrives). But if we only have a latency of 20ms, we would wait until 40ms
to generate the output buffer and miss the last 5ms of the input buffer.
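
A minimal sketch of what this means for the latency query, assuming a
hypothetical output_buffer_duration:

    #include <gst/gst.h>

    static void
    my_mixer_adjust_latency (GstQuery *query,
        GstClockTime output_buffer_duration)
    {
      gboolean live;
      GstClockTime min, max;

      gst_query_parse_latency (query, &live, &min, &max);

      /* Upstream already reports its own 20ms in the query; only the
       * duration of one output buffer is added on top of it. */
      min += output_buffer_duration;
      if (GST_CLOCK_TIME_IS_VALID (max))
        max += output_buffer_duration;

      gst_query_set_latency (query, live, min, max);
    }
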
There's no reason why audiomixer should override the segment
base of upstream with whatever value it got from a SEEK event,
or even worse... with 0 if there was no SEEK event yet. This
broke synchronization if upstream provided a segment base other
than 0, e.g. when using pad offsets.
Also, the fact that this code did things conditionally on the
element's state should've been a big warning already that something
was just wrong.
If this breaks anything else now, let's fix it properly :)
Also don't do fancy segment position trickery when receiving a
segment event. It's just not correct.
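
A minimal sketch of the corrected behaviour, with a hypothetical
handler name: the upstream segment is taken verbatim, base included:

    #include <gst/gst.h>

    static void
    my_pad_handle_segment (GstEvent *event, GstSegment *pad_segment)
    {
      const GstSegment *seg;

      gst_event_parse_segment (event, &seg);
      /* Keep upstream's base (e.g. one coming from pad offsets);
       * don't overwrite it with a SEEK-derived value or with 0. */
      gst_segment_copy_into (seg, pad_segment);
    }
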
Instead of using the GST_OBJECT_LOCK we should have a dedicated
mutex for the pad, as it also pairs with the EVENT_MUTEX condition
on which we wait in the pad's _chain function.
The GstAggregatorPad.segment is still protected with the
GST_OBJECT_LOCK.
Remove the gst_aggregator_pad_peak_unlocked method as it does not make
sense anymore with a private lock.
https://bugzilla.gnome.org/show_bug.cgi?id=742684
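
A minimal sketch of the resulting layout, with a hypothetical private
struct:

    #include <gst/gst.h>

    typedef struct {
      GMutex     lock;        /* dedicated pad mutex */
      GCond      event_cond;  /* waited on in the pad's _chain */
      GstBuffer *buffer;      /* protected by lock */
      /* GstAggregatorPad.segment stays under GST_OBJECT_LOCK */
    } MyAggPadPrivate;
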
Reduce the number of locks and simplify the code: what the lock
protects is exposed, but the lock itself was not.
This also means adding an _unlocked version of
gst_aggregator_pad_steal_buffer().
https://bugzilla.gnome.org/show_bug.cgi?id=742684
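
A minimal sketch of the locked/unlocked split, with hypothetical
names standing in for the real pad type and its private mutex:

    #include <gst/gst.h>

    typedef struct {
      GMutex     lock;
      GstBuffer *buffer;
    } MyAggPad;

    static GstBuffer *
    my_agg_pad_steal_buffer_unlocked (MyAggPad *pad)
    {
      GstBuffer *buf = pad->buffer;   /* caller holds pad->lock */

      pad->buffer = NULL;
      return buf;
    }

    static GstBuffer *
    my_agg_pad_steal_buffer (MyAggPad *pad)
    {
      GstBuffer *buf;

      g_mutex_lock (&pad->lock);
      buf = my_agg_pad_steal_buffer_unlocked (pad);
      g_mutex_unlock (&pad->lock);
      return buf;
    }
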
This can happen if this is a live pipeline and no source produced
any buffer, and sent no caps, until an output buffer should've been
produced according to the latency.