This allows us to tell GstPad why we are failing an event, which might
be because we are 'flushing' even if the sinkpad is not in flush state
at that point.
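A minimal sketch of what such a handler could look like, assuming the
event-full pad function API (gst_pad_set_event_full_function(), which
returns a GstFlowReturn instead of a plain boolean); the 'flushing'
flag here is illustrative, not the actual aggregator code:

    #include <gst/gst.h>

    static gint flushing = FALSE;   /* illustrative element state */

    /* Returning a GstFlowReturn lets the pad know *why* the event was
     * refused, e.g. GST_FLOW_FLUSHING even though the pad itself is
     * not in flush state yet. */
    static GstFlowReturn
    sink_event_full (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      if (g_atomic_int_get (&flushing)) {
        gst_event_unref (event);
        return GST_FLOW_FLUSHING;
      }
      return gst_pad_event_default (pad, parent, event)
          ? GST_FLOW_OK : GST_FLOW_ERROR;
    }

    static void
    setup (GstPad * sinkpad)
    {
      gst_pad_set_event_full_function (sinkpad, sink_event_full);
    }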
https://github.com/mesonbuild/meson
With contributions from:
Tim-Philipp Müller <tim@centricular.com>
Matej Knopp <matej.knopp@gmail.com>
Jussi Pakkanen <jpakkane@gmail.com> (original port)
Highlights of the features provided are:
* Faster builds on Linux (~40-50% faster)
* The ability to build with MSVC on Windows
* Generate Visual Studio project files
* Generate XCode project files
* Much faster builds on Windows (on par with Linux)
* Seriously fast configure and building on embedded
... and many more. For more details see:
http://blog.nirbheek.in/2016/05/gstreamer-and-meson-new-hope.html
http://blog.nirbheek.in/2016/07/building-and-developing-gstreamer-using.html
Building with Meson should work on both Linux and Windows, but may
need a few more tweaks on other operating systems.
Until now we would start the task when the pad was activated. Part of
the activation consists of testing whether the pipeline is live or not.
Unfortunately, this is often too soon, as the pad is likely to get
activated before it is fully linked in a dynamic pipeline.
Instead, start the task when the first serialized event arrives. This
is a safe moment, as we know that the upstream chain is complete and,
just like during pad activation, the pads are locked, hence cannot
change.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
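A sketch of the pattern under these assumptions (the flag and task
function are illustrative, not the actual aggregator code):

    #include <gst/gst.h>

    static gint task_started = FALSE;   /* illustrative; real code keeps
                                         * this in the element instance */

    static void
    loop (gpointer user_data)
    {
      /* streaming task body elided; would aggregate and push here */
      gst_pad_pause_task (GST_PAD (user_data));
    }

    static gboolean
    sink_event (GstPad * pad, GstObject * parent, GstEvent * event)
    {
      /* Serialized events hold the stream lock, so the pad cannot be
       * deactivated or relinked underneath us: a safe point to start. */
      if (GST_EVENT_IS_SERIALIZED (event) &&
          g_atomic_int_compare_and_exchange (&task_started, FALSE, TRUE))
        gst_pad_start_task (pad, loop, pad, NULL);

      return gst_pad_event_default (pad, parent, event);
    }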
This fixes a race where we check if there is a clock, but it then gets
removed and we end up calling gst_clock_new_single_shot_id() with a
NULL pointer instead of a valid clock, and also calling
gst_object_unref() with a NULL pointer later.
https://bugzilla.gnome.org/show_bug.cgi?id=757548
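A sketch of the race-free pattern (helper name illustrative): take a
ref to the clock under the object lock, then check it, instead of
testing and using the clock in two unsynchronized steps:

    #include <gst/gst.h>

    static GstClockID
    get_timeout_id (GstElement * element, GstClockTime time)
    {
      GstClock *clock;
      GstClockID id;

      GST_OBJECT_LOCK (element);
      clock = GST_ELEMENT_CLOCK (element);
      if (clock)
        gst_object_ref (clock);
      GST_OBJECT_UNLOCK (element);

      /* The clock may have been removed in the meantime; bail out
       * instead of passing NULL to gst_clock_new_single_shot_id(). */
      if (clock == NULL)
        return NULL;

      id = gst_clock_new_single_shot_id (clock, time);
      gst_object_unref (clock);

      return id;
    }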
Previously, while allocating the pad number for a new pad, aggregator was
maintaining an interesting relationship between the pad count and the pad
number.
If you requested a sink pad called "sink_6", padcount (which is badly named and
actually means number-of-pads-minus-one) would be set to 6, which means that if
you then requested a sink pad called "sink_0", it would be assigned the name
"sink_6" again, failing the uniqueness check inside gstelement.c.
This could be fixed by instead setting padcount to 7 in that case, but that
breaks manual management of pad names by the application, since it then becomes
impossible to request a pad called "sink_2". Instead, we fix this by always
using the requested name directly as the sink pad name. Uniqueness of the pad
name is tested separately inside GStreamer core. If no name is requested, we
use the next available pad number.
Note that this is important since the sinkpad numbering in aggregator is not
meaningless. Videoaggregator uses it to decide the Z-order of video frames.
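A sketch of that allocation policy (illustrative helper, not the actual
aggregator code):

    #include <gst/gst.h>
    #include <stdio.h>

    static gchar *
    choose_pad_name (const gchar * req_name, guint * padcount)
    {
      guint serial;

      if (req_name != NULL) {
        /* Honour the requested name directly; core will reject it if
         * it clashes with an existing pad. Keep padcount ahead of it
         * so automatic names never collide with a manual one. */
        if (sscanf (req_name, "sink_%u", &serial) == 1
            && serial >= *padcount)
          *padcount = serial + 1;
        return g_strdup (req_name);
      }

      /* No name requested: use the next available pad number. */
      return g_strdup_printf ("sink_%u", (*padcount)++);
    }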
This code will never be called, as max >= min in all cases. If the
upstream latency query returned min > max, the function has already
returned, and all values that are subsequently added to min and max
keep max >= min.
Not all aggregator subclasses will have a single pad template called sink_%u
and might do something special depending on what the application requests.
https://bugzilla.gnome.org/show_bug.cgi?id=757018
Otherwise they will receive a QOS event that has earliest_time=0
(because we can't have negative timestamps), and will consider their
buffer as too late.
https://bugzilla.gnome.org/show_bug.cgi?id=754356
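A sketch of the guard described above (helper name and the choice of
GST_QOS_TYPE_UNDERFLOW are illustrative): do not emit a QOS event at
all while the running time is still negative, rather than clamping
earliest_time to 0 and making downstream drop every buffer as too
late:

    #include <gst/gst.h>

    static GstEvent *
    maybe_make_qos (gdouble proportion, GstClockTimeDiff diff,
        GstClockTime timestamp)
    {
      /* gst_segment_to_running_time() returns GST_CLOCK_TIME_NONE for
       * positions before the segment, i.e. "negative" running times. */
      if (!GST_CLOCK_TIME_IS_VALID (timestamp))
        return NULL;

      /* If applying the jitter would push the earliest time below 0,
       * skip the event as well. */
      if (diff < 0 && timestamp < (GstClockTime) -diff)
        return NULL;

      return gst_event_new_qos (GST_QOS_TYPE_UNDERFLOW, proportion,
          diff, timestamp);
    }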
In the case where a source gives the GstAggregator smaller buffers than
it consumes, when the aggregator reaches a timeout it will consume the
first buffer and then try to read another buffer from the pad. If the
upstream element is not fast enough, the aggregator may not get the
next buffer even though it was queued just before. To prevent that
race, the easiest solution is to move the queue inside the
GstAggregatorPad itself. This also removes the need for the strange
code caused by increasing the min latency without increasing the max
latency proportionally.
It also means queuing the serialized events and possibly acting on them
in the src task.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
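An illustrative sketch of the per-pad queue idea (not the actual
GstAggregatorPad code): buffers and serialized events share one queue
on the pad, and the src task pops from it, so a timeout can no longer
race with a buffer that upstream had already queued:

    #include <gst/gst.h>

    typedef struct {
      GMutex lock;
      GCond cond;
      GQueue items;   /* holds GstBuffer and serialized GstEvent */
    } PadQueue;

    static void
    pad_queue_init (PadQueue * q)
    {
      g_mutex_init (&q->lock);
      g_cond_init (&q->cond);
      g_queue_init (&q->items);
    }

    static GstFlowReturn
    pad_chain (PadQueue * q, GstBuffer * buffer)
    {
      g_mutex_lock (&q->lock);
      g_queue_push_tail (&q->items, buffer);
      g_cond_signal (&q->cond);   /* wake the src task */
      g_mutex_unlock (&q->lock);
      return GST_FLOW_OK;
    }

    static gboolean
    pad_event (PadQueue * q, GstEvent * event)
    {
      /* Serialized events follow buffers through the same queue and
       * are acted on from the src task, preserving order. */
      g_mutex_lock (&q->lock);
      g_queue_push_tail (&q->items, event);
      g_cond_signal (&q->cond);
      g_mutex_unlock (&q->lock);
      return TRUE;
    }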
Before, aggregator-based elements always started at running time 0;
now it is possible to select the running time of the first input buffer
or to explicitly set a start-time value.
https://bugzilla.gnome.org/show_bug.cgi?id=749966
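A usage sketch, assuming properties named "start-time-selection"
(zero / first / set) and "start-time" on the aggregator element:

    #include <gst/gst.h>

    static void
    configure_start_time (GstElement * agg)
    {
      /* Start at the running time of the first input buffer ... */
      g_object_set (agg, "start-time-selection", 1 /* first */, NULL);

      /* ... or force an explicit start time of 5 seconds. */
      g_object_set (agg,
          "start-time-selection", 2 /* set */,
          "start-time", (guint64) (5 * GST_SECOND), NULL);
    }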
Adding a pad will add a new upstream that might have a bigger minimum
latency, so we might have to wait longer. Or it might be the first live
upstream, in which case we have to start deadline-based aggregation.
Removing a pad will remove an upstream that might have had the biggest
latency, so we can now stop waiting a bit earlier. Or it might have
been the last live upstream, in which case we can stop deadline-based
aggregation.
And keep on querying upstream until we get a reply.
Also, the _get_latency_unlocked() method required being called with a
private lock held, so the _unlocked() variant was removed from the API.
And it now returns GST_CLOCK_TIME_NONE when the element is not live,
since a 0 upstream latency is also possible.
https://bugzilla.gnome.org/show_bug.cgi?id=745768
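When the element's latency changes like this (a pad added or removed,
or a late upstream reply), one way to get the pipeline to recompute
and redistribute latency is to post a latency message; a minimal
sketch (helper name illustrative):

    #include <gst/gst.h>

    /* Posting a latency message asks the application / pipeline to run
     * a new latency query and redistribute the result to all sinks. */
    static void
    notify_latency_changed (GstElement * agg)
    {
      gst_element_post_message (agg,
          gst_message_new_latency (GST_OBJECT_CAST (agg)));
    }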
One has to use the src_lock anyway to protect min/max/live, so that
changes (such as property changes) can be notified atomically to the
src thread to wake it up. So there is no point in having a second lock.
Also, the object lock was being held across a call to
GST_ELEMENT_WARNING, guaranteeing a deadlock.
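A sketch of the fixed pattern (helper name and message illustrative):
read shared state under the object lock but drop the lock before
emitting the warning, since GST_ELEMENT_WARNING posts a message, which
takes the object lock again:

    #include <gst/gst.h>

    static void
    warn_if_not_live (GstElement * self, const gboolean * live_flag)
    {
      gboolean live;

      GST_OBJECT_LOCK (self);
      live = *live_flag;        /* read shared state under the lock */
      GST_OBJECT_UNLOCK (self); /* drop it before posting anything */

      if (!live)
        GST_ELEMENT_WARNING (self, CORE, CLOCK,
            ("Not live"), ("upstream did not report usable latency"));
    }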
While gst_aggregator_iterate_sinkpads() makes sure that every pad is only
visited once, even when the iterator has to resync, this is not all we have
to do for querying the latency. When the iterator resyncs we actually have
to query all pads for the latency again and forget our previous results. It
might have happened that a pad was removed, which influenced the result of
the latency query.
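A sketch of the resync-aware iteration pattern (the per-pad latency
query itself is elided):

    #include <gst/gst.h>

    static gboolean
    query_latency_all_pads (GstElement * self, GstClockTime * min,
        GstClockTime * max)
    {
      GstIterator *it = gst_element_iterate_sink_pads (self);
      GValue item = G_VALUE_INIT;
      gboolean done = FALSE, res = TRUE;

    restart:
      *min = 0;
      *max = GST_CLOCK_TIME_NONE;
      while (!done) {
        switch (gst_iterator_next (it, &item)) {
          case GST_ITERATOR_OK:{
            GstPad *pad = g_value_get_object (&item);
            /* per-pad latency query elided; would update *min/*max */
            (void) pad;
            g_value_reset (&item);
            break;
          }
          case GST_ITERATOR_RESYNC:
            /* Pads changed (e.g. one was removed): forget everything
             * accumulated so far and query all pads again. */
            gst_iterator_resync (it);
            goto restart;
          case GST_ITERATOR_ERROR:
            res = FALSE;
            done = TRUE;
            break;
          case GST_ITERATOR_DONE:
            done = TRUE;
            break;
        }
      }
      g_value_unset (&item);
      gst_iterator_free (it);
      return res;
    }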
It was previously placed between another function and its helper
function, which was confusing when reading the code, as it had nothing
to do with those functions.