This plugin is useful when you want to pipe arbitrary data to
a different pipeline within the same process. Buffers, events, and caps
are transmitted as-is without copying or manipulation.
"avwait-status" is posted when avwait starts or stops passing through
data (e.g. because target-timecode and end-timecode respectively have
been reached). The attached structure includes a "dropping" boolean (set
to TRUE if we are currently dropping data, FALSE otherwise), and a
"running-time" GST_CLOCK_TIME which contains the running time of the
change.
https://bugzilla.gnome.org/show_bug.cgi?id=790170
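A minimal sketch of how an application could consume this message from a
bus watch. Only the "avwait-status" name and its "dropping" /
"running-time" fields come from the description above; the callback and
its surroundings are assumptions.

    #include <gst/gst.h>

    static gboolean
    on_bus_message (GstBus * bus, GstMessage * msg, gpointer user_data)
    {
      if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ELEMENT) {
        const GstStructure *s = gst_message_get_structure (msg);

        if (s && gst_structure_has_name (s, "avwait-status")) {
          gboolean dropping = FALSE;
          GstClockTime running_time = GST_CLOCK_TIME_NONE;

          gst_structure_get_boolean (s, "dropping", &dropping);
          gst_structure_get_clock_time (s, "running-time", &running_time);

          /* dropping == TRUE means avwait stopped passing through data */
          g_print ("avwait %s passing data at %" GST_TIME_FORMAT "\n",
              dropping ? "stopped" : "started", GST_TIME_ARGS (running_time));
        }
      }
      return TRUE;
    }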
Reordering of packets is not very common in networks, but the delay
functions will always introduce reordering if delay > packet-spacing,
so by setting allow-reordering to FALSE you guarantee that the packets
stay in order, while still introducing delay/jitter to them.
By using the property "delay-distribution" the user can control how the
delay applied to delayed packets is distributed. This is either the
uniform distribution (as before) or the normal distribution.
"min-delay" and "max-delay" control both distributions. For the normal
distribution it defines the bounds of the 95% confidence interval.
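A configuration sketch, assuming this entry refers to the netsim
element; the element name, the property types and the millisecond units
are assumptions, while the property names come from the description
above.

    #include <gst/gst.h>

    GstElement *sim = gst_element_factory_make ("netsim", NULL);

    /* pick the normal distribution by its nick; "uniform" is the other option */
    gst_util_set_object_arg (G_OBJECT (sim), "delay-distribution", "normal");

    g_object_set (sim,
        "min-delay", 10,            /* lower bound of the 95% confidence interval */
        "max-delay", 100,           /* upper bound of the 95% confidence interval */
        "allow-reordering", FALSE,  /* delayed packets must never overtake others */
        NULL);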
When input is not in byte-stream format there is no need to wait for the first
buffer before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, allowing
them, for example, to allocate their internal buffers up front as they
already know the format of the input they are about to receive.
Same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
When input is in AVC format there is no need to wait for the first buffer
before setting src caps. We already have all the information from the
input codec_data.
This allows us to configure downstream elements right away, allowing
them, for example, to allocate their internal buffers up front as they
already know the format of the input they are about to receive.
https://bugzilla.gnome.org/show_bug.cgi?id=790709
Try prioritizing downstream's caps over upstream's if possible, so the
parser can be configured in "passthrough" mode and is saved from doing
useless conversions.
Exact same change as the one I just did in h264parse.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
Try prioritizing downstream's caps over upstream's if possible, so the
parser can be configured in "passthrough" mode and is saved from doing
useless conversions.
https://bugzilla.gnome.org/show_bug.cgi?id=790628
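A rough sketch of the negotiation idea described in the two entries
above; this is not the actual parser code and the helper name is
illustrative, but GST_CAPS_INTERSECT_FIRST is the standard way to make
the first argument's ordering win.

    #include <gst/gst.h>

    /* Intersect with downstream's caps listed first so its preferences
     * come out on top and the parser can stay in passthrough. */
    static GstCaps *
    negotiate_src_caps (GstPad * srcpad, GstCaps * upstream_caps)
    {
      GstCaps *downstream_caps = gst_pad_peer_query_caps (srcpad, NULL);
      GstCaps *result;

      result = gst_caps_intersect_full (downstream_caps, upstream_caps,
          GST_CAPS_INTERSECT_FIRST);

      gst_caps_unref (downstream_caps);
      return result;
    }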
A deserialised timecode has a framerate of 0/1 by default. That breaks
it when comparing the frames field with another timecode (incoming from
the frame). We were setting the framerate when receiving the caps event,
but not when setting the timecode in set_property, so it was broken for
timecodes set after the caps event.
Also check that the fps_n we got from the caps event is != 0 before
setting it - this is done at the caps event as well.
https://bugzilla.gnome.org/show_bug.cgi?id=790334
Now that timecodes support proper serialisation / deserialisation, a
timecode might have an invalid fps_n / fps_d even without using the
target-time-code-string property. Detect those cases and set fps_n/fps_d
properly.
If end_tc is NULL, it means that we don't want avwait to stop at any
timecode. When explicitly setting end_tc to NULL, there is no point in
comparing end_tc with start_tc (to see if we'll reject end_tc for being
before start_tc), so the check in question is completely disabled
instead of letting it crash.
Add support for parsing linear timecode (LTC) from
an audio source using libltc:
https://github.com/x42/libltc
The user can now choose between 3 different and independently
running timecode sources. The old override-existing property
has been replaced by timecode-source.
https://bugzilla.gnome.org/show_bug.cgi?id=784295
This element can be configured to add jitter and/or drift to incoming
buffers' PTS, DTS, or both. Amplitude and average of jitter and drift
are configurable.
https://bugzilla.gnome.org/show_bug.cgi?id=787358
avwait can now be configured to stop when a given timecode has been
reached. It will start at the timecode indicated with start-timecode and
end at the timecode indicated with end-timecode. If end-timecode is
NULL (default), the previous functionality is preserved: keep going and
never end.
https://bugzilla.gnome.org/show_bug.cgi?id=789403
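A minimal usage sketch, assuming the timecode properties are the
GstVideoTimeCode-valued "target-timecode" (the start timecode, as it is
named in the avwait-status entry above) and "end-timecode"; the
timecode values themselves are illustrative.

    #include <gst/gst.h>
    #include <gst/video/video.h>

    GstElement *wait = gst_element_factory_make ("avwait", NULL);
    GstVideoTimeCode *start = gst_video_time_code_new_from_string ("01:00:00:00");
    GstVideoTimeCode *end = gst_video_time_code_new_from_string ("01:00:10:00");

    g_object_set (wait,
        "target-timecode", start,  /* start passing data at this timecode */
        "end-timecode", end,       /* stop here; NULL (default) means never stop */
        NULL);

    gst_video_time_code_free (start);
    gst_video_time_code_free (end);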
* Avoid copying the pending data and instead create a buffer directly from
that data with the appropriate offset.
* Locate the jp2k magic to determine the exact location of the (first) frame
data instead of assuming that the header is of an expected size
https://bugzilla.gnome.org/show_bug.cgi?id=786111
The jp2k specification (ITU-T T.800) specifies that the 'brat' box
has two fields and the second one (AUF2) can be set to 0 for progressive
streams.
The problem is that the MPEG-TS specification (ITU-T H.222.0 06/2012)
says that the AUF2 field is only present if the stream is interlaced.
In order to cope with both situations, accept those next 32 bits if the
stream is marked as progressive and those bits contain 0.
https://bugzilla.gnome.org/show_bug.cgi?id=786111
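A sketch of the resulting workaround only; the surrounding 'brat' box
parsing, the function name and the "interlaced" flag are assumptions
based on the description above, not the actual parser code.

    #include <gst/base/gstbytereader.h>

    static void
    read_brat_access_unit_fields (GstByteReader * reader, gboolean interlaced)
    {
      guint32 auf1 = 0, auf2 = 0;

      /* AUF1 is always present */
      if (!gst_byte_reader_get_uint32_be (reader, &auf1))
        return;

      if (interlaced) {
        /* H.222.0 only mandates AUF2 for interlaced streams */
        gst_byte_reader_get_uint32_be (reader, &auf2);
      } else if (gst_byte_reader_peek_uint32_be (reader, &auf2) && auf2 == 0) {
        /* Progressive stream that still carries AUF2 set to 0, as T.800
         * allows: accept those 32 bits and skip over them. */
        gst_byte_reader_skip (reader, 4);
      }

      /* auf1 / auf2 would be used by the caller in the real parser */
    }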
Crossfading is a bit more complex than just having two pads with the
right keyframes, as the blending is not exactly the same.
The difference is in the way we compute the alpha channel: in the case
of crossfading, we have to compute an additive operation between
the destination and the source (factored by the input pad's alpha
property and the crossfading ratio), basically so that the crossfade
result of 2 opaque frames is also fully opaque at any point in the
crossfading process, avoiding bleeding through the layer blending.
Some rationale can be found in https://phabricator.freedesktop.org/T7773.
https://bugzilla.gnome.org/show_bug.cgi?id=784827
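As a rough illustration of that constraint (not the actual compositor
code; the function and variable names are illustrative):

    #include <glib.h>

    /* Additive alpha blend used during crossfading, so that two fully
     * opaque inputs stay fully opaque for any crossfade ratio. */
    static gdouble
    crossfade_alpha (gdouble src_alpha, gdouble dst_alpha,
        gdouble src_pad_alpha, gdouble dst_pad_alpha, gdouble ratio)
    {
      gdouble a = src_alpha * src_pad_alpha * ratio
          + dst_alpha * dst_pad_alpha * (1.0 - ratio);

      /* With both frames and both pad alphas fully opaque, the result is
       * ratio + (1 - ratio) == 1 at every point of the fade. */
      return MIN (a, 1.0);
    }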